diff --git a/.github/ISSUE_TEMPLATE/new-feature.md b/.github/ISSUE_TEMPLATE/new-feature.md
index 299510091..fd89b149e 100644
--- a/.github/ISSUE_TEMPLATE/new-feature.md
+++ b/.github/ISSUE_TEMPLATE/new-feature.md
@@ -6,6 +6,7 @@ labels: ''
assignees: ''
---
+**ETA:** _Provide an estimated completion date for the feature (if available)_
**PR:** _Provide PR URL(s) for this feature (if available)_
_Describe the new feature here._
diff --git a/api-docs/README.md b/api-docs/README.md
index ecd1d2b56..fb1095529 100755
--- a/api-docs/README.md
+++ b/api-docs/README.md
@@ -1,43 +1,85 @@
-## Generate InfluxDB API docs
-InfluxDB uses [Redoc](https://github.com/Redocly/redoc/),
+# How to generate InfluxDB API docs
+
+InfluxData uses [Redoc](https://github.com/Redocly/redoc/),
[redoc-cli](https://github.com/Redocly/redoc/blob/master/cli/README.md),
and Redocly's [OpenApi CLI](https://redoc.ly/docs/cli/) to generate
-API documentation from the InfluxDB openapi contracts.
+API documentation from the [InfluxDB OpenAPI (aka Swagger) contracts](https://github.com/influxdata/openapi).
-To minimize repo size, the generated API documentation HTML is gitignored, therefore
-not committed directly to the docs repo.
-The InfluxDB docs deployment process uses swagger files in the `api-docs` directory
-to generate version-specific API documentation.
+To minimize the size of the `docs-v2` repository, the generated API documentation HTML is gitignored and therefore
+not committed to the docs repo.
+The InfluxDB docs deployment process uses OpenAPI specification files in the `api-docs` directory
+to generate version-specific (Cloud, OSS v2.1, OSS v2.0, etc.) API documentation.
-### Versioned swagger files
-The structure versions swagger files using the following pattern:
+## How we version OpenAPI contracts
-```
+The `api-docs` directory structure versions OpenAPI files using the following pattern:
+
+```md
api-docs/
├── v2.0/
-│   └── ref.yml
+│   ├── ref.yml
+│   └── swaggerV1Compat.yml
├── v2.1/
-│   └── ref.yml
+│   ├── ref.yml
+│   └── swaggerV1Compat.yml
├── v2.2/
-│   └── ref.yml
+│   ├── ref.yml
+│   └── swaggerV1Compat.yml
└── etc...
```
-### Configure OpenAPI CLI linting and bundling
-`.redoc.yaml` sets linting and bundling options for `openapi` CLI.
-`./openapi/plugins` contains custom OpenAPI CLI plugins composed of *rules* (for linting) and *decorators* (for bundle customization).
+### InfluxDB Cloud version
-### Custom content
-`./openapi/content` contains custom OAS (OpenAPI Spec) content in YAML files. The content structure and Markdown must be valid OAS.
+Because InfluxDB Cloud releases are frequent, we make no effort to version the
+Cloud API spec. We regenerate API reference docs from `influxdata/openapi`
+**master** as features are released.
-`./openapi/plugins` use `./openapi/plugins/decorators` to apply the content to the contracts.
-`.yml` files in `./openapi/content/` set content for sections (nodes) in the contract. To update the content for those nodes, you only need to update the YAML files.
+### InfluxDB OSS version
-To add new YAML files for other nodes in the openapi contracts, configure the new content YAML file in `./openapi/content/content.js`. Then, write a decorator module for the node and configure the decorator in the plugin, e.g. `./openapi/plugins/docs-plugin.js`. See the [complete list of OAS v3.0 nodes](https://github.com/Redocly/openapi-cli/blob/master/packages/core/src/types/oas3.ts#L529).
+ Given that
+ `influxdata/openapi` **master** may contain OSS spec changes not implemented
+ in the current OSS release, we (the Docs team) maintain a release branch, `influxdata/openapi`
+**docs-release/influxdb-oss**, used to generate OSS reference docs.
-`openapi` CLI requires that modules use CommonJS `require` syntax for imports.
+To update this branch to a new OSS release, (re)base on the commit or tag for the [latest release of InfluxDB OSS](#how-to-find-the-api-spec-used-by-an-influxdb-oss-version).
+
+```sh
+git checkout docs-release/influxdb-oss
+git rebase -i influxdb-oss-v2.2.0
+git push -f origin docs-release/influxdb-oss
+```
+
+To update this branch with documentation changes between OSS releases, cherry-pick your documentation commits into the release branch.
+
+```sh
+git checkout docs-release/influxdb-oss
+git cherry-pick <commit SHA>
+git push -f origin docs-release/influxdb-oss
+```
+
+### How to find the API spec used by an InfluxDB OSS version
+
+`influxdata/openapi` does not version the InfluxData API.
+To find the `influxdata/openapi` commit SHA used in a specific version of InfluxDB OSS,
+see `/scripts/fetch-swagger.sh` in `influxdata/influxdb`--for example,
+for the `influxdata/openapi` commit used in OSS v2.2.0, see https://github.com/influxdata/influxdb/blob/v2.2.0/scripts/fetch-swagger.sh#L13=.
+For convenience, we tag `influxdata/influxdb` (OSS) release points in `influxdata/openapi` as
+`influxdb-oss-v[OSS_VERSION]`. See <https://github.com/influxdata/openapi/tags>.
+
+## How to fetch and process influxdata/openapi contracts
+
+Update the contracts in `api-docs` to the latest from `influxdata/openapi`.
+
+```sh
+# In your terminal, go to the `docs-v2/api-docs` directory:
+cd api-docs
+
+# Fetch the contracts and run @redocly/openapi-cli to customize and bundle them.
+sh getswagger.sh oss; sh getswagger.sh cloud
+```
+
+## How to generate API docs locally
-### Generate API docs locally
Because the API documentation HTML is gitignored, you must manually generate it
to view the API docs locally.
@@ -50,11 +92,71 @@ npx --version
If `npx` returns errors, [download](https://nodejs.org/en/) and run a recent version
of the Node.js installer for your OS.
-In your terminal, from the root of the docs repo, run:
-
```sh
+# In your terminal, go to the `docs-v2/api-docs` directory:
cd api-docs
-# Generate the API docs
+# Generate the API docs with Redocly
sh generate-api-docs.sh
```
+
+## How to use custom OpenAPI spec processing
+
+Generally, you should manage API content in `influxdata/openapi`.
+In some cases, however, you may want custom processing (e.g. collecting all Tags)
+or additional content (e.g. describing the reference documentation)
+specifically for the docs.
+
+When you run `getswagger.sh`, it executes `@redocly/openapi-cli` and the plugins listed in `.redocly.yaml`.
+[`./openapi/plugins`](./openapi/plugins) use
+[`./openapi/plugins/decorators`](./openapi/plugins/decorators) to apply custom
+processing to OpenAPI specs.
+
+`.yml` files in [`./openapi/content`](./openapi/content) set content for sections (nodes) in the contract.
+To update the content for those nodes, you only need to update the YAML files.
+To add new YAML files for other nodes in the contracts,
+configure the new content YAML file in [`./openapi/content/content.js`](./openapi/content/content.js).
+The content structure and Markdown must be valid OAS.
+
+Then, you'll need to write or update a decorator module for the node and configure the decorator in the plugin,
+e.g. [`./openapi/plugins/docs-plugin.js`](./openapi/plugins/docs-plugin.js).
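+
+As a rough sketch, a decorator module and its registration might look like the following. This assumes the standard `@redocly/openapi-cli` plugin shape; the file names, decorator name, and node (`Info`) are illustrative, not the actual docs-v2 decorators:
+
+```js
+// ./openapi/plugins/decorators/set-info-description.js (hypothetical example)
+// A decorator module exports a function that returns a visitor object
+// keyed by the OAS node type it processes.
+module.exports = SetInfoDescription;
+
+function SetInfoDescription() {
+  return {
+    Info: {
+      // `leave` runs after the Info node and its children have been visited.
+      leave(info) {
+        info.description = 'Docs-specific description applied at bundle time.';
+      },
+    },
+  };
+}
+```
+
+```js
+// ./openapi/plugins/docs-plugin.js (sketch of registering the decorator)
+const SetInfoDescription = require('./decorators/set-info-description');
+
+module.exports = {
+  id: 'docs',
+  decorators: {
+    oas3: {
+      'set-info-description': SetInfoDescription,
+    },
+  },
+};
+```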
+See the [complete list of OAS v3.0 nodes](https://github.com/Redocly/openapi-cli/blob/master/packages/core/src/types/oas3.ts#L529).
+
+`@redocly/openapi-cli` requires that modules use CommonJS `require` syntax for imports.
+
+### How to add tag content or describe a group of paths
+
+In API reference docs, we use OpenAPI `tags` elements for navigation and the
+`x-traitTag` vendor extension to define custom content.
+
+| Example | OpenAPI field | Source |
+|:-------------------------------------------------------------------------------------------------------|-------------------------------------------------------|--------------------------------------------|
+| [Add supplementary documentation](https://docs.influxdata.com/influxdb/cloud/api/#tag/Quick-start) | `tags: [ { name: 'Quick start', x-traitTag: true } ]` | [Source](https://github.com/influxdata/openapi/blob/master/src/cloud/tags.yml) |
+| [Group and describe related paths](https://docs.influxdata.com/influxdb/cloud/api/#tag/Authorizations) | `tags: [ { name: 'Buckets', description: '...' } ]` | [Source](https://github.com/influxdata/openapi/blob/master/src/cloud/tags-groups.yml) |
+
+## How to test your spec or API reference changes
+
+You can use `getswagger.sh` to fetch contracts from any URL.
+For example, if you've made changes to spec files and generated new contracts in your local `openapi` repo, run `getswagger.sh` to fetch and process them.
+
+To fetch contracts from your own `openapi` repo, pass the
+`-b` (base URL) option with the full path to your `openapi` directory.
+
+```sh
+# Use the file:/// protocol to pass your openapi directory.
+sh getswagger.sh oss -b file:///Users/me/github/openapi
+```
+
+After you fetch the contracts, run the linter or generate HTML to test your changes before you commit them to `influxdata/openapi`.
+By default, `getswagger.sh` doesn't run the linter when bundling
+the specs.
+Manually run the [linter rules](https://redoc.ly/docs/cli/resources/built-in-rules/) to get a report of errors and warnings.
+
+```sh
+npx @redocly/openapi-cli lint v2.1/ref.yml
+```
+
+### Configure OpenAPI CLI linting and bundling
+
+The `.redocly.yaml` configuration file sets options for the `@redocly/openapi-cli` [`lint`](https://redoc.ly/docs/cli/commands/lint/) and [`bundle`](https://redoc.ly/docs/cli/commands/bundle/) commands.
+`./openapi/plugins` contains custom InfluxData Docs plugins composed of *rules* (for validating and linting) and *decorators* (for customizing). For more configuration options, see `@redocly/openapi-cli` [configuration file documentation](https://redoc.ly/docs/cli/configuration/configuration-file/).
diff --git a/api-docs/cloud/ref.yml b/api-docs/cloud/ref.yml
index eaa8b7811..76a133627 100644
--- a/api-docs/cloud/ref.yml
+++ b/api-docs/cloud/ref.yml
@@ -2,8 +2,8 @@ components:
  parameters:
    After:
      description: >
-        The last resource ID from which to seek from (but not including). This
-        is to be used instead of `offset`.
+        Resource ID to seek from. Results are not inclusive of this ID. Use
+        `after` instead of `offset`.
      in: query
      name: after
      required: false
@@ -169,8 +169,8 @@ components:
      status:
        default: active
        description: >-
-          If inactive the token is inactive and requests using the token will
-          be rejected.
+          Status of the token. If `inactive`, requests using the token will be
+          rejected.
        enum:
          - active
          - inactive
@@ -197,10 +197,10 @@ components:
          - 'y'
      type: object
    Axis:
-      description: The description of a particular axis for a visualization.
+      description: Axis used in a visualization.
properties:
  base:
-    description: Base represents the radix for formatting axis values.
+    description: Radix for formatting axis values.
    enum:
      - ''
      - '2'
@@ -208,23 +208,23 @@
      type: string
  bounds:
    description: >-
-      The extents of an axis in the form [lower, upper]. Clients determine
-      whether bounds are to be inclusive or exclusive of their limits
+      The extents of the axis in the form [lower, upper]. Clients
+      determine whether bounds are inclusive or exclusive of their limits.
    items:
      type: string
    maxItems: 2
    minItems: 0
    type: array
  label:
-    description: Label is a description of this Axis
+    description: Description of the axis.
    type: string
  prefix:
-    description: Prefix represents a label prefix for formatting axis values.
+    description: Label prefix for formatting axis values.
    type: string
  scale:
    $ref: '#/components/schemas/AxisScale'
  suffix:
-    description: Suffix represents a label suffix for formatting axis values.
+    description: Label suffix for formatting axis values.
    type: string
type: object
AxisScale:
@@ -391,22 +391,22 @@
  properties:
    labels:
      $ref: '#/components/schemas/Link'
-      description: URL to retrieve labels for this bucket
+      description: URL to retrieve labels for this bucket.
    members:
      $ref: '#/components/schemas/Link'
-      description: URL to retrieve members that can read this bucket
+      description: URL to retrieve members that can read this bucket.
    org:
      $ref: '#/components/schemas/Link'
-      description: URL to retrieve parent organization for this bucket
+      description: URL to retrieve parent organization for this bucket.
    owners:
      $ref: '#/components/schemas/Link'
      description: URL to retrieve owners that can read and write to this bucket.
    self:
      $ref: '#/components/schemas/Link'
-      description: URL for this bucket
+      description: URL for this bucket.
    write:
      $ref: '#/components/schemas/Link'
-      description: URL to write line protocol for this bucket
+      description: URL to write line protocol to this bucket.
  readOnly: true
  type: object
name:
@@ -596,7 +596,10 @@
    readOnly: true
    type: string
  latestCompleted:
-    description: Timestamp of latest scheduled, completed run, RFC3339.
+    description: >-
+      Timestamp (in [RFC3339 date/time
+      format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
+      latest scheduled and completed run.
    format: date-time
    readOnly: true
    type: string
@@ -828,24 +831,24 @@
  DBRP:
    properties:
      bucketID:
-        description: the bucket ID used as target for the translation.
+        description: ID of the bucket used as the target for the translation.
        type: string
      database:
        description: InfluxDB v1 database
        type: string
      default:
        description: >-
-          Specify if this mapping represents the default retention policy for
-          the database specificed.
+          Mapping represents the default retention policy for the database
+          specified.
        type: boolean
      id:
-        description: the mapping identifier
+        description: ID of the DBRP mapping.
        readOnly: true
        type: string
      links:
        $ref: '#/components/schemas/Links'
      orgID:
-        description: the organization ID that owns this mapping.
+        description: ID of the organization that owns this mapping.
        type: string
      retention_policy:
        description: InfluxDB v1 retention policy
@@ -861,21 +864,21 @@
  DBRPCreate:
    properties:
      bucketID:
-        description: the bucket ID used as target for the translation.
+        description: ID of the bucket used as the target for the translation.
type: string database: description: InfluxDB v1 database type: string default: description: >- - Specify if this mapping represents the default retention policy for - the database specificed. + Mapping represents the default retention policy for the database + specified. type: boolean org: - description: the organization that owns this mapping. + description: Name of the organization that owns this mapping. type: string orgID: - description: the organization ID that owns this mapping. + description: ID of the organization that owns this mapping. type: string retention_policy: description: InfluxDB v1 retention policy @@ -1134,80 +1137,6 @@ components: - start - stop type: object - DemoDataBucket: - properties: - createdAt: - format: date-time - readOnly: true - type: string - description: - type: string - id: - readOnly: true - type: string - labels: - $ref: '#/components/schemas/Labels' - links: - example: - labels: /api/v2/buckets/1/labels - members: /api/v2/buckets/1/members - org: /api/v2/orgs/2 - owners: /api/v2/buckets/1/owners - self: /api/v2/buckets/1 - write: /api/v2/write?org=2&bucket=1 - properties: - labels: - $ref: '#/components/schemas/Link' - description: URL to retrieve labels for this bucket - members: - $ref: '#/components/schemas/Link' - description: URL to retrieve members that can read this bucket - org: - $ref: '#/components/schemas/Link' - description: URL to retrieve parent organization for this bucket - owners: - $ref: '#/components/schemas/Link' - description: URL to retrieve owners that can read and write to this bucket. - self: - $ref: '#/components/schemas/Link' - description: URL for this bucket - write: - $ref: '#/components/schemas/Link' - description: URL to write line protocol for this bucket - readOnly: true - type: object - name: - type: string - orgID: - type: string - retentionRules: - $ref: '#/components/schemas/RetentionRules' - rp: - type: string - schemaType: - $ref: '#/components/schemas/SchemaType' - default: implicit - type: - default: demodata - readOnly: true - type: string - updatedAt: - format: date-time - readOnly: true - type: string - required: - - name - - retentionRules - DemoDataBuckets: - properties: - buckets: - items: - $ref: '#/components/schemas/DemoDataBucket' - type: array - links: - $ref: '#/components/schemas/Links' - readOnly: true - type: object Dialect: description: >- Dialect are options to change the default CSV output format; @@ -1315,23 +1244,22 @@ components: type: string err: description: >- - err is a stack of errors that occurred during processing of the - request. Useful for debugging. + Stack of errors that occurred during processing of the request. + Useful for debugging. readOnly: true type: string message: - description: message is a human-readable message. + description: Human-readable message. readOnly: true type: string op: description: >- - op describes the logical code operation during error. Useful for - debugging. + Describes the logical code operation when the error occurred. Useful + for debugging. readOnly: true type: string required: - code - - message Expression: oneOf: - $ref: '#/components/schemas/ArrayExpression' @@ -2232,6 +2160,9 @@ components: deleteRequestsPerSecond: description: Allowed organization delete request rate. type: integer + queryTime: + description: Query Time in nanoseconds + type: integer readKBs: description: Query limit in kb/sec. 0 is unlimited. 
type: integer @@ -2240,6 +2171,7 @@ components: type: integer required: - readKBs + - queryTime - concurrentReadRequests - writeKBs - concurrentWriteRequests @@ -2376,30 +2308,27 @@ components: type: string err: description: >- - Err is a stack of errors that occurred during processing of the - request. Useful for debugging. + Stack of errors that occurred during processing of the request. + Useful for debugging. readOnly: true type: string line: - description: First line within sent body containing malformed data + description: First line in the request body that contains malformed data. format: int32 readOnly: true type: integer message: - description: Message is a human-readable message. + description: Human-readable message. readOnly: true type: string op: description: >- - Op describes the logical code operation during error. Useful for - debugging. + Describes the logical code operation when the error occurred. Useful + for debugging. readOnly: true type: string required: - code - - message - - op - - err LineProtocolLengthError: properties: code: @@ -2409,7 +2338,7 @@ components: readOnly: true type: string message: - description: Message is a human-readable message. + description: Human-readable message. readOnly: true type: string required: @@ -2498,12 +2427,13 @@ components: - note type: object MeasurementSchema: - description: The schema definition for a single measurement + description: Definition of a measurement schema. example: bucketID: ba3c5e7f9b0a0010 columns: - - name: time - type: timestamp + - format: unix timestamp + name: time + type: integer - name: host type: tag - name: region @@ -2514,17 +2444,17 @@ components: - dataType: float name: usage_user type: field - createdAt: 2021-01-21T00:48:40.993Z + createdAt: '2021-01-21T00:48:40.993Z' id: 1a3c5e7f9b0a8642 name: cpu orgID: 0a3c5e7f9b0a0001 - updatedAt: 2021-01-21T00:48:40.993Z + updatedAt: '2021-01-21T00:48:40.993Z' properties: bucketID: description: ID of the bucket that the measurement schema is associated with. type: string columns: - description: An ordered collection of column definitions + description: Ordered collection of column definitions. items: $ref: '#/components/schemas/MeasurementSchemaColumn' type: array @@ -2539,7 +2469,9 @@ components: nullable: false type: string orgID: - description: ID of organization that the measurement schema is associated with. + description: >- + ID of the organization that the measurement schema is associated + with. type: string updatedAt: format: date-time @@ -2553,10 +2485,11 @@ components: - updatedAt type: object MeasurementSchemaColumn: - description: Definition of a measurement column + description: Definition of a measurement schema column. example: + format: unix timestamp name: time - type: timestamp + type: integer properties: dataType: $ref: '#/components/schemas/ColumnDataType' @@ -2569,11 +2502,12 @@ components: - type type: object MeasurementSchemaCreateRequest: - description: Create a new measurement schema + description: Create a new measurement schema. example: columns: - - name: time - type: timestamp + - format: unix timestamp + name: time + type: integer - name: host type: tag - name: region @@ -2587,7 +2521,7 @@ components: name: cpu properties: columns: - description: An ordered collection of column definitions + description: Ordered collection of column definitions. 
items:
  $ref: '#/components/schemas/MeasurementSchemaColumn'
type: array
name:
  description: >-
    Name of the new measurement.
  nullable: false
  type: string
required:
  - columns
  - name
type: object
@@ -2602,23 +2536,23 @@
example:
  measurementSchemas:
    - bucketID: ba3c5e7f9b0a0010
-      createdAt: 2021-01-21T00:48:40.993Z
+      createdAt: '2021-01-21T00:48:40.993Z'
      id: 1a3c5e7f9b0a8642
      name: cpu
      orgID: 0a3c5e7f9b0a0001
-      updatedAt: 2021-01-21T00:48:40.993Z
+      updatedAt: '2021-01-21T00:48:40.993Z'
    - bucketID: ba3c5e7f9b0a0010
-      createdAt: 2021-01-21T00:48:40.993Z
+      createdAt: '2021-01-21T00:48:40.993Z'
      id: 1a3c5e7f9b0a8643
      name: memory
      orgID: 0a3c5e7f9b0a0001
-      updatedAt: 2021-01-21T00:48:40.993Z
+      updatedAt: '2021-01-21T00:48:40.993Z'
    - bucketID: ba3c5e7f9b0a0010
-      createdAt: 2021-01-21T00:48:40.993Z
+      createdAt: '2021-01-21T00:48:40.993Z'
      id: 1a3c5e7f9b0a8644
      name: disk
      orgID: 0a3c5e7f9b0a0001
-      updatedAt: 2021-01-21T00:48:40.993Z
+      updatedAt: '2021-01-21T00:48:40.993Z'
properties:
  measurementSchemas:
    items:
@@ -2631,8 +2565,9 @@
  description: Update an existing measurement schema
  example:
    columns:
-      - name: time
-        type: timestamp
+      - format: unix timestamp
+        name: time
+        type: integer
      - name: host
        type: tag
      - name: region
@@ -2923,7 +2858,10 @@
    readOnly: true
    type: string
  latestCompleted:
-    description: Timestamp of latest scheduled, completed run, RFC3339.
+    description: >-
+      Timestamp (in [RFC3339 date/time
+      format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
+      latest scheduled and completed run.
    format: date-time
    readOnly: true
    type: string
@@ -3337,6 +3275,13 @@
    type: string
  name:
    type: string
+  users:
+    description: >-
+      An optional list of email addresses to invite to the
+      organization.
+    items:
+      type: string
+    type: array
required:
  - name
type: object
@@ -3428,7 +3373,7 @@
  Ready:
    properties:
      started:
-        example: 2019-03-13T10:09:33.891196-04:00
+        example: '2019-03-13T10:09:33.891196-04:00'
        format: date-time
        type: string
      status:
@@ -3958,6 +3903,7 @@
  ScriptInvocationParams:
    properties:
      params:
+        additionalProperties: true
        type: object
    type: object
  ScriptLanguage:
@@ -4319,8 +4265,8 @@
  properties:
    authorizationID:
      description: >-
-        The ID of the authorization used when this task communicates with
-        the query engine.
+        ID of the authorization used when the task communicates with the
+        query engine.
      type: string
    createdAt:
      format: date-time
      readOnly: true
      type: string
    cron:
      description: >-
-        A task repetition schedule in the form '* * * * * *'; parsed from
-        Flux.
+        [Cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that
+        defines the schedule on which the task runs. Cron scheduling is
+        based on system time.
      type: string
    description:
-      description: An optional description of the task.
+      description: Description of the task.
      type: string
    every:
-      description: A simple task repetition schedule; parsed from Flux.
+      description: >-
+        Interval at which the task runs. `every` also determines when the
+        task first runs, depending on the specified time.
+
+        Value is a [duration
+        literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals).
+      format: duration
      type: string
    flux:
-      description: The Flux script to run for this task.
+      description: Flux script to run for this task.
      type: string
    id:
      readOnly: true
@@ -4356,7 +4312,11 @@
      readOnly: true
      type: string
    latestCompleted:
-      description: Timestamp of latest scheduled, completed run, RFC3339.
+ description: >- + Timestamp of the latest scheduled and completed run. + + Value is a timestamp in [RFC3339 date/time + format](https://docs.influxdata.com/flux/v0.x/data-types/basic/time/#time-syntax). format: date-time readOnly: true type: string @@ -4384,29 +4344,31 @@ components: readOnly: true type: object name: - description: The name of the task. + description: Name of the task. type: string offset: description: >- - Duration to delay after the schedule, before executing the task; - parsed from flux, if set to zero it will remove this option and use - 0 as the default. + [Duration](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals) + to delay execution of the task after the scheduled time has elapsed. + `0` removes the offset. + + The value is a [duration + literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals). + format: duration type: string org: - description: The name of the organization that owns this Task. + description: Name of the organization that owns the task. type: string orgID: - description: The ID of the organization that owns this Task. + description: ID of the organization that owns the task. type: string ownerID: - description: The ID of the user who owns this Task. + description: ID of the user who owns this Task. type: string status: $ref: '#/components/schemas/TaskStatusType' type: - description: >- - The type of task, this can be used for filtering tasks on list - actions. + description: Type of the task, useful for filtering a task list. type: string updatedAt: format: date-time @@ -5829,6 +5791,8 @@ components: - stacked - bar - monotoneX + - stepBefore + - stepAfter type: string XYViewProperties: properties: @@ -5992,7 +5956,7 @@ info: with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. openapi: 3.0.0 paths: - /api/v2/: + /api/v2: get: operationId: GetRoutes parameters: @@ -7272,9 +7236,7 @@ paths: required: true schema: type: string - - description: >- - Includes the cell view properties in the response if set to - `properties` + - description: If `properties`, includes the cell view properties in the response. 
in: query name: include required: false @@ -7303,7 +7265,7 @@ paths: schema: $ref: '#/components/schemas/Error' description: Unexpected error - summary: Retrieve a Dashboard + summary: Retrieve a dashboard tags: - Dashboards patch: @@ -8181,69 +8143,6 @@ paths: summary: Delete data tags: - Delete - /api/v2/experimental/sampledata/buckets: - get: - operationId: GetDemoDataBuckets - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/DemoDataBuckets' - description: A list of demo data buckets - default: - $ref: '#/components/responses/ServerError' - description: Unexpected error - summary: List of Demo Data Buckets - tags: - - DemoDataBuckets - /api/v2/experimental/sampledata/buckets/{bucketID}/members: - delete: - operationId: DeleteDemoDataBucketMembers - parameters: - - description: bucket id - in: path - name: bucketID - required: true - schema: - type: string - responses: - '200': - description: if sampledata route is not available gateway responds with 200 - '204': - description: A list of demo data buckets - default: - content: - application/json: - schema: - $ref: '#/components/schemas/Error' - description: Unexpected error - summary: List of Demo Data Buckets - tags: - - DemoDataBuckets - post: - operationId: GetDemoDataBucketMembers - parameters: - - description: bucket id - in: path - name: bucketID - required: true - schema: - type: string - responses: - '200': - description: if sampledata route is not available gateway responds with 200 - '204': - description: A list of demo data buckets - default: - content: - application/json: - schema: - $ref: '#/components/schemas/Error' - description: Unexpected error - summary: List of Demo Data Buckets - tags: - - DemoDataBuckets /api/v2/flags: get: operationId: GetFlags @@ -8419,13 +8318,14 @@ paths: application/json: schema: $ref: '#/components/schemas/Token' - description: A temp token for Mapbox + description: Temporary token for Mapbox. '401': $ref: '#/components/responses/ServerError' '500': $ref: '#/components/responses/ServerError' default: $ref: '#/components/responses/ServerError' + summary: Get a mapbox token /api/v2/me: get: operationId: GetMe @@ -9250,8 +9150,9 @@ paths: - Organizations /api/v2/orgs/{orgID}/limits: get: + operationId: GetOrgLimitsID parameters: - - description: The identifier of the organization. + - description: ID of the organization. in: path name: orgID required: true @@ -9269,7 +9170,7 @@ paths: links: $ref: '#/components/schemas/Links' type: object - description: The Limits defined for the organization. + description: Limits defined for the organization. default: $ref: '#/components/responses/ServerError' description: unexpected error @@ -9581,25 +9482,38 @@ paths: - Secrets /api/v2/orgs/{orgID}/usage: get: + operationId: GetOrgUsageID parameters: - - description: The identifier of the organization. + - description: ID of the organization. in: path name: orgID required: true schema: type: string - - description: start time + - description: > + Earliest time to include in results. + + For more information about timestamps, see [Manipulate timestamps + with + Flux](https://docs.influxdata.com/influxdb/cloud/query-data/flux/manipulate-timestamps/). in: query name: start required: true schema: - type: timestamp - - description: stop time + format: unix timestamp + type: integer + - description: > + Latest time to include in results. 
+ + For more information about timestamps, see [Manipulate timestamps + with + Flux](https://docs.influxdata.com/influxdb/cloud/query-data/flux/manipulate-timestamps/). in: query name: stop required: false schema: - type: timestamp + format: unix timestamp + type: integer - description: return raw usage data in: query name: raw @@ -9643,10 +9557,13 @@ paths: - Usage /ping: get: + description: Returns the status and InfluxDB version of the instance. operationId: GetPing responses: '204': - description: OK + description: | + OK. + Headers contain InfluxDB version information. headers: X-Influxdb-Build: description: The type of InfluxDB build. @@ -9656,16 +9573,18 @@ paths: description: The version of InfluxDB. schema: type: integer - servers: - - url: '' - summary: Checks the status of InfluxDB instance and version of InfluxDB. + servers: [] + summary: Get the status and version of the instance tags: - Ping head: + description: Returns the status and InfluxDB version of the instance. operationId: HeadPing responses: '204': - description: OK + description: | + OK. + Headers contain InfluxDB version information. headers: X-Influxdb-Build: description: The type of InfluxDB build. @@ -9675,9 +9594,8 @@ paths: description: The version of InfluxDB. schema: type: integer - servers: - - url: '' - summary: Checks the status of InfluxDB instance and version of InfluxDB. + servers: [] + summary: Get the status and version of the instance tags: - Ping /api/v2/query: @@ -9728,16 +9646,16 @@ paths: - application/vnd.flux type: string - description: >- - Specifies the name of the organization executing the query. Takes - either the ID or Name. If both `orgID` and `org` are specified, - `org` takes precedence. + Name of the organization executing the query. Accepts either the ID + or Name. If you provide both `orgID` and `org`, `org` takes + precedence. in: query name: org schema: type: string - description: >- - Specifies the ID of the organization executing the query. If both - `orgID` and `org` are specified, `org` takes precedence. + ID of the organization executing the query. If you provide both + `orgID` and `org`, `org` takes precedence. in: query name: orgID schema: @@ -9758,10 +9676,6 @@ paths: responses: '200': content: - application/vnd.influx.arrow: - schema: - format: binary - type: string text/csv: schema: example: > @@ -9779,28 +9693,33 @@ paths: schema: default: identity description: > - The content coding: `gzip` for compressed data or `identity` - for unmodified, uncompressed data. + Content coding: `gzip` for compressed data or `identity` for + unmodified, uncompressed data. enum: - gzip - identity type: string Trace-Id: - description: >- - The Trace-Id header reports the request's trace ID, if one was - generated. + description: If generated, trace ID of the request. schema: - description: Specifies the request's trace ID. + description: Trace ID of a request. type: string '429': - description: >- - Token is temporarily over quota. The Retry-After header describes - when to try the read again. + description: | + #### InfluxDB Cloud: + - returns this error if a **read** or **write** request exceeds your + plan's [adjustable service quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum + [global limit](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#global-limits) + - returns `Retry-After` header that describes when to try the write again. 
+ + #### InfluxDB OSS: + - doesn't return this error. headers: Retry-After: description: >- - A non-negative decimal integer indicating the seconds to delay - after the response is received. + Non-negative decimal integer indicating seconds to wait before + retrying the request. schema: format: int32 type: integer @@ -9988,7 +9907,7 @@ paths: description: Unexpected error summary: List scripts tags: - - Invocable Scripts + - Invokable Scripts post: operationId: PostScripts requestBody: @@ -10010,7 +9929,7 @@ paths: description: Unexpected error summary: Create a script tags: - - Invocable Scripts + - Invokable Scripts /api/v2/scripts/{scriptID}: delete: description: Deletes a script and all associated records. @@ -10030,9 +9949,9 @@ paths: description: Unexpected error summary: Delete a script tags: - - Invocable Scripts + - Invokable Scripts get: - description: Uses script ID to retrieve details of an invocable script. + description: Uses script ID to retrieve details of an invokable script. operationId: GetScriptsID parameters: - description: The script ID. @@ -10053,10 +9972,10 @@ paths: description: Unexpected error summary: Retrieve a script tags: - - Invocable Scripts + - Invokable Scripts patch: description: > - Updates properties (`name`, `description`, and `script`) of an invocable + Updates properties (`name`, `description`, and `script`) of an invokable script. operationId: PatchScriptsID parameters: @@ -10085,7 +10004,7 @@ paths: description: Unexpected error summary: Update a script tags: - - Invocable Scripts + - Invokable Scripts /api/v2/scripts/{scriptID}/invoke: post: description: >- @@ -10115,7 +10034,7 @@ paths: description: Unexpected error summary: Invoke a script tags: - - Invocable Scripts + - Invokable Scripts /api/v2/setup: get: description: >- @@ -11653,6 +11572,22 @@ paths: summary of the run. The summary contains newly created resources. The diff compares the initial state to the state after the package applied. This corresponds to `"dryRun": true`. + '422': + content: + application/json: + schema: + allOf: + - $ref: '#/components/schemas/TemplateSummary' + - properties: + code: + type: string + message: + type: string + required: + - message + - code + type: object + description: Template failed validation default: content: application/json: @@ -12104,108 +12039,129 @@ paths: Writes data to a bucket. - To write data into InfluxDB, you need the following: + Use this endpoint to send data in [line + protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/) + format to InfluxDB. + + InfluxDB parses and validates line protocol in the request body, + + responds with success or failure, and then handles the write + asynchronously. - - **organization name or ID** – _See [View - organizations](https://docs.influxdata.com/influxdb/cloud/organizations/view-orgs/#view-your-organization-id) - for instructions on viewing your organization ID._ - - - **bucket** – _See [View - buckets](https://docs.influxdata.com/influxdb/cloud/organizations/buckets/view-buckets/) - for - instructions on viewing your bucket ID._ - - **API token** – _See [View - tokens](https://docs.influxdata.com/influxdb/cloud/security/tokens/view-tokens/) - for instructions on viewing your API token._ - - **InfluxDB URL** – _See [InfluxDB - URLs](https://docs.influxdata.com/influxdb/cloud/reference/urls/)_. - - - data in [line - protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol) - format. 
+ #### Required permissions - InfluxDB Cloud enforces rate and size limits different from InfluxDB - OSS. For details, see Responses. + - `write-buckets` or `write-bucket BUCKET_ID` - For more information and examples, see the following: + `BUCKET_ID` is the ID of the destination bucket. + + + #### Rate limits (with InfluxDB Cloud) + + + `write` rate limits apply. + + For more information, see [limits and adjustable + quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/). + + + #### Related guides + - [Write data with the InfluxDB API](https://docs.influxdata.com/influxdb/cloud/write-data/developer-tools/api). - [Optimize writes to InfluxDB](https://docs.influxdata.com/influxdb/cloud/write-data/best-practices/optimize-writes/). + + - [Troubleshoot issues writing + data](https://docs.influxdata.com/influxdb/cloud/write-data/troubleshoot/) operationId: PostWrite parameters: - $ref: '#/components/parameters/TraceSpan' - - description: > - The value tells InfluxDB what compression is applied to the line - protocol in the request payload. - - To make an API request with a GZIP payload, send `Content-Encoding: - gzip` as a request header. + - description: | + The compression applied to the line protocol in the request payload. + To send a GZIP payload, pass `Content-Encoding: gzip` header. in: header name: Content-Encoding schema: default: identity - description: >- - The content coding. Use `gzip` for compressed data or `identity` - for unmodified, uncompressed data. + description: > + Content coding. + + Use `gzip` for compressed data or `identity` for unmodified, + uncompressed data. enum: - gzip - identity type: string - - description: >- - The header value indicates the format of the data in the request - body. + - description: > + The format of the data in the request body. + + To send a line protocol payload, pass `Content-Type: text/plain; + charset=utf-8`. in: header name: Content-Type schema: default: text/plain; charset=utf-8 description: > - `text/plain` specifies line protocol. `UTF-8` is the default - character set. + `text/plain` is the content type for line protocol. `UTF-8` is the + default character set. enum: - text/plain - text/plain; charset=utf-8 - - application/vnd.influx.arrow type: string - - description: >- - The header value indicates the size of the entity-body, in bytes, - sent to the database. If the length is greater than the database's - `max body` configuration option, the server responds with status - code `413`. + - description: | + The size of the entity-body, in bytes, sent to InfluxDB. + If the length is greater than the `max body` configuration option, + the server responds with status code `413`. in: header name: Content-Length schema: description: The length in decimal number of octets. type: integer - - description: The header value specifies the response format. + - description: | + The content type that the client can understand. + Writes only return a response body if they fail--for example, + due to a formatting problem or quota limit. + + #### InfluxDB Cloud + + - Returns only `application/json` for format and limit errors. + - Returns only `text/html` for some quota limit errors. + + #### InfluxDB OSS + + - Returns only `application/json` for format and limit errors. + + #### Related guides + - [Troubleshoot issues writing data](https://docs.influxdata.com/influxdb/cloud/write-data/troubleshoot/). in: header name: Accept schema: default: application/json - description: The response format for errors. 
+ description: Error content type. enum: - application/json type: string - - description: >- - The parameter value specifies the destination organization for - writes. The database writes all points in the batch to this - organization. If you provide both `orgID` and `org` parameters, - `org` takes precedence. + - description: > + The destination organization for writes. + + The database writes all points in the batch to this organization. + + If you provide both `orgID` and `org` parameters, `org` takes + precedence. in: query name: org required: true schema: - description: Organization name or ID. + description: The organization name or ID. type: string - - description: >- - The parameter value specifies the ID of the destination organization - for writes. If both `orgID` and `org` are specified, `org` takes - precedence. + - description: | + The ID of the destination organization for writes. + If both `orgID` and `org` are specified, `org` takes precedence. in: query name: orgID schema: @@ -12215,9 +12171,9 @@ paths: name: bucket required: true schema: - description: All points within batch are written to this bucket. + description: InfluxDB writes all points in the batch to this bucket. type: string - - description: The precision for the unix timestamps within the body line-protocol. + - description: The precision for unix timestamps in the line protocol batch. in: query name: precision schema: @@ -12225,24 +12181,59 @@ paths: requestBody: content: text/plain: + examples: + plain-utf8: + value: > + airSensors,sensor_id=TLM0201 + temperature=73.97038159354763,humidity=35.23103248356096,co=0.48445310567793615 + 1630424257000000000 + + airSensors,sensor_id=TLM0202 + temperature=75.30007505999716,humidity=35.651929918691714,co=0.5141876544505826 + 1630424257000000000 schema: format: byte type: string - description: Data in line protocol format. + description: > + Data in line protocol format. + + + To send compressed data, do the following: + + 1. Use [GZIP](https://www.gzip.org/) to compress the line protocol data. + 2. In your request, send the compressed data and the + `Content-Encoding: gzip` header. + + #### Related guides + + + - [Best practices for optimizing + writes](https://docs.influxdata.com/influxdb/cloud/write-data/best-practices/optimize-writes/). required: true responses: '204': - description: >- - InfluxDB validated the request data format and accepted the data for - writing to the bucket. `204` doesn't indicate a successful write - operation since writes are asynchronous. See [how to check for write + description: > + Success. InfluxDB validated the request and the data format and + + accepted the data for writing to the bucket. + + Because data is written to InfluxDB asynchronously, data may not yet + be written to a bucket. + + + #### Related guides + + + - [How to check for write errors](https://docs.influxdata.com/influxdb/cloud/write-data/troubleshoot/). '400': content: application/json: examples: measurementSchemaFieldTypeConflict: - summary: Field type conflict thrown by an explicit bucket schema + summary: >- + InfluxDB Cloud field type conflict thrown by an explicit + bucket schema value: code: invalid message: >- @@ -12253,9 +12244,9 @@ paths: schema; got String but expected Float schema: $ref: '#/components/schemas/LineProtocolError' - description: >- - Bad request. The line protocol data in the request is malformed. The - response body contains the first malformed line in the data. + description: | + Bad request. 
The line protocol data in the request is malformed. + The response body contains the first malformed line in the data. InfluxDB rejected the batch and did not write any data. '401': content: @@ -12316,26 +12307,39 @@ paths: schema: type: string - description: > - The request payload is too large. InfluxDB rejected the batch and - did not write any data. + description: | + The request payload is too large. + InfluxDB rejected the batch and did not write any data. #### InfluxDB Cloud: - - returns this error if the payload exceeds the 50MB size limit. - - returns `Content-Type: text/html` for this error. + + - Returns this error if the payload exceeds the 50MB size limit. + - Returns `Content-Type: text/html` for this error. #### InfluxDB OSS: - - returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error. - - returns `Content-Type: application/json` for this error. + + - Returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error. + - Returns `Content-Type: application/json` for this error. '429': - description: >- - InfluxDB Cloud only. The token is temporarily over quota. The - Retry-After header describes when to try the write again. + description: | + Too many requests. + + #### InfluxDB Cloud + + - Returns this error if a **read** or **write** request exceeds your + plan's [adjustable service quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum + [global limit](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#global-limits). + - Returns `Retry-After` header that describes when to try the write again. + + #### InfluxDB OSS + + - Doesn't return this error. headers: Retry-After: description: >- - A non-negative decimal integer indicating the seconds to delay - after the response is received. + Non-negative decimal integer indicating seconds to wait before + retrying the request. schema: format: int32 type: integer @@ -12351,14 +12355,25 @@ paths: $ref: '#/components/schemas/Error' description: Internal server error. '503': - description: >- - The server is temporarily unavailable to accept writes. The - `Retry-After` header describes when to try the write again. + description: | + Service unavailable. + + #### InfluxDB Cloud + + - Returns this error if series cardinality exceeds your plan's + [adjustable service quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#adjustable-service-quotas). + See [how to resolve high series cardinality](https://docs.influxdata.com/influxdb/cloud/write-data/best-practices/resolve-high-cardinality/). + + #### InfluxDB OSS + + - Returns this error if + the server is temporarily unavailable to accept writes. + - Returns `Retry-After` header that describes when to try the write again. headers: Retry-After: description: >- - A non-negative decimal integer indicating the seconds to delay - after the response is received. + Non-negative decimal integer indicating seconds to wait before + retrying the request. schema: format: int32 type: integer @@ -12372,44 +12387,111 @@ security: servers: - url: / tags: - - description: | - The InfluxDB `/api/v2` API requires authentication for all requests. - Use InfluxDB API tokens to authenticate requests to the `/api/v2` API. 
+ - description: > + Use one of the following schemes to authenticate to the InfluxDB API: - For more information, see - [Token authentication](#section/Authentication/TokenAuthentication) + - [Token authentication](#section/Authentication/TokenAuthentication) + + - [Basic authentication](#section/Authentication/BasicAuthentication) + + - [Querystring + authentication](#section/Authentication/QuerystringAuthentication) name: Authentication x-traitTag: true - description: > - Create and manage API tokens. An **authorization** associates a list of - permissions to an **organization** and provides a token for API access. + Create and manage API tokens. + + An **authorization** associates a list of permissions to an + + **organization** and provides a token for API access. + Optionally, you can restrict an authorization and its token to a specific user. - For more information and examples, see the following: + ### Related guides + - [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication). - [Manage API tokens](/influxdb/cloud/security/tokens/). - [Assign a token to a specific user](/influxdb/cloud/security/tokens/create-token/). name: Authorizations - - Bucket Schemas - - Buckets - - Cells - - Checks - - Dashboards - - DBRPs - - Delete - - DemoDataBuckets - - Invocable Scripts - - Labels - - Limits - - NotificationEndpoints - - NotificationRules - - Organizations - - Ping + - name: Bucket Schemas + - name: Buckets + - name: Cells + - name: Checks + - name: Dashboards + - name: DBRPs + - name: Delete + - description: > + InfluxDB API endpoints use standard HTTP request and response headers. + + + **Note**: Not all operations support all headers. + + + ### Request headers + + + | Header | Value type | + Description | + + |:------------------------ |:--------------------- + |:-------------------------------------------| + + | `Accept` | string | The content type that + the client can understand. | + + | `Authorization` | string | The authorization + scheme and credential. | + + | `Content-Encoding` | string | The compression + applied to the line protocol in the request payload. | + + | `Content-Length` | integer | The size of the + entity-body, in bytes, sent to the database. | + + | `Content-Type` | string | The format of the + data in the request body. | + name: Headers + x-traitTag: true + - description: > + Manage and execute scripts as API endpoints in InfluxDB. + + + An API Invokable Script assigns your custom Flux script to a new + + InfluxDB API endpoint for your organization. + + Invokable scripts let you execute your script as an HTTP request to the + endpoint. + + + Invokable scripts accept parameters. + + Add parameter references in your script as `params.myparameter`. + + When you `invoke` your script, you send parameters as key-value pairs in + the `params` object. + + Then, InfluxDB executes your script with the key-value pairs as arguments, + and returns the result. + + + ### Related guides + + + - [Invoke custom + scripts](/influxdb/cloud/api-guide/api-invokable-scripts/). + name: Invokable Scripts + - name: Labels + - name: Limits + - name: NotificationEndpoints + - name: NotificationRules + - name: Organizations + - name: Ping - description: | Retrieve data, analyze queries, and get query suggestions. name: Query @@ -12424,7 +12506,7 @@ tags: for popular languages and ready to import into your application. 
name: Quick start x-traitTag: true - - Resources + - name: Resources - description: > The InfluxDB API uses standard HTTP status codes for success and failure responses. @@ -12479,20 +12561,20 @@ tags: when to try the request again. | name: Response codes x-traitTag: true - - Routes - - Rules - - Secrets - - Setup - - Signin - - Signout - - Tasks - - Telegraf Plugins - - Telegrafs - - Templates - - Usage - - Users - - Variables - - Views + - name: Routes + - name: Rules + - name: Secrets + - name: Setup + - name: Signin + - name: Signout + - name: Tasks + - name: Telegraf Plugins + - name: Telegrafs + - name: Templates + - name: Usage + - name: Users + - name: Variables + - name: Views - description: | Write time series data to buckets. name: Write @@ -12501,11 +12583,14 @@ x-tagGroups: tags: - Quick start - Authentication + - Headers - Response codes - name: Data I/O endpoints tags: - Write - Query + - Invokable Scripts + - Tasks - name: Resource endpoints tags: - Buckets @@ -12531,8 +12616,7 @@ x-tagGroups: - Dashboards - DBRPs - Delete - - DemoDataBuckets - - Invocable Scripts + - Invokable Scripts - Labels - Limits - NotificationEndpoints diff --git a/api-docs/cloud/swaggerV1Compat.yml b/api-docs/cloud/swaggerV1Compat.yml index 1a2cdaef3..b06e5956c 100644 --- a/api-docs/cloud/swaggerV1Compat.yml +++ b/api-docs/cloud/swaggerV1Compat.yml @@ -36,7 +36,7 @@ paths: type: string required: true description: >- - Bucket to write to. If none exists, a bucket will be created with a + Bucket to write to. If none exists, InfluxDB creates a bucket with a default 3-day retention policy. - in: query name: rp @@ -486,8 +486,8 @@ tags: x-traitTag: true - - Query - - Write + - name: Query + - name: Write x-tagGroups: - name: Overview tags: diff --git a/api-docs/generate-api-docs.sh b/api-docs/generate-api-docs.sh index 6c92284bb..3c6b240ad 100755 --- a/api-docs/generate-api-docs.sh +++ b/api-docs/generate-api-docs.sh @@ -46,17 +46,12 @@ weight: 304 # npm_config_yes=true npx overrides the prompt # and (vs. npx --yes) is compatible with npm@6 and npm@7. - openapiCLI="@redocly/openapi-cli" redocCLI="redoc-cli@0.12.3" - npm --version - - # Use Redoc's openapi-cli to regenerate the spec with custom decorations. - INFLUXDB_VERSION=$version npm_config_yes=true npx $openapiCLI bundle $version/ref.yml \ - --config=./.redocly.yaml \ - -o $version/ref.yml + npx --version # Use Redoc to generate the v2 API html + echo "Bundling ${version}/ref.yml" npm_config_yes=true npx $redocCLI bundle $version/ref.yml \ -t template.hbs \ --title="InfluxDB $titleVersion API documentation" \ @@ -67,12 +62,8 @@ weight: 304 --templateOptions.version="$version" \ --templateOptions.titleVersion="$titleVersion" \ - # Use Redoc's openapi-cli to regenerate the v1-compat spec with custom decorations. - INFLUXDB_API_VERSION=v1compat INFLUXDB_VERSION=$version npm_config_yes=true npx $openapiCLI bundle $version/swaggerV1Compat.yml \ - --config=./.redocly.yaml \ - -o $version/swaggerV1Compat.yml - # Use Redoc to generate the v1 compatibility API html + echo "Bundling ${version}/swaggerV1Compat.yml" npm_config_yes=true npx $redocCLI bundle $version/swaggerV1Compat.yml \ -t template.hbs \ --title="InfluxDB $titleVersion v1 compatibility API documentation" \ diff --git a/api-docs/getswagger.sh b/api-docs/getswagger.sh index 3d43a8671..eaa2d7560 100755 --- a/api-docs/getswagger.sh +++ b/api-docs/getswagger.sh @@ -11,12 +11,14 @@ # - a base URL using the -b flag. # The baseURL specifies where to retrieve the openapi files from. 
# The default baseUrl is the master branch of the influxdata/openapi repo.
+# The default baseUrl for OSS is the tag for the latest InfluxDB release.
+# When a new OSS version is released, update baseUrlOSS with the tag (/influxdb-oss-[SEMANTIC_VERSION]).
# For local development, pass your openapi directory using the file:/// protocol.
#
# Syntax:
# sh ./getswagger.sh <context>
# sh ./getswagger.sh <context> -b <baseUrl>
-# sh .getswagger.sh -c <context> -o <ossVersion> -b <baseUrl>
+# sh ./getswagger.sh -c <context> -o <ossVersion> -b <baseUrl>
#
# Examples:
# sh ./getswagger.sh cloud
#
@@ -25,6 +27,7 @@
versionDirs=($(ls -d */))
latestOSS=${versionDirs[${#versionDirs[@]}-1]}
baseUrl="https://raw.githubusercontent.com/influxdata/openapi/master"
+baseUrlOSS="https://raw.githubusercontent.com/influxdata/openapi/docs-release/influxdb-oss"
ossVersion=${latestOSS%/}
verbose=""
context=""
@@ -63,6 +66,7 @@ case "$subcommand" in
      ;;
      b)
        baseUrl=$OPTARG
+        baseUrlOSS=$OPTARG
      ;;
      o)
        ossVersion=$OPTARG
@@ -87,20 +91,43 @@ function showArgs {
  echo "ossVersion: $ossVersion";
}

+function postProcess() {
+  # Use npx to install and run the specified version of openapi-cli.
+  # npm_config_yes=true npx overrides the prompt
+  # and (vs. npx --yes) is compatible with npm@6 and npm@7.
+  specPath=$1
+  version="$2"
+  apiVersion="$3"
+
+  openapiCLI="@redocly/openapi-cli"
+
+  npx --version
+
+  # Use Redoc's openapi-cli to regenerate the spec with custom decorations.
+  INFLUXDB_API_VERSION=$apiVersion \
+  INFLUXDB_VERSION=$version \
+  npm_config_yes=true \
+  npx $openapiCLI bundle $specPath \
+    --config=./.redocly.yaml \
+    -o $specPath
+}
+
function updateCloud {
-  echo "Updating Cloud openapi..."
  curl ${verbose} ${baseUrl}/contracts/ref/cloud.yml -s -o cloud/ref.yml
+  postProcess $_ cloud
}

function updateOSS {
-  echo "Updating OSS ${ossVersion} openapi..."
-  mkdir -p ${ossVersion} && curl ${verbose} ${baseUrl}/contracts/ref/oss.yml -s -o $_/ref.yml
+  mkdir -p ${ossVersion} && curl ${verbose} ${baseUrlOSS}/contracts/ref/oss.yml -s -o $_/ref.yml
+  postProcess $_ $ossVersion
}

function updateV1Compat {
-  echo "Updating Cloud and ${ossVersion} v1 compatibility openapi..."
  curl ${verbose} ${baseUrl}/contracts/swaggerV1Compat.yml -s -o cloud/swaggerV1Compat.yml
+  postProcess $_ cloud v1compat
+  mkdir -p ${ossVersion} && cp cloud/swaggerV1Compat.yml $_/swaggerV1Compat.yml
+  postProcess $_ $ossVersion v1compat
}

if [ ! -z ${verbose} ];
diff --git a/api-docs/openapi/content/cloud/security-schemes.yml b/api-docs/openapi/content/cloud/security-schemes.yml
deleted file mode 100644
index 8bb0bcd7e..000000000
--- a/api-docs/openapi/content/cloud/security-schemes.yml
+++ /dev/null
@@ -1,56 +0,0 @@
-TokenAuthentication:
-  type: http
-  scheme: token
-  bearerFormat: InfluxDB Token String
-  description: |
-    ### Token authentication scheme
-
-    InfluxDB API tokens ensure secure interaction between users and data. A token belongs to an organization and identifies InfluxDB permissions within the organization.
-
-    Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request.
-
-    ### Example
-
-    `curl https://us-east-1-1.aws.cloud2.influxdata.com/
-    --header "Authorization: Token YOUR_API_TOKEN"`
-
-    For more information and examples, see the following:
-    - [Use tokens in API requests](https://docs.influxdata.com/influxdb/cloud/api-guide/api_intro/#authentication).
-    - [Manage API tokens](https://docs.influxdata.com/influxdb/cloud/security/tokens).
-    - [`/authorizations`](#tag/Authorizations) endpoint.
-BasicAuthentication: - type: http - scheme: basic - description: | - ### Basic authentication scheme - - Use HTTP Basic Auth with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme): - - **username**: InfluxDB Cloud username - - **password**: InfluxDB Cloud API token - - ### Example - - `curl --get "https://europe-west1-1.gcp.cloud2.influxdata.com/query" - --user "YOUR_USERNAME":"YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/cloud/reference/api/influxdb-1x/). -QuerystringAuthentication: - type: apiKey - in: query - name: u=&p= - description: | - ### Querystring authentication scheme - - Use InfluxDB 1.x API parameters to provide credentials through the query string. - - Username and password schemes require the following credentials: - - **username**: InfluxDB Cloud username - - **password**: InfluxDB Cloud API token - - ### Example - - `curl --get "https://europe-west1-1.gcp.cloud2.influxdata.com/query" - --data-urlencode "u=YOUR_USERNAME" - --data-urlencode "p=YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/cloud/reference/api/influxdb-1x/). diff --git a/api-docs/openapi/content/cloud/tag-groups.yml b/api-docs/openapi/content/cloud/tag-groups.yml deleted file mode 100644 index 975b8831a..000000000 --- a/api-docs/openapi/content/cloud/tag-groups.yml +++ /dev/null @@ -1,26 +0,0 @@ -- name: Overview - tags: - - Quick start - - Authentication - - Response codes -- name: Data I/O endpoints - tags: - - Write - - Query -- name: Resource endpoints - tags: - - Buckets - - Dashboards - - Tasks - - Resources -- name: Security and access endpoints - tags: - - Authorizations - - Organizations - - Users -- name: System information endpoints - tags: - - Ping - - Routes -- name: All endpoints - tags: [] diff --git a/api-docs/openapi/content/cloud/tags.yml b/api-docs/openapi/content/cloud/tags.yml deleted file mode 100644 index 99031e526..000000000 --- a/api-docs/openapi/content/cloud/tags.yml +++ /dev/null @@ -1,55 +0,0 @@ -- name: Authentication - description: | - Use one of the following schemes to authenticate to the InfluxDB API: - - [Token authentication](#section/Authentication/TokenAuthentication) - - [Basic authentication](#section/Authentication/BasicAuthentication) - - [Querystring authentication](#section/Authentication/QuerystringAuthentication) - - x-traitTag: true -- name: Invokable Scripts - description: | - Manage and execute scripts as API endpoints in InfluxDB. - - An API Invokable Script assigns your custom Flux script to a new InfluxDB API endpoint for your organization. - Invokable scripts let you execute your script as an HTTP request to the endpoint. - - Invokable scripts accept parameters. Add parameter references in your script as `params.myparameter`. - When you `invoke` your script, you send parameters as key-value pairs in the `params` object. - Then, InfluxDB executes your script with the key-value pairs as arguments, and returns the result. - - For more information and examples, see [Invoke custom scripts](https://docs.influxdata.com/influxdb/cloud/api-guide/api-invokable-scripts). 
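The Invokable Scripts description above explains the parameter flow but not what an invocation looks like on the wire. A hedged sketch follows, using the Cloud `/api/v2/scripts/{scriptID}/invoke` endpoint that this tag documents; `SCRIPT_ID`, the `mybucket` parameter, and `INFLUX_TOKEN` are hypothetical placeholders:

```sh
# Hedged sketch: invoke a stored script by ID, passing a parameter
# that the script reads as params.mybucket. SCRIPT_ID and the
# bucket name are hypothetical.
curl --request POST \
  "https://us-east-1-1.aws.cloud2.influxdata.com/api/v2/scripts/SCRIPT_ID/invoke" \
  --header "Authorization: Token ${INFLUX_TOKEN}" \
  --header "Content-Type: application/json" \
  --data '{"params": {"mybucket": "air_sensor"}}'
```

If the endpoint behaves as described, InfluxDB substitutes the `params` key-value pairs into the stored Flux script and returns the script result in the response body.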
-- name: Quick start - x-traitTag: true - description: | - See the [**API Quick Start**](https://docs.influxdata.com/influxdb/cloud/api-guide/api_intro/) to get up and running authenticating with tokens, writing to buckets, and querying data. - - [**InfluxDB API client libraries**](https://docs.influxdata.com/influxdb/cloud/api-guide/client-libraries/) are available for popular languages and ready to import into your application. -- name: Response codes - x-traitTag: true - description: | - The InfluxDB API uses standard HTTP status codes for success and failure responses. - The response body may include additional details. For details about a specific operation's response, see **Responses** and **Response Samples** for that operation. - - API operations may return the following HTTP status codes: - - |  Code  | Status | Description | - |:-----------:|:------------------------ |:--------------------- | - | `200` | Success | | - | `204` | No content | For a `POST` request, `204` indicates that InfluxDB accepted the request and request data is valid. Asynchronous operations, such as `write`, might not have completed yet. | - | `400` | Bad request | `Authorization` header is missing or malformed or the API token does not have permission for the operation. | - | `401` | Unauthorized | May indicate one of the following:
  • `Authorization: Token` header is missing or malformed
  • API token value is missing from the header
  • API token does not have permission. For more information about token types and permissions, see [Manage API tokens](https://docs.influxdata.com/influxdb/cloud/security/tokens/)
  • | - | `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. | - | `413` | Request entity too large | Request payload exceeds the size limit. | - | `422` | Unprocessible entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. | - | `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. | - | `500` | Internal server error | | - | `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. | -- name: Query - description: | - Retrieve data, analyze queries, and get query suggestions. -- name: Write - description: | - Write time series data to buckets. -- name: Authorizations - description: | - Create and manage API tokens. An **authorization** associates a list of permissions to an **organization** and provides a token for API access. To assign a token to a specific user, scope the authorization to the user ID. diff --git a/api-docs/openapi/content/content.js b/api-docs/openapi/content/content.js index f9045476d..131f045a9 100644 --- a/api-docs/openapi/content/content.js +++ b/api-docs/openapi/content/content.js @@ -2,18 +2,20 @@ const path = require('path'); const { toJSON } = require('../plugins/helpers/content-helper'); function getVersion(filename) { - return path.join(__dirname, process.env.INFLUXDB_VERSION, (process.env.INFLUXDB_API_VERSION || ''), filename); + return path.join(__dirname, process.env.INFLUXDB_VERSION, + (process.env.INFLUXDB_API_VERSION || ''), + filename); } -const info = toJSON(getVersion('info.yml')); +const info = () => toJSON(getVersion('info.yml')); -const securitySchemes = toJSON(getVersion('security-schemes.yml')); +const securitySchemes = () => toJSON(getVersion('security-schemes.yml')); -const servers = toJSON(path.join(__dirname, 'servers.yml')); +const servers = () => toJSON(path.join(__dirname, 'servers.yml')); -const tags = toJSON(getVersion('tags.yml')); +const tags = () => toJSON(getVersion('tags.yml')); -const tagGroups = toJSON(getVersion('tag-groups.yml')); +const tagGroups = () => toJSON(getVersion('tag-groups.yml')); module.exports = { info, @@ -22,4 +24,3 @@ module.exports = { tagGroups, tags, } - diff --git a/api-docs/openapi/content/v2.0/info.yml b/api-docs/openapi/content/v2.0/info.yml index de7a5cc2c..ece63acba 100644 --- a/api-docs/openapi/content/v2.0/info.yml +++ b/api-docs/openapi/content/v2.0/info.yml @@ -1,3 +1,4 @@ title: InfluxDB OSS API Service +version: 2.0.0 description: | The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. diff --git a/api-docs/openapi/content/v2.0/security-schemes.yml b/api-docs/openapi/content/v2.0/security-schemes.yml deleted file mode 100644 index bc9504cca..000000000 --- a/api-docs/openapi/content/v2.0/security-schemes.yml +++ /dev/null @@ -1,58 +0,0 @@ -TokenAuthentication: - type: http - scheme: token - bearerFormat: InfluxDB Token String - description: | - ### Token authentication scheme - - InfluxDB API tokens ensure secure interaction between users and data. A token belongs to an organization and identifies InfluxDB permissions within the organization. - - Include your API token in an `Authorization: Token YOUR_API_TOKEN` HTTP header with each request. 
- - ### Example - - `curl http://localhost:8086/ping - --header "Authorization: Token YOUR_API_TOKEN"` - - For more information and examples, see the following: - - [Use tokens in API requests](https://docs.influxdata.com/influxdb/v2.0/api-guide/api_intro/#authentication). - - [Manage API tokens](https://docs.influxdata.com/influxdb/v2.0/security/tokens). - - [`/authorizations`](#tag/Authorizations) endpoint. -BasicAuthentication: - type: http - scheme: basic - description: | - ### Basic authentication scheme - - Use HTTP Basic Auth with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme). - - Username and password schemes require the following credentials: - - **username**: 1.x username (this is separate from the UI login username) - - **password**: 1.x password or InfluxDB API token - - ### Example - - `curl --get "http://localhost:8086/query" - --user "YOUR_1.x_USERNAME":"YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/) -QuerystringAuthentication: - type: apiKey - in: query - name: u=&p= - description: | - ### Querystring authentication scheme - - Use InfluxDB 1.x API parameters to provide credentials through the query string. - - Username and password schemes require the following credentials: - - **username**: 1.x username (this is separate from the UI login username) - - **password**: 1.x password or InfluxDB API token - - ### Example - - `curl --get "http://localhost:8086/query" - --data-urlencode "u=YOUR_1.x_USERNAME" - --data-urlencode "p=YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/) diff --git a/api-docs/openapi/content/v2.0/tag-groups.yml b/api-docs/openapi/content/v2.0/tag-groups.yml deleted file mode 100644 index da30ad692..000000000 --- a/api-docs/openapi/content/v2.0/tag-groups.yml +++ /dev/null @@ -1,28 +0,0 @@ -- name: Overview - tags: - - Quick start - - Authentication - - Response codes -- name: Data I/O endpoints - tags: - - Write - - Query -- name: Resource endpoints - tags: - - Buckets - - Dashboards - - Tasks - - Resources -- name: Security and access endpoints - tags: - - Authorizations - - Organizations - - Users -- name: System information endpoints - tags: - - Health - - Ping - - Ready - - Routes -- name: All endpoints - tags: [] diff --git a/api-docs/openapi/content/v2.0/tags.yml b/api-docs/openapi/content/v2.0/tags.yml deleted file mode 100644 index 80c9055dd..000000000 --- a/api-docs/openapi/content/v2.0/tags.yml +++ /dev/null @@ -1,43 +0,0 @@ -- name: Authentication - description: | - Use one of the following schemes to authenticate to the InfluxDB API: - - [Token authentication](#section/Authentication/TokenAuthentication) - - [Basic authentication](#section/Authentication/BasicAuthentication) - - [Querystring authentication](#section/Authentication/QuerystringAuthentication) - - x-traitTag: true -- name: Quick start - x-traitTag: true - description: | - See the [**API Quick Start**](https://docs.influxdata.com/influxdb/v2.0/api-guide/api_intro/) to get up and running authenticating with tokens, writing to buckets, and querying data. 
- - [**InfluxDB API client libraries**](https://docs.influxdata.com/influxdb/v2.0/api-guide/client-libraries/) are available for popular languages and ready to import into your application. -- name: Response codes - x-traitTag: true - description: | - The InfluxDB API uses standard HTTP status codes for success and failure responses. - The response body may include additional details. For details about a specific operation's response, see **Responses** and **Response Samples** for that operation. - - API operations may return the following HTTP status codes: - - |  Code  | Status | Description | - |:-----------:|:------------------------ |:--------------------- | - | `200` | Success | | - | `204` | No content | For a `POST` request, `204` indicates that InfluxDB accepted the request and request data is valid. Asynchronous operations, such as `write`, might not have completed yet. | - | `400` | Bad request | `Authorization` header is missing or malformed or the API token does not have permission for the operation. | - | `401` | Unauthorized | May indicate one of the following:
  • `Authorization: Token` header is missing or malformed
  • API token value is missing from the header
  • API token does not have permission. For more information about token types and permissions, see [Manage API tokens](https://docs.influxdata.com/influxdb/v2.0/security/tokens/)
  • | - | `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. | - | `413` | Request entity too large | Request payload exceeds the size limit. | - | `422` | Unprocessible entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. | - | `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. | - | `500` | Internal server error | | - | `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. | -- name: Query - description: | - Retrieve data, analyze queries, and get query suggestions. -- name: Write - description: | - Write time series data to buckets. -- name: Authorizations - description: | - Create and manage API tokens. An **authorization** associates a list of permissions to an **organization** and provides a token for API access. To assign a token to a specific user, scope the authorization to the user ID. diff --git a/api-docs/openapi/content/v2.1/info.yml b/api-docs/openapi/content/v2.1/info.yml index de7a5cc2c..ece63acba 100644 --- a/api-docs/openapi/content/v2.1/info.yml +++ b/api-docs/openapi/content/v2.1/info.yml @@ -1,3 +1,4 @@ title: InfluxDB OSS API Service +version: 2.0.0 description: | The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. diff --git a/api-docs/openapi/content/v2.1/security-schemes.yml b/api-docs/openapi/content/v2.1/security-schemes.yml deleted file mode 100644 index c12659c01..000000000 --- a/api-docs/openapi/content/v2.1/security-schemes.yml +++ /dev/null @@ -1,58 +0,0 @@ -TokenAuthentication: - type: http - scheme: token - bearerFormat: InfluxDB Token String - description: | - ### Token authentication scheme - - InfluxDB API tokens ensure secure interaction between users and data. A token belongs to an organization and identifies InfluxDB permissions within the organization. - - Include your API token in an `Authentication: Token YOUR_API_TOKEN` HTTP header with each request. - - ### Example - - `curl http://localhost:8086/ping - --header "Authentication: Token YOUR_API_TOKEN"` - - For more information and examples, see the following: - - [Use tokens in API requests](https://docs.influxdata.com/influxdb/v2.1/api-guide/api_intro/#authentication). - - [Manage API tokens](https://docs.influxdata.com/influxdb/v2.1/security/tokens). - - [`/authorizations`](#tag/Authorizations) endpoint. -BasicAuthentication: - type: http - scheme: basic - description: | - ### Basic authentication scheme - - Use HTTP Basic Auth with clients that support the InfluxDB 1.x convention of username and password (that don't support the `Authorization: Token` scheme). 
- - Username and password schemes require the following credentials: - - **username**: 1.x username (this is separate from the UI login username) - - **password**: 1.x password or InfluxDB API token - - ### Example - - `curl --get "http://localhost:8086/query" - --user "YOUR_1.x_USERNAME":"YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.1/reference/api/influxdb-1x/) -QuerystringAuthentication: - type: apiKey - in: query - name: u=&p= - description: | - ### Querystring authentication scheme - - Use InfluxDB 1.x API parameters to provide credentials through the query string. - - Username and password schemes require the following credentials: - - **username**: 1.x username (this is separate from the UI login username) - - **password**: 1.x password or InfluxDB API token - - ### Example - - `curl --get "http://localhost:8086/query" - --data-urlencode "u=YOUR_1.x_USERNAME" - --data-urlencode "p=YOUR_TOKEN_OR_PASSWORD"` - - For more information and examples, see how to [authenticate with a username and password scheme](https://docs.influxdata.com/influxdb/v2.1/reference/api/influxdb-1x/) diff --git a/api-docs/openapi/content/v2.1/tag-groups.yml b/api-docs/openapi/content/v2.1/tag-groups.yml deleted file mode 100644 index da30ad692..000000000 --- a/api-docs/openapi/content/v2.1/tag-groups.yml +++ /dev/null @@ -1,28 +0,0 @@ -- name: Overview - tags: - - Quick start - - Authentication - - Response codes -- name: Data I/O endpoints - tags: - - Write - - Query -- name: Resource endpoints - tags: - - Buckets - - Dashboards - - Tasks - - Resources -- name: Security and access endpoints - tags: - - Authorizations - - Organizations - - Users -- name: System information endpoints - tags: - - Health - - Ping - - Ready - - Routes -- name: All endpoints - tags: [] diff --git a/api-docs/openapi/content/v2.1/tags.yml b/api-docs/openapi/content/v2.1/tags.yml deleted file mode 100644 index 7659f4e77..000000000 --- a/api-docs/openapi/content/v2.1/tags.yml +++ /dev/null @@ -1,43 +0,0 @@ -- name: Authentication - description: | - Use one of the following schemes to authenticate to the InfluxDB API: - - [Token authentication](#section/Authentication/TokenAuthentication) - - [Basic authentication](#section/Authentication/BasicAuthentication) - - [Querystring authentication](#section/Authentication/QuerystringAuthentication) - - x-traitTag: true -- name: Quick start - x-traitTag: true - description: | - See the [**API Quick Start**](https://docs.influxdata.com/influxdb/v2.1/api-guide/api_intro/) to get up and running authenticating with tokens, writing to buckets, and querying data. - - [**InfluxDB API client libraries**](https://docs.influxdata.com/influxdb/v2.1/api-guide/client-libraries/) are available for popular languages and ready to import into your application. -- name: Response codes - x-traitTag: true - description: | - The InfluxDB API uses standard HTTP status codes for success and failure responses. - The response body may include additional details. For details about a specific operation's response, see **Responses** and **Response Samples** for that operation. - - API operations may return the following HTTP status codes: - - |  Code  | Status | Description | - |:-----------:|:------------------------ |:--------------------- | - | `200` | Success | | - | `204` | No content | For a `POST` request, `204` indicates that InfluxDB accepted the request and request data is valid. 
Asynchronous operations, such as `write`, might not have completed yet. | - | `400` | Bad request | `Authorization` header is missing or malformed or the API token does not have permission for the operation. | - | `401` | Unauthorized | May indicate one of the following:
  • `Authorization: Token` header is missing or malformed
  • API token value is missing from the header
  • API token does not have permission. For more information about token types and permissions, see [Manage API tokens](https://docs.influxdata.com/influxdb/v2.1/security/tokens/)
  • | - | `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. | - | `413` | Request entity too large | Request payload exceeds the size limit. | - | `422` | Unprocessible entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. | - | `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. | - | `500` | Internal server error | | - | `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. | -- name: Query - description: | - Retrieve data, analyze queries, and get query suggestions. -- name: Write - description: | - Write time series data to buckets. -- name: Authorizations - description: | - Create and manage API tokens. An **authorization** associates a list of permissions to an **organization** and provides a token for API access. To assign a token to a specific user, scope the authorization to the user ID. diff --git a/api-docs/openapi/content/v2.2/info.yml b/api-docs/openapi/content/v2.2/info.yml new file mode 100644 index 000000000..ece63acba --- /dev/null +++ b/api-docs/openapi/content/v2.2/info.yml @@ -0,0 +1,4 @@ +title: InfluxDB OSS API Service +version: 2.0.0 +description: | + The InfluxDB v2 API provides a programmatic interface for all interactions with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. diff --git a/api-docs/openapi/content/v2.2/v1compat/info.yml b/api-docs/openapi/content/v2.2/v1compat/info.yml new file mode 100644 index 000000000..3ef1f3cda --- /dev/null +++ b/api-docs/openapi/content/v2.2/v1compat/info.yml @@ -0,0 +1 @@ +title: InfluxDB OSS v1 compatibility API documentation diff --git a/api-docs/openapi/content/v2.2/v1compat/tag-groups.yml b/api-docs/openapi/content/v2.2/v1compat/tag-groups.yml new file mode 100644 index 000000000..22cf0f916 --- /dev/null +++ b/api-docs/openapi/content/v2.2/v1compat/tag-groups.yml @@ -0,0 +1,9 @@ +- name: Overview + tags: + - Authentication +- name: Data I/O endpoints + tags: + - Write + - Query +- name: All endpoints + tags: [] diff --git a/api-docs/openapi/plugins/decorators/paths/strip-trailing-slash.js b/api-docs/openapi/plugins/decorators/paths/strip-trailing-slash.js new file mode 100644 index 000000000..cc9bdad3f --- /dev/null +++ b/api-docs/openapi/plugins/decorators/paths/strip-trailing-slash.js @@ -0,0 +1,18 @@ +module.exports = StripVersionPrefix; + +/** @type {import('@redocly/openapi-cli').OasDecorator} */ +function StripVersionPrefix() { + return { + PathMap: { + leave(paths, ctx) { + Object.keys(paths).forEach(function(p) { + if(p.length > 1 && p.endsWith('/')) { + const props = JSON.stringify(paths[p]); + paths[p.slice(0, -1)] = JSON.parse(props); + delete paths[p]; + } + }); + } + } + } +} \ No newline at end of file diff --git a/api-docs/openapi/plugins/decorators/paths/strip-version-prefix.js b/api-docs/openapi/plugins/decorators/paths/strip-version-prefix.js index 407893f98..77ca0b6d7 100644 --- a/api-docs/openapi/plugins/decorators/paths/strip-version-prefix.js +++ b/api-docs/openapi/plugins/decorators/paths/strip-version-prefix.js @@ -6,6 +6,7 @@ function StripVersionPrefix() { PathMap: { leave(paths, ctx) { const nonversioned = [ + '/debug', '/health', '/legacy/authorizations', '/legacy/authorizations/{authID}', diff --git 
a/api-docs/openapi/plugins/decorators/replace-shortcodes.js b/api-docs/openapi/plugins/decorators/replace-shortcodes.js
index 9591431f2..586d147b5 100644
--- a/api-docs/openapi/plugins/decorators/replace-shortcodes.js
+++ b/api-docs/openapi/plugins/decorators/replace-shortcodes.js
@@ -1,37 +1,41 @@
 module.exports = ReplaceShortcodes;

-function replaceDocsUrl(node) {
-  const shortcode = /\{\{\% INFLUXDB_DOCS_URL \%\}\}/g;
+function replaceDocsUrl(field) {
+  if(!field) { return }
+  const shortcode = '{{% INFLUXDB_DOCS_URL %}}';
   let replacement = `/influxdb/${process.env.INFLUXDB_VERSION}`;
-  let description = node.description?.replace(shortcode, replacement);
-  const fullUrl = /https:\/\/docs\.influxdata\.com\/influxdb\//g;
+  let replaced = field.replaceAll(shortcode, replacement);
+  const fullUrl = 'https://docs.influxdata.com/influxdb/';
   replacement = "/influxdb/";
-  return description?.replace(fullUrl, replacement);
+  return replaced.replaceAll(fullUrl, replacement);
 }

 /** @type {import('@redocly/openapi-cli').OasDecorator} */
 function docsUrl() {
   return {
-    Info: {
-      leave(info, ctx) {
-        info.description = replaceDocsUrl(info);
-      }
-    },
-    PathItem: {
-      leave(pathItem, ctx) {
-        pathItem.description = replaceDocsUrl(pathItem);
-      }
-    },
-    Tag: {
-      leave(tag, ctx) {
-        tag.description = replaceDocsUrl(tag);
-      }
-    },
-    SecurityScheme: {
-      leave(scheme, ctx) {
-        scheme.description = replaceDocsUrl(scheme);
+    DefinitionRoot: {
+      Info: {
+        leave(info, ctx) {
+          info.description = replaceDocsUrl(info.description);
+        },
+      },
+      PathItem: {
+        leave(pathItem, ctx) {
+          pathItem.description = replaceDocsUrl(pathItem.description);
+        }
+      },
+      Tag: {
+        leave(tag, ctx) {
+          tag.description = replaceDocsUrl(tag.description);
+        }
+      },
+      SecurityScheme: {
+        leave(scheme, ctx) {
+          scheme.description = replaceDocsUrl(scheme.description);
+        }
       }
     }
+  }
 }
diff --git a/api-docs/openapi/plugins/decorators/security/set-security-schemes.js b/api-docs/openapi/plugins/decorators/security/set-security-schemes.js
deleted file mode 100644
index 78e2b362d..000000000
--- a/api-docs/openapi/plugins/decorators/security/set-security-schemes.js
+++ /dev/null
@@ -1,18 +0,0 @@
-module.exports = SetSecuritySchemes;
-
-/** @type {import('@redocly/openapi-cli').OasDecorator} */
-function SetSecuritySchemes(options) {
-  return {
-    Components: {
-      leave(comps, ctx) {
-        if(options.data) {
-          comps.securitySchemes = comps.securitySchemes || {};
-          Object.keys(options.data).forEach(
-            function(scheme) {
-              comps.securitySchemes[scheme] = options.data[scheme];
-            })
-        }
-      }
-    }
-  }
-}
diff --git a/api-docs/openapi/plugins/decorators/servers/delete-servers.js b/api-docs/openapi/plugins/decorators/servers/delete-servers.js
new file mode 100644
index 000000000..563d22d2f
--- /dev/null
+++ b/api-docs/openapi/plugins/decorators/servers/delete-servers.js
@@ -0,0 +1,21 @@
+module.exports = DeleteServers;
+
+/** @type {import('@redocly/openapi-cli').OasDecorator} */
+
+/**
+ * Returns an object with keys in [node type, any, ref].
+ * The key instructs openapi when to invoke the key's Visitor object.
+ * Object key "Operation" is an OAS 3.0 node type.
+ */
+function DeleteServers() {
+  return {
+    Operation: {
+      leave(op) {
+        /** Delete servers with empty url. 
**/ + if(Array.isArray(op.servers)) { + op.servers = op.servers.filter(server => server.url); + } + } + } + } +}; diff --git a/api-docs/openapi/plugins/decorators/servers/set-servers.js b/api-docs/openapi/plugins/decorators/servers/set-servers.js index 27f988793..ae2c5a162 100644 --- a/api-docs/openapi/plugins/decorators/servers/set-servers.js +++ b/api-docs/openapi/plugins/decorators/servers/set-servers.js @@ -1,17 +1,20 @@ module.exports = SetServers; +const { servers } = require('../../../content/content') + /** @type {import('@redocly/openapi-cli').OasDecorator} */ /** * Returns an object with keys in [node type, any, ref]. - * The key instructs openapi when to invoke the key's Visitor object. + * The key instructs openapi when to invoke the key's Visitor object. * Object key "Server" is an OAS 3.0 node type. */ -function SetServers(options) { +function SetServers() { + const data = servers(); return { DefinitionRoot: { leave(root) { - root.servers = options.data; + root.servers = data; } }, } diff --git a/api-docs/openapi/plugins/decorators/set-info.js b/api-docs/openapi/plugins/decorators/set-info.js index 8ec217a36..5efda6491 100644 --- a/api-docs/openapi/plugins/decorators/set-info.js +++ b/api-docs/openapi/plugins/decorators/set-info.js @@ -1,16 +1,24 @@ module.exports = SetInfo; +const { info } = require('../../content/content') + /** @type {import('@redocly/openapi-cli').OasDecorator} */ -function SetInfo(options) { +function SetInfo() { + const data = info(); + return { Info: { leave(info, ctx) { - if(options.data) { - if(options.data.hasOwnProperty('title')) { - info.title = options.data.title; + if(data) { + if(data.hasOwnProperty('title')) { + info.title = data.title; } - if(options.data.hasOwnProperty('description')) { - info.description = options.data.description; + if(data.hasOwnProperty('version')) { + + info.version = data.version; + } + if(data.hasOwnProperty('description')) { + info.description = data.description; } } } diff --git a/api-docs/openapi/plugins/decorators/tags/set-tag-groups.js b/api-docs/openapi/plugins/decorators/tags/set-tag-groups.js index 3a0d84a9c..91fac505f 100644 --- a/api-docs/openapi/plugins/decorators/tags/set-tag-groups.js +++ b/api-docs/openapi/plugins/decorators/tags/set-tag-groups.js @@ -1,43 +1,60 @@ module.exports = SetTagGroups; +const { tagGroups } = require('../../../content/content') const { collect, getName, sortName } = require('../../helpers/content-helper.js') /** * Returns an object that defines handler functions for: * - Operation nodes * - DefinitionRoot (the root openapi) node * The order of the two functions is significant. - * The Operation handler collects tags from the + * The Operation handler collects tags from the * operation ('get', 'post', etc.) in every path. * The DefinitionRoot handler, executed when * the parser is leaving the root node, - * sets `x-tagGroups` to the provided `data` + * adds custom `tagGroups` content to `x-tagGroups` * and sets the value of `All Endpoints` to the collected tags. */ /** @type {import('@redocly/openapi-cli').OasDecorator} */ -function SetTagGroups(options) { +function SetTagGroups() { + let data = tagGroups(); + if(!Array.isArray(data)) { + data = []; + } + let tags = []; + /** Collect tags for each operation and convert string tags to object tags. **/ return { - Operation: { - leave(op, ctx, parents) { - tags = collect(tags, op.tags); - } - }, DefinitionRoot: { + Operation: { + leave(op) { + let opTags = op.tags?.map( + function(t) { + return typeof t === 'string' ? 
{ name: t } : t;
+            }
+          ) || [];
+          tags = collect(tags, opTags);
+        }
+      },
       leave(root) {
-        root.tags = root.tags || [];
-        root.tags = collect(root.tags, tags)
-          .sort((a, b) => sortName(a, b));
+        root.tags = root.tags || [];
+        root.tags = collect(root.tags, tags)
+          .sort((a, b) => sortName(a, b));

-        if(!options.data) { return; }
+        const endpointTags = root.tags
+          .filter(t => !t['x-traitTag'])
+          .map(t => getName(t));

-        endpointTags = root.tags
-          .filter(t => !t['x-traitTag'])
-          .map(t => getName(t));
-        root['x-tagGroups'] = options.data
-          .map(function(grp) {
-            grp.tags = grp.name === 'All endpoints' ? endpointTags : grp.tags;
-            return grp;
-          });
+        if(Array.isArray(root['x-tagGroups'])) {
+          root['x-tagGroups'] = root['x-tagGroups'].concat(data);
+        } else {
+          root['x-tagGroups'] = data;
+        }
+
+        root['x-tagGroups'].map(
+          function(grp) {
+            grp.tags = grp.name === 'All endpoints' ? endpointTags : grp.tags;
+            return grp;
+          });
       }
     }
   }
 }
diff --git a/api-docs/openapi/plugins/decorators/tags/set-tags.js b/api-docs/openapi/plugins/decorators/tags/set-tags.js
index de9eef03a..7369eeea6 100644
--- a/api-docs/openapi/plugins/decorators/tags/set-tags.js
+++ b/api-docs/openapi/plugins/decorators/tags/set-tags.js
@@ -1,21 +1,24 @@
 module.exports = SetTags;
+const { tags } = require('../../../content/content')

 /**
  * Returns an object that defines handler functions for:
  * - DefinitionRoot (the root openapi) node
  * The DefinitionRoot handler, executed when
- * the parser is leaving the root node,
- * sets the root `tags` list to the provided `data`.
+ * the parser enters the root node,
+ * sets the root `tags` list to the provided `data`.
 */

 /** @type {import('@redocly/openapi-cli').OasDecorator} */
-function SetTags(options) {
-  let tags = [];
+function SetTags() {
+  const data = tags();
+
   return {
     DefinitionRoot: {
-      leave(root) {
-        if(options.data) {
-          root.tags = options.data;
-        }
+      /** Set tags from custom tags when visitor enters root. 
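+      Assumption about the design choice: setting `root.tags` when the
+      visitor enters the root, rather than on leave, makes the custom tags
+      available to decorators that run later in the same pass, such as
+      SetTagGroups, which merges and sorts `root.tags` in its own leave
+      handler.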
*/ + enter(root) { + if(data) { + root.tags = data; + } } } } diff --git a/api-docs/openapi/plugins/docs-plugin.js b/api-docs/openapi/plugins/docs-plugin.js index cfa1113c7..8d2ed8275 100644 --- a/api-docs/openapi/plugins/docs-plugin.js +++ b/api-docs/openapi/plugins/docs-plugin.js @@ -3,12 +3,11 @@ const ValidateServersUrl = require('./rules/validate-servers-url'); const RemovePrivatePaths = require('./decorators/paths/remove-private-paths'); const ReplaceShortcodes = require('./decorators/replace-shortcodes'); const SetInfo = require('./decorators/set-info'); +const DeleteServers = require('./decorators/servers/delete-servers'); const SetServers = require('./decorators/servers/set-servers'); -const SetSecuritySchemes = require('./decorators/security/set-security-schemes'); -const SetTags = require('./decorators/tags/set-tags'); const SetTagGroups = require('./decorators/tags/set-tag-groups'); const StripVersionPrefix = require('./decorators/paths/strip-version-prefix'); -const {info, securitySchemes, servers, tags, tagGroups } = require('../content/content') +const StripTrailingSlash = require('./decorators/paths/strip-trailing-slash'); const id = 'docs'; @@ -23,15 +22,14 @@ const rules = { /** @type {import('@redocly/openapi-cli').CustomRulesConfig} */ const decorators = { oas3: { - 'set-servers': () => SetServers({data: servers}), + 'set-servers': SetServers, + 'delete-servers': DeleteServers, 'remove-private-paths': RemovePrivatePaths, - 'replace-docs-url-shortcode': ReplaceShortcodes().docsUrl, 'strip-version-prefix': StripVersionPrefix, - 'set-info': () => SetInfo({data: info}), - 'set-security': () => SetSecurity({data: security}), - 'set-security-schemes': () => SetSecuritySchemes({data: securitySchemes}), - 'set-tags': () => SetTags({data: tags}), - 'set-tag-groups': () => SetTagGroups({data: tagGroups}), + 'strip-trailing-slash': StripTrailingSlash, + 'set-info': SetInfo, + 'set-tag-groups': SetTagGroups, + 'replace-docs-url-shortcode': ReplaceShortcodes().docsUrl, } }; @@ -40,16 +38,18 @@ module.exports = { configs: { all: { rules: { - 'no-server-trailing-slash': 'off', + 'no-server-trailing-slash': 'off', 'docs/validate-servers-url': 'error', }, decorators: { 'docs/set-servers': 'error', - 'docs/remove-private-paths': 'error', - 'docs/replace-docs-url-shortcode': 'error', - 'docs/strip-version-prefix': 'error', - 'docs/set-info': 'error', - 'docs/set-tag-groups': 'error', + 'docs/delete-servers': 'error', + 'docs/remove-private-paths': 'error', + 'docs/strip-version-prefix': 'error', + 'docs/strip-trailing-slash': 'error', + 'docs/set-info': 'error', + 'docs/set-tag-groups': 'error', + 'docs/replace-docs-url-shortcode': 'error', }, }, }, diff --git a/api-docs/v2.0/ref.yml b/api-docs/v2.0/ref.yml index ad502060b..fa6f30576 100644 --- a/api-docs/v2.0/ref.yml +++ b/api-docs/v2.0/ref.yml @@ -3188,7 +3188,7 @@ components: Ready: properties: started: - example: 2019-03-13T10:09:33.891196-04:00 + example: '2019-03-13T10:09:33.891196-04:00' format: date-time type: string status: diff --git a/api-docs/v2.1/ref.yml b/api-docs/v2.1/ref.yml index 3e83b7739..0b35456e4 100644 --- a/api-docs/v2.1/ref.yml +++ b/api-docs/v2.1/ref.yml @@ -2,8 +2,8 @@ components: parameters: After: description: > - The last resource ID from which to seek from (but not including). This - is to be used instead of `offset`. + Resource ID to seek from. Results are not inclusive of this ID. Use + `after` instead of `offset`. 
in: query name: after required: false @@ -125,24 +125,22 @@ components: readOnly: true type: object org: - description: Name of the org token is scoped to. + description: Name of the organization that the token is scoped to. readOnly: true type: string orgID: - description: ID of org that authorization is scoped to. + description: ID of the organization that the authorization is scoped to. type: string permissions: description: >- - List of permissions for an auth. An auth must have at least one - Permission. + List of permissions for an authorization. An authorization must + have at least one permission. items: $ref: '#/components/schemas/Permission' minItems: 1 type: array token: - description: >- - Passed via the Authorization Header and Token Authentication - type. + description: Token used to authenticate API requests. readOnly: true type: string updatedAt: @@ -150,11 +148,11 @@ components: readOnly: true type: string user: - description: Name of user that created and owns the token. + description: Name of the user that created and owns the token. readOnly: true type: string userID: - description: ID of user that created and owns the token. + description: ID of the user that created and owns the token. readOnly: true type: string type: object @@ -191,8 +189,8 @@ components: status: default: active description: >- - If inactive the token is inactive and requests using the token will - be rejected. + Status of the token. If `inactive`, requests using the token will be + rejected. enum: - active - inactive @@ -219,10 +217,10 @@ components: - 'y' type: object Axis: - description: The description of a particular axis for a visualization. + description: Axis used in a visualization. properties: base: - description: Base represents the radix for formatting axis values. + description: Radix for formatting axis values. enum: - '' - '2' @@ -230,23 +228,23 @@ components: type: string bounds: description: >- - The extents of an axis in the form [lower, upper]. Clients determine - whether bounds are to be inclusive or exclusive of their limits + The extents of the axis in the form [lower, upper]. Clients + determine whether bounds are inclusive or exclusive of their limits. items: type: string maxItems: 2 minItems: 0 type: array label: - description: Label is a description of this Axis + description: Description of the axis. type: string prefix: - description: Prefix represents a label prefix for formatting axis values. + description: Label prefix for formatting axis values. type: string scale: $ref: '#/components/schemas/AxisScale' suffix: - description: Suffix represents a label suffix for formatting axis values. + description: Label suffix for formatting axis values. type: string type: object AxisScale: @@ -413,22 +411,22 @@ components: properties: labels: $ref: '#/components/schemas/Link' - description: URL to retrieve labels for this bucket + description: URL to retrieve labels for this bucket. members: $ref: '#/components/schemas/Link' - description: URL to retrieve members that can read this bucket + description: URL to retrieve members that can read this bucket. org: $ref: '#/components/schemas/Link' - description: URL to retrieve parent organization for this bucket + description: URL to retrieve parent organization for this bucket. owners: $ref: '#/components/schemas/Link' description: URL to retrieve owners that can read and write to this bucket. self: $ref: '#/components/schemas/Link' - description: URL for this bucket + description: URL for this bucket. 
write: $ref: '#/components/schemas/Link' - description: URL to write line protocol for this bucket + description: URL to write line protocol to this bucket. readOnly: true type: object name: @@ -662,10 +660,12 @@ components: readOnly: true type: string latestCompleted: - description: Timestamp of latest scheduled, completed run, RFC3339. + description: >- + Timestamp (in RFC3339 date/time + format](https://datatracker.ietf.org/doc/html/rfc3339)) of the + latest scheduled and completed run. format: date-time readOnly: true - type: string links: example: labels: /api/v2/checks/1/labels @@ -821,6 +821,11 @@ components: type: $ref: '#/components/schemas/NodeType' type: object + Config: + properties: + config: + type: object + type: object ConstantVariableProperties: properties: type: @@ -879,24 +884,24 @@ components: DBRP: properties: bucketID: - description: the bucket ID used as target for the translation. + description: ID of the bucket used as the target for the translation. type: string database: description: InfluxDB v1 database type: string default: description: >- - Specify if this mapping represents the default retention policy for - the database specificed. + Mapping represents the default retention policy for the database + specified. type: boolean id: - description: the mapping identifier + description: ID of the DBRP mapping. readOnly: true type: string links: $ref: '#/components/schemas/Links' orgID: - description: the organization ID that owns this mapping. + description: ID of the organization that owns this mapping. type: string retention_policy: description: InfluxDB v1 retention policy @@ -912,21 +917,21 @@ components: DBRPCreate: properties: bucketID: - description: the bucket ID used as target for the translation. + description: ID of the bucket used as the target for the translation. type: string database: description: InfluxDB v1 database type: string default: description: >- - Specify if this mapping represents the default retention policy for - the database specificed. + Mapping represents the default retention policy for the database + specified. type: boolean org: - description: the organization that owns this mapping. + description: Name of the organization that owns this mapping. type: string orgID: - description: the organization ID that owns this mapping. + description: ID of the organization that owns this mapping. type: string retention_policy: description: InfluxDB v1 retention policy @@ -1292,23 +1297,22 @@ components: type: string err: description: >- - err is a stack of errors that occurred during processing of the - request. Useful for debugging. + Stack of errors that occurred during processing of the request. + Useful for debugging. readOnly: true type: string message: - description: message is a human-readable message. + description: Human-readable message. readOnly: true type: string op: description: >- - op describes the logical code operation during error. Useful for - debugging. + Describes the logical code operation when the error occurred. Useful + for debugging. readOnly: true type: string required: - code - - message Expression: oneOf: - $ref: '#/components/schemas/ArrayExpression' @@ -2275,30 +2279,27 @@ components: type: string err: description: >- - Err is a stack of errors that occurred during processing of the - request. Useful for debugging. + Stack of errors that occurred during processing of the request. + Useful for debugging. 
readOnly: true type: string line: - description: First line within sent body containing malformed data + description: First line in the request body that contains malformed data. format: int32 readOnly: true type: integer message: - description: Message is a human-readable message. + description: Human-readable message. readOnly: true type: string op: description: >- - Op describes the logical code operation during error. Useful for - debugging. + Describes the logical code operation when the error occurred. Useful + for debugging. readOnly: true type: string required: - code - - message - - op - - err LineProtocolLengthError: properties: code: @@ -2308,7 +2309,7 @@ components: readOnly: true type: string message: - description: Message is a human-readable message. + description: Human-readable message. readOnly: true type: string required: @@ -2682,7 +2683,10 @@ components: readOnly: true type: string latestCompleted: - description: Timestamp of latest scheduled, completed run, RFC3339. + description: >- + Timestamp (in RFC3339 date/time + format](https://datatracker.ietf.org/doc/html/rfc3339)) of the + latest scheduled and completed run. format: date-time readOnly: true type: string @@ -3195,7 +3199,7 @@ components: Ready: properties: started: - example: 2019-03-13T10:09:33.891196-04:00 + example: '2019-03-13T10:09:33.891196-04:00' format: date-time type: string status: @@ -3216,6 +3220,82 @@ components: value: type: string type: object + RemoteConnection: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + id: + type: string + name: + type: string + orgID: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + required: + - id + - name + - orgID + - remoteURL + - remoteOrgID + - allowInsecureTLS + type: object + RemoteConnectionCreationRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + orgID: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + required: + - name + - orgID + - remoteURL + - remoteAPIToken + - remoteOrgID + - allowInsecureTLS + type: object + RemoteConnectionUpdateRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + type: object + RemoteConnections: + properties: + remotes: + items: + $ref: '#/components/schemas/RemoteConnection' + type: array + type: object RenamableField: description: Describes a field that can be renamed and made visible or invisible. properties: @@ -3230,6 +3310,98 @@ components: description: Indicates whether this field should be visible on the table. 
type: boolean type: object + Replication: + properties: + currentQueueSizeBytes: + format: int64 + type: integer + description: + type: string + dropNonRetryableData: + type: boolean + id: + type: string + latestErrorMessage: + type: string + latestResponseCode: + type: integer + localBucketID: + type: string + maxQueueSizeBytes: + format: int64 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteID: + type: string + required: + - id + - name + - remoteID + - orgID + - localBucketID + - remoteBucketID + - maxQueueSizeBytes + - currentQueueSizeBytes + type: object + ReplicationCreationRequest: + properties: + description: + type: string + dropNonRetryableData: + default: false + type: boolean + localBucketID: + type: string + maxQueueSizeBytes: + default: 67108860 + format: int64 + minimum: 33554430 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteID: + type: string + required: + - name + - orgID + - remoteID + - localBucketID + - remoteBucketID + - maxQueueSizeBytes + type: object + ReplicationUpdateRequest: + properties: + description: + type: string + dropNonRetryableData: + type: boolean + maxQueueSizeBytes: + format: int64 + minimum: 33554430 + type: integer + name: + type: string + remoteBucketID: + type: string + remoteID: + type: string + type: object + Replications: + properties: + replications: + items: + $ref: '#/components/schemas/Replication' + type: array + type: object Resource: properties: id: @@ -4257,8 +4429,8 @@ components: properties: authorizationID: description: >- - The ID of the authorization used when this task communicates with - the query engine. + ID of the authorization used when the task communicates with the + query engine. type: string createdAt: format: date-time @@ -4266,17 +4438,27 @@ components: type: string cron: description: >- - A task repetition schedule in the form '* * * * * *'; parsed from - Flux. + [Cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that + defines the schedule on which the task runs. Cron scheduling is + based on system time. + + Value is a [Cron + expression](https://en.wikipedia.org/wiki/Cron#Overview). type: string description: - description: An optional description of the task. + description: Description of the task. type: string every: - description: A simple task repetition schedule; parsed from Flux. + description: >- + Interval at which the task runs. `every` also determines when the + task first runs, depending on the specified time. + + Value is a [duration + literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals)). + format: duration type: string flux: - description: The Flux script to run for this task. + description: Flux script to run for this task. type: string id: readOnly: true @@ -4294,7 +4476,11 @@ components: readOnly: true type: string latestCompleted: - description: Timestamp of latest scheduled, completed run, RFC3339. + description: >- + Timestamp of the latest scheduled and completed run. + + Value is a timestamp in [RFC3339 date/time + format](https://docs.influxdata.com/flux/v0.x/data-types/basic/time/#time-syntax). format: date-time readOnly: true type: string @@ -4322,29 +4508,31 @@ components: readOnly: true type: object name: - description: The name of the task. + description: Name of the task. 
type: string offset: description: >- - Duration to delay after the schedule, before executing the task; - parsed from flux, if set to zero it will remove this option and use - 0 as the default. + [Duration](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals) + to delay execution of the task after the scheduled time has elapsed. + `0` removes the offset. + + The value is a [duration + literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals). + format: duration type: string org: - description: The name of the organization that owns this Task. + description: Name of the organization that owns the task. type: string orgID: - description: The ID of the organization that owns this Task. + description: ID of the organization that owns the task. type: string ownerID: - description: The ID of the user who owns this Task. + description: ID of the user who owns this Task. type: string status: $ref: '#/components/schemas/TaskStatusType' type: - description: >- - The type of task, this can be used for filtering tasks on list - actions. + description: Type of the task, useful for filtering a task list. type: string updatedAt: format: date-time @@ -5907,7 +6095,7 @@ info: with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. openapi: 3.0.0 paths: - /api/v2/: + /api/v2: get: operationId: GetRoutes parameters: @@ -8372,13 +8560,14 @@ paths: application/json: schema: $ref: '#/components/schemas/Token' - description: A temp token for Mapbox + description: Temporary token for Mapbox. '401': $ref: '#/components/responses/ServerError' '500': $ref: '#/components/responses/ServerError' default: $ref: '#/components/responses/ServerError' + summary: Get a mapbox token /api/v2/me: get: operationId: GetMe @@ -8462,7 +8651,8 @@ paths: externalDocs: description: Prometheus exposition formats url: https://prometheus.io/docs/instrumenting/exposition_formats - type: Prometheus text-based exposition + format: Prometheus text-based exposition + type: string description: > Payload body contains metrics about the InfluxDB instance. @@ -9885,6 +10075,308 @@ paths: summary: Get the readiness of an instance at startup tags: - Ready + /api/v2/remotes: + get: + operationId: GetRemoteConnections + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. 
+ in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteURL + schema: + format: uri + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnections' + description: List of remote connections + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: List all remote connections + tags: + - RemoteConnections + post: + operationId: PostRemoteConnection + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection saved + '400': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Register a new remote connection + tags: + - RemoteConnections + /api/v2/remotes/{remoteID}: + delete: + operationId: DeleteRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '204': + description: Remote connection info deleted. + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Delete a remote connection + tags: + - RemoteConnections + get: + operationId: GetRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Retrieve a remote connection + tags: + - RemoteConnections + patch: + operationId: PatchRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionUpdateRequest' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Updated information saved + '400': + $ref: '#/components/responses/ServerError' + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Update a remote connection + tags: + - RemoteConnections + /api/v2/replications: + get: + operationId: GetReplications + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. 
+ in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteID + schema: + type: string + - in: query + name: localBucketID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replications' + description: List of replications + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: List all replications + tags: + - Replications + post: + operationId: PostReplication + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: If true, validate the replication, but don't save it. + in: query + name: validate + schema: + default: false + type: boolean + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ReplicationCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Replication saved + '204': + description: Replication validated, but not saved + '400': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Register a new replication + tags: + - Replications + /api/v2/replications/{replicationID}: + delete: + operationId: DeleteReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '204': + description: Replication deleted. + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Delete a replication + tags: + - Replications + get: + operationId: GetReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Replication + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Retrieve a replication + tags: + - Replications + patch: + operationId: PatchReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + - description: If true, validate the updated information, but don't save it. 
+ in: query + name: validate + schema: + default: false + type: boolean + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ReplicationUpdateRequest' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Updated information saved + '204': + description: Updated replication validated, but not saved + '400': + $ref: '#/components/responses/ServerError' + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Update a replication + tags: + - Replications + /api/v2/replications/{replicationID}/validate: + post: + operationId: PostValidateReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '204': + description: Replication is valid + '400': + $ref: '#/components/responses/ServerError' + description: Replication failed validation + default: + $ref: '#/components/responses/ServerError' + summary: Validate a replication + tags: + - Replications /api/v2/resources: get: operationId: GetResources @@ -12287,6 +12779,22 @@ paths: summary of the run. The summary contains newly created resources. The diff compares the initial state to the state after the package applied. This corresponds to `"dryRun": true`. + '422': + content: + application/json: + schema: + allOf: + - $ref: '#/components/schemas/TemplateSummary' + - properties: + code: + type: string + message: + type: string + required: + - message + - code + type: object + description: Template failed validation default: content: application/json: @@ -13044,6 +13552,7 @@ tags: - Buckets - Cells - Checks + - Config - Dashboards - DBRPs - Delete @@ -13070,6 +13579,8 @@ tags: name: Quick start x-traitTag: true - Ready + - RemoteConnections + - Replications - Resources - description: > The InfluxDB API uses standard HTTP status codes for success and failure @@ -13168,6 +13679,7 @@ x-tagGroups: - name: System information endpoints tags: - Health + - Metrics - Ping - Ready - Routes @@ -13191,6 +13703,8 @@ x-tagGroups: - Ping - Query - Ready + - RemoteConnections + - Replications - Resources - Restore - Routes diff --git a/api-docs/v2.1/swaggerV1Compat.yml b/api-docs/v2.1/swaggerV1Compat.yml index 6ebeeb9f3..e5b1a9eec 100644 --- a/api-docs/v2.1/swaggerV1Compat.yml +++ b/api-docs/v2.1/swaggerV1Compat.yml @@ -3,7 +3,7 @@ info: title: InfluxDB OSS v1 compatibility API documentation version: 0.1.0 description: | - The InfluxDB 1.x compatibility /write and /query endpoints work with + The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with InfluxDB 1.x client libraries and third-party integrations like Grafana and others. diff --git a/api-docs/v2.2/ref.yml b/api-docs/v2.2/ref.yml new file mode 100644 index 000000000..5c764c1b7 --- /dev/null +++ b/api-docs/v2.2/ref.yml @@ -0,0 +1,14788 @@ +components: + parameters: + After: + description: > + Resource ID to seek from. Results are not inclusive of this ID. Use + `after` instead of `offset`. 
+ in: query + name: after + required: false + schema: + type: string + Descending: + in: query + name: descending + required: false + schema: + default: false + type: boolean + Limit: + in: query + name: limit + required: false + schema: + default: 20 + maximum: 100 + minimum: 1 + type: integer + Offset: + in: query + name: offset + required: false + schema: + minimum: 0 + type: integer + SortBy: + in: query + name: sortBy + required: false + schema: + type: string + TraceSpan: + description: OpenTracing span context + example: + baggage: + key: value + span_id: '1' + trace_id: '1' + in: header + name: Zap-Trace-Span + required: false + schema: + type: string + responses: + ServerError: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Non 2XX error response from server. + schemas: + ASTResponse: + description: Contains the AST for the supplied Flux query + properties: + ast: + $ref: '#/components/schemas/Package' + type: object + AddResourceMemberRequestBody: + properties: + id: + type: string + name: + type: string + required: + - id + type: object + AnalyzeQueryResponse: + properties: + errors: + items: + properties: + character: + type: integer + column: + type: integer + line: + type: integer + message: + type: string + type: object + type: array + type: object + ArrayExpression: + description: Used to create and directly specify the elements of an array object + properties: + elements: + description: Elements of the array + items: + $ref: '#/components/schemas/Expression' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + Authorization: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + createdAt: + format: date-time + readOnly: true + type: string + id: + readOnly: true + type: string + links: + example: + self: /api/v2/authorizations/1 + user: /api/v2/users/12 + properties: + self: + $ref: '#/components/schemas/Link' + readOnly: true + user: + $ref: '#/components/schemas/Link' + readOnly: true + readOnly: true + type: object + org: + description: Name of the organization that the token is scoped to. + readOnly: true + type: string + orgID: + description: ID of the organization that the authorization is scoped to. + type: string + permissions: + description: >- + List of permissions for an authorization. An authorization must + have at least one permission. + items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + token: + description: Token used to authenticate API requests. + readOnly: true + type: string + updatedAt: + format: date-time + readOnly: true + type: string + user: + description: Name of the user that created and owns the token. + readOnly: true + type: string + userID: + description: ID of the user that created and owns the token. + readOnly: true + type: string + type: object + required: + - orgID + - permissions + AuthorizationPostRequest: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + orgID: + description: ID of org that authorization is scoped to. + type: string + permissions: + description: >- + List of permissions for an auth. An auth must have at least one + Permission. + items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + userID: + description: ID of user that authorization is scoped to. + type: string + type: object + required: + - orgID + - permissions + AuthorizationUpdateRequest: + properties: + description: + description: A description of the token. 
+ type: string + status: + default: active + description: >- + Status of the token. If `inactive`, requests using the token will be + rejected. + enum: + - active + - inactive + type: string + Authorizations: + properties: + authorizations: + items: + $ref: '#/components/schemas/Authorization' + type: array + links: + $ref: '#/components/schemas/Links' + readOnly: true + type: object + Axes: + description: The viewport for a View's visualizations + properties: + x: + $ref: '#/components/schemas/Axis' + 'y': + $ref: '#/components/schemas/Axis' + required: + - x + - 'y' + type: object + Axis: + description: Axis used in a visualization. + properties: + base: + description: Radix for formatting axis values. + enum: + - '' + - '2' + - '10' + type: string + bounds: + description: >- + The extents of the axis in the form [lower, upper]. Clients + determine whether bounds are inclusive or exclusive of their limits. + items: + type: string + maxItems: 2 + minItems: 0 + type: array + label: + description: Description of the axis. + type: string + prefix: + description: Label prefix for formatting axis values. + type: string + scale: + $ref: '#/components/schemas/AxisScale' + suffix: + description: Label suffix for formatting axis values. + type: string + type: object + AxisScale: + description: 'Scale is the axis formatting scale. Supported: "log", "linear"' + enum: + - log + - linear + type: string + BadStatement: + description: >- + A placeholder for statements for which no correct statement nodes can be + created + properties: + text: + description: Raw source text + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + BandViewProperties: + properties: + axes: + $ref: '#/components/schemas/Axes' + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + geom: + $ref: '#/components/schemas/XYGeom' + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + lowerColumn: + type: string + mainColumn: + type: string + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + timeFormat: + type: string + type: + enum: + - band + type: string + upperColumn: + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - geom + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + type: object + BinaryExpression: + description: uses binary operators to act on two operands in an expression + properties: + left: + $ref: '#/components/schemas/Expression' + operator: + type: string + right: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Block: + description: A set of statements + properties: + body: + description: Block 
body + items: + $ref: '#/components/schemas/Statement' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + BooleanLiteral: + description: Represents boolean values + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: boolean + type: object + Bucket: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + labels: /api/v2/buckets/1/labels + members: /api/v2/buckets/1/members + org: /api/v2/orgs/2 + owners: /api/v2/buckets/1/owners + self: /api/v2/buckets/1 + write: /api/v2/write?org=2&bucket=1 + properties: + labels: + $ref: '#/components/schemas/Link' + description: URL to retrieve labels for this bucket. + members: + $ref: '#/components/schemas/Link' + description: URL to retrieve members that can read this bucket. + org: + $ref: '#/components/schemas/Link' + description: URL to retrieve parent organization for this bucket. + owners: + $ref: '#/components/schemas/Link' + description: URL to retrieve owners that can read and write to this bucket. + self: + $ref: '#/components/schemas/Link' + description: URL for this bucket. + write: + $ref: '#/components/schemas/Link' + description: URL to write line protocol to this bucket. + readOnly: true + type: object + name: + type: string + orgID: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + rp: + type: string + schemaType: + $ref: '#/components/schemas/SchemaType' + default: implicit + type: + default: user + enum: + - user + - system + readOnly: true + type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - name + - retentionRules + BucketMetadataManifest: + properties: + bucketID: + type: string + bucketName: + type: string + defaultRetentionPolicy: + type: string + description: + type: string + organizationID: + type: string + organizationName: + type: string + retentionPolicies: + $ref: '#/components/schemas/RetentionPolicyManifests' + required: + - organizationID + - organizationName + - bucketID + - bucketName + - defaultRetentionPolicy + - retentionPolicies + type: object + BucketMetadataManifests: + items: + $ref: '#/components/schemas/BucketMetadataManifest' + type: array + BucketShardMapping: + properties: + newId: + format: int64 + type: integer + oldId: + format: int64 + type: integer + required: + - oldId + - newId + type: object + BucketShardMappings: + items: + $ref: '#/components/schemas/BucketShardMapping' + type: array + Buckets: + properties: + buckets: + items: + $ref: '#/components/schemas/Bucket' + type: array + links: + $ref: '#/components/schemas/Links' + readOnly: true + type: object + BuilderAggregateFunctionType: + enum: + - filter + - group + type: string + BuilderConfig: + properties: + aggregateWindow: + properties: + fillValues: + type: boolean + period: + type: string + type: object + buckets: + items: + type: string + type: array + functions: + items: + $ref: '#/components/schemas/BuilderFunctionsType' + type: array + tags: + items: + $ref: '#/components/schemas/BuilderTagsType' + type: array + type: object + BuilderFunctionsType: + properties: + name: + type: string + type: object + BuilderTagsType: + properties: + aggregateFunctionType: + $ref: '#/components/schemas/BuilderAggregateFunctionType' + key: + type: string + values: + items: + type: string + type: array + type: object + BuiltinStatement: + description: Declares a builtin 
identifier and its type
+      properties:
+        id:
+          $ref: '#/components/schemas/Identifier'
+        type:
+          $ref: '#/components/schemas/NodeType'
+      type: object
+    CallExpression:
+      description: Represents a function call
+      properties:
+        arguments:
+          description: Function arguments
+          items:
+            $ref: '#/components/schemas/Expression'
+          type: array
+        callee:
+          $ref: '#/components/schemas/Expression'
+        type:
+          $ref: '#/components/schemas/NodeType'
+      type: object
+    Cell:
+      properties:
+        h:
+          format: int32
+          type: integer
+        id:
+          type: string
+        links:
+          properties:
+            self:
+              type: string
+            view:
+              type: string
+          type: object
+        viewID:
+          description: The reference to a view from the views API.
+          type: string
+        w:
+          format: int32
+          type: integer
+        x:
+          format: int32
+          type: integer
+        'y':
+          format: int32
+          type: integer
+      type: object
+    CellUpdate:
+      properties:
+        h:
+          format: int32
+          type: integer
+        w:
+          format: int32
+          type: integer
+        x:
+          format: int32
+          type: integer
+        'y':
+          format: int32
+          type: integer
+      type: object
+    CellWithViewProperties:
+      allOf:
+        - $ref: '#/components/schemas/Cell'
+        - properties:
+            name:
+              type: string
+            properties:
+              $ref: '#/components/schemas/ViewProperties'
+          type: object
+      type: object
+    Cells:
+      items:
+        $ref: '#/components/schemas/Cell'
+      type: array
+    CellsWithViewProperties:
+      items:
+        $ref: '#/components/schemas/CellWithViewProperties'
+      type: array
+    Check:
+      allOf:
+        - $ref: '#/components/schemas/CheckDiscriminator'
+    CheckBase:
+      properties:
+        createdAt:
+          format: date-time
+          readOnly: true
+          type: string
+        description:
+          description: An optional description of the check.
+          type: string
+        id:
+          readOnly: true
+          type: string
+        labels:
+          $ref: '#/components/schemas/Labels'
+        lastRunError:
+          readOnly: true
+          type: string
+        lastRunStatus:
+          enum:
+            - failed
+            - success
+            - canceled
+          readOnly: true
+          type: string
+        latestCompleted:
+          description: >-
+            Timestamp ([RFC3339 date/time
+            format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
+            latest scheduled and completed run.
+          format: date-time
+          readOnly: true
+        links:
+          example:
+            labels: /api/v2/checks/1/labels
+            members: /api/v2/checks/1/members
+            owners: /api/v2/checks/1/owners
+            query: /api/v2/checks/1/query
+            self: /api/v2/checks/1
+          properties:
+            labels:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve labels for this check
+            members:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve members for this check
+            owners:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve owners for this check
+            query:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve flux script for this check
+            self:
+              $ref: '#/components/schemas/Link'
+              description: URL for this check
+          readOnly: true
+          type: object
+        name:
+          type: string
+        orgID:
+          description: The ID of the organization that owns this check.
+          type: string
+        ownerID:
+          description: The ID of the creator used to create this check.
+          readOnly: true
+          type: string
+        query:
+          $ref: '#/components/schemas/DashboardQuery'
+        status:
+          $ref: '#/components/schemas/TaskStatusType'
+        taskID:
+          description: The ID of the task associated with this check.
+          type: string
+        updatedAt:
+          format: date-time
+          readOnly: true
+          type: string
+      required:
+        - name
+        - orgID
+        - query
+    CheckDiscriminator:
+      discriminator:
+        mapping:
+          custom: '#/components/schemas/CustomCheck'
+          deadman: '#/components/schemas/DeadmanCheck'
+          threshold: '#/components/schemas/ThresholdCheck'
+        propertyName: type
+      oneOf:
+        - $ref: '#/components/schemas/DeadmanCheck'
+        - $ref: '#/components/schemas/ThresholdCheck'
+        - $ref: '#/components/schemas/CustomCheck'
+    CheckPatch:
+      properties:
+        description:
+          type: string
+        name:
+          type: string
+        status:
+          enum:
+            - active
+            - inactive
+          type: string
+      type: object
+    CheckStatusLevel:
+      description: The state to record if the check matches a criterion.
+      enum:
+        - UNKNOWN
+        - OK
+        - INFO
+        - CRIT
+        - WARN
+      type: string
+    CheckViewProperties:
+      properties:
+        check:
+          $ref: '#/components/schemas/Check'
+        checkID:
+          type: string
+        colors:
+          description: Colors define color encoding of data into a visualization
+          items:
+            $ref: '#/components/schemas/DashboardColor'
+          type: array
+        legendColorizeRows:
+          type: boolean
+        legendHide:
+          type: boolean
+        legendOpacity:
+          format: float
+          type: number
+        legendOrientationThreshold:
+          type: integer
+        queries:
+          items:
+            $ref: '#/components/schemas/DashboardQuery'
+          type: array
+        shape:
+          enum:
+            - chronograf-v2
+          type: string
+        type:
+          enum:
+            - check
+          type: string
+      required:
+        - type
+        - shape
+        - checkID
+        - queries
+        - colors
+      type: object
+    Checks:
+      properties:
+        checks:
+          items:
+            $ref: '#/components/schemas/Check'
+          type: array
+        links:
+          $ref: '#/components/schemas/Links'
+    ColorMapping:
+      additionalProperties:
+        type: string
+      description: >-
+        A color mapping is an object that maps time series data to a UI color
+        scheme to allow the UI to render graphs with consistent colors across
+        reloads.
+      example:
+        configcat_deployments-autopromotionblocker: '#663cd0'
+        measurement_birdmigration_europe: '#663cd0'
+        series_id_1: '#edf529'
+        series_id_2: '#edf529'
+      type: object
+    ConditionalExpression:
+      description: >-
+        Selects one of two expressions, `Alternate` or `Consequent`, depending
+        on a third boolean expression, `Test`
+      properties:
+        alternate:
+          $ref: '#/components/schemas/Expression'
+        consequent:
+          $ref: '#/components/schemas/Expression'
+        test:
+          $ref: '#/components/schemas/Expression'
+        type:
+          $ref: '#/components/schemas/NodeType'
+      type: object
+    Config:
+      properties:
+        config:
+          type: object
+      type: object
+    ConstantVariableProperties:
+      properties:
+        type:
+          enum:
+            - constant
+          type: string
+        values:
+          items:
+            type: string
+          type: array
+    CreateCell:
+      properties:
+        h:
+          format: int32
+          type: integer
+        name:
+          type: string
+        usingView:
+          description: Makes a copy of the provided view.
+          type: string
+        w:
+          format: int32
+          type: integer
+        x:
+          format: int32
+          type: integer
+        'y':
+          format: int32
+          type: integer
+      type: object
+    CreateDashboardRequest:
+      properties:
+        description:
+          description: The user-facing description of the dashboard.
+          type: string
+        name:
+          description: The user-facing name of the dashboard.
+          type: string
+        orgID:
+          description: The ID of the organization that owns the dashboard.
+          type: string
+      required:
+        - orgID
+        - name
+    CustomCheck:
+      allOf:
+        - $ref: '#/components/schemas/CheckBase'
+        - properties:
+            type:
+              enum:
+                - custom
+              type: string
+          required:
+            - type
+          type: object
+    DBRP:
+      properties:
+        bucketID:
+          description: ID of the bucket used as the target for the translation.
+ type: string + database: + description: InfluxDB v1 database + type: string + default: + description: >- + Mapping represents the default retention policy for the database + specified. + type: boolean + id: + description: ID of the DBRP mapping. + readOnly: true + type: string + links: + $ref: '#/components/schemas/Links' + orgID: + description: ID of the organization that owns this mapping. + type: string + retention_policy: + description: InfluxDB v1 retention policy + type: string + required: + - id + - orgID + - bucketID + - database + - retention_policy + - default + type: object + DBRPCreate: + properties: + bucketID: + description: ID of the bucket used as the target for the translation. + type: string + database: + description: InfluxDB v1 database + type: string + default: + description: >- + Mapping represents the default retention policy for the database + specified. + type: boolean + org: + description: Name of the organization that owns this mapping. + type: string + orgID: + description: ID of the organization that owns this mapping. + type: string + retention_policy: + description: InfluxDB v1 retention policy + type: string + required: + - bucketID + - database + - retention_policy + type: object + DBRPGet: + properties: + content: + $ref: '#/components/schemas/DBRP' + required: true + type: object + DBRPUpdate: + properties: + default: + type: boolean + retention_policy: + description: InfluxDB v1 retention policy + type: string + DBRPs: + properties: + content: + items: + $ref: '#/components/schemas/DBRP' + type: array + Dashboard: + allOf: + - $ref: '#/components/schemas/CreateDashboardRequest' + - properties: + cells: + $ref: '#/components/schemas/Cells' + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + cells: /api/v2/dashboards/1/cells + labels: /api/v2/dashboards/1/labels + members: /api/v2/dashboards/1/members + org: /api/v2/labels/1 + owners: /api/v2/dashboards/1/owners + self: /api/v2/dashboards/1 + properties: + cells: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + org: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + type: object + meta: + properties: + createdAt: + format: date-time + type: string + updatedAt: + format: date-time + type: string + type: object + type: object + type: object + DashboardColor: + description: Defines an encoding of data value into color space. + properties: + hex: + description: The hex number of the color + maxLength: 7 + minLength: 7 + type: string + id: + description: The unique ID of the view color. + type: string + name: + description: The user-facing name of the hex color. + type: string + type: + description: Type is how the color is used. + enum: + - min + - max + - threshold + - scale + - text + - background + type: string + value: + description: The data value mapped to this color. + format: float + type: number + required: + - id + - type + - hex + - name + - value + type: object + DashboardQuery: + properties: + builderConfig: + $ref: '#/components/schemas/BuilderConfig' + editMode: + $ref: '#/components/schemas/QueryEditMode' + name: + type: string + text: + description: The text of the Flux query. 
+ type: string + type: object + DashboardWithViewProperties: + allOf: + - $ref: '#/components/schemas/CreateDashboardRequest' + - properties: + cells: + $ref: '#/components/schemas/CellsWithViewProperties' + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + example: + cells: /api/v2/dashboards/1/cells + labels: /api/v2/dashboards/1/labels + members: /api/v2/dashboards/1/members + org: /api/v2/labels/1 + owners: /api/v2/dashboards/1/owners + self: /api/v2/dashboards/1 + properties: + cells: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + org: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + type: object + meta: + properties: + createdAt: + format: date-time + type: string + updatedAt: + format: date-time + type: string + type: object + type: object + type: object + Dashboards: + properties: + dashboards: + items: + $ref: '#/components/schemas/Dashboard' + type: array + links: + $ref: '#/components/schemas/Links' + type: object + DateTimeLiteral: + description: >- + Represents an instant in time with nanosecond precision using the syntax + of golang's RFC3339 Nanosecond variant + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + format: date-time + type: string + type: object + DeadmanCheck: + allOf: + - $ref: '#/components/schemas/CheckBase' + - properties: + every: + description: Check repetition interval. + type: string + level: + $ref: '#/components/schemas/CheckStatusLevel' + offset: + description: Duration to delay after the schedule, before executing check. + type: string + reportZero: + description: If only zero values reported since time, trigger an alert + type: boolean + staleTime: + description: >- + String duration for time that a series is considered stale and + should not trigger deadman. + type: string + statusMessageTemplate: + description: The template used to generate and write a status message. + type: string + tags: + description: List of tags to write to each status. + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + timeSince: + description: String duration before deadman triggers. + type: string + type: + enum: + - deadman + type: string + required: + - type + type: object + DecimalPlaces: + description: >- + Indicates whether decimal places should be enforced, and how many digits + it should show. + properties: + digits: + description: The number of digits after decimal to display + format: int32 + type: integer + isEnforced: + description: Indicates whether decimal point setting should be enforced + type: boolean + type: object + DeletePredicateRequest: + description: The delete predicate request. 
+      properties:
+        predicate:
+          description: InfluxQL-like delete statement
+          example: tag1="value1" and (tag2="value2" and tag3!="value3")
+          type: string
+        start:
+          description: RFC3339Nano
+          format: date-time
+          type: string
+        stop:
+          description: RFC3339Nano
+          format: date-time
+          type: string
+      required:
+        - start
+        - stop
+      type: object
+    Dialect:
+      description: >-
+        Dialect is a set of options to change the default CSV output format;
+        https://www.w3.org/TR/2015/REC-tabular-metadata-20151217/#dialect-descriptions
+      properties:
+        annotations:
+          description: https://www.w3.org/TR/2015/REC-tabular-data-model-20151217/#columns
+          items:
+            enum:
+              - group
+              - datatype
+              - default
+            type: string
+          type: array
+          uniqueItems: true
+        commentPrefix:
+          default: '#'
+          description: Character prefixed to comment strings
+          maxLength: 1
+          minLength: 0
+          type: string
+        dateTimeFormat:
+          default: RFC3339
+          description: Format of timestamps
+          enum:
+            - RFC3339
+            - RFC3339Nano
+          type: string
+        delimiter:
+          default: ','
+          description: Separator between cells; the default is ','
+          maxLength: 1
+          minLength: 1
+          type: string
+        header:
+          default: true
+          description: If true, the results will contain a header row
+          type: boolean
+      type: object
+    DictExpression:
+      description: Used to create and directly specify the elements of a dictionary
+      properties:
+        elements:
+          description: Elements of the dictionary
+          items:
+            $ref: '#/components/schemas/DictItem'
+          type: array
+        type:
+          $ref: '#/components/schemas/NodeType'
+      type: object
+    DictItem:
+      description: A key/value pair in a dictionary
+      properties:
+        key:
+          $ref: '#/components/schemas/Expression'
+        type:
+          $ref: '#/components/schemas/NodeType'
+        val:
+          $ref: '#/components/schemas/Expression'
+      type: object
+    Duration:
+      description: >-
+        A pair consisting of length of time and the unit of time measured. It is
+        the atomic unit from which all duration literals are composed.
+      properties:
+        magnitude:
+          type: integer
+        type:
+          $ref: '#/components/schemas/NodeType'
+        unit:
+          type: string
+      type: object
+    DurationLiteral:
+      description: >-
+        Represents the elapsed time between two instants as an int64 nanosecond
+        count with syntax of golang's time.Duration
+      properties:
+        type:
+          $ref: '#/components/schemas/NodeType'
+        values:
+          description: Duration values
+          items:
+            $ref: '#/components/schemas/Duration'
+          type: array
+      type: object
+    Error:
+      properties:
+        code:
+          description: Code is the machine-readable error code.
+          enum:
+            - internal error
+            - not found
+            - conflict
+            - invalid
+            - unprocessable entity
+            - empty value
+            - unavailable
+            - forbidden
+            - too many requests
+            - unauthorized
+            - method not allowed
+            - request too large
+            - unsupported media type
+          readOnly: true
+          type: string
+        err:
+          description: >-
+            Stack of errors that occurred during processing of the request.
+            Useful for debugging.
+          readOnly: true
+          type: string
+        message:
+          description: Human-readable message.
+          readOnly: true
+          type: string
+        op:
+          description: >-
+            Describes the logical code operation when the error occurred. Useful
+            for debugging.
+ readOnly: true + type: string + required: + - code + Expression: + oneOf: + - $ref: '#/components/schemas/ArrayExpression' + - $ref: '#/components/schemas/DictExpression' + - $ref: '#/components/schemas/FunctionExpression' + - $ref: '#/components/schemas/BinaryExpression' + - $ref: '#/components/schemas/CallExpression' + - $ref: '#/components/schemas/ConditionalExpression' + - $ref: '#/components/schemas/LogicalExpression' + - $ref: '#/components/schemas/MemberExpression' + - $ref: '#/components/schemas/IndexExpression' + - $ref: '#/components/schemas/ObjectExpression' + - $ref: '#/components/schemas/ParenExpression' + - $ref: '#/components/schemas/PipeExpression' + - $ref: '#/components/schemas/UnaryExpression' + - $ref: '#/components/schemas/BooleanLiteral' + - $ref: '#/components/schemas/DateTimeLiteral' + - $ref: '#/components/schemas/DurationLiteral' + - $ref: '#/components/schemas/FloatLiteral' + - $ref: '#/components/schemas/IntegerLiteral' + - $ref: '#/components/schemas/PipeLiteral' + - $ref: '#/components/schemas/RegexpLiteral' + - $ref: '#/components/schemas/StringLiteral' + - $ref: '#/components/schemas/UnsignedIntegerLiteral' + - $ref: '#/components/schemas/Identifier' + ExpressionStatement: + description: >- + May consist of an expression that does not return a value and is + executed solely for its side-effects + properties: + expression: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Field: + properties: + alias: + description: >- + Alias overrides the field name in the returned response. Applies + only if type is `func` + type: string + args: + description: Args are the arguments to the function + items: + $ref: '#/components/schemas/Field' + type: array + type: + description: >- + `type` describes the field type. `func` is a function. `field` is a + field reference. + enum: + - func + - field + - integer + - number + - regex + - wildcard + type: string + value: + description: >- + value is the value of the field. Meaning of the value is implied by + the `type` key + type: string + type: object + File: + description: Represents a source from a single file + properties: + body: + description: List of Flux statements + items: + $ref: '#/components/schemas/Statement' + type: array + imports: + description: A list of package imports + items: + $ref: '#/components/schemas/ImportDeclaration' + type: array + name: + description: The name of the file. + type: string + package: + $ref: '#/components/schemas/PackageClause' + type: + $ref: '#/components/schemas/NodeType' + type: object + Flags: + additionalProperties: true + type: object + FloatLiteral: + description: >- + Represents floating point numbers according to the double + representations defined by the IEEE-754-1985 + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: number + type: object + FluxResponse: + description: Rendered flux that backs the check or notification. 
+ properties: + flux: + type: string + FluxSuggestion: + properties: + name: + type: string + params: + additionalProperties: + type: string + type: object + type: object + FluxSuggestions: + properties: + funcs: + items: + $ref: '#/components/schemas/FluxSuggestion' + type: array + type: object + FunctionExpression: + description: Function expression + properties: + body: + $ref: '#/components/schemas/Node' + params: + description: Function parameters + items: + $ref: '#/components/schemas/Property' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + GaugeViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + note: + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + suffix: + type: string + tickPrefix: + type: string + tickSuffix: + type: string + type: + enum: + - gauge + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - prefix + - tickPrefix + - suffix + - tickSuffix + - decimalPlaces + type: object + GeoCircleViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + colorDimension: + $ref: '#/components/schemas/Axis' + colorField: + description: Circle color field + type: string + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + interpolateColors: + description: Interpolate circle color based on displayed value + type: boolean + radius: + description: Maximum radius size in pixels + type: integer + radiusDimension: + $ref: '#/components/schemas/Axis' + radiusField: + description: Radius field + type: string + required: + - radiusField + - radiusDimension + - colorField + - colorDimension + - colors + type: object + GeoHeatMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + blur: + description: Blur for heatmap points + type: integer + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + intensityDimension: + $ref: '#/components/schemas/Axis' + intensityField: + description: Intensity field + type: string + radius: + description: Radius size in pixels + type: integer + required: + - intensityField + - intensityDimension + - radius + - blur + - colors + type: object + GeoPointMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - properties: + colorDimension: + $ref: '#/components/schemas/Axis' + colorField: + description: Marker color field + type: string + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + isClustered: + description: Cluster close markers together + type: boolean + tooltipColumns: + description: An array for which columns to display in tooltip + items: + type: string + type: array + required: + - colorField + - colorDimension + - colors + type: object + GeoTrackMapViewLayer: + allOf: + - $ref: '#/components/schemas/GeoViewLayerProperties' + - required: + - trackWidth + - speed + - randomColors 
+            - trackPointVisualization
+          type: object
+      properties:
+        colors:
+          description: Colors define color encoding of data into a visualization
+          items:
+            $ref: '#/components/schemas/DashboardColor'
+          type: array
+        randomColors:
+          description: Assign different colors to different tracks
+          type: boolean
+        speed:
+          description: Speed of the track animation
+          type: integer
+        trackWidth:
+          description: Width of the track
+          type: integer
+    GeoViewLayer:
+      oneOf:
+        - $ref: '#/components/schemas/GeoCircleViewLayer'
+        - $ref: '#/components/schemas/GeoHeatMapViewLayer'
+        - $ref: '#/components/schemas/GeoPointMapViewLayer'
+        - $ref: '#/components/schemas/GeoTrackMapViewLayer'
+      type: object
+    GeoViewLayerProperties:
+      properties:
+        type:
+          enum:
+            - heatmap
+            - circleMap
+            - pointMap
+            - trackMap
+          type: string
+      required:
+        - type
+      type: object
+    GeoViewProperties:
+      properties:
+        allowPanAndZoom:
+          default: true
+          description: If true, map zoom and pan controls are enabled on the dashboard view
+          type: boolean
+        center:
+          description: Coordinates of the center of the map
+          properties:
+            lat:
+              description: Latitude of the center of the map
+              format: double
+              type: number
+            lon:
+              description: Longitude of the center of the map
+              format: double
+              type: number
+          required:
+            - lat
+            - lon
+          type: object
+        colors:
+          description: Colors define color encoding of data into a visualization
+          items:
+            $ref: '#/components/schemas/DashboardColor'
+          type: array
+        detectCoordinateFields:
+          default: true
+          description: >-
+            If true, search results get automatically regrouped so that lon,
+            lat, and value are treated as columns
+          type: boolean
+        latLonColumns:
+          $ref: '#/components/schemas/LatLonColumns'
+        layers:
+          description: List of individual layers shown in the map
+          items:
+            $ref: '#/components/schemas/GeoViewLayer'
+          type: array
+        mapStyle:
+          description: Define map type - regular, satellite, etc.
+          type: string
+        note:
+          type: string
+        queries:
+          items:
+            $ref: '#/components/schemas/DashboardQuery'
+          type: array
+        s2Column:
+          description: String to define the column
+          type: string
+        shape:
+          enum:
+            - chronograf-v2
+          type: string
+        showNoteWhenEmpty:
+          description: If true, will display note when empty
+          type: boolean
+        type:
+          enum:
+            - geo
+          type: string
+        useS2CellID:
+          description: If true, S2 column is used to calculate lat/lon
+          type: boolean
+        zoom:
+          description: Zoom level used for initial display of the map
+          format: double
+          maximum: 28
+          minimum: 1
+          type: number
+      required:
+        - type
+        - shape
+        - queries
+        - note
+        - showNoteWhenEmpty
+        - center
+        - zoom
+        - allowPanAndZoom
+        - detectCoordinateFields
+        - layers
+      type: object
+    GreaterThreshold:
+      allOf:
+        - $ref: '#/components/schemas/ThresholdBase'
+        - properties:
+            type:
+              enum:
+                - greater
+              type: string
+            value:
+              format: float
+              type: number
+          required:
+            - type
+            - value
+          type: object
+    HTTPNotificationEndpoint:
+      allOf:
+        - $ref: '#/components/schemas/NotificationEndpointBase'
+        - properties:
+            authMethod:
+              enum:
+                - none
+                - basic
+                - bearer
+              type: string
+            contentTemplate:
+              type: string
+            headers:
+              additionalProperties:
+                type: string
+              description: Customized headers.
+ type: object + method: + enum: + - POST + - GET + - PUT + type: string + password: + type: string + token: + type: string + url: + type: string + username: + type: string + required: + - url + - authMethod + - method + type: object + type: object + HTTPNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/HTTPNotificationRuleBase' + HTTPNotificationRuleBase: + properties: + type: + enum: + - http + type: string + url: + type: string + required: + - type + type: object + HealthCheck: + properties: + checks: + items: + $ref: '#/components/schemas/HealthCheck' + type: array + commit: + type: string + message: + type: string + name: + type: string + status: + enum: + - pass + - fail + type: string + version: + type: string + required: + - name + - status + type: object + HeatmapViewProperties: + properties: + binSize: + type: number + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + timeFormat: + type: string + type: + enum: + - heatmap + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yColumn: + type: string + yDomain: + items: + type: number + maxItems: 2 + type: array + yPrefix: + type: string + ySuffix: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - yColumn + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + - binSize + type: object + HistogramViewProperties: + properties: + binCount: + type: integer + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + fillColumns: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + type: + enum: + - histogram + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + format: float + type: number + type: array + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - fillColumns + - xDomain + - xAxisLabel + - position + - binCount + type: object + Identifier: + description: A 
valid Flux identifier + properties: + name: + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + ImportDeclaration: + description: Declares a package import + properties: + as: + $ref: '#/components/schemas/Identifier' + path: + $ref: '#/components/schemas/StringLiteral' + type: + $ref: '#/components/schemas/NodeType' + type: object + IndexExpression: + description: Represents indexing into an array + properties: + array: + $ref: '#/components/schemas/Expression' + index: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + IntegerLiteral: + description: Represents integer numbers + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + IsOnboarding: + properties: + allowed: + description: >- + True means that the influxdb instance has NOT had initial setup; + false means that the database has been setup. + type: boolean + type: object + Label: + properties: + id: + readOnly: true + type: string + name: + type: string + orgID: + readOnly: true + type: string + properties: + additionalProperties: + type: string + description: >- + Key/Value pairs associated with this label. Keys can be removed by + sending an update with an empty value. + example: + color: ffb3b3 + description: this is a description + type: object + type: object + LabelCreateRequest: + properties: + name: + type: string + orgID: + type: string + properties: + additionalProperties: + type: string + description: >- + Key/Value pairs associated with this label. Keys can be removed by + sending an update with an empty value. + example: + color: ffb3b3 + description: this is a description + type: object + required: + - orgID + - name + type: object + LabelMapping: + properties: + labelID: + type: string + type: object + LabelResponse: + properties: + label: + $ref: '#/components/schemas/Label' + links: + $ref: '#/components/schemas/Links' + type: object + LabelUpdate: + properties: + name: + type: string + properties: + additionalProperties: + type: string + description: >- + Key/Value pairs associated with this label. Keys can be removed by + sending an update with an empty value. + example: + color: ffb3b3 + description: this is a description + type: object + type: object + Labels: + items: + $ref: '#/components/schemas/Label' + type: array + LabelsResponse: + properties: + labels: + $ref: '#/components/schemas/Labels' + links: + $ref: '#/components/schemas/Links' + type: object + LanguageRequest: + description: Flux query to be analyzed. + properties: + query: + description: Flux query script to be analyzed + type: string + required: + - query + type: object + LatLonColumn: + description: Object type for key and column definitions + properties: + column: + description: Column to look up Lat/Lon + type: string + key: + description: Key to determine whether the column is tag/field + type: string + required: + - key + - column + type: object + LatLonColumns: + description: Object type to define lat/lon columns + properties: + lat: + $ref: '#/components/schemas/LatLonColumn' + lon: + $ref: '#/components/schemas/LatLonColumn' + required: + - lat + - lon + type: object + LegacyAuthorizationPostRequest: + allOf: + - $ref: '#/components/schemas/AuthorizationUpdateRequest' + - properties: + orgID: + description: ID of org that authorization is scoped to. + type: string + permissions: + description: >- + List of permissions for an auth. An auth must have at least one + Permission. 
+ items: + $ref: '#/components/schemas/Permission' + minItems: 1 + type: array + token: + description: Token (name) of the authorization + type: string + userID: + description: ID of user that authorization is scoped to. + type: string + type: object + required: + - orgID + - permissions + LesserThreshold: + allOf: + - $ref: '#/components/schemas/ThresholdBase' + - properties: + type: + enum: + - lesser + type: string + value: + format: float + type: number + required: + - type + - value + type: object + LinePlusSingleStatProperties: + properties: + axes: + $ref: '#/components/schemas/Axes' + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shadeBelow: + type: boolean + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + suffix: + type: string + timeFormat: + type: string + type: + enum: + - line-plus-single-stat + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + - prefix + - suffix + - decimalPlaces + - position + type: object + LineProtocolError: + properties: + code: + description: Code is the machine-readable error code. + enum: + - internal error + - not found + - conflict + - invalid + - empty value + - unavailable + readOnly: true + type: string + err: + description: >- + Stack of errors that occurred during processing of the request. + Useful for debugging. + readOnly: true + type: string + line: + description: First line in the request body that contains malformed data. + format: int32 + readOnly: true + type: integer + message: + description: Human-readable message. + readOnly: true + type: string + op: + description: >- + Describes the logical code operation when the error occurred. Useful + for debugging. + readOnly: true + type: string + required: + - code + LineProtocolLengthError: + properties: + code: + description: Code is the machine-readable error code. + enum: + - invalid + readOnly: true + type: string + message: + description: Human-readable message. + readOnly: true + type: string + required: + - code + - message + Link: + description: URI of resource. + format: uri + readOnly: true + type: string + Links: + properties: + next: + $ref: '#/components/schemas/Link' + prev: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + required: + - self + type: object + LogEvent: + properties: + message: + description: A description of the event that occurred. 
+ example: Halt and catch fire + readOnly: true + type: string + runID: + description: the ID of the task that logged + readOnly: true + type: string + time: + description: Time event occurred, RFC3339Nano. + format: date-time + readOnly: true + type: string + type: object + LogicalExpression: + description: >- + Represents the rule conditions that collectively evaluate to either true + or false + properties: + left: + $ref: '#/components/schemas/Expression' + operator: + type: string + right: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Logs: + properties: + events: + items: + $ref: '#/components/schemas/LogEvent' + readOnly: true + type: array + type: object + MapVariableProperties: + properties: + type: + enum: + - map + type: string + values: + additionalProperties: + type: string + type: object + MarkdownViewProperties: + properties: + note: + type: string + shape: + enum: + - chronograf-v2 + type: string + type: + enum: + - markdown + type: string + required: + - type + - shape + - note + type: object + MemberAssignment: + description: Object property assignment + properties: + init: + $ref: '#/components/schemas/Expression' + member: + $ref: '#/components/schemas/MemberExpression' + type: + $ref: '#/components/schemas/NodeType' + type: object + MemberExpression: + description: Represents accessing a property of an object + properties: + object: + $ref: '#/components/schemas/Expression' + property: + $ref: '#/components/schemas/PropertyKey' + type: + $ref: '#/components/schemas/NodeType' + type: object + MetadataBackup: + properties: + buckets: + $ref: '#/components/schemas/BucketMetadataManifests' + kv: + format: binary + type: string + sql: + format: binary + type: string + required: + - kv + - sql + - buckets + type: object + MosaicViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + fillColumns: + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + timeFormat: + type: string + type: + enum: + - mosaic + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yDomain: + items: + type: number + maxItems: 2 + type: array + yLabelColumnSeparator: + type: string + yLabelColumns: + items: + type: string + type: array + yPrefix: + type: string + ySeriesColumns: + items: + type: string + type: array + ySuffix: + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - ySeriesColumns + - fillColumns + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + type: object + Node: + oneOf: + - $ref: '#/components/schemas/Expression' + - $ref: 
'#/components/schemas/Block'
+    NodeType:
+      description: Type of AST node
+      type: string
+    NotificationEndpoint:
+      allOf:
+        - $ref: '#/components/schemas/NotificationEndpointDiscriminator'
+    NotificationEndpointBase:
+      properties:
+        createdAt:
+          format: date-time
+          readOnly: true
+          type: string
+        description:
+          description: An optional description of the notification endpoint.
+          type: string
+        id:
+          type: string
+        labels:
+          $ref: '#/components/schemas/Labels'
+        links:
+          example:
+            labels: /api/v2/notificationEndpoints/1/labels
+            members: /api/v2/notificationEndpoints/1/members
+            owners: /api/v2/notificationEndpoints/1/owners
+            self: /api/v2/notificationEndpoints/1
+          properties:
+            labels:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve labels for this endpoint.
+            members:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve members for this endpoint.
+            owners:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve owners for this endpoint.
+            self:
+              $ref: '#/components/schemas/Link'
+              description: URL for this endpoint.
+          readOnly: true
+          type: object
+        name:
+          type: string
+        orgID:
+          type: string
+        status:
+          default: active
+          description: The status of the endpoint.
+          enum:
+            - active
+            - inactive
+          type: string
+        type:
+          $ref: '#/components/schemas/NotificationEndpointType'
+        updatedAt:
+          format: date-time
+          readOnly: true
+          type: string
+        userID:
+          type: string
+      required:
+        - type
+        - name
+      type: object
+    NotificationEndpointDiscriminator:
+      discriminator:
+        mapping:
+          http: '#/components/schemas/HTTPNotificationEndpoint'
+          pagerduty: '#/components/schemas/PagerDutyNotificationEndpoint'
+          slack: '#/components/schemas/SlackNotificationEndpoint'
+          telegram: '#/components/schemas/TelegramNotificationEndpoint'
+        propertyName: type
+      oneOf:
+        - $ref: '#/components/schemas/SlackNotificationEndpoint'
+        - $ref: '#/components/schemas/PagerDutyNotificationEndpoint'
+        - $ref: '#/components/schemas/HTTPNotificationEndpoint'
+        - $ref: '#/components/schemas/TelegramNotificationEndpoint'
+    NotificationEndpointType:
+      enum:
+        - slack
+        - pagerduty
+        - http
+        - telegram
+      type: string
+    NotificationEndpointUpdate:
+      properties:
+        description:
+          type: string
+        name:
+          type: string
+        status:
+          enum:
+            - active
+            - inactive
+          type: string
+      type: object
+    NotificationEndpoints:
+      properties:
+        links:
+          $ref: '#/components/schemas/Links'
+        notificationEndpoints:
+          items:
+            $ref: '#/components/schemas/NotificationEndpoint'
+          type: array
+    NotificationRule:
+      allOf:
+        - $ref: '#/components/schemas/NotificationRuleDiscriminator'
+    NotificationRuleBase:
+      properties:
+        createdAt:
+          format: date-time
+          readOnly: true
+          type: string
+        description:
+          description: An optional description of the notification rule.
+          type: string
+        endpointID:
+          type: string
+        every:
+          description: The notification repetition interval.
+          type: string
+        id:
+          readOnly: true
+          type: string
+        labels:
+          $ref: '#/components/schemas/Labels'
+        lastRunError:
+          readOnly: true
+          type: string
+        lastRunStatus:
+          enum:
+            - failed
+            - success
+            - canceled
+          readOnly: true
+          type: string
+        latestCompleted:
+          description: >-
+            Timestamp ([RFC3339 date/time
+            format](https://datatracker.ietf.org/doc/html/rfc3339)) of the
+            latest scheduled and completed run.
+          format: date-time
+          readOnly: true
+          type: string
+        limit:
+          description: >-
+            Don't notify me more than <limit> times every
+            <limitEvery> seconds. If set, limitEvery cannot be empty.
+          type: integer
+        limitEvery:
+          description: >-
+            Don't notify me more than <limit> times every
+            <limitEvery> seconds. If set, limit cannot be empty.
+          type: integer
+        links:
+          example:
+            labels: /api/v2/notificationRules/1/labels
+            members: /api/v2/notificationRules/1/members
+            owners: /api/v2/notificationRules/1/owners
+            query: /api/v2/notificationRules/1/query
+            self: /api/v2/notificationRules/1
+          properties:
+            labels:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve labels for this notification rule.
+            members:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve members for this notification rule.
+            owners:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve owners for this notification rule.
+            query:
+              $ref: '#/components/schemas/Link'
+              description: URL to retrieve flux script for this notification rule.
+            self:
+              $ref: '#/components/schemas/Link'
+              description: URL for this notification rule.
+          readOnly: true
+          type: object
+        name:
+          description: Human-readable name describing the notification rule.
+          type: string
+        offset:
+          description: Duration to delay after the schedule, before executing check.
+          type: string
+        orgID:
+          description: The ID of the organization that owns this notification rule.
+          type: string
+        ownerID:
+          description: The ID of the creator used to create this notification rule.
+          readOnly: true
+          type: string
+        runbookLink:
+          type: string
+        sleepUntil:
+          type: string
+        status:
+          $ref: '#/components/schemas/TaskStatusType'
+        statusRules:
+          description: List of status rules the notification rule attempts to match.
+          items:
+            $ref: '#/components/schemas/StatusRule'
+          minItems: 1
+          type: array
+        tagRules:
+          description: List of tag rules the notification rule attempts to match.
+          items:
+            $ref: '#/components/schemas/TagRule'
+          type: array
+        taskID:
+          description: The ID of the task associated with this notification rule.
+ type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - orgID + - status + - name + - statusRules + - endpointID + type: object + NotificationRuleDiscriminator: + discriminator: + mapping: + http: '#/components/schemas/HTTPNotificationRule' + pagerduty: '#/components/schemas/PagerDutyNotificationRule' + slack: '#/components/schemas/SlackNotificationRule' + smtp: '#/components/schemas/SMTPNotificationRule' + telegram: '#/components/schemas/TelegramNotificationRule' + propertyName: type + oneOf: + - $ref: '#/components/schemas/SlackNotificationRule' + - $ref: '#/components/schemas/SMTPNotificationRule' + - $ref: '#/components/schemas/PagerDutyNotificationRule' + - $ref: '#/components/schemas/HTTPNotificationRule' + - $ref: '#/components/schemas/TelegramNotificationRule' + NotificationRuleUpdate: + properties: + description: + type: string + name: + type: string + status: + enum: + - active + - inactive + type: string + type: object + NotificationRules: + properties: + links: + $ref: '#/components/schemas/Links' + notificationRules: + items: + $ref: '#/components/schemas/NotificationRule' + type: array + ObjectExpression: + description: Allows the declaration of an anonymous object within a declaration + properties: + properties: + description: Object properties + items: + $ref: '#/components/schemas/Property' + type: array + type: + $ref: '#/components/schemas/NodeType' + type: object + OnboardingRequest: + properties: + bucket: + type: string + org: + type: string + password: + type: string + retentionPeriodHrs: + deprecated: true + description: > + Retention period *in nanoseconds* for the new bucket. This key's + name has been misleading since OSS 2.0 GA, please transition to use + `retentionPeriodSeconds` + type: integer + retentionPeriodSeconds: + format: int64 + type: integer + token: + description: > + Authentication token to set on the initial user. If not specified, + the server will generate a token. 
+ type: string + username: + type: string + required: + - username + - org + - bucket + type: object + OnboardingResponse: + properties: + auth: + $ref: '#/components/schemas/Authorization' + bucket: + $ref: '#/components/schemas/Bucket' + org: + $ref: '#/components/schemas/Organization' + user: + $ref: '#/components/schemas/UserResponse' + type: object + OptionStatement: + description: A single variable declaration + properties: + assignment: + oneOf: + - $ref: '#/components/schemas/VariableAssignment' + - $ref: '#/components/schemas/MemberAssignment' + type: + $ref: '#/components/schemas/NodeType' + type: object + Organization: + properties: + createdAt: + format: date-time + readOnly: true + type: string + description: + type: string + id: + readOnly: true + type: string + links: + example: + buckets: /api/v2/buckets?org=myorg + dashboards: /api/v2/dashboards?org=myorg + labels: /api/v2/orgs/1/labels + members: /api/v2/orgs/1/members + owners: /api/v2/orgs/1/owners + secrets: /api/v2/orgs/1/secrets + self: /api/v2/orgs/1 + tasks: /api/v2/tasks?org=myorg + properties: + buckets: + $ref: '#/components/schemas/Link' + dashboards: + $ref: '#/components/schemas/Link' + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + secrets: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + tasks: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + name: + type: string + status: + default: active + description: If inactive the organization is inactive. + enum: + - active + - inactive + type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - name + Organizations: + properties: + links: + $ref: '#/components/schemas/Links' + orgs: + items: + $ref: '#/components/schemas/Organization' + type: array + type: object + Package: + description: Represents a complete package source tree. + properties: + files: + description: Package files + items: + $ref: '#/components/schemas/File' + type: array + package: + description: Package name + type: string + path: + description: Package import path + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + PackageClause: + description: Defines a package identifier + properties: + name: + $ref: '#/components/schemas/Identifier' + type: + $ref: '#/components/schemas/NodeType' + type: object + PagerDutyNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + clientURL: + type: string + routingKey: + type: string + required: + - routingKey + type: object + type: object + PagerDutyNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/PagerDutyNotificationRuleBase' + PagerDutyNotificationRuleBase: + properties: + messageTemplate: + type: string + type: + enum: + - pagerduty + type: string + required: + - type + - messageTemplate + type: object + ParenExpression: + description: Represents an expression wrapped in parenthesis + properties: + expression: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + PasswordResetBody: + properties: + password: + type: string + required: + - password + PatchBucketRequest: + description: Updates to an existing bucket resource. 
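+ # Illustrative example (not part of the generated contract; placeholder
+ # values): a request body for `PATCH /api/v2/buckets/{bucketID}` that
+ # conforms to the PatchBucketRequest schema below, renaming the bucket and
+ # setting a 30-day (2592000-second) expiry rule:
+ #   {
+ #     "name": "air_sensor_data",
+ #     "description": "Air sensor readings, 30-day retention",
+ #     "retentionRules": [ { "type": "expire", "everySeconds": 2592000 } ]
+ #   }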
+ properties: + description: + type: string + name: + type: string + retentionRules: + $ref: '#/components/schemas/PatchRetentionRules' + type: object + PatchOrganizationRequest: + properties: + description: + description: New description to set on the organization + type: string + name: + description: New name to set on the organization + type: string + type: object + PatchRetentionRule: + description: Updates to a rule to expire or retain data. + properties: + everySeconds: + description: >- + Duration in seconds for how long data will be kept in the database. + 0 means infinite. + example: 86400 + format: int64 + minimum: 0 + type: integer + shardGroupDurationSeconds: + description: Shard duration measured in seconds. + format: int64 + type: integer + type: + default: expire + enum: + - expire + type: string + required: + - type + type: object + PatchRetentionRules: + description: Updates to rules to expire or retain data. No rules means no updates. + items: + $ref: '#/components/schemas/PatchRetentionRule' + type: array + Permission: + properties: + action: + enum: + - read + - write + type: string + resource: + $ref: '#/components/schemas/Resource' + required: + - action + - resource + PipeExpression: + description: Call expression with pipe argument + properties: + argument: + $ref: '#/components/schemas/Expression' + call: + $ref: '#/components/schemas/CallExpression' + type: + $ref: '#/components/schemas/NodeType' + type: object + PipeLiteral: + description: >- + Represents a specialized literal value, indicating the left hand value + of a pipe expression + properties: + type: + $ref: '#/components/schemas/NodeType' + type: object + PostBucketRequest: + properties: + description: + type: string + name: + type: string + orgID: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + rp: + type: string + schemaType: + $ref: '#/components/schemas/SchemaType' + default: implicit + required: + - orgID + - name + - retentionRules + PostCheck: + allOf: + - $ref: '#/components/schemas/CheckDiscriminator' + PostNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointDiscriminator' + PostNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleDiscriminator' + PostOrganizationRequest: + properties: + description: + type: string + name: + type: string + required: + - name + type: object + Property: + description: The value associated with a key + properties: + key: + $ref: '#/components/schemas/PropertyKey' + type: + $ref: '#/components/schemas/NodeType' + value: + $ref: '#/components/schemas/Expression' + type: object + PropertyKey: + oneOf: + - $ref: '#/components/schemas/Identifier' + - $ref: '#/components/schemas/StringLiteral' + Query: + description: Query InfluxDB using the Flux language + properties: + dialect: + $ref: '#/components/schemas/Dialect' + extern: + $ref: '#/components/schemas/File' + now: + description: >- + Specifies the time that should be reported as "now" in the query. + Default is the server's now time. + format: date-time + type: string + params: + additionalProperties: true + description: > + Enumeration of key/value pairs that represent parameters to be + injected into the query (you can specify either this field or + extern, but not both) + type: object + query: + description: Query script to execute. + type: string + type: + description: The type of query. Must be "flux".
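+ # Illustrative example (not part of the generated contract; placeholder
+ # bucket name): a request body for `POST /api/v2/query` that conforms to
+ # this Query schema, using `params` to inject a value referenced in the
+ # script as `params.start`:
+ #   {
+ #     "query": "from(bucket: \"example-bucket\") |> range(start: duration(v: params.start))",
+ #     "params": { "start": "-5m" },
+ #     "type": "flux"
+ #   }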
+ enum: + - flux + type: string + required: + - query + type: object + QueryEditMode: + enum: + - builder + - advanced + type: string + QueryVariableProperties: + properties: + type: + enum: + - query + type: string + values: + properties: + language: + type: string + query: + type: string + type: object + RangeThreshold: + allOf: + - $ref: '#/components/schemas/ThresholdBase' + - properties: + max: + format: float + type: number + min: + format: float + type: number + type: + enum: + - range + type: string + within: + type: boolean + required: + - type + - min + - max + - within + type: object + Ready: + properties: + started: + example: '2019-03-13T10:09:33.891196-04:00' + format: date-time + type: string + status: + enum: + - ready + type: string + up: + example: 14m45.911966424s + type: string + type: object + RegexpLiteral: + description: >- + Expressions begin and end with `/` and are regular expressions with + syntax accepted by RE2 + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + RemoteConnection: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + id: + type: string + name: + type: string + orgID: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + required: + - id + - name + - orgID + - remoteURL + - remoteOrgID + - allowInsecureTLS + type: object + RemoteConnectionCreationRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + orgID: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + required: + - name + - orgID + - remoteURL + - remoteAPIToken + - remoteOrgID + - allowInsecureTLS + type: object + RemoteConnectionUpdateRequest: + properties: + allowInsecureTLS: + default: false + type: boolean + description: + type: string + name: + type: string + remoteAPIToken: + type: string + remoteOrgID: + type: string + remoteURL: + format: uri + type: string + type: object + RemoteConnections: + properties: + remotes: + items: + $ref: '#/components/schemas/RemoteConnection' + type: array + type: object + RenamableField: + description: Describes a field that can be renamed and made visible or invisible. + properties: + displayName: + description: The name that a field is renamed to by the user. + type: string + internalName: + description: The calculated name of a field. + readOnly: true + type: string + visible: + description: Indicates whether this field should be visible on the table. 
+ type: boolean + type: object + Replication: + properties: + currentQueueSizeBytes: + format: int64 + type: integer + description: + type: string + dropNonRetryableData: + type: boolean + id: + type: string + latestErrorMessage: + type: string + latestResponseCode: + type: integer + localBucketID: + type: string + maxQueueSizeBytes: + format: int64 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteID: + type: string + required: + - id + - name + - remoteID + - orgID + - localBucketID + - remoteBucketID + - maxQueueSizeBytes + - currentQueueSizeBytes + type: object + ReplicationCreationRequest: + properties: + description: + type: string + dropNonRetryableData: + default: false + type: boolean + localBucketID: + type: string + maxQueueSizeBytes: + default: 67108860 + format: int64 + minimum: 33554430 + type: integer + name: + type: string + orgID: + type: string + remoteBucketID: + type: string + remoteID: + type: string + required: + - name + - orgID + - remoteID + - localBucketID + - remoteBucketID + - maxQueueSizeBytes + type: object + ReplicationUpdateRequest: + properties: + description: + type: string + dropNonRetryableData: + type: boolean + maxQueueSizeBytes: + format: int64 + minimum: 33554430 + type: integer + name: + type: string + remoteBucketID: + type: string + remoteID: + type: string + type: object + Replications: + properties: + replications: + items: + $ref: '#/components/schemas/Replication' + type: array + type: object + Resource: + properties: + id: + description: >- + If ID is set, that is a permission for a specific resource. If it is + not set, it is a permission for all resources of that resource type. + type: string + name: + description: Optional name of the resource if the resource has a name field. + type: string + org: + description: Optional name of the organization with orgID. + type: string + orgID: + description: >- + If orgID is set, that is a permission for all resources owned by that + org. If it is not set, it is a permission for all resources of that + resource type.
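+ # Illustrative example (not part of the generated contract; placeholder
+ # org ID): a Permission object built from this Resource schema, as used in
+ # an authorization's `permissions` list. Omitting `id` grants the action on
+ # all buckets in the organization:
+ #   { "action": "read",
+ #     "resource": { "type": "buckets", "orgID": "INFLUX_ORG_ID" } }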
+ type: string + type: + enum: + - authorizations + - buckets + - dashboards + - orgs + - sources + - tasks + - telegrafs + - users + - variables + - scrapers + - secrets + - labels + - views + - documents + - notificationRules + - notificationEndpoints + - checks + - dbrp + - notebooks + - annotations + - remotes + - replications + type: string + required: + - type + type: object + ResourceMember: + allOf: + - $ref: '#/components/schemas/UserResponse' + - properties: + role: + default: member + enum: + - member + type: string + type: object + ResourceMembers: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/ResourceMember' + type: array + type: object + ResourceOwner: + allOf: + - $ref: '#/components/schemas/UserResponse' + - properties: + role: + default: owner + enum: + - owner + type: string + type: object + ResourceOwners: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/ResourceOwner' + type: array + type: object + RestoredBucketMappings: + properties: + id: + description: New ID of the restored bucket + type: string + name: + type: string + shardMappings: + $ref: '#/components/schemas/BucketShardMappings' + required: + - id + - name + - shardMappings + type: object + RetentionPolicyManifest: + properties: + duration: + format: int64 + type: integer + name: + type: string + replicaN: + type: integer + shardGroupDuration: + format: int64 + type: integer + shardGroups: + $ref: '#/components/schemas/ShardGroupManifests' + subscriptions: + $ref: '#/components/schemas/SubscriptionManifests' + required: + - name + - replicaN + - duration + - shardGroupDuration + - shardGroups + - subscriptions + type: object + RetentionPolicyManifests: + items: + $ref: '#/components/schemas/RetentionPolicyManifest' + type: array + RetentionRule: + properties: + everySeconds: + description: >- + Duration in seconds for how long data will be kept in the database. + 0 means infinite. + example: 86400 + format: int64 + minimum: 0 + type: integer + shardGroupDurationSeconds: + description: Shard duration measured in seconds. + format: int64 + type: integer + type: + default: expire + enum: + - expire + type: string + required: + - type + - everySeconds + type: object + RetentionRules: + description: Rules to expire or retain data. No rules means data never expires. 
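+ # Illustrative example (not part of the generated contract): a
+ # retentionRules array as sent in a PostBucketRequest, expiring data after
+ # one day. An empty array means data never expires:
+ #   "retentionRules": [ { "type": "expire", "everySeconds": 86400 } ]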
+ items: + $ref: '#/components/schemas/RetentionRule' + type: array + ReturnStatement: + description: Defines an expression to return + properties: + argument: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + Routes: + properties: + authorizations: + format: uri + type: string + buckets: + format: uri + type: string + dashboards: + format: uri + type: string + external: + properties: + statusFeed: + format: uri + type: string + type: object + flags: + format: uri + type: string + me: + format: uri + type: string + orgs: + format: uri + type: string + query: + properties: + analyze: + format: uri + type: string + ast: + format: uri + type: string + self: + format: uri + type: string + suggestions: + format: uri + type: string + type: object + setup: + format: uri + type: string + signin: + format: uri + type: string + signout: + format: uri + type: string + sources: + format: uri + type: string + system: + properties: + debug: + format: uri + type: string + health: + format: uri + type: string + metrics: + format: uri + type: string + type: object + tasks: + format: uri + type: string + telegrafs: + format: uri + type: string + users: + format: uri + type: string + variables: + format: uri + type: string + write: + format: uri + type: string + RuleStatusLevel: + description: The state to record if check matches a criteria. + enum: + - UNKNOWN + - OK + - INFO + - CRIT + - WARN + - ANY + type: string + Run: + properties: + finishedAt: + description: Time run finished executing, RFC3339Nano. + format: date-time + readOnly: true + type: string + id: + readOnly: true + type: string + links: + example: + retry: /api/v2/tasks/1/runs/1/retry + self: /api/v2/tasks/1/runs/1 + task: /api/v2/tasks/1 + properties: + retry: + format: uri + type: string + self: + format: uri + type: string + task: + format: uri + type: string + readOnly: true + type: object + log: + description: An array of logs associated with the run. + items: + $ref: '#/components/schemas/LogEvent' + readOnly: true + type: array + requestedAt: + description: Time run was manually requested, RFC3339Nano. + format: date-time + readOnly: true + type: string + scheduledFor: + description: Time used for run's "now" option, RFC3339. + format: date-time + type: string + startedAt: + description: Time run started executing, RFC3339Nano. + format: date-time + readOnly: true + type: string + status: + enum: + - scheduled + - started + - failed + - success + - canceled + readOnly: true + type: string + taskID: + readOnly: true + type: string + RunManually: + properties: + scheduledFor: + description: >- + Time used for run's "now" option, RFC3339. Default is the server's + now time. 
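+ # Illustrative example (not part of the generated contract; placeholder
+ # timestamp): a request body for `POST /api/v2/tasks/{taskID}/runs` that
+ # conforms to this RunManually schema, running the task as if "now" were
+ # the given RFC3339 time:
+ #   { "scheduledFor": "2022-04-01T00:00:00Z" }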
+ format: date-time + nullable: true + type: string + Runs: + properties: + links: + $ref: '#/components/schemas/Links' + runs: + items: + $ref: '#/components/schemas/Run' + type: array + type: object + SMTPNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/SMTPNotificationRuleBase' + SMTPNotificationRuleBase: + properties: + bodyTemplate: + type: string + subjectTemplate: + type: string + to: + type: string + type: + enum: + - smtp + type: string + required: + - type + - subjectTemplate + - to + type: object + ScatterViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + type: string + type: array + fillColumns: + items: + type: string + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + symbolColumns: + items: + type: string + type: array + timeFormat: + type: string + type: + enum: + - scatter + type: string + xAxisLabel: + type: string + xColumn: + type: string + xDomain: + items: + type: number + maxItems: 2 + type: array + xPrefix: + type: string + xSuffix: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yAxisLabel: + type: string + yColumn: + type: string + yDomain: + items: + type: number + maxItems: 2 + type: array + yPrefix: + type: string + ySuffix: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - xColumn + - yColumn + - fillColumns + - symbolColumns + - xDomain + - yDomain + - xAxisLabel + - yAxisLabel + - xPrefix + - yPrefix + - xSuffix + - ySuffix + type: object + SchemaType: + enum: + - implicit + - explicit + type: string + ScraperTargetRequest: + properties: + allowInsecure: + default: false + description: Skip TLS verification on endpoint. + type: boolean + bucketID: + description: The ID of the bucket to write to. + type: string + name: + description: The name of the scraper target. + type: string + orgID: + description: The organization ID. + type: string + type: + description: The type of the metrics to be parsed. + enum: + - prometheus + type: string + url: + description: The URL of the metrics endpoint. + example: http://localhost:9090/metrics + type: string + type: object + ScraperTargetResponse: + allOf: + - $ref: '#/components/schemas/ScraperTargetRequest' + - properties: + bucket: + description: The bucket name. 
+ type: string + id: + readOnly: true + type: string + links: + example: + bucket: /api/v2/buckets/1 + members: /api/v2/scrapers/1/members + organization: /api/v2/orgs/1 + owners: /api/v2/scrapers/1/owners + self: /api/v2/scrapers/1 + properties: + bucket: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + organization: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + org: + description: The name of the organization. + type: string + type: object + type: object + ScraperTargetResponses: + properties: + configurations: + items: + $ref: '#/components/schemas/ScraperTargetResponse' + type: array + type: object + SecretKeys: + properties: + secrets: + items: + type: string + type: array + type: object + SecretKeysResponse: + allOf: + - $ref: '#/components/schemas/SecretKeys' + - properties: + links: + properties: + org: + type: string + self: + type: string + readOnly: true + type: object + type: object + Secrets: + additionalProperties: + type: string + example: + apikey: abc123xyz + ShardGroupManifest: + properties: + deletedAt: + format: date-time + type: string + endTime: + format: date-time + type: string + id: + format: int64 + type: integer + shards: + $ref: '#/components/schemas/ShardManifests' + startTime: + format: date-time + type: string + truncatedAt: + format: date-time + type: string + required: + - id + - startTime + - endTime + - shards + type: object + ShardGroupManifests: + items: + $ref: '#/components/schemas/ShardGroupManifest' + type: array + ShardManifest: + properties: + id: + format: int64 + type: integer + shardOwners: + $ref: '#/components/schemas/ShardOwners' + required: + - id + - shardOwners + type: object + ShardManifests: + items: + $ref: '#/components/schemas/ShardManifest' + type: array + ShardOwner: + properties: + nodeID: + description: ID of the node that owns a shard. 
+ format: int64 + type: integer + required: + - nodeID + type: object + ShardOwners: + items: + $ref: '#/components/schemas/ShardOwner' + type: array + SimpleTableViewProperties: + properties: + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showAll: + type: boolean + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + type: + enum: + - simple-table + type: string + required: + - type + - showAll + - queries + - shape + - note + - showNoteWhenEmpty + type: object + SingleStatViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + note: + type: string + prefix: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + suffix: + type: string + tickPrefix: + type: string + tickSuffix: + type: string + type: + enum: + - single-stat + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - prefix + - tickPrefix + - suffix + - tickSuffix + - decimalPlaces + type: object + SlackNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + token: + description: Specifies the API token string. Specify either `URL` or `Token`. + type: string + url: + description: >- + Specifies the URL of the Slack endpoint. Specify either `URL` or + `Token`. 
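+ # Illustrative example (not part of the generated contract; placeholder
+ # webhook URL and org ID): a request body for
+ # `POST /api/v2/notificationEndpoints` creating a Slack endpoint. Per the
+ # property descriptions above, set either `url` or `token`:
+ #   { "type": "slack", "name": "alerts-slack", "orgID": "INFLUX_ORG_ID",
+ #     "status": "active",
+ #     "url": "https://hooks.slack.com/services/EXAMPLE" }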
+ type: string + type: object + type: object + SlackNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/SlackNotificationRuleBase' + SlackNotificationRuleBase: + properties: + channel: + type: string + messageTemplate: + type: string + type: + enum: + - slack + type: string + required: + - type + - messageTemplate + type: object + Source: + properties: + default: + type: boolean + defaultRP: + type: string + id: + type: string + insecureSkipVerify: + type: boolean + languages: + items: + enum: + - flux + - influxql + type: string + readOnly: true + type: array + links: + properties: + buckets: + type: string + health: + type: string + query: + type: string + self: + type: string + type: object + metaUrl: + format: uri + type: string + name: + type: string + orgID: + type: string + password: + type: string + sharedSecret: + type: string + telegraf: + type: string + token: + type: string + type: + enum: + - v1 + - v2 + - self + type: string + url: + format: uri + type: string + username: + type: string + type: object + Sources: + properties: + links: + properties: + self: + format: uri + type: string + type: object + sources: + items: + $ref: '#/components/schemas/Source' + type: array + type: object + Stack: + properties: + createdAt: + format: date-time + readOnly: true + type: string + events: + items: + properties: + description: + type: string + eventType: + type: string + name: + type: string + resources: + items: + properties: + apiVersion: + type: string + associations: + items: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + metaName: + type: string + type: object + type: array + kind: + $ref: '#/components/schemas/TemplateKind' + links: + properties: + self: + type: string + type: object + resourceID: + type: string + templateMetaName: + type: string + type: object + type: array + sources: + items: + type: string + type: array + updatedAt: + format: date-time + readOnly: true + type: string + urls: + items: + type: string + type: array + type: object + type: array + id: + type: string + orgID: + type: string + type: object + Statement: + oneOf: + - $ref: '#/components/schemas/BadStatement' + - $ref: '#/components/schemas/VariableAssignment' + - $ref: '#/components/schemas/MemberAssignment' + - $ref: '#/components/schemas/ExpressionStatement' + - $ref: '#/components/schemas/ReturnStatement' + - $ref: '#/components/schemas/OptionStatement' + - $ref: '#/components/schemas/BuiltinStatement' + - $ref: '#/components/schemas/TestStatement' + StaticLegend: + description: StaticLegend represents the options specific to the static legend + properties: + colorizeRows: + type: boolean + heightRatio: + format: float + type: number + opacity: + format: float + type: number + orientationThreshold: + type: integer + show: + type: boolean + valueAxis: + type: string + widthRatio: + format: float + type: number + type: object + StatusRule: + properties: + count: + type: integer + currentLevel: + $ref: '#/components/schemas/RuleStatusLevel' + period: + type: string + previousLevel: + $ref: '#/components/schemas/RuleStatusLevel' + type: object + StringLiteral: + description: Expressions begin and end with double quote marks + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + SubscriptionManifest: + properties: + destinations: + items: + type: string + type: array + mode: + type: string + name: + type: string + required: + - name + - mode + - destinations + type: object + 
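+ # Illustrative example (not part of the generated contract; placeholder
+ # IDs): a request body for `POST /api/v2/notificationRules` combining
+ # NotificationRuleBase with the SlackNotificationRuleBase schema above:
+ #   { "type": "slack", "name": "crit-alerts", "orgID": "INFLUX_ORG_ID",
+ #     "endpointID": "ENDPOINT_ID", "status": "active", "every": "10m",
+ #     "statusRules": [ { "currentLevel": "CRIT" } ],
+ #     "messageTemplate": "Check ${ r._check_name } is critical" }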
SubscriptionManifests: + items: + $ref: '#/components/schemas/SubscriptionManifest' + type: array + TableViewProperties: + properties: + colors: + description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + decimalPlaces: + $ref: '#/components/schemas/DecimalPlaces' + fieldOptions: + description: >- + fieldOptions represent the fields retrieved by the query with + customization options + items: + $ref: '#/components/schemas/RenamableField' + type: array + note: + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + tableOptions: + properties: + fixFirstColumn: + description: >- + fixFirstColumn indicates whether the first column of the table + should be locked + type: boolean + sortBy: + $ref: '#/components/schemas/RenamableField' + verticalTimeAxis: + description: >- + verticalTimeAxis describes the orientation of the table by + indicating whether the time axis will be displayed vertically + type: boolean + wrapping: + description: >- + Wrapping describes the text wrapping style to be used in table + views + enum: + - truncate + - wrap + - single-line + type: string + type: object + timeFormat: + description: >- + timeFormat describes the display format for time values according to + moment.js date formatting + type: string + type: + enum: + - table + type: string + required: + - type + - queries + - colors + - shape + - note + - showNoteWhenEmpty + - tableOptions + - fieldOptions + - timeFormat + - decimalPlaces + type: object + TagRule: + properties: + key: + type: string + operator: + enum: + - equal + - notequal + - equalregex + - notequalregex + type: string + value: + type: string + type: object + Task: + properties: + authorizationID: + description: >- + ID of the authorization used when the task communicates with the + query engine. + type: string + createdAt: + format: date-time + readOnly: true + type: string + cron: + description: >- + [Cron expression](https://en.wikipedia.org/wiki/Cron#Overview) that + defines the schedule on which the task runs. Cron scheduling is + based on system time. + type: string + description: + description: Description of the task. + type: string + every: + description: >- + Interval at which the task runs. `every` also determines when the + task first runs, depending on the specified time. + + Value is a [duration + literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals). + format: duration + type: string + flux: + description: Flux script to run for this task. + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + lastRunError: + readOnly: true + type: string + lastRunStatus: + enum: + - failed + - success + - canceled + readOnly: true + type: string + latestCompleted: + description: >- + Timestamp of the latest scheduled and completed run. + + Value is a timestamp in [RFC3339 date/time + format](https://docs.influxdata.com/flux/v0.x/data-types/basic/time/#time-syntax).
+ format: date-time + readOnly: true + type: string + links: + example: + labels: /api/v2/tasks/1/labels + logs: /api/v2/tasks/1/logs + members: /api/v2/tasks/1/members + owners: /api/v2/tasks/1/owners + runs: /api/v2/tasks/1/runs + self: /api/v2/tasks/1 + properties: + labels: + $ref: '#/components/schemas/Link' + logs: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + runs: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + name: + description: Name of the task. + type: string + offset: + description: >- + [Duration](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals) + to delay execution of the task after the scheduled time has elapsed. + `0` removes the offset. + + The value is a [duration + literal](https://docs.influxdata.com/flux/v0.x/spec/lexical-elements/#duration-literals). + format: duration + type: string + org: + description: Name of the organization that owns the task. + type: string + orgID: + description: ID of the organization that owns the task. + type: string + ownerID: + description: ID of the user who owns this Task. + type: string + status: + $ref: '#/components/schemas/TaskStatusType' + type: + description: Type of the task, useful for filtering a task list. + type: string + updatedAt: + format: date-time + readOnly: true + type: string + required: + - id + - name + - orgID + - flux + type: object + TaskCreateRequest: + properties: + description: + description: An optional description of the task. + type: string + flux: + description: The Flux script to run for this task. + type: string + org: + description: The name of the organization that owns this Task. + type: string + orgID: + description: The ID of the organization that owns this Task. + type: string + status: + $ref: '#/components/schemas/TaskStatusType' + required: + - flux + type: object + TaskStatusType: + enum: + - active + - inactive + type: string + TaskUpdateRequest: + properties: + cron: + description: Override the 'cron' option in the flux script. + type: string + description: + description: An optional description of the task. + type: string + every: + description: Override the 'every' option in the flux script. + type: string + flux: + description: The Flux script to run for this task. + type: string + name: + description: Override the 'name' option in the flux script. + type: string + offset: + description: Override the 'offset' option in the flux script. 
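+ # Illustrative example (not part of the generated contract): a request
+ # body for `PATCH /api/v2/tasks/{taskID}` conforming to this
+ # TaskUpdateRequest schema, deactivating the task and overriding its
+ # schedule:
+ #   { "status": "inactive", "every": "30m" }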
+ type: string + status: + $ref: '#/components/schemas/TaskStatusType' + type: object + Tasks: + properties: + links: + $ref: '#/components/schemas/Links' + readOnly: true + tasks: + items: + $ref: '#/components/schemas/Task' + type: array + type: object + Telegraf: + allOf: + - $ref: '#/components/schemas/TelegrafRequest' + - properties: + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + readOnly: true + links: + example: + labels: /api/v2/telegrafs/1/labels + members: /api/v2/telegrafs/1/members + owners: /api/v2/telegrafs/1/owners + self: /api/v2/telegrafs/1 + properties: + labels: + $ref: '#/components/schemas/Link' + members: + $ref: '#/components/schemas/Link' + owners: + $ref: '#/components/schemas/Link' + self: + $ref: '#/components/schemas/Link' + readOnly: true + type: object + type: object + type: object + TelegrafPlugin: + properties: + config: + type: string + description: + type: string + name: + type: string + type: + type: string + type: object + TelegrafPluginRequest: + properties: + config: + type: string + description: + type: string + metadata: + properties: + buckets: + items: + type: string + type: array + type: object + name: + type: string + orgID: + type: string + plugins: + items: + properties: + alias: + type: string + config: + type: string + description: + type: string + name: + type: string + type: + type: string + type: object + type: array + type: object + TelegrafPlugins: + properties: + os: + type: string + plugins: + items: + $ref: '#/components/schemas/TelegrafPlugin' + type: array + version: + type: string + type: object + TelegrafRequest: + properties: + config: + type: string + description: + type: string + metadata: + properties: + buckets: + items: + type: string + type: array + type: object + name: + type: string + orgID: + type: string + type: object + Telegrafs: + properties: + configurations: + items: + $ref: '#/components/schemas/Telegraf' + type: array + type: object + TelegramNotificationEndpoint: + allOf: + - $ref: '#/components/schemas/NotificationEndpointBase' + - properties: + channel: + description: >- + ID of the Telegram channel; a chat_id per + https://core.telegram.org/bots/api#sendmessage. + type: string + token: + description: >- + Specifies the Telegram bot token. See + https://core.telegram.org/bots#creating-a-new-bot. + type: string + required: + - token + - channel + type: object + type: object + TelegramNotificationRule: + allOf: + - $ref: '#/components/schemas/NotificationRuleBase' + - $ref: '#/components/schemas/TelegramNotificationRuleBase' + TelegramNotificationRuleBase: + properties: + disableWebPagePreview: + description: >- + Disables preview of web links in the sent messages when "true". + Defaults to "false". + type: boolean + messageTemplate: + description: The message template as a Flux interpolated string. + type: string + parseMode: + description: >- + Parse mode of the message text per + https://core.telegram.org/bots/api#formatting-options. Defaults to + "MarkdownV2". + enum: + - MarkdownV2 + - HTML + - Markdown + type: string + type: + description: >- + The discriminator between other types of notification rules is + "telegram".
+ enum: + - telegram + type: string + required: + - type + - messageTemplate + - channel + type: object + Template: + items: + properties: + apiVersion: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + meta: + properties: + name: + type: string + type: object + spec: + type: object + type: object + type: array + TemplateApply: + properties: + actions: + items: + oneOf: + - properties: + action: + enum: + - skipKind + type: string + properties: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + required: + - kind + type: object + type: object + - properties: + action: + enum: + - skipResource + type: string + properties: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + resourceTemplateName: + type: string + required: + - kind + - resourceTemplateName + type: object + type: object + type: array + dryRun: + type: boolean + envRefs: + additionalProperties: + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + type: object + orgID: + type: string + remotes: + items: + properties: + contentType: + type: string + url: + type: string + required: + - url + type: object + type: array + secrets: + additionalProperties: + type: string + type: object + stackID: + type: string + template: + properties: + contentType: + type: string + contents: + $ref: '#/components/schemas/Template' + sources: + items: + type: string + type: array + type: object + templates: + items: + properties: + contentType: + type: string + contents: + $ref: '#/components/schemas/Template' + sources: + items: + type: string + type: array + type: object + type: array + type: object + TemplateChart: + properties: + height: + type: integer + properties: + $ref: '#/components/schemas/ViewProperties' + width: + type: integer + xPos: + type: integer + yPos: + type: integer + type: object + TemplateEnvReferences: + items: + properties: + defaultValue: + description: >- + Default value that will be provided for the reference when no + value is provided + nullable: true + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + envRefKey: + description: >- + Key of the environment reference, as identified in the template + type: string + resourceField: + description: Field the environment reference corresponds to + type: string + value: + description: Value provided to fulfill the reference + nullable: true + oneOf: + - type: string + - type: integer + - type: number + - type: boolean + required: + - resourceField + - envRefKey + type: object + type: array + TemplateExportByID: + properties: + orgIDs: + items: + properties: + orgID: + type: string + resourceFilters: + properties: + byLabel: + items: + type: string + type: array + byResourceKind: + items: + $ref: '#/components/schemas/TemplateKind' + type: array + type: object + type: object + type: array + resources: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + description: >- + if defined with id, name is used for resource exported by id.
+ if defined independently, resources strictly matching name are + exported + type: string + required: + - id + - kind + type: object + type: array + stackID: + type: string + type: object + TemplateExportByName: + properties: + orgIDs: + items: + properties: + orgID: + type: string + resourceFilters: + properties: + byLabel: + items: + type: string + type: array + byResourceKind: + items: + $ref: '#/components/schemas/TemplateKind' + type: array + type: object + type: object + type: array + resources: + items: + properties: + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + required: + - name + - kind + type: object + type: array + stackID: + type: string + type: object + TemplateKind: + enum: + - Bucket + - Check + - CheckDeadman + - CheckThreshold + - Dashboard + - Label + - NotificationEndpoint + - NotificationEndpointHTTP + - NotificationEndpointPagerDuty + - NotificationEndpointSlack + - NotificationRule + - Task + - Telegraf + - Variable + type: string + TemplateSummary: + properties: + diff: + properties: + buckets: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + description: + type: string + name: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + type: object + old: + properties: + description: + type: string + name: + type: string + retentionRules: + $ref: '#/components/schemas/RetentionRules' + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + checks: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/CheckDiscriminator' + old: + $ref: '#/components/schemas/CheckDiscriminator' + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + dashboards: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + name: + type: string + type: object + old: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + labelMappings: + items: + properties: + labelID: + type: string + labelName: + type: string + labelTemplateMetaName: + type: string + resourceID: + type: string + resourceName: + type: string + resourceTemplateMetaName: + type: string + resourceType: + type: string + status: + type: string + type: object + type: array + labels: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + color: + type: string + description: + type: string + name: + type: string + type: object + old: + properties: + color: + type: string + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + notificationEndpoints: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/NotificationEndpointDiscriminator' + old: + $ref: '#/components/schemas/NotificationEndpointDiscriminator' + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + notificationRules: + items: + 
properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + description: + type: string + endpointID: + type: string + endpointName: + type: string + endpointType: + type: string + every: + type: string + messageTemplate: + type: string + name: + type: string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + type: object + old: + properties: + description: + type: string + endpointID: + type: string + endpointName: + type: string + endpointType: + type: string + every: + type: string + messageTemplate: + type: string + name: + type: string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + tasks: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + cron: + type: string + description: + type: string + every: + type: string + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + type: object + old: + properties: + cron: + type: string + description: + type: string + every: + type: string + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + telegrafConfigs: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + $ref: '#/components/schemas/TelegrafRequest' + old: + $ref: '#/components/schemas/TelegrafRequest' + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + variables: + items: + properties: + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + new: + properties: + args: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + name: + type: string + type: object + old: + properties: + args: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + name: + type: string + type: object + stateStatus: + type: string + templateMetaName: + type: string + type: object + type: array + type: object + errors: + items: + properties: + fields: + items: + type: string + type: array + indexes: + items: + type: integer + type: array + kind: + $ref: '#/components/schemas/TemplateKind' + reason: + type: string + type: object + type: array + sources: + items: + type: string + type: array + stackID: + type: string + summary: + properties: + buckets: + items: + properties: + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + type: string + orgID: + type: string + retentionPeriod: + type: integer + templateMetaName: + type: string + type: object + type: array + checks: + items: + allOf: + - 
$ref: '#/components/schemas/CheckDiscriminator' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + dashboards: + items: + properties: + charts: + items: + $ref: '#/components/schemas/TemplateChart' + type: array + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + type: string + orgID: + type: string + templateMetaName: + type: string + type: object + type: array + labelMappings: + items: + properties: + labelID: + type: string + labelName: + type: string + labelTemplateMetaName: + type: string + resourceID: + type: string + resourceName: + type: string + resourceTemplateMetaName: + type: string + resourceType: + type: string + status: + type: string + type: object + type: array + labels: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + missingEnvRefs: + items: + type: string + type: array + missingSecrets: + items: + type: string + type: array + notificationEndpoints: + items: + allOf: + - $ref: '#/components/schemas/NotificationEndpointDiscriminator' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + notificationRules: + items: + properties: + description: + type: string + endpointID: + type: string + endpointTemplateMetaName: + type: string + endpointType: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + every: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + messageTemplate: + type: string + name: + type: string + offset: + type: string + status: + type: string + statusRules: + items: + properties: + currentLevel: + type: string + previousLevel: + type: string + type: object + type: array + tagRules: + items: + properties: + key: + type: string + operator: + type: string + value: + type: string + type: object + type: array + templateMetaName: + type: string + type: object + type: array + tasks: + items: + properties: + cron: + type: string + description: + type: string + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + every: + type: string + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + offset: + type: string + query: + type: string + status: + type: string + templateMetaName: + type: string + type: object + type: array + telegrafConfigs: + items: + allOf: + - $ref: '#/components/schemas/TelegrafRequest' + - properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + templateMetaName: + type: string + type: object + type: array + variables: + items: + properties: + arguments: + $ref: '#/components/schemas/VariableProperties' + description: + type: string + 
envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + labelAssociations: + items: + $ref: '#/components/schemas/TemplateSummaryLabel' + type: array + name: + type: string + orgID: + type: string + templateMetaName: + type: string + type: object + type: array + type: object + type: object + TemplateSummaryLabel: + properties: + envReferences: + $ref: '#/components/schemas/TemplateEnvReferences' + id: + type: string + kind: + $ref: '#/components/schemas/TemplateKind' + name: + type: string + orgID: + type: string + properties: + properties: + color: + type: string + description: + type: string + type: object + templateMetaName: + type: string + type: object + TestStatement: + description: Declares a Flux test case + properties: + assignment: + $ref: '#/components/schemas/VariableAssignment' + type: + $ref: '#/components/schemas/NodeType' + type: object + Threshold: + discriminator: + mapping: + greater: '#/components/schemas/GreaterThreshold' + lesser: '#/components/schemas/LesserThreshold' + range: '#/components/schemas/RangeThreshold' + propertyName: type + oneOf: + - $ref: '#/components/schemas/GreaterThreshold' + - $ref: '#/components/schemas/LesserThreshold' + - $ref: '#/components/schemas/RangeThreshold' + ThresholdBase: + properties: + allValues: + description: If true, only alert if all values meet threshold. + type: boolean + level: + $ref: '#/components/schemas/CheckStatusLevel' + ThresholdCheck: + allOf: + - $ref: '#/components/schemas/CheckBase' + - properties: + every: + description: Check repetition interval. + type: string + offset: + description: Duration to delay after the schedule, before executing check. + type: string + statusMessageTemplate: + description: The template used to generate and write a status message. + type: string + tags: + description: List of tags to write to each status. + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + thresholds: + items: + $ref: '#/components/schemas/Threshold' + type: array + type: + enum: + - threshold + type: string + required: + - type + type: object + Token: + properties: + token: + type: string + type: object + UnaryExpression: + description: Uses operators to act on a single operand in an expression + properties: + argument: + $ref: '#/components/schemas/Expression' + operator: + type: string + type: + $ref: '#/components/schemas/NodeType' + type: object + UnsignedIntegerLiteral: + description: Represents integer numbers + properties: + type: + $ref: '#/components/schemas/NodeType' + value: + type: string + type: object + User: + properties: + id: + readOnly: true + type: string + name: + type: string + oauthID: + type: string + status: + default: active + description: If inactive the user is inactive. + enum: + - active + - inactive + type: string + required: + - name + UserResponse: + properties: + id: + readOnly: true + type: string + links: + example: + self: /api/v2/users/1 + properties: + self: + format: uri + type: string + readOnly: true + type: object + name: + type: string + oauthID: + type: string + status: + default: active + description: If inactive the user is inactive. 
+ enum: + - active + - inactive + type: string + required: + - name + Users: + properties: + links: + properties: + self: + format: uri + type: string + type: object + users: + items: + $ref: '#/components/schemas/UserResponse' + type: array + type: object + Variable: + properties: + arguments: + $ref: '#/components/schemas/VariableProperties' + createdAt: + format: date-time + type: string + description: + type: string + id: + readOnly: true + type: string + labels: + $ref: '#/components/schemas/Labels' + links: + properties: + labels: + format: uri + type: string + org: + format: uri + type: string + self: + format: uri + type: string + readOnly: true + type: object + name: + type: string + orgID: + type: string + selected: + items: + type: string + type: array + updatedAt: + format: date-time + type: string + required: + - name + - orgID + - arguments + type: object + VariableAssignment: + description: Represents the declaration of a variable + properties: + id: + $ref: '#/components/schemas/Identifier' + init: + $ref: '#/components/schemas/Expression' + type: + $ref: '#/components/schemas/NodeType' + type: object + VariableProperties: + oneOf: + - $ref: '#/components/schemas/QueryVariableProperties' + - $ref: '#/components/schemas/ConstantVariableProperties' + - $ref: '#/components/schemas/MapVariableProperties' + type: object + Variables: + example: + variables: + - arguments: + type: constant + values: + - howdy + - hello + - hi + - yo + - oy + id: '1221432' + name: ':ok:' + selected: + - hello + - arguments: + type: map + values: + a: fdjaklfdjkldsfjlkjdsa + b: dfaksjfkljekfajekdljfas + c: fdjksajfdkfeawfeea + id: '1221432' + name: ':ok:' + selected: + - c + - arguments: + language: flux + query: 'from(bucket: "foo") |> showMeasurements()' + type: query + id: '1221432' + name: ':ok:' + selected: + - host + properties: + variables: + items: + $ref: '#/components/schemas/Variable' + type: array + type: object + View: + properties: + id: + readOnly: true + type: string + links: + properties: + self: + type: string + readOnly: true + type: object + name: + type: string + properties: + $ref: '#/components/schemas/ViewProperties' + required: + - name + - properties + ViewProperties: + oneOf: + - $ref: '#/components/schemas/LinePlusSingleStatProperties' + - $ref: '#/components/schemas/XYViewProperties' + - $ref: '#/components/schemas/SingleStatViewProperties' + - $ref: '#/components/schemas/HistogramViewProperties' + - $ref: '#/components/schemas/GaugeViewProperties' + - $ref: '#/components/schemas/TableViewProperties' + - $ref: '#/components/schemas/SimpleTableViewProperties' + - $ref: '#/components/schemas/MarkdownViewProperties' + - $ref: '#/components/schemas/CheckViewProperties' + - $ref: '#/components/schemas/ScatterViewProperties' + - $ref: '#/components/schemas/HeatmapViewProperties' + - $ref: '#/components/schemas/MosaicViewProperties' + - $ref: '#/components/schemas/BandViewProperties' + - $ref: '#/components/schemas/GeoViewProperties' + Views: + properties: + links: + properties: + self: + type: string + type: object + views: + items: + $ref: '#/components/schemas/View' + type: array + type: object + WritePrecision: + enum: + - ms + - s + - us + - ns + type: string + XYGeom: + enum: + - line + - step + - stacked + - bar + - monotoneX + type: string + XYViewProperties: + properties: + axes: + $ref: '#/components/schemas/Axes' + colorMapping: + $ref: '#/components/schemas/ColorMapping' + description: An object that contains information about the color mapping + colors: + 
description: Colors define color encoding of data into a visualization + items: + $ref: '#/components/schemas/DashboardColor' + type: array + generateXAxisTicks: + items: + type: string + type: array + generateYAxisTicks: + items: + type: string + type: array + geom: + $ref: '#/components/schemas/XYGeom' + hoverDimension: + enum: + - auto + - x + - 'y' + - xy + type: string + legendColorizeRows: + type: boolean + legendHide: + type: boolean + legendOpacity: + format: float + type: number + legendOrientationThreshold: + type: integer + note: + type: string + position: + enum: + - overlaid + - stacked + type: string + queries: + items: + $ref: '#/components/schemas/DashboardQuery' + type: array + shadeBelow: + type: boolean + shape: + enum: + - chronograf-v2 + type: string + showNoteWhenEmpty: + description: If true, will display note when empty + type: boolean + staticLegend: + $ref: '#/components/schemas/StaticLegend' + timeFormat: + type: string + type: + enum: + - xy + type: string + xColumn: + type: string + xTickStart: + format: float + type: number + xTickStep: + format: float + type: number + xTotalTicks: + type: integer + yColumn: + type: string + yTickStart: + format: float + type: number + yTickStep: + format: float + type: number + yTotalTicks: + type: integer + required: + - type + - geom + - queries + - shape + - axes + - colors + - note + - showNoteWhenEmpty + - position + type: object + securitySchemes: + BasicAuthentication: + description: > + Use the HTTP Basic authentication scheme for InfluxDB `/api/v2` API + operations that support it. + + + Username and password schemes require the following credentials: + - **username** + - **password** + scheme: basic + type: http + TokenAuthentication: + description: > + Use the [Token + authentication](#section/Authentication/TokenAuthentication) + + scheme to authenticate to the InfluxDB API. + + + + In your API requests, send an `Authorization` header. + + For the header value, provide the word `Token` followed by a space and + an InfluxDB API token. + + The word `Token` is case-sensitive. + + + + ### Syntax + + + `Authorization: Token YOUR_INFLUX_TOKEN` + + + + For more information and examples, see the following: + - [`/authorizations`](#tag/Authorizations) endpoint. + - [Authorize API requests](/influxdb/v2.2/api-guide/api_intro/#authentication). + - [Manage API tokens](/influxdb/v2.2/security/tokens/). + in: header + name: Authorization + type: apiKey +info: + title: InfluxDB OSS API Service + version: 2.0.0 + description: > + The InfluxDB v2 API provides a programmatic interface for all interactions + with InfluxDB. Access the InfluxDB API using the `/api/v2/` endpoint. +openapi: 3.0.0 +paths: + /api/v2: + get: + operationId: GetRoutes + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Routes' + description: All routes + summary: List all top level routes + tags: + - Routes + /api/v2/authorizations: + get: + operationId: GetAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Only show authorizations that belong to a user ID. + in: query + name: userID + schema: + type: string + - description: Only show authorizations that belong to a user name. + in: query + name: user + schema: + type: string + - description: Only show authorizations that belong to an organization ID. 
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Only show authorizations that belong to an organization name.
+          in: query
+          name: org
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Authorizations'
+          description: A list of authorizations
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: List all authorizations
+      tags:
+        - Authorizations
+    post:
+      operationId: PostAuthorizations
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/AuthorizationPostRequest'
+        description: Authorization to create
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Authorization'
+          description: Authorization created
+        '400':
+          $ref: '#/components/responses/ServerError'
+          description: Invalid request
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: Create an authorization
+      tags:
+        - Authorizations
+  /api/v2/authorizations/{authID}:
+    delete:
+      operationId: DeleteAuthorizationsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the authorization to delete.
+          in: path
+          name: authID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Authorization deleted
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: Delete an authorization
+      tags:
+        - Authorizations
+    get:
+      operationId: GetAuthorizationsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the authorization to get.
+          in: path
+          name: authID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Authorization'
+          description: Authorization details
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: Retrieve an authorization
+      tags:
+        - Authorizations
+    patch:
+      operationId: PatchAuthorizationsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the authorization to update.
+          in: path
+          name: authID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/AuthorizationUpdateRequest'
+        description: Authorization to update
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Authorization'
+          description: The active or inactive authorization
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: Update an authorization to be active or inactive
+      tags:
+        - Authorizations
+  /api/v2/backup/kv:
+    get:
+      deprecated: true
+      operationId: GetBackupKV
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      responses:
+        '200':
+          content:
+            application/octet-stream:
+              schema:
+                format: binary
+                type: string
+          description: Snapshot of KV metadata
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: >-
+        Download snapshot of metadata stored in the server's embedded KV store.
+        Should not be used in versions greater than 2.1.x, as it doesn't include
+        metadata stored in embedded SQL.
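For illustration, a minimal sketch of the `TokenAuthentication` scheme and the `/api/v2/authorizations` operations above, assuming InfluxDB is listening at `http://localhost:8086`, `$INFLUX_TOKEN` holds a valid API token, and `ORG_ID` is a placeholder:

```sh
# List all authorizations. The `Token` keyword is case-sensitive.
curl --request GET "http://localhost:8086/api/v2/authorizations" \
  --header "Authorization: Token $INFLUX_TOKEN"

# Create an authorization that can read buckets in an organization.
curl --request POST "http://localhost:8086/api/v2/authorizations" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "orgID": "ORG_ID",
    "permissions": [
      {"action": "read", "resource": {"type": "buckets"}}
    ]
  }'
```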
+ tags: + - Backup + /api/v2/backup/metadata: + get: + operationId: GetBackupMetadata + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: >- + Indicates the content encoding (usually a compression algorithm) + that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: >- + The content coding. Use `gzip` for compressed data or `identity` + for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + responses: + '200': + content: + multipart/mixed: + schema: + $ref: '#/components/schemas/MetadataBackup' + description: Snapshot of metadata + headers: + Content-Encoding: + description: >- + Lists any encodings (usually compression algorithms) that have + been applied to the response payload. + schema: + default: identity + description: > + The content coding: `gzip` for compressed data or `identity` + for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Download snapshot of all metadata in the server + tags: + - Backup + /api/v2/backup/shards/{shardID}: + get: + operationId: GetBackupShardId + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: >- + Indicates the content encoding (usually a compression algorithm) + that the client can understand. + in: header + name: Accept-Encoding + schema: + default: identity + description: >- + The content coding. Use `gzip` for compressed data or `identity` + for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + - description: The shard ID. + in: path + name: shardID + required: true + schema: + format: int64 + type: integer + - description: Earliest time to include in the snapshot. RFC3339 format. + in: query + name: since + schema: + format: date-time + type: string + responses: + '200': + content: + application/octet-stream: + schema: + format: binary + type: string + description: TSM snapshot. + headers: + Content-Encoding: + description: >- + Lists any encodings (usually compression algorithms) that have + been applied to the response payload. + schema: + default: identity + description: > + The content coding: `gzip` for compressed data or `identity` + for unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Shard not found. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Download snapshot of all TSM data in a shard + tags: + - Backup + /api/v2/buckets: + get: + operationId: GetBuckets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/After' + - description: The name of the organization. + in: query + name: org + schema: + type: string + - description: The organization ID. + in: query + name: orgID + schema: + type: string + - description: Only returns buckets with a specific name. + in: query + name: name + schema: + type: string + - description: Only returns buckets with a specific ID. 
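A sketch of the `/api/v2/backup` endpoints above; the host, token, shard ID `1234`, and output filenames are placeholders:

```sh
# Download a gzip-compressed snapshot of all server metadata.
curl --request GET "http://localhost:8086/api/v2/backup/metadata" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Accept-Encoding: gzip" \
  --output metadata.backup

# Download a TSM snapshot of shard 1234, limited to data since the given RFC3339 time.
curl --request GET "http://localhost:8086/api/v2/backup/shards/1234?since=2022-01-01T00:00:00Z" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --output shard-1234.tsm.gz
```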
+ in: query + name: id + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Buckets' + description: A list of buckets + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all buckets + tags: + - Buckets + post: + operationId: PostBuckets + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostBucketRequest' + description: Bucket to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Bucket' + description: Bucket created + '422': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Request body failed validation + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}: + delete: + operationId: DeleteBucketsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the bucket to delete. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Bucket not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a bucket + tags: + - Buckets + get: + operationId: GetBucketsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Bucket' + description: Bucket details + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a bucket + tags: + - Buckets + patch: + operationId: PatchBucketsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PatchBucketRequest' + description: Bucket update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Bucket' + description: An updated bucket + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/labels: + get: + operationId: GetBucketsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a bucket + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a bucket + tags: + - Buckets + post: + operationId: PostBucketsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. 
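The bucket operations above follow the same pattern. For example, creating a bucket with a 30-day (2592000-second) retention rule, using the same placeholder host, token, and `ORG_ID`:

```sh
curl --request POST "http://localhost:8086/api/v2/buckets" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "orgID": "ORG_ID",
    "name": "example-bucket",
    "retentionRules": [{"type": "expire", "everySeconds": 2592000}]
  }'
```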
+ in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The newly added label + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/labels/{labelID}: + delete: + operationId: DeleteBucketsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Bucket not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/members: + get: + operationId: GetBucketsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of bucket members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all users with member privileges for a bucket + tags: + - Buckets + post: + operationId: PostBucketsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Member added to bucket + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/members/{userID}: + delete: + operationId: DeleteBucketsIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/owners: + get: + operationId: GetBucketsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. 
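For the member endpoints above, the request body is an `AddResourceMemberRequestBody` that identifies the user by ID; a sketch with placeholder IDs:

```sh
# Grant a user member privileges on a bucket.
curl --request POST "http://localhost:8086/api/v2/buckets/BUCKET_ID/members" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"id": "USER_ID"}'
```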
+ in: path + name: bucketID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of bucket owners + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a bucket + tags: + - Buckets + post: + operationId: PostBucketsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Bucket owner added + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a bucket + tags: + - Buckets + /api/v2/buckets/{bucketID}/owners/{userID}: + delete: + operationId: DeleteBucketsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The bucket ID. + in: path + name: bucketID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a bucket + tags: + - Buckets + /api/v2/checks: + get: + operationId: GetChecks + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - description: Only show checks that belong to a specific organization ID. + in: query + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Checks' + description: A list of checks + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all checks + tags: + - Checks + post: + operationId: CreateCheck + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostCheck' + description: Check to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: Check created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add new check + tags: + - Checks + /api/v2/checks/{checkID}: + delete: + operationId: DeleteChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a check + tags: + - Checks + get: + operationId: GetChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. 
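Note that, unlike most list operations, `GET /api/v2/checks` above requires the `orgID` query parameter; a sketch with placeholders as before:

```sh
curl --request GET "http://localhost:8086/api/v2/checks?orgID=ORG_ID&limit=20" \
  --header "Authorization: Token $INFLUX_TOKEN"
```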
+ in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: The check requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a check + tags: + - Checks + patch: + operationId: PatchChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/CheckPatch' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: An updated check + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a check + tags: + - Checks + put: + operationId: PutChecksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Check' + description: An updated check + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a check + tags: + - Checks + /api/v2/checks/{checkID}/labels: + get: + operationId: GetChecksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a check + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a check + tags: + - Checks + post: + operationId: PostChecksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the check + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a check + tags: + - Checks + /api/v2/checks/{checkID}/labels/{labelID}: + delete: + operationId: DeleteChecksIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + - description: The ID of the label to delete. 
+ in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Check or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete label from a check + tags: + - Checks + /api/v2/checks/{checkID}/query: + get: + operationId: GetChecksIDQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The check ID. + in: path + name: checkID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/FluxResponse' + description: The check query requested + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Invalid request + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Check not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a check query + tags: + - Checks + /api/v2/config: + get: + description: > + Returns the active runtime configuration of the InfluxDB instance. + + + In InfluxDB v2.2+, use this endpoint to view your active runtime + configuration, + + including flags and environment variables. + + + #### Related guides + + + - [View your runtime server + configuration](https://docs.influxdata.com/influxdb/v2.2/reference/config-options/#view-your-runtime-server-configuration) + operationId: GetConfig + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Config' + description: > + Success. + + The response body contains the active runtime configuration of the + InfluxDB instance. + '401': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Retrieve runtime configuration + tags: + - Config + /api/v2/dashboards: + get: + operationId: GetDashboards + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/Descending' + - description: >- + A user identifier. Returns only dashboards where this user has the + `owner` role. + in: query + name: owner + schema: + type: string + - description: The column to sort by. + in: query + name: sortBy + schema: + enum: + - ID + - CreatedAt + - UpdatedAt + type: string + - description: >- + A list of dashboard identifiers. Returns only the listed dashboards. + If both `id` and `owner` are specified, only `id` is used. + in: query + name: id + schema: + items: + type: string + type: array + - description: The identifier of the organization. + in: query + name: orgID + schema: + type: string + - description: The name of the organization. 
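The `/api/v2/config` endpoint above takes no parameters beyond the trace header, so a minimal request suffices (placeholder host and token):

```sh
# View the active runtime configuration, including flags and environment variables.
curl --request GET "http://localhost:8086/api/v2/config" \
  --header "Authorization: Token $INFLUX_TOKEN"
```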
+          in: query
+          name: org
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Dashboards'
+          description: All dashboards
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all dashboards
+      tags:
+        - Dashboards
+    post:
+      operationId: PostDashboards
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/CreateDashboardRequest'
+        description: Dashboard to create
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                oneOf:
+                  - $ref: '#/components/schemas/Dashboard'
+                  - $ref: '#/components/schemas/DashboardWithViewProperties'
+          description: Added dashboard
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Create a dashboard
+      tags:
+        - Dashboards
+  /api/v2/dashboards/{dashboardID}:
+    delete:
+      operationId: DeleteDashboardsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to delete.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Delete has been accepted
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete a dashboard
+      tags:
+        - Dashboards
+    get:
+      operationId: GetDashboardsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to retrieve.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+        - description: If `properties`, includes the cell view properties in the response.
+          in: query
+          name: include
+          required: false
+          schema:
+            enum:
+              - properties
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                oneOf:
+                  - $ref: '#/components/schemas/Dashboard'
+                  - $ref: '#/components/schemas/DashboardWithViewProperties'
+          description: Retrieve a single dashboard
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve a dashboard
+      tags:
+        - Dashboards
+    patch:
+      operationId: PatchDashboardsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              properties:
+                cells:
+                  $ref: '#/components/schemas/CellWithViewProperties'
+                  description: >-
+                    Optional. When provided, replaces all existing cells with
+                    the provided cells.
+                description:
+                  description: Optional. When provided, replaces the dashboard description.
+                  type: string
+                name:
+                  description: Optional. When provided, replaces the dashboard name.
+                  type: string
+              title: PatchDashboardRequest
+              type: object
+        description: Patching of a dashboard
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Dashboard'
+          description: Updated dashboard
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update a dashboard
+      tags:
+        - Dashboards
+  /api/v2/dashboards/{dashboardID}/cells:
+    post:
+      operationId: PostDashboardsIDCells
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/CreateCell'
+        description: Cell that will be added
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Cell'
+          description: Cell successfully added
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Create a dashboard cell
+      tags:
+        - Cells
+        - Dashboards
+    put:
+      description: >-
+        Replaces all cells in a dashboard. This is used primarily to update the
+        positional information of all cells.
+      operationId: PutDashboardsIDCells
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/Cells'
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Dashboard'
+          description: Replaced dashboard cells
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Replace cells in a dashboard
+      tags:
+        - Cells
+        - Dashboards
+  /api/v2/dashboards/{dashboardID}/cells/{cellID}:
+    delete:
+      operationId: DeleteDashboardsIDCellsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+        - description: The ID of the cell to delete.
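A sketch of the dashboard and cell operations above; the IDs and cell geometry are placeholders, with the same host and token assumptions as earlier examples:

```sh
# Retrieve a dashboard, including the view properties of its cells.
curl --request GET "http://localhost:8086/api/v2/dashboards/DASHBOARD_ID?include=properties" \
  --header "Authorization: Token $INFLUX_TOKEN"

# Add a cell at grid position (0, 0) with a width and height of 4 units.
curl --request POST "http://localhost:8086/api/v2/dashboards/DASHBOARD_ID/cells" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name": "example-cell", "x": 0, "y": 0, "w": 4, "h": 4}'
```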
+          in: path
+          name: cellID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Cell successfully deleted
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Cell or dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete a dashboard cell
+      tags:
+        - Cells
+        - Dashboards
+    patch:
+      description: >-
+        Updates the non-positional information related to a cell. Updates to a
+        single cell's positional data could cause grid conflicts.
+      operationId: PatchDashboardsIDCellsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+        - description: The ID of the cell to update.
+          in: path
+          name: cellID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/CellUpdate'
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Cell'
+          description: Updated dashboard cell
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Cell or dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update the non-positional information related to a cell
+      tags:
+        - Cells
+        - Dashboards
+  /api/v2/dashboards/{dashboardID}/cells/{cellID}/view:
+    get:
+      operationId: GetDashboardsIDCellsIDView
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The dashboard ID.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+        - description: The cell ID.
+          in: path
+          name: cellID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/View'
+          description: A dashboard cell's view
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Cell or dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve the view for a cell
+      tags:
+        - Cells
+        - Dashboards
+        - Views
+    patch:
+      operationId: PatchDashboardsIDCellsIDView
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the dashboard to update.
+          in: path
+          name: dashboardID
+          required: true
+          schema:
+            type: string
+        - description: The ID of the cell to update.
+          in: path
+          name: cellID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/View'
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/View'
+          description: Updated cell view
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Cell or dashboard not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update the view for a cell
+      tags:
+        - Cells
+        - Dashboards
+        - Views
+  /api/v2/dashboards/{dashboardID}/labels:
+    get:
+      operationId: GetDashboardsIDLabels
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The dashboard ID.
+ in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a dashboard + tags: + - Dashboards + post: + operationId: PostDashboardsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label added to the dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/labels/{labelID}: + delete: + operationId: DeleteDashboardsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Dashboard not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/members: + get: + operationId: GetDashboardsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of users who have member privileges for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all dashboard members + tags: + - Dashboards + post: + operationId: PostDashboardsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Added to dashboard members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/members/{userID}: + delete: + operationId: DeleteDashboardsIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The dashboard ID. 
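The label-mapping operations above attach an existing label to a dashboard by ID; a sketch with placeholder IDs:

```sh
curl --request POST "http://localhost:8086/api/v2/dashboards/DASHBOARD_ID/labels" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"labelID": "LABEL_ID"}'
```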
+ in: path + name: dashboardID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/owners: + get: + operationId: GetDashboardsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of users who have owner privileges for a dashboard + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all dashboard owners + tags: + - Dashboards + post: + operationId: PostDashboardsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Added to dashboard owners + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a dashboard + tags: + - Dashboards + /api/v2/dashboards/{dashboardID}/owners/{userID}: + delete: + operationId: DeleteDashboardsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The dashboard ID. + in: path + name: dashboardID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a dashboard + tags: + - Dashboards + /api/v2/dbrps: + get: + operationId: GetDBRPs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Specifies the organization ID to filter on + in: query + name: orgID + schema: + type: string + - description: Specifies the organization name to filter on + in: query + name: org + schema: + type: string + - description: Specifies the mapping ID to filter on + in: query + name: id + schema: + type: string + - description: Specifies the bucket ID to filter on + in: query + name: bucketID + schema: + type: string + - description: Specifies filtering on default + in: query + name: default + schema: + type: boolean + - description: Specifies the database to filter on + in: query + name: db + schema: + type: string + - description: Specifies the retention policy to filter on + in: query + name: rp + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/DBRPs' + description: Success. Returns a list of database retention policy mappings. + '400': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Bad request. The request has one or more invalid parameters. 
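A sketch of the DBRP mapping endpoints in this section, covering the list operation above and the `POST /api/v2/dbrps` operation that follows; the IDs, database, and retention policy names are placeholders, and the body fields follow the `DBRPCreate` schema:

```sh
# List DBRP mappings for a v1 database and retention policy.
curl --request GET "http://localhost:8086/api/v2/dbrps?orgID=ORG_ID&db=mydb&rp=autogen" \
  --header "Authorization: Token $INFLUX_TOKEN"

# Map a v1 database and retention policy to a v2 bucket.
curl --request POST "http://localhost:8086/api/v2/dbrps" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "orgID": "ORG_ID",
    "bucketID": "BUCKET_ID",
    "database": "mydb",
    "retention_policy": "autogen",
    "default": true
  }'
```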
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List database retention policy mappings
+      tags:
+        - DBRPs
+    post:
+      operationId: PostDBRP
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/DBRPCreate'
+        description: The database retention policy mapping to add
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/DBRP'
+          description: Created. Returns the created database retention policy mapping.
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Bad request. The mapping in the request has one or more invalid IDs.
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add a database retention policy mapping
+      tags:
+        - DBRPs
+  /api/v2/dbrps/{dbrpID}:
+    delete:
+      operationId: DeleteDBRPID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: Specifies the organization ID of the mapping
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Specifies the organization name of the mapping
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: The database retention policy mapping ID
+          in: path
+          name: dbrpID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Delete has been accepted
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Bad request. One or more of the provided IDs are invalid.
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete a database retention policy mapping
+      tags:
+        - DBRPs
+    get:
+      operationId: GetDBRPsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: Specifies the organization ID of the mapping
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Specifies the organization name of the mapping
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: The database retention policy mapping ID
+          in: path
+          name: dbrpID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/DBRPGet'
+          description: The database retention policy requested
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Bad request. One or more of the provided IDs are invalid.
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve a database retention policy mapping
+      tags:
+        - DBRPs
+    patch:
+      operationId: PatchDBRPID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: Specifies the organization ID of the mapping
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Specifies the organization name of the mapping
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: The database retention policy mapping ID.
+          in: path
+          name: dbrpID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/DBRPUpdate'
+        description: Database retention policy update to apply
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/DBRPGet'
+          description: An updated mapping
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Bad request. One or more of the provided IDs are invalid.
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: The mapping was not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update a database retention policy mapping
+      tags:
+        - DBRPs
+  /debug/pprof/all:
+    get:
+      description: >
+        Collects samples and returns reports for the following [Go runtime
+        profiles](https://pkg.go.dev/runtime/pprof):
+
+
+        - **allocs**: All past memory allocations
+
+        - **block**: Stack traces that led to blocking on synchronization
+        primitives
+
+        - **cpu**: (Optional) Program counters sampled from the executing stack.
+        Include by passing the `cpu` query parameter with a [duration](https://docs.influxdata.com/influxdb/v2.2/reference/glossary/#duration) value.
+        Equivalent to the report from [`GET /debug/pprof/profile?seconds=NUMBER_OF_SECONDS`](#operation/GetDebugPprofProfile).
+
+        - **goroutine**: All current goroutines
+
+        - **heap**: Memory allocations for live objects
+
+        - **mutex**: Holders of contended mutexes
+
+        - **threadcreate**: Stack traces that led to the creation of new OS
+        threads
+      operationId: GetDebugPprofAllProfiles
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: >
+            Collects and returns CPU profiling data for the specified
+            [duration](https://docs.influxdata.com/influxdb/v2.2/reference/glossary/#duration).
+          in: query
+          name: cpu
+          schema:
+            externalDocs:
+              description: InfluxDB duration
+              url: >-
+                https://docs.influxdata.com/influxdb/v2.2/reference/glossary/#duration
+            format: duration
+            type: string
+      responses:
+        '200':
+          content:
+            application/octet-stream:
+              schema:
+                description: >
+                  GZIP compressed TAR file (`.tar.gz`) that contains
+
+                  [Go runtime profile](https://pkg.go.dev/runtime/pprof)
+                  reports.
+                externalDocs:
+                  description: Golang pprof package
+                  url: https://pkg.go.dev/net/http/pprof
+                format: binary
+                type: string
+          description: |
+            [Go runtime profile](https://pkg.go.dev/runtime/pprof) reports.
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      servers: []
+      summary: Retrieve all runtime profiles
+      tags:
+        - Debug
+      x-codeSamples:
+        - label: 'Shell: Get all profiles'
+          lang: Shell
+          source: >
+            # Download and extract a `tar.gz` of all profiles after 10 seconds
+            of CPU sampling.
+
+
+            curl "http://localhost:8086/debug/pprof/all?cpu=10s" | tar -xz
+
+
+            # x profiles/cpu.pb.gz
+
+            # x profiles/goroutine.pb.gz
+
+            # x profiles/block.pb.gz
+
+            # x profiles/mutex.pb.gz
+
+            # x profiles/heap.pb.gz
+
+            # x profiles/allocs.pb.gz
+
+            # x profiles/threadcreate.pb.gz
+
+
+            # Analyze a profile.
+
+
+            go tool pprof profiles/heap.pb.gz
+        - label: 'Shell: Get all profiles except CPU'
+          lang: Shell
+          source: |
+            # Download and extract a `tar.gz` of all profiles except CPU.
+ + curl http://localhost:8086/debug/pprof/all | tar -xz + + # x profiles/goroutine.pb.gz + # x profiles/block.pb.gz + # x profiles/mutex.pb.gz + # x profiles/heap.pb.gz + # x profiles/allocs.pb.gz + # x profiles/threadcreate.pb.gz + + # Analyze a profile. + + go tool pprof profiles/heap.pb.gz + /debug/pprof/allocs: + get: + description: > + Returns a [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + of + + all past memory allocations. + + **allocs** is the same as the **heap** profile, + + but changes the default [pprof](https://pkg.go.dev/runtime/pprof) + + display to __-alloc_space__, + + the total number of bytes allocated since the program began (including + garbage-collected bytes). + operationId: GetDebugPprofAllocs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the memory allocations runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/allocs + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N memory allocations. + + + (pprof) top10 + /debug/pprof/block: + get: + description: > + Collects samples and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) + + report of stack traces that led to blocking on synchronization + primitives. + operationId: GetDebugPprofBlock + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. 
+ + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the block runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/block + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N entries. + + + (pprof) top10 + /debug/pprof/cmdline: + get: + description: | + Returns the command line that invoked InfluxDB. + operationId: GetDebugPprofCmdline + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + text/plain: + schema: + format: Command line + type: string + description: Command line invocation. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the command line invocation + tags: + - Debug + /debug/pprof/goroutine: + get: + description: > + Collects statistics and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) + + report of all current goroutines. + operationId: GetDebugPprofGoroutine + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text with comments that translate addresses to + function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. 
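Unlike the other profile endpoints, `/debug/pprof/cmdline` above ships without an embedded code sample; a minimal sketch, assuming the same local host as the spec's own samples:

```sh
# Print the command line that invoked InfluxDB.
curl http://localhost:8086/debug/pprof/cmdline

# Return the goroutine profile as human-readable text instead of a protocol buffer.
curl "http://localhost:8086/debug/pprof/goroutine?debug=1"
```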
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the goroutines runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/goroutine + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N entries. + + + (pprof) top10 + /debug/pprof/heap: + get: + description: > + Collects statistics and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) + + report of memory allocations for live objects. + + + To run **garbage collection** before sampling, + + pass the `gc` query parameter with a value of `1`. + operationId: GetDebugPprofHeap + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + - description: | + - `0`: (Default) don't force garbage collection before sampling. + - `1`: Force garbage collection before sampling. + in: query + name: gc + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. 
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + examples: + profileDebugResponse: + summary: Profile in plain text + value: "heap profile: 12431: 137356528 [149885081: 846795139976] @ heap/8192\n23: 17711104 [46: 35422208] @ 0x4c6df65 0x4ce03ec 0x4cdf3c5 0x4c6f4db 0x4c9edbc 0x4bdefb3 0x4bf822a 0x567d158 0x567ced9 0x406c0a1\n#\t0x4c6df64\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*entry).add+0x1a4\t\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/cache.go:97\n#\t0x4ce03eb\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*partition).write+0x2ab\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/ring.go:229\n#\t0x4cdf3c4\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*ring).write+0xa4\t\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/ring.go:95\n#\t0x4c6f4da\tgithub.com/influxdata/influxdb/v2/tsdb/engine/tsm1.(*Cache).WriteMulti+0x31a\t\t\t\t/Users/me/github/influxdb/tsdb/engine/tsm1/cache.go:343\n" + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the heap runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/heap + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N memory-intensive nodes. + + + (pprof) top10 + + + # pprof displays the list: + + # Showing nodes accounting for 142.46MB, 85.43% of 166.75MB total + + # Dropped 895 nodes (cum <= 0.83MB) + + # Showing top 10 nodes out of 143 + /debug/pprof/mutex: + get: + description: > + Collects statistics and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) report of + + lock contentions. + + The profile contains stack traces of holders of contended mutual + exclusions (mutexes). + operationId: GetDebugPprofMutex + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. + in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. 
+ externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the mutual exclusion (mutex) runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/mutex + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N entries. + + + (pprof) top10 + /debug/pprof/profile: + get: + description: > + Collects statistics and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) + + report of program counters on the executing stack. + operationId: GetDebugPprofProfile + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Number of seconds to collect profile data. Default is `30` seconds. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the CPU runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: | + # Download the profile report. + + curl http://localhost:8086/debug/pprof/profile -o cpu + + # Analyze the profile in interactive mode. + + go tool pprof ./cpu + + # At the prompt, get the top N functions most often running + # or waiting during the sample period. + + (pprof) top10 + /debug/pprof/threadcreate: + get: + description: > + Collects statistics and returns a [Go runtime + profile](https://pkg.go.dev/runtime/pprof) + + report of stack traces that led to the creation of new OS threads. + operationId: GetDebugPprofThreadCreate + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: > + - `0`: (Default) Return the report as a gzip-compressed protocol + buffer. + + - `1`: Return a response body with the report formatted as + human-readable text. + The report contains comments that translate addresses to function names and line numbers for debugging. + + `debug=1` is mutually exclusive with the `seconds` query parameter. 
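+      # Docs illustration (assumption: default OSS address): sample the CPU
+      # profile endpoint above for a shorter window than the 30-second default,
+      # then summarize the report locally.
+      #
+      #   curl "http://localhost:8086/debug/pprof/profile?seconds=10" -o cpu
+      #   go tool pprof -top ./cpu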
+ in: query + name: debug + schema: + enum: + - 0 + - 1 + format: int64 + type: integer + - description: | + Number of seconds to collect statistics. + + `seconds` is mutually exclusive with `debug=1`. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + in protocol buffer format. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: binary + type: string + text/plain: + examples: + profileDebugResponse: + summary: Profile in plain text + value: "threadcreate profile: total 26\n25 @\n#\t0x0\n\n1 @ 0x403dda8 0x403e54b 0x403e810 0x403a90c 0x406c0a1\n#\t0x403dda7\truntime.allocm+0xc7\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:1877\n#\t0x403e54a\truntime.newm+0x2a\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:2201\n#\t0x403e80f\truntime.startTemplateThread+0x8f\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:2271\n#\t0x403a90b\truntime.main+0x1cb\t\t\t/Users/me/.gvm/gos/go1.17/src/runtime/proc.go:234\n" + schema: + description: | + Response body contains a report formatted in plain text. + The report contains comments that translate addresses to + function names and line numbers for debugging. + externalDocs: + description: Golang pprof package + url: https://pkg.go.dev/net/http/pprof + format: Go runtime profile + type: string + description: > + [Go runtime profile](https://pkg.go.dev/runtime/pprof) report + compatible + + with [pprof](https://github.com/google/pprof) analysis and + visualization tools. + + If debug is enabled (`?debug=1`), response body contains a + human-readable profile. + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the threadcreate runtime profile + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool pprof' + lang: Shell + source: > + # Analyze the profile in interactive mode. + + + go tool pprof http://localhost:8086/debug/pprof/threadcreate + + + # `pprof` returns the following prompt: + + # Entering interactive mode (type "help" for commands, "o" for + options) + + # (pprof) + + + # At the prompt, get the top N entries. + + + (pprof) top10 + /debug/pprof/trace: + get: + description: > + Collects profile data and returns trace execution events for the current + program. + operationId: GetDebugPprofTrace + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Number of seconds to collect profile data. + in: query + name: seconds + schema: + format: int64 + type: string + responses: + '200': + content: + application/octet-stream: + schema: + externalDocs: + description: Golang trace package + url: https://pkg.go.dev/runtime/trace + format: binary + type: string + description: | + [Trace file](https://pkg.go.dev/runtime/trace) compatible + with the [Golang `trace` command](https://pkg.go.dev/cmd/trace). + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Retrieve the runtime execution trace + tags: + - Debug + x-codeSamples: + - label: 'Shell: go tool trace' + lang: Shell + source: | + # Download the trace file. + + curl http://localhost:8086/debug/pprof/trace -o trace + + # Analyze the trace. 
+
+          go tool trace ./trace
+  /api/v2/delete:
+    post:
+      operationId: PostDelete
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: Specifies the organization to delete data from.
+          in: query
+          name: org
+          schema:
+            description: Only points from this organization are deleted.
+            type: string
+        - description: Specifies the bucket to delete data from.
+          in: query
+          name: bucket
+          schema:
+            description: Only points from this bucket are deleted.
+            type: string
+        - description: Specifies the organization ID of the resource.
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Specifies the bucket ID to delete data from.
+          in: query
+          name: bucketID
+          schema:
+            description: Only points from this bucket ID are deleted.
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/DeletePredicateRequest'
+        description: Deletes data from an InfluxDB bucket.
+        required: true
+      responses:
+        '204':
+          description: Delete has been accepted
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Invalid request.
+        '403':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: No token was sent, or the token doesn't have sufficient permissions.
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: The bucket or organization was not found.
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Internal server error
+      summary: Delete data
+      tags:
+        - Delete
+  /api/v2/flags:
+    get:
+      operationId: GetFlags
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Flags'
+          description: Feature flags for the currently authenticated user
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Return the feature flags for the currently authenticated user
+      tags:
+        - Users
+  /health:
+    get:
+      description: Returns the health of the instance.
+      operationId: GetHealth
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HealthCheck'
+          description: |
+            The instance is healthy.
+            The response body contains the health check items and status.
+        '503':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HealthCheck'
+          description: The instance is unhealthy.
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      servers: []
+      summary: Retrieve the health of the instance
+      tags:
+        - Health
+  /api/v2/labels:
+    get:
+      operationId: GetLabels
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The organization ID.
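+      # Docs illustration (not part of the upstream contract): a minimal curl
+      # sketch for `POST /api/v2/delete` defined above. The URL, token, org,
+      # bucket, and predicate values are hypothetical placeholders;
+      # `start`, `stop`, and `predicate` come from `DeletePredicateRequest`.
+      #
+      #   curl --request POST "http://localhost:8086/api/v2/delete?org=my-org&bucket=my-bucket" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/json" \
+      #     --data '{
+      #       "start": "2022-01-01T00:00:00Z",
+      #       "stop": "2022-01-02T00:00:00Z",
+      #       "predicate": "_measurement=\"example-measurement\""
+      #     }'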
+          in: query
+          name: orgID
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/LabelsResponse'
+          description: A list of labels
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all labels
+      tags:
+        - Labels
+    post:
+      operationId: PostLabels
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/LabelCreateRequest'
+        description: Label to create
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/LabelResponse'
+          description: Added label
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Create a label
+      tags:
+        - Labels
+  /api/v2/labels/{labelID}:
+    delete:
+      operationId: DeleteLabelsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the label to delete.
+          in: path
+          name: labelID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Delete has been accepted
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Label not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete a label
+      tags:
+        - Labels
+    get:
+      operationId: GetLabelsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the label to retrieve.
+          in: path
+          name: labelID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/LabelResponse'
+          description: A label
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve a label
+      tags:
+        - Labels
+    patch:
+      operationId: PatchLabelsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the label to update.
+          in: path
+          name: labelID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/LabelUpdate'
+        description: Label update
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/LabelResponse'
+          description: Updated label
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Label not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update a label
+      tags:
+        - Labels
+  /legacy/authorizations:
+    get:
+      operationId: GetLegacyAuthorizations
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: Only show legacy authorizations that belong to a user ID.
+          in: query
+          name: userID
+          schema:
+            type: string
+        - description: Only show legacy authorizations that belong to a user name.
+          in: query
+          name: user
+          schema:
+            type: string
+        - description: Only show legacy authorizations that belong to an organization ID.
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Only show legacy authorizations that belong to an organization name.
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: Only show legacy authorizations with a specified token (auth name).
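+      # Docs illustration (assumptions: default OSS address, placeholder token
+      # and org ID): create a label via `POST /api/v2/labels` with a
+      # `LabelCreateRequest` body, as defined above.
+      #
+      #   curl --request POST "http://localhost:8086/api/v2/labels" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/json" \
+      #     --data '{"orgID": "ORG_ID", "name": "my-label"}'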
+ in: query + name: token + schema: + type: string + - description: Only show legacy authorizations with a specified auth ID. + in: query + name: authID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorizations' + description: A list of legacy authorizations + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: List all legacy authorizations + tags: + - Legacy Authorizations + post: + operationId: PostLegacyAuthorizations + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LegacyAuthorizationPostRequest' + description: Legacy authorization to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: Legacy authorization created + '400': + $ref: '#/components/responses/ServerError' + description: Invalid request + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Create a legacy authorization + tags: + - Legacy Authorizations + servers: + - url: /private + /legacy/authorizations/{authID}: + delete: + operationId: DeleteLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to delete. + in: path + name: authID + required: true + schema: + type: string + responses: + '204': + description: Legacy authorization deleted + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Delete a legacy authorization + tags: + - Legacy Authorizations + get: + operationId: GetLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to get. + in: path + name: authID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: Legacy authorization details + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Retrieve a legacy authorization + tags: + - Legacy Authorizations + patch: + operationId: PatchLegacyAuthorizationsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to update. + in: path + name: authID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AuthorizationUpdateRequest' + description: Legacy authorization to update + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Authorization' + description: The active or inactive legacy authorization + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Update a legacy authorization to be active or inactive + tags: + - Legacy Authorizations + servers: + - url: /private + /legacy/authorizations/{authID}/password: + post: + operationId: PostLegacyAuthorizationsIDPassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the legacy authorization to update. 
+          in: path
+          name: authID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PasswordResetBody'
+        description: New password
+        required: true
+      responses:
+        '204':
+          description: Legacy authorization password set
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      summary: Set a legacy authorization password
+      tags:
+        - Legacy Authorizations
+      servers:
+        - url: /private
+  /api/v2/maps/mapToken:
+    get:
+      operationId: getMapboxToken
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Token'
+          description: Temporary token for Mapbox.
+        '401':
+          $ref: '#/components/responses/ServerError'
+        '500':
+          $ref: '#/components/responses/ServerError'
+        default:
+          $ref: '#/components/responses/ServerError'
+      summary: Get a Mapbox token
+  /api/v2/me:
+    get:
+      operationId: GetMe
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/UserResponse'
+          description: The currently authenticated user.
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve the currently authenticated user
+      tags:
+        - Users
+  /api/v2/me/password:
+    put:
+      operationId: PutMePassword
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PasswordResetBody'
+        description: New password
+        required: true
+      responses:
+        '204':
+          description: Password successfully updated
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unsuccessful authentication
+      security:
+        - BasicAuthentication: []
+      summary: Update a password
+      tags:
+        - Users
+  /metrics:
+    get:
+      description: >
+        Returns metrics about the workload performance of an InfluxDB instance.
+
+
+        Use this endpoint to get performance, resource, and usage metrics.
+
+
+        #### Related guides
+
+
+        - For the list of metrics categories, see [InfluxDB OSS
+        metrics](https://docs.influxdata.com/influxdb/v2.2/reference/internals/metrics/).
+
+        - Learn how to use InfluxDB to [scrape Prometheus
+        metrics](https://docs.influxdata.com/influxdb/v2.2/write-data/developer-tools/scrape-prometheus-metrics/).
+
+        - Learn how InfluxDB [parses the Prometheus exposition
+        format](https://docs.influxdata.com/influxdb/v2.2/reference/prometheus-metrics/).
+      operationId: GetMetrics
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      responses:
+        '200':
+          content:
+            text/plain:
+              examples:
+                expositionResponse:
+                  summary: Metrics in plain text
+                  value: >
+                    # HELP go_threads Number of OS threads created.
+
+                    # TYPE go_threads gauge
+
+                    go_threads 19
+
+                    # HELP http_api_request_duration_seconds Time taken to
+                    respond to HTTP request
+
+                    # TYPE http_api_request_duration_seconds histogram
+
+                    http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.005"}
+                    4
+
+                    http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.01"}
+                    4
+
+                    http_api_request_duration_seconds_bucket{handler="platform",method="GET",path="/:fallback_path",response_code="200",status="2XX",user_agent="curl",le="0.025"}
+                    5
+              schema:
+                externalDocs:
+                  description: Prometheus exposition formats
+                  url: https://prometheus.io/docs/instrumenting/exposition_formats
+                format: Prometheus text-based exposition
+                type: string
+          description: >
+            Success. The response body contains metrics in
+
+            [Prometheus plain-text exposition
+            format](https://prometheus.io/docs/instrumenting/exposition_formats).
+
+            Metrics contain a name, an optional set of key-value pairs, and a
+            value.
+
+
+            The following descriptors precede each metric:
+
+
+            - `HELP`: description of the metric
+
+            - `TYPE`: [Prometheus metric
+            type](https://prometheus.io/docs/concepts/metric_types/) (`counter`,
+            `gauge`, `histogram`, or `summary`)
+        default:
+          $ref: '#/components/responses/ServerError'
+          description: Unexpected error
+      servers: []
+      summary: Retrieve workload performance metrics
+      tags:
+        - Metrics
+  /api/v2/notificationEndpoints:
+    get:
+      operationId: GetNotificationEndpoints
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - $ref: '#/components/parameters/Offset'
+        - $ref: '#/components/parameters/Limit'
+        - description: >-
+            Only show notification endpoints that belong to a specific
+            organization ID.
+          in: query
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/NotificationEndpoints'
+          description: A list of notification endpoints
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all notification endpoints
+      tags:
+        - NotificationEndpoints
+    post:
+      operationId: CreateNotificationEndpoint
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PostNotificationEndpoint'
+        description: Notification endpoint to create
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/NotificationEndpoint'
+          description: Notification endpoint created
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add a notification endpoint
+      tags:
+        - NotificationEndpoints
+  /api/v2/notificationEndpoints/{endpointID}:
+    delete:
+      operationId: DeleteNotificationEndpointsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The notification endpoint ID.
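+      # Docs illustration (assumption: default OSS address): scrape the
+      # Prometheus exposition text from `/metrics` and inspect one metric
+      # family's `HELP`/`TYPE` descriptors, matching the response description above.
+      #
+      #   curl -s http://localhost:8086/metrics | grep -A 1 "# HELP go_threads"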
+ in: path + name: endpointID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a notification endpoint + tags: + - NotificationEndpoints + get: + operationId: GetNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: The notification endpoint requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a notification endpoint + tags: + - NotificationEndpoints + patch: + operationId: PatchNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpointUpdate' + description: Check update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: An updated notification endpoint + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification endpoint + tags: + - NotificationEndpoints + put: + operationId: PutNotificationEndpointsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: A new notification endpoint to replace the existing endpoint with + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationEndpoint' + description: An updated notification endpoint + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification endpoint was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationEndpoints/{endpointID}/labels: + get: + operationId: GetNotificationEndpointsIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. 
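+      # Docs illustration (placeholders: ORG_ID, $INFLUX_TOKEN): list the
+      # notification endpoints for an organization; `orgID` is required by the
+      # contract above.
+      #
+      #   curl "http://localhost:8086/api/v2/notificationEndpoints?orgID=ORG_ID" \
+      #     --header "Authorization: Token $INFLUX_TOKEN"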
+ in: path + name: endpointID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a notification endpoint + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a notification endpoint + tags: + - NotificationEndpoints + post: + operationId: PostNotificationEndpointIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the notification endpoint + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationEndpoints/{endpointID}/labels/{labelID}: + delete: + operationId: DeleteNotificationEndpointsIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification endpoint ID. + in: path + name: endpointID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Endpoint or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a notification endpoint + tags: + - NotificationEndpoints + /api/v2/notificationRules: + get: + operationId: GetNotificationRules + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - description: >- + Only show notification rules that belong to a specific organization + ID. + in: query + name: orgID + required: true + schema: + type: string + - description: Only show notifications that belong to the specific check ID. + in: query + name: checkID + schema: + type: string + - description: >- + Only return notification rules that "would match" statuses which + contain the tag key value pairs provided. 
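+      # Docs illustration (placeholders: ORG_ID, $INFLUX_TOKEN): filter
+      # notification rules with the `tag` parameter described above, using the
+      # contract's own `env:prod` example (pattern `key:value`).
+      #
+      #   curl -G "http://localhost:8086/api/v2/notificationRules" \
+      #     --data-urlencode "orgID=ORG_ID" \
+      #     --data-urlencode "tag=env:prod" \
+      #     --header "Authorization: Token $INFLUX_TOKEN"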
+ in: query + name: tag + schema: + example: env:prod + pattern: ^[a-zA-Z0-9_]+:[a-zA-Z0-9_]+$ + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRules' + description: A list of notification rules + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all notification rules + tags: + - NotificationRules + post: + operationId: CreateNotificationRule + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PostNotificationRule' + description: Notification rule to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: Notification rule created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}: + delete: + operationId: DeleteNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The check was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a notification rule + tags: + - NotificationRules + get: + operationId: GetNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: The notification rule requested + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a notification rule + tags: + - NotificationRules + patch: + operationId: PatchNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRuleUpdate' + description: Notification rule update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: An updated notification rule + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification rule was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification rule + tags: + - NotificationRules + put: + operationId: PutNotificationRulesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+ in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: Notification rule update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/NotificationRule' + description: An updated notification rule + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: The notification rule was not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/labels: + get: + operationId: GetNotificationRulesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a notification rule + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a notification rule + tags: + - NotificationRules + post: + operationId: PostNotificationRuleIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The label was added to the notification rule + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/labels/{labelID}: + delete: + operationId: DeleteNotificationRulesIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. + in: path + name: ruleID + required: true + schema: + type: string + - description: The ID of the label to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Rule or label not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete label from a notification rule + tags: + - NotificationRules + /api/v2/notificationRules/{ruleID}/query: + get: + operationId: GetNotificationRulesIDQuery + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The notification rule ID. 
+          in: path
+          name: ruleID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/FluxResponse'
+          description: The notification rule query requested
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Invalid request
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Notification rule not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve a notification rule query
+      tags:
+        - Rules
+  /api/v2/orgs:
+    get:
+      operationId: GetOrgs
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - $ref: '#/components/parameters/Offset'
+        - $ref: '#/components/parameters/Limit'
+        - $ref: '#/components/parameters/Descending'
+        - description: Filter organizations to a specific organization name.
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: Filter organizations to a specific organization ID.
+          in: query
+          name: orgID
+          schema:
+            type: string
+        - description: Filter organizations to a specific user ID.
+          in: query
+          name: userID
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Organizations'
+          description: A list of organizations
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all organizations
+      tags:
+        - Organizations
+    post:
+      operationId: PostOrgs
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PostOrganizationRequest'
+        description: Organization to create
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Organization'
+          description: Organization created
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Create an organization
+      tags:
+        - Organizations
+  /api/v2/orgs/{orgID}:
+    delete:
+      operationId: DeleteOrgsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the organization to delete.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Delete has been accepted
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Organization not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete an organization
+      tags:
+        - Organizations
+    get:
+      operationId: GetOrgsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the organization to get.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Organization'
+          description: Organization details
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Retrieve an organization
+      tags:
+        - Organizations
+    patch:
+      operationId: PatchOrgsID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the organization to update.
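+      # Docs illustration (assumption: default OSS address): filter the
+      # organization list by name with the `org` query parameter defined above.
+      #
+      #   curl "http://localhost:8086/api/v2/orgs?org=my-org" \
+      #     --header "Authorization: Token $INFLUX_TOKEN"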
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/PatchOrganizationRequest'
+        description: Organization update to apply
+        required: true
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Organization'
+          description: Organization updated
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Update an organization
+      tags:
+        - Organizations
+  /api/v2/orgs/{orgID}/members:
+    get:
+      operationId: GetOrgsIDMembers
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The organization ID.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ResourceMembers'
+          description: A list of organization members
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Organization not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all members of an organization
+      tags:
+        - Organizations
+    post:
+      operationId: PostOrgsIDMembers
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The organization ID.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/AddResourceMemberRequestBody'
+        description: User to add as member
+        required: true
+      responses:
+        '201':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ResourceMember'
+          description: Member added to organization
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Add a member to an organization
+      tags:
+        - Organizations
+  /api/v2/orgs/{orgID}/members/{userID}:
+    delete:
+      operationId: DeleteOrgsIDMembersID
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The ID of the member to remove.
+          in: path
+          name: userID
+          required: true
+          schema:
+            type: string
+        - description: The organization ID.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '204':
+          description: Member removed
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Remove a member from an organization
+      tags:
+        - Organizations
+  /api/v2/orgs/{orgID}/owners:
+    get:
+      operationId: GetOrgsIDOwners
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The organization ID.
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ResourceOwners'
+          description: A list of organization owners
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Organization not found
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: List all owners of an organization
+      tags:
+        - Organizations
+    post:
+      operationId: PostOrgsIDOwners
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: The organization ID.
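+      # Docs illustration (placeholders: ORG_ID, USER_ID, $INFLUX_TOKEN): add a
+      # member with an `AddResourceMemberRequestBody`, which carries the user ID.
+      #
+      #   curl --request POST "http://localhost:8086/api/v2/orgs/ORG_ID/members" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/json" \
+      #     --data '{"id": "USER_ID"}'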
+ in: path + name: orgID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Organization owner added + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to an organization + tags: + - Organizations + /api/v2/orgs/{orgID}/owners/{userID}: + delete: + operationId: DeleteOrgsIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The organization ID. + in: path + name: orgID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from an organization + tags: + - Organizations + /api/v2/orgs/{orgID}/secrets: + get: + operationId: GetOrgsIDSecrets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: path + name: orgID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/SecretKeysResponse' + description: A list of all secret keys + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all secret keys for an organization + tags: + - Secrets + patch: + operationId: PatchOrgsIDSecrets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: path + name: orgID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Secrets' + description: Secret key value pairs to update/add + required: true + responses: + '204': + description: Keys successfully patched + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update secrets in an organization + tags: + - Secrets + /api/v2/orgs/{orgID}/secrets/{secretID}: + delete: + operationId: DeleteOrgsIDSecretsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: path + name: orgID + required: true + schema: + type: string + - description: The secret ID. + in: path + name: secretID + required: true + schema: + type: string + responses: + '204': + description: Keys successfully deleted + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Delete a secret from an organization + tags: + - Secrets + /api/v2/orgs/{orgID}/secrets/delete: + post: + deprecated: true + operationId: PostOrgsIDSecrets + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. 
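+      # Docs illustration (placeholders: ORG_ID, $INFLUX_TOKEN): upsert secrets
+      # as a flat key-value map via `PATCH /api/v2/orgs/{orgID}/secrets`; an
+      # empty `204` response means the keys were patched.
+      #
+      #   curl --request PATCH "http://localhost:8086/api/v2/orgs/ORG_ID/secrets" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/json" \
+      #     --data '{"apikey": "secret-value"}'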
+          in: path
+          name: orgID
+          required: true
+          schema:
+            type: string
+      requestBody:
+        content:
+          application/json:
+            schema:
+              $ref: '#/components/schemas/SecretKeys'
+        description: Secret key to delete
+        required: true
+      responses:
+        '204':
+          description: Keys successfully deleted
+        default:
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/Error'
+          description: Unexpected error
+      summary: Delete secrets from an organization
+      tags:
+        - Secrets
+  /ping:
+    get:
+      description: Returns the status and InfluxDB version of the instance.
+      operationId: GetPing
+      responses:
+        '204':
+          description: |
+            OK.
+            Headers contain InfluxDB version information.
+          headers:
+            X-Influxdb-Build:
+              description: The type of InfluxDB build.
+              schema:
+                type: string
+            X-Influxdb-Version:
+              description: The version of InfluxDB.
+              schema:
+                type: string
+      servers: []
+      summary: Get the status and version of the instance
+      tags:
+        - Ping
+    head:
+      description: Returns the status and InfluxDB version of the instance.
+      operationId: HeadPing
+      responses:
+        '204':
+          description: |
+            OK.
+            Headers contain InfluxDB version information.
+          headers:
+            X-Influxdb-Build:
+              description: The type of InfluxDB build.
+              schema:
+                type: string
+            X-Influxdb-Version:
+              description: The version of InfluxDB.
+              schema:
+                type: string
+      servers: []
+      summary: Get the status and version of the instance
+      tags:
+        - Ping
+  /api/v2/query:
+    post:
+      description: >
+        Retrieves data from InfluxDB buckets.
+
+
+        To query data, you need the following:
+
+        - **organization** – _See [View
+        organizations](https://docs.influxdata.com/influxdb/v2.2/organizations/view-orgs/#view-your-organization-id)
+        for instructions on viewing your organization ID._
+
+        - **API token** – _See [View
+        tokens](https://docs.influxdata.com/influxdb/v2.2/security/tokens/view-tokens/)
+        for instructions on viewing your API token._
+
+        - **InfluxDB URL** – _See [InfluxDB
+        URLs](https://docs.influxdata.com/influxdb/v2.2/reference/urls/)_.
+
+        - **Flux query** – _See [Flux](https://docs.influxdata.com/flux/v0.x/)._
+
+
+        For more information and examples, see [Query with the InfluxDB
+        API](https://docs.influxdata.com/influxdb/v2.2/query-data/execute-queries/influx-api/).
+      operationId: PostQuery
+      parameters:
+        - $ref: '#/components/parameters/TraceSpan'
+        - description: >-
+            Indicates the content encoding (usually a compression algorithm)
+            that the client can understand.
+          in: header
+          name: Accept-Encoding
+          schema:
+            default: identity
+            description: >-
+              The content coding. Use `gzip` for compressed data or `identity`
+              for unmodified, uncompressed data.
+            enum:
+              - gzip
+              - identity
+            type: string
+        - in: header
+          name: Content-Type
+          schema:
+            enum:
+              - application/json
+              - application/vnd.flux
+            type: string
+        - description: >-
+            Name of the organization executing the query. Accepts either the ID
+            or name. If you provide both `orgID` and `org`, `org` takes
+            precedence.
+          in: query
+          name: org
+          schema:
+            type: string
+        - description: >-
+            ID of the organization executing the query. If you provide both
+            `orgID` and `org`, `org` takes precedence.
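+      # Docs illustration (placeholders: my-org, $INFLUX_TOKEN): send a raw Flux
+      # script with the `application/vnd.flux` content type from the enum above;
+      # results come back as annotated CSV.
+      #
+      #   curl --request POST "http://localhost:8086/api/v2/query?org=my-org" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/vnd.flux" \
+      #     --data 'from(bucket: "example-bucket") |> range(start: -5m)'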
+ in: query + name: orgID + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Query' + application/vnd.flux: + example: | + from(bucket: "example-bucket") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement") + schema: + type: string + description: Flux query or specification to execute + responses: + '200': + content: + text/csv: + schema: + example: > + result,table,_start,_stop,_time,region,host,_value + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43 + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25 + mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62 + type: string + description: Success. Returns query results. + headers: + Content-Encoding: + description: >- + Lists any encodings (usually compression algorithms) that have + been applied to the response payload. + schema: + default: identity + description: > + Content coding: `gzip` for compressed data or `identity` for + unmodified, uncompressed data. + enum: + - gzip + - identity + type: string + Trace-Id: + description: If generated, trace ID of the request. + schema: + description: Trace ID of a request. + type: string + '429': + description: | + #### InfluxDB Cloud: + - returns this error if a **read** or **write** request exceeds your + plan's [adjustable service quotas](https://docs.influxdata.com/influxdb/v2.2/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum + [global limit](https://docs.influxdata.com/influxdb/v2.2/account-management/limits/#global-limits) + - returns `Retry-After` header that describes when to try the write again. + + #### InfluxDB OSS: + - doesn't return this error. + headers: + Retry-After: + description: >- + Non-negative decimal integer indicating seconds to wait before + retrying the request. + schema: + format: int32 + type: integer + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Error processing query + summary: Query data + tags: + - Query + /api/v2/query/analyze: + post: + operationId: PostQueryAnalyze + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: header + name: Content-Type + schema: + enum: + - application/json + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Query' + description: Flux query to analyze + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/AnalyzeQueryResponse' + description: Query analyze results. Errors will be empty if the query is valid. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + headers: + X-Influx-Error: + description: Error string describing the problem + schema: + type: string + X-Influx-Reference: + description: Reference code unique to the error type + schema: + type: integer + summary: Analyze a Flux query + tags: + - Query + /api/v2/query/ast: + post: + description: Analyzes flux query and generates a query specification. + operationId: PostQueryAst + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: header + name: Content-Type + schema: + enum: + - application/json + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LanguageRequest' + description: Analyzed Flux query to generate abstract syntax tree. 
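+      # Docs illustration (assumption: default OSS address): request the AST of
+      # a one-line Flux script; `LanguageRequest` wraps the script in a `query`
+      # field.
+      #
+      #   curl --request POST "http://localhost:8086/api/v2/query/ast" \
+      #     --header "Authorization: Token $INFLUX_TOKEN" \
+      #     --header "Content-Type: application/json" \
+      #     --data '{"query": "from(bucket: \"example-bucket\") |> range(start: -5m)"}'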
+ responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ASTResponse' + description: Abstract syntax tree of the flux query. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Any response other than 200 is an internal server error + summary: Generate an Abstract Syntax Tree (AST) from a query + tags: + - Query + /api/v2/query/suggestions: + get: + operationId: GetQuerySuggestions + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/FluxSuggestions' + description: Suggestions for next functions in call chain + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Any response other than 200 is an internal server error + summary: Retrieve query suggestions + tags: + - Query + /api/v2/query/suggestions/{name}: + get: + operationId: GetQuerySuggestionsName + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The name of the branching suggestion. + in: path + name: name + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/FluxSuggestion' + description: Suggestions for next functions in call chain + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Any response other than 200 is an internal server error + summary: Retrieve query suggestions for a branching suggestion + tags: + - Query + /ready: + get: + operationId: GetReady + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Ready' + description: The instance is ready + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + servers: [] + summary: Get the readiness of an instance at startup + tags: + - Ready + /api/v2/remotes: + get: + operationId: GetRemoteConnections + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteURL + schema: + format: uri + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnections' + description: List of remote connections + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: List all remote connections + tags: + - RemoteConnections + post: + operationId: PostRemoteConnection + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection saved + '400': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Register a new remote connection + tags: + - RemoteConnections + /api/v2/remotes/{remoteID}: + delete: + operationId: DeleteRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '204': + description: Remote connection info deleted. 
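+      # Docs illustration (placeholders: ORG_ID, $INFLUX_TOKEN): list remote
+      # connections, optionally narrowing by `name` or `remoteURL` as defined
+      # above; `orgID` is required.
+      #
+      #   curl -G "http://localhost:8086/api/v2/remotes" \
+      #     --data-urlencode "orgID=ORG_ID" \
+      #     --header "Authorization: Token $INFLUX_TOKEN"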
+ '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Delete a remote connection + tags: + - RemoteConnections + get: + operationId: GetRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Remote connection + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Retrieve a remote connection + tags: + - RemoteConnections + patch: + operationId: PatchRemoteConnectionByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: remoteID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnectionUpdateRequest' + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/RemoteConnection' + description: Updated information saved + '400': + $ref: '#/components/responses/ServerError' + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Update a remote connection + tags: + - RemoteConnections + /api/v2/replications: + get: + operationId: GetReplications + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID. + in: query + name: orgID + required: true + schema: + type: string + - in: query + name: name + schema: + type: string + - in: query + name: remoteID + schema: + type: string + - in: query + name: localBucketID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Replications' + description: List of replications + '404': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: List all replications + tags: + - Replications + post: + operationId: PostReplication + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: If true, validate the replication, but don't save it. + in: query + name: validate + schema: + default: false + type: boolean + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ReplicationCreationRequest' + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Replication' + description: Replication saved + '204': + description: Replication validated, but not saved + '400': + $ref: '#/components/responses/ServerError' + default: + $ref: '#/components/responses/ServerError' + summary: Register a new replication + tags: + - Replications + /api/v2/replications/{replicationID}: + delete: + operationId: DeleteReplicationByID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: replicationID + required: true + schema: + type: string + responses: + '204': + description: Replication deleted. 
+ '404':
+ $ref: '#/components/responses/ServerError'
+ default:
+ $ref: '#/components/responses/ServerError'
+ summary: Delete a replication
+ tags:
+ - Replications
+ get:
+ operationId: GetReplicationByID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - in: path
+ name: replicationID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Replication'
+ description: Replication
+ '404':
+ $ref: '#/components/responses/ServerError'
+ default:
+ $ref: '#/components/responses/ServerError'
+ summary: Retrieve a replication
+ tags:
+ - Replications
+ patch:
+ operationId: PatchReplicationByID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - in: path
+ name: replicationID
+ required: true
+ schema:
+ type: string
+ - description: If true, validate the updated information, but don't save it.
+ in: query
+ name: validate
+ schema:
+ default: false
+ type: boolean
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ReplicationUpdateRequest'
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Replication'
+ description: Updated information saved
+ '204':
+ description: Updated replication validated, but not saved
+ '400':
+ $ref: '#/components/responses/ServerError'
+ '404':
+ $ref: '#/components/responses/ServerError'
+ default:
+ $ref: '#/components/responses/ServerError'
+ summary: Update a replication
+ tags:
+ - Replications
+ /api/v2/replications/{replicationID}/validate:
+ post:
+ operationId: PostValidateReplicationByID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - in: path
+ name: replicationID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Replication is valid
+ '400':
+ $ref: '#/components/responses/ServerError'
+ description: Replication failed validation
+ default:
+ $ref: '#/components/responses/ServerError'
+ summary: Validate a replication
+ tags:
+ - Replications
+ /api/v2/resources:
+ get:
+ operationId: GetResources
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ items:
+ type: string
+ type: array
+ description: All resource targets
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Internal server error
+ summary: List all known resources
+ tags:
+ - Resources
+ /api/v2/restore/bucket/{bucketID}:
+ post:
+ deprecated: true
+ operationId: PostRestoreBucketID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The bucket ID.
+ in: path
+ name: bucketID
+ required: true
+ schema:
+ type: string
+ - in: header
+ name: Content-Type
+ schema:
+ default: application/octet-stream
+ enum:
+ - application/octet-stream
+ type: string
+ requestBody:
+ content:
+ text/plain:
+ schema:
+ format: byte
+ type: string
+ description: Database info serialized as protobuf.
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ format: byte
+ type: string
+ description: ID mappings for shards in bucket.
+ default:
+ $ref: '#/components/responses/ServerError'
+ description: Unexpected error
+ summary: Overwrite storage metadata for a bucket with shard info from a backup.
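+ # NOTE (docs-only sketch): an existing replication can be re-checked with the
+ # `/api/v2/replications/{replicationID}/validate` endpoint above; a 204
+ # response means the replication is valid. REPLICATION_ID, the host, and the
+ # token are placeholders:
+ #
+ #   curl --request POST "http://localhost:8086/api/v2/replications/REPLICATION_ID/validate" \
+ #     --header "Authorization: Token $INFLUX_TOKEN"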
+ tags:
+ - Restore
+ /api/v2/restore/bucketMetadata:
+ post:
+ operationId: PostRestoreBucketMetadata
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/BucketMetadataManifest'
+ description: Metadata manifest for a bucket.
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/RestoredBucketMappings'
+ description: ID mappings for shards in new bucket.
+ default:
+ $ref: '#/components/responses/ServerError'
+ description: Unexpected error
+ summary: Create a new bucket pre-seeded with shard info from a backup.
+ tags:
+ - Restore
+ /api/v2/restore/kv:
+ post:
+ operationId: PostRestoreKV
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: >
+ The value tells InfluxDB what compression is applied to the
+ snapshot data in the request payload.
+
+ To make an API request with a GZIP payload, send `Content-Encoding:
+ gzip` as a request header.
+ in: header
+ name: Content-Encoding
+ schema:
+ default: identity
+ description: >-
+ The content coding. Use `gzip` for compressed data or `identity`
+ for unmodified, uncompressed data.
+ enum:
+ - gzip
+ - identity
+ type: string
+ - in: header
+ name: Content-Type
+ schema:
+ default: application/octet-stream
+ enum:
+ - application/octet-stream
+ type: string
+ requestBody:
+ content:
+ text/plain:
+ schema:
+ format: binary
+ type: string
+ description: Full KV snapshot.
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ properties:
+ token:
+ description: >-
+ token is the root token for the instance after restore
+ (this is overwritten during the restore)
+ type: string
+ type: object
+ description: KV store successfully overwritten.
+ '204':
+ description: KV store successfully overwritten.
+ default:
+ $ref: '#/components/responses/ServerError'
+ description: Unexpected error
+ summary: Overwrite the embedded KV store on the server with a backed-up snapshot.
+ tags:
+ - Restore
+ /api/v2/restore/shards/{shardID}:
+ post:
+ operationId: PostRestoreShardId
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: >
+ The value tells InfluxDB what compression is applied to the
+ snapshot data in the request payload.
+
+ To make an API request with a GZIP payload, send `Content-Encoding:
+ gzip` as a request header.
+ in: header
+ name: Content-Encoding
+ schema:
+ default: identity
+ description: >-
+ The content coding. Use `gzip` for compressed data or `identity`
+ for unmodified, uncompressed data.
+ enum:
+ - gzip
+ - identity
+ type: string
+ - in: header
+ name: Content-Type
+ schema:
+ default: application/octet-stream
+ enum:
+ - application/octet-stream
+ type: string
+ - description: The shard ID.
+ in: path
+ name: shardID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ text/plain:
+ schema:
+ format: binary
+ type: string
+ description: TSM snapshot.
+ required: true
+ responses:
+ '204':
+ description: TSM snapshot successfully restored.
+ default:
+ $ref: '#/components/responses/ServerError'
+ description: Unexpected error
+ summary: Restore a TSM snapshot into a shard.
+ tags:
+ - Restore
+ /api/v2/restore/sql:
+ post:
+ operationId: PostRestoreSQL
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: >
+ The value tells InfluxDB what compression is applied to the
+ snapshot data in the request payload.
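+ # NOTE (docs-only sketch): the Content-Encoding parameter above means restore
+ # snapshots can be uploaded compressed. Assuming a gzipped KV snapshot file
+ # and placeholder host and token, a restore request might look like:
+ #
+ #   curl --request POST "http://localhost:8086/api/v2/restore/kv" \
+ #     --header "Authorization: Token $INFLUX_TOKEN" \
+ #     --header "Content-Encoding: gzip" \
+ #     --header "Content-Type: application/octet-stream" \
+ #     --data-binary @kv-snapshot.gz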
+
+ To make an API request with a GZIP payload, send `Content-Encoding:
+ gzip` as a request header.
+ in: header
+ name: Content-Encoding
+ schema:
+ default: identity
+ description: >-
+ The content coding. Use `gzip` for compressed data or `identity`
+ for unmodified, uncompressed data.
+ enum:
+ - gzip
+ - identity
+ type: string
+ - in: header
+ name: Content-Type
+ schema:
+ default: application/octet-stream
+ enum:
+ - application/octet-stream
+ type: string
+ requestBody:
+ content:
+ text/plain:
+ schema:
+ format: binary
+ type: string
+ description: Full SQL snapshot.
+ required: true
+ responses:
+ '204':
+ description: SQL store successfully overwritten.
+ default:
+ $ref: '#/components/responses/ServerError'
+ description: Unexpected error
+ summary: >-
+ Overwrite the embedded SQL store on the server with a backed-up
+ snapshot.
+ tags:
+ - Restore
+ /api/v2/scrapers:
+ get:
+ operationId: GetScrapers
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: Specifies the name of the scraper target.
+ in: query
+ name: name
+ schema:
+ type: string
+ - description: >-
+ List of scraper target IDs to return. If both `id` and `owner` are
+ specified, only `id` is used.
+ in: query
+ name: id
+ schema:
+ items:
+ type: string
+ type: array
+ - description: Specifies the organization ID of the scraper target.
+ in: query
+ name: orgID
+ schema:
+ type: string
+ - description: Specifies the organization name of the scraper target.
+ in: query
+ name: org
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ScraperTargetResponses'
+ description: All scraper targets
+ summary: List all scraper targets
+ tags:
+ - Scraper Targets
+ post:
+ operationId: PostScrapers
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ScraperTargetRequest'
+ description: Scraper target to create
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ScraperTargetResponse'
+ description: Scraper target created
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Internal server error
+ summary: Create a scraper target
+ tags:
+ - Scraper Targets
+ /api/v2/scrapers/{scraperTargetID}:
+ delete:
+ operationId: DeleteScrapersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The identifier of the scraper target.
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Scraper target deleted
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Internal server error
+ summary: Delete a scraper target
+ tags:
+ - Scraper Targets
+ get:
+ operationId: GetScrapersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The identifier of the scraper target.
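+ # NOTE (docs-only sketch): `GET /api/v2/scrapers` above filters by `name`,
+ # `id`, `orgID`, or `org`. With placeholder host, organization name, and
+ # token:
+ #
+ #   curl --request GET "http://localhost:8086/api/v2/scrapers?org=my-org" \
+ #     --header "Authorization: Token $INFLUX_TOKEN"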
+ in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponse' + description: The scraper target + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Retrieve a scraper target + tags: + - Scraper Targets + patch: + operationId: PatchScrapersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The identifier of the scraper target. + in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetRequest' + description: Scraper target update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ScraperTargetResponse' + description: Scraper target updated + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Internal server error + summary: Update a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/labels: + get: + operationId: GetScrapersIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of labels for a scraper target. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a scraper target + tags: + - Scraper Targets + post: + operationId: PostScrapersIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The newly added label + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/labels/{labelID}: + delete: + operationId: DeleteScrapersIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. + in: path + name: scraperTargetID + required: true + schema: + type: string + - description: The label ID. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Scraper target not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a scraper target + tags: + - Scraper Targets + /api/v2/scrapers/{scraperTargetID}/members: + get: + operationId: GetScrapersIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The scraper target ID. 
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceMembers'
+ description: A list of scraper target members
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all users with member privileges for a scraper target
+ tags:
+ - Scraper Targets
+ post:
+ operationId: PostScrapersIDMembers
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The scraper target ID.
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/AddResourceMemberRequestBody'
+ description: User to add as member
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceMember'
+ description: Member added to scraper target
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add a member to a scraper target
+ tags:
+ - Scraper Targets
+ /api/v2/scrapers/{scraperTargetID}/members/{userID}:
+ delete:
+ operationId: DeleteScrapersIDMembersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the member to remove.
+ in: path
+ name: userID
+ required: true
+ schema:
+ type: string
+ - description: The scraper target ID.
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Member removed
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Remove a member from a scraper target
+ tags:
+ - Scraper Targets
+ /api/v2/scrapers/{scraperTargetID}/owners:
+ get:
+ operationId: GetScrapersIDOwners
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The scraper target ID.
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceOwners'
+ description: A list of scraper target owners
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all owners of a scraper target
+ tags:
+ - Scraper Targets
+ post:
+ operationId: PostScrapersIDOwners
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The scraper target ID.
+ in: path
+ name: scraperTargetID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/AddResourceMemberRequestBody'
+ description: User to add as owner
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceOwner'
+ description: Scraper target owner added
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add an owner to a scraper target
+ tags:
+ - Scraper Targets
+ /api/v2/scrapers/{scraperTargetID}/owners/{userID}:
+ delete:
+ operationId: DeleteScrapersIDOwnersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the owner to remove.
+ in: path
+ name: userID
+ required: true
+ schema:
+ type: string
+ - description: The scraper target ID.
+ in: path + name: scraperTargetID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a scraper target + tags: + - Scraper Targets + /api/v2/setup: + get: + description: >- + Returns `true` if no default user, organization, or bucket has been + created. + operationId: GetSetup + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/IsOnboarding' + description: allowed true or false + summary: Check if database has default user, org, bucket + tags: + - Setup + post: + description: Post an onboarding request to set up initial user, org and bucket. + operationId: PostSetup + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/OnboardingRequest' + description: Source to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/OnboardingResponse' + description: Created default user, bucket, org + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Set up initial user, org and bucket + tags: + - Setup + /api/v2/signin: + post: + description: >- + Authenticates ***Basic Auth*** credentials for a user. If successful, + creates a new UI session for the user. + operationId: PostSignin + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '204': + description: Success. User authenticated. + '401': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unauthorized access. + '403': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: User account is disabled. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unsuccessful authentication. + security: + - BasicAuthentication: [] + summary: Create a user session. + tags: + - Signin + /api/v2/signout: + post: + description: Expires the current UI session for the user. + operationId: PostSignout + parameters: + - $ref: '#/components/parameters/TraceSpan' + responses: + '204': + description: Session successfully expired + '401': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unauthorized access + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unsuccessful session expiry + summary: Expire the current UI session + tags: + - Signout + /api/v2/sources: + get: + operationId: GetSources + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The name of the organization. 
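+ # NOTE (docs-only sketch): `POST /api/v2/signin` above exchanges Basic Auth
+ # credentials for a UI session. With placeholder credentials, curl can store
+ # the session cookie for later requests:
+ #
+ #   curl --request POST "http://localhost:8086/api/v2/signin" \
+ #     --user "USERNAME:PASSWORD" \
+ #     --cookie-jar ./influx-session.txt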
+ in: query
+ name: org
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Sources'
+ description: A list of sources
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all sources
+ tags:
+ - Sources
+ post:
+ operationId: PostSources
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Source'
+ description: Source to create
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Source'
+ description: Created Source
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Create a source
+ tags:
+ - Sources
+ /api/v2/sources/{sourceID}:
+ delete:
+ operationId: DeleteSourcesID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The source ID.
+ in: path
+ name: sourceID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Delete has been accepted
+ '404':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Source not found
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Delete a source
+ tags:
+ - Sources
+ get:
+ operationId: GetSourcesID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The source ID.
+ in: path
+ name: sourceID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Source'
+ description: A source
+ '404':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Source not found
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Retrieve a source
+ tags:
+ - Sources
+ patch:
+ operationId: PatchSourcesID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The source ID.
+ in: path
+ name: sourceID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Source'
+ description: Source update
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Source'
+ description: Updated source
+ '404':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Source not found
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Update a source
+ tags:
+ - Sources
+ /api/v2/sources/{sourceID}/buckets:
+ get:
+ operationId: GetSourcesIDBuckets
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The source ID.
+ in: path
+ name: sourceID
+ required: true
+ schema:
+ type: string
+ - description: The name of the organization.
+ in: query
+ name: org
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Buckets'
+ description: A list of buckets
+ '404':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Source not found
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Get buckets in a source
+ tags:
+ - Sources
+ - Buckets
+ /api/v2/sources/{sourceID}/health:
+ get:
+ operationId: GetSourcesIDHealth
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The source ID.
+ in: path
+ name: sourceID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/HealthCheck'
+ description: The source is healthy
+ '503':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/HealthCheck'
+ description: The source is not healthy
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Get the health of a source
+ tags:
+ - Sources
+ /api/v2/stacks:
+ get:
+ operationId: ListStacks
+ parameters:
+ - description: The organization ID of the stacks.
+ in: query
+ name: orgID
+ required: true
+ schema:
+ type: string
+ - description: A collection of names to filter the list by.
+ in: query
+ name: name
+ schema:
+ type: string
+ - description: A collection of stackIDs to filter the list by.
+ in: query
+ name: stackID
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ properties:
+ stacks:
+ items:
+ $ref: '#/components/schemas/Stack'
+ type: array
+ type: object
+ description: Success. Returns the list of stacks.
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List installed templates
+ tags:
+ - Templates
+ post:
+ operationId: CreateStack
+ requestBody:
+ content:
+ application/json:
+ schema:
+ properties:
+ description:
+ type: string
+ name:
+ type: string
+ orgID:
+ type: string
+ urls:
+ items:
+ type: string
+ type: array
+ title: PostStackRequest
+ type: object
+ description: The stack to create.
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Stack'
+ description: Success. Returns the newly created stack.
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Create a new stack
+ tags:
+ - Templates
+ /api/v2/stacks/{stack_id}:
+ delete:
+ operationId: DeleteStack
+ parameters:
+ - description: The identifier of the stack.
+ in: path
+ name: stack_id
+ required: true
+ schema:
+ type: string
+ - description: The identifier of the organization.
+ in: query
+ name: orgID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: The stack and its associated resources were deleted.
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Delete a stack and associated resources
+ tags:
+ - Templates
+ get:
+ operationId: ReadStack
+ parameters:
+ - description: The identifier of the stack.
+ in: path
+ name: stack_id
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Stack'
+ description: Returns the stack.
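+ # NOTE (docs-only sketch): `POST /api/v2/stacks` above takes the
+ # PostStackRequest fields shown (orgID, name, description, urls). With
+ # placeholder host, token, and IDs:
+ #
+ #   curl --request POST "http://localhost:8086/api/v2/stacks" \
+ #     --header "Authorization: Token $INFLUX_TOKEN" \
+ #     --header "Content-Type: application/json" \
+ #     --data '{"orgID": "INFLUX_ORG_ID", "name": "monitoring-stack", "urls": []}'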
+ default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a stack + tags: + - Templates + patch: + operationId: UpdateStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + properties: + additionalResources: + items: + properties: + kind: + type: string + resourceID: + type: string + templateMetaName: + type: string + required: + - kind + - resourceID + type: object + type: array + description: + nullable: true + type: string + name: + nullable: true + type: string + templateURLs: + items: + type: string + nullable: true + type: array + title: PatchStackRequest + type: object + description: The stack to update. + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Returns the updated stack. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a stack + tags: + - Templates + /api/v2/stacks/{stack_id}/uninstall: + post: + operationId: UninstallStack + parameters: + - description: The identifier of the stack. + in: path + name: stack_id + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Stack' + description: Returns the uninstalled stack. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Uninstall a stack + tags: + - Templates + /api/v2/tasks: + get: + operationId: GetTasks + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: Returns task with a specific name. + in: query + name: name + schema: + type: string + - description: Return tasks after a specified ID. + in: query + name: after + schema: + type: string + - description: Filter tasks to a specific user ID. + in: query + name: user + schema: + type: string + - description: Filter tasks to a specific organization name. + in: query + name: org + schema: + type: string + - description: Filter tasks to a specific organization ID. + in: query + name: orgID + schema: + type: string + - description: Filter tasks by a status--"inactive" or "active". + in: query + name: status + schema: + enum: + - active + - inactive + type: string + - description: The number of tasks to return + in: query + name: limit + schema: + default: 100 + maximum: 500 + minimum: 1 + type: integer + - description: Type of task, unset by default. 
+ in: query + name: type + required: false + schema: + default: '' + enum: + - basic + - system + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Tasks' + description: A list of tasks + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all tasks + tags: + - Tasks + post: + operationId: PostTasks + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/TaskCreateRequest' + description: Task to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Task created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a new task + tags: + - Tasks + /api/v2/tasks/{taskID}: + delete: + description: Deletes a task and all associated records + operationId: DeleteTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the task to delete. + in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Task deleted + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a task + tags: + - Tasks + get: + operationId: GetTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Task details + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a task + tags: + - Tasks + patch: + description: Update a task. This will cancel all queued runs. + operationId: PatchTasksID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/TaskUpdateRequest' + description: Task update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Task' + description: Task updated + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Update a task + tags: + - Tasks + /api/v2/tasks/{taskID}/labels: + get: + operationId: GetTasksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a task + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a task + tags: + - Tasks + post: + operationId: PostTasksIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. 
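+ # NOTE (docs-only sketch): the GetTasks parameters above allow filtering by
+ # status and paging with `limit` (1-500, default 100). With placeholder host
+ # and token:
+ #
+ #   curl --request GET "http://localhost:8086/api/v2/tasks?status=active&limit=50" \
+ #     --header "Authorization: Token $INFLUX_TOKEN"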
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelMapping'
+ description: Label to add
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelResponse'
+ description: The label added to the task
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add a label to a task
+ tags:
+ - Tasks
+ /api/v2/tasks/{taskID}/labels/{labelID}:
+ delete:
+ operationId: DeleteTasksIDLabelsID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The task ID.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ - description: The label ID.
+ in: path
+ name: labelID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Delete has been accepted
+ '404':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Task not found
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Delete a label from a task
+ tags:
+ - Tasks
+ /api/v2/tasks/{taskID}/logs:
+ get:
+ operationId: GetTasksIDLogs
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The task ID.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Logs'
+ description: All logs for a task
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Retrieve all logs for a task
+ tags:
+ - Tasks
+ /api/v2/tasks/{taskID}/members:
+ get:
+ operationId: GetTasksIDMembers
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The task ID.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceMembers'
+ description: A list of users who have member privileges for a task
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all task members
+ tags:
+ - Tasks
+ post:
+ operationId: PostTasksIDMembers
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The task ID.
+ in: path
+ name: taskID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/AddResourceMemberRequestBody'
+ description: User to add as member
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceMember'
+ description: Added to task members
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add a member to a task
+ tags:
+ - Tasks
+ /api/v2/tasks/{taskID}/members/{userID}:
+ delete:
+ operationId: DeleteTasksIDMembersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the member to remove.
+ in: path
+ name: userID
+ required: true
+ schema:
+ type: string
+ - description: The task ID.
+ in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a task + tags: + - Tasks + /api/v2/tasks/{taskID}/owners: + get: + operationId: GetTasksIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: A list of users who have owner privileges for a task + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a task + tags: + - Tasks + post: + operationId: PostTasksIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as owner + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwner' + description: Added to task owners + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add an owner to a task + tags: + - Tasks + /api/v2/tasks/{taskID}/owners/{userID}: + delete: + operationId: DeleteTasksIDOwnersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the owner to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + responses: + '204': + description: Owner removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove an owner from a task + tags: + - Tasks + /api/v2/tasks/{taskID}/runs: + get: + operationId: GetTasksIDRuns + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the task to get runs for. + in: path + name: taskID + required: true + schema: + type: string + - description: Returns runs after a specific ID. 
+ in: query + name: after + schema: + type: string + - description: The number of runs to return + in: query + name: limit + schema: + default: 100 + maximum: 500 + minimum: 1 + type: integer + - description: Filter runs to those scheduled after this time, RFC3339 + in: query + name: afterTime + schema: + format: date-time + type: string + - description: Filter runs to those scheduled before this time, RFC3339 + in: query + name: beforeTime + schema: + format: date-time + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Runs' + description: A list of task runs + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List runs for a task + tags: + - Tasks + post: + operationId: PostTasksIDRuns + parameters: + - $ref: '#/components/parameters/TraceSpan' + - in: path + name: taskID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RunManually' + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Run' + description: Run scheduled to start + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Manually start a task run, overriding the current schedule + tags: + - Tasks + /api/v2/tasks/{taskID}/runs/{runID}: + delete: + operationId: DeleteTasksIDRunsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + - description: The run ID. + in: path + name: runID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Cancel a running task + tags: + - Tasks + get: + operationId: GetTasksIDRunsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + - description: The run ID. + in: path + name: runID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Run' + description: The run record + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve a single run for a task + tags: + - Tasks + /api/v2/tasks/{taskID}/runs/{runID}/logs: + get: + operationId: GetTasksIDRunsIDLogs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: ID of task to get logs for. + in: path + name: taskID + required: true + schema: + type: string + - description: ID of run to get logs for. + in: path + name: runID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Logs' + description: All logs for a run + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retrieve all logs for a run + tags: + - Tasks + /api/v2/tasks/{taskID}/runs/{runID}/retry: + post: + operationId: PostTasksIDRunsIDRetry + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The task ID. + in: path + name: taskID + required: true + schema: + type: string + - description: The run ID. 
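+ # NOTE (docs-only sketch): task runs can be windowed with the RFC3339
+ # `afterTime`/`beforeTime` parameters defined above. TASK_ID, the host, and
+ # the token are placeholders:
+ #
+ #   curl --request GET "http://localhost:8086/api/v2/tasks/TASK_ID/runs?afterTime=2022-01-01T00:00:00Z" \
+ #     --header "Authorization: Token $INFLUX_TOKEN"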
+ in: path + name: runID + required: true + schema: + type: string + requestBody: + content: + application/json; charset=utf-8: + schema: + type: object + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Run' + description: Run that has been queued + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Retry a task run + tags: + - Tasks + /api/v2/telegraf/plugins: + get: + operationId: GetTelegrafPlugins + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The type of plugin desired. + in: query + name: type + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/TelegrafPlugins' + description: A list of Telegraf plugins. + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all Telegraf plugins + tags: + - Telegraf Plugins + /api/v2/telegrafs: + get: + operationId: GetTelegrafs + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The organization ID the Telegraf config belongs to. + in: query + name: orgID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Telegrafs' + description: A list of Telegraf configurations + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all Telegraf configurations + tags: + - Telegrafs + post: + operationId: PostTelegrafs + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/TelegrafPluginRequest' + description: Telegraf configuration to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Telegraf' + description: Telegraf configuration created + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Create a Telegraf configuration + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}: + delete: + operationId: DeleteTelegrafsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a Telegraf configuration + tags: + - Telegrafs + get: + operationId: GetTelegrafsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. 
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ - in: header
+ name: Accept
+ required: false
+ schema:
+ default: application/toml
+ enum:
+ - application/toml
+ - application/json
+ - application/octet-stream
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Telegraf'
+ application/octet-stream:
+ example: |-
+ [agent]
+ interval = "10s"
+ schema:
+ type: string
+ application/toml:
+ example: |-
+ [agent]
+ interval = "10s"
+ schema:
+ type: string
+ description: Telegraf configuration details
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Retrieve a Telegraf configuration
+ tags:
+ - Telegrafs
+ put:
+ operationId: PutTelegrafsID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/TelegrafPluginRequest'
+ description: Telegraf configuration update to apply
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Telegraf'
+ description: The updated Telegraf configuration
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Update a Telegraf configuration
+ tags:
+ - Telegrafs
+ /api/v2/telegrafs/{telegrafID}/labels:
+ get:
+ operationId: GetTelegrafsIDLabels
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelsResponse'
+ description: A list of all labels for a Telegraf config
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: List all labels for a Telegraf config
+ tags:
+ - Telegrafs
+ post:
+ operationId: PostTelegrafsIDLabels
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelMapping'
+ description: Label to add
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/LabelResponse'
+ description: The label added to the Telegraf config
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add a label to a Telegraf config
+ tags:
+ - Telegrafs
+ /api/v2/telegrafs/{telegrafID}/labels/{labelID}:
+ delete:
+ operationId: DeleteTelegrafsIDLabelsID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ - description: The label ID.
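+ # NOTE (docs-only sketch): the Accept header above controls the response
+ # format; `application/toml` returns a config that Telegraf can consume
+ # directly. TELEGRAF_ID, the host, and the token are placeholders:
+ #
+ #   curl --request GET "http://localhost:8086/api/v2/telegrafs/TELEGRAF_ID" \
+ #     --header "Authorization: Token $INFLUX_TOKEN" \
+ #     --header "Accept: application/toml"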
+ in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Telegraf config not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/members: + get: + operationId: GetTelegrafsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMembers' + description: A list of Telegraf config members + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all users with member privileges for a Telegraf config + tags: + - Telegrafs + post: + operationId: PostTelegrafsIDMembers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AddResourceMemberRequestBody' + description: User to add as member + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceMember' + description: Member added to Telegraf config + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a member to a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/members/{userID}: + delete: + operationId: DeleteTelegrafsIDMembersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the member to remove. + in: path + name: userID + required: true + schema: + type: string + - description: The Telegraf config ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '204': + description: Member removed + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Remove a member from a Telegraf config + tags: + - Telegrafs + /api/v2/telegrafs/{telegrafID}/owners: + get: + operationId: GetTelegrafsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. + in: path + name: telegrafID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/ResourceOwners' + description: Returns Telegraf configuration owners as a ResourceOwners list + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all owners of a Telegraf configuration + tags: + - Telegrafs + post: + operationId: PostTelegrafsIDOwners + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The Telegraf configuration ID. 
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/AddResourceMemberRequestBody'
+ description: User to add as owner
+ required: true
+ responses:
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/ResourceOwner'
+ description: >-
+ Telegraf configuration owner was added. Returns a ResourceOwner that
+ references the User.
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Add an owner to a Telegraf configuration
+ tags:
+ - Telegrafs
+ /api/v2/telegrafs/{telegrafID}/owners/{userID}:
+ delete:
+ operationId: DeleteTelegrafsIDOwnersID
+ parameters:
+ - $ref: '#/components/parameters/TraceSpan'
+ - description: The ID of the owner to remove.
+ in: path
+ name: userID
+ required: true
+ schema:
+ type: string
+ - description: The Telegraf config ID.
+ in: path
+ name: telegrafID
+ required: true
+ schema:
+ type: string
+ responses:
+ '204':
+ description: Owner removed
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Remove an owner from a Telegraf config
+ tags:
+ - Telegrafs
+ /api/v2/templates/apply:
+ post:
+ description: Applies or performs a dry-run of a template in an organization.
+ operationId: ApplyTemplate
+ requestBody:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/TemplateApply'
+ application/x-jsonnet:
+ schema:
+ $ref: '#/components/schemas/TemplateApply'
+ text/yml:
+ schema:
+ $ref: '#/components/schemas/TemplateApply'
+ required: true
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/TemplateSummary'
+ description: >
+ Success. The package dry-run succeeded. No new resources were
+ created. Returns a diff and summary of the dry-run. The diff and
+ summary won't contain IDs for resources that didn't exist at the
+ time of the dry-run. This corresponds to `"dryRun": true`.
+ '201':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/TemplateSummary'
+ description: >
+ Success. The package applied successfully. Returns a diff and
+ summary of the run. The summary contains newly created resources.
+ The diff compares the initial state to the state after the package
+ applied.
+ default:
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Error'
+ description: Unexpected error
+ summary: Apply or dry-run a template
+ tags:
+ - Templates
+ /api/v2/templates/export:
+ post:
+ operationId: ExportTemplate
+ requestBody:
+ content:
+ application/json:
+ schema:
+ oneOf:
+ - $ref: '#/components/schemas/TemplateExportByID'
+ - $ref: '#/components/schemas/TemplateExportByName'
+ description: Export resources as an InfluxDB template.
+ required: false
+ responses:
+ '200':
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/Template'
+ application/x-yaml:
+ schema:
+ $ref: '#/components/schemas/Template'
+ description: >-
+ The template was created successfully. Returns the newly created
+ template.
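+ # NOTE (docs-only sketch): per the 200/201 responses above, the same
+ # `/api/v2/templates/apply` call serves both dry-runs and real applies.
+ # The body fields follow the TemplateApply schema referenced above and are
+ # assumptions in this sketch, as are the host, token, IDs, and template URL:
+ #
+ #   curl --request POST "http://localhost:8086/api/v2/templates/apply" \
+ #     --header "Authorization: Token $INFLUX_TOKEN" \
+ #     --header "Content-Type: application/json" \
+ #     --data '{"dryRun": true, "orgID": "INFLUX_ORG_ID", "remotes": [{"url": "TEMPLATE_URL"}]}'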
+ default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Export a new template + tags: + - Templates + /api/v2/users: + get: + operationId: GetUsers + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/Offset' + - $ref: '#/components/parameters/Limit' + - $ref: '#/components/parameters/After' + - in: query + name: name + schema: + type: string + - in: query + name: id + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Users' + description: A list of users + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: List all users + tags: + - Users + post: + operationId: PostUsers + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/User' + description: User to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: User created + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Create a user + tags: + - Users + /api/v2/users/{userID}: + delete: + operationId: DeleteUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the user to delete. + in: path + name: userID + required: true + schema: + type: string + responses: + '204': + description: User deleted + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Delete a user + tags: + - Users + get: + operationId: GetUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The user ID. + in: path + name: userID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: User details + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Retrieve a user + tags: + - Users + patch: + operationId: PatchUsersID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The ID of the user to update. + in: path + name: userID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/User' + description: User update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/UserResponse' + description: User updated + default: + $ref: '#/components/responses/ServerError' + description: Unexpected error + summary: Update a user + tags: + - Users + /api/v2/users/{userID}/password: + post: + operationId: PostUsersIDPassword + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The user ID. 
+ in: path + name: userID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/PasswordResetBody' + description: New password + required: true + responses: + '204': + description: Password successfully updated + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unsuccessful authentication + security: + - BasicAuthentication: [] + summary: Update a password + tags: + - Users + /api/v2/variables: + get: + operationId: GetVariables + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The name of the organization. + in: query + name: org + schema: + type: string + - description: The organization ID. + in: query + name: orgID + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variables' + description: A list of variables for an organization + '400': + $ref: '#/components/responses/ServerError' + description: Invalid request + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: List all variables + tags: + - Variables + post: + operationId: PostVariables + parameters: + - $ref: '#/components/parameters/TraceSpan' + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable to create + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable created + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: Create a variable + tags: + - Variables + /api/v2/variables/{variableID}: + delete: + operationId: DeleteVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + responses: + '204': + description: Variable deleted + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: Delete a variable + tags: + - Variables + get: + operationId: GetVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable found + '404': + $ref: '#/components/responses/ServerError' + description: Variable not found + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: Retrieve a variable + tags: + - Variables + patch: + operationId: PatchVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable update to apply + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable updated + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: Update a variable + tags: + - Variables + put: + operationId: PutVariablesID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. 
+ in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable to replace + required: true + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/Variable' + description: Variable updated + default: + $ref: '#/components/responses/ServerError' + description: Internal server error + summary: Replace a variable + tags: + - Variables + /api/v2/variables/{variableID}/labels: + get: + operationId: GetVariablesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + responses: + '200': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelsResponse' + description: A list of all labels for a variable + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: List all labels for a variable + tags: + - Variables + post: + operationId: PostVariablesIDLabels + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/LabelMapping' + description: Label to add + required: true + responses: + '201': + content: + application/json: + schema: + $ref: '#/components/schemas/LabelResponse' + description: The newly added label + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Add a label to a variable + tags: + - Variables + /api/v2/variables/{variableID}/labels/{labelID}: + delete: + operationId: DeleteVariablesIDLabelsID + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: The variable ID. + in: path + name: variableID + required: true + schema: + type: string + - description: The label ID to delete. + in: path + name: labelID + required: true + schema: + type: string + responses: + '204': + description: Delete has been accepted + '404': + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Variable not found + default: + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + description: Unexpected error + summary: Delete a label from a variable + tags: + - Variables + /api/v2/write: + post: + description: > + Writes data to a bucket. + + + Use this endpoint to send data in [line + protocol](https://docs.influxdata.com/influxdb/v2.2/reference/syntax/line-protocol/) + format to InfluxDB. + + InfluxDB parses and validates line protocol in the request body, + + responds with success or failure, and then handles the write + asynchronously. + + + #### Required permissions + + + - `write-buckets` or `write-bucket BUCKET_ID` + + + `BUCKET_ID` is the ID of the destination bucket. + + + #### Rate limits (with InfluxDB Cloud) + + + `write` rate limits apply. + + For more information, see [limits and adjustable + quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/). + + + #### Related guides + + + - [Write data with the InfluxDB + API](https://docs.influxdata.com/influxdb/v2.2/write-data/developer-tools/api). + + - [Optimize writes to + InfluxDB](https://docs.influxdata.com/influxdb/v2.2/write-data/best-practices/optimize-writes/). 
+ + - [Troubleshoot issues writing + data](https://docs.influxdata.com/influxdb/v2.2/write-data/troubleshoot/) + operationId: PostWrite + parameters: + - $ref: '#/components/parameters/TraceSpan' + - description: | + The compression applied to the line protocol in the request payload. + To send a GZIP payload, pass `Content-Encoding: gzip` header. + in: header + name: Content-Encoding + schema: + default: identity + description: > + Content coding. + + Use `gzip` for compressed data or `identity` for unmodified, + uncompressed data. + enum: + - gzip + - identity + type: string + - description: > + The format of the data in the request body. + + To send a line protocol payload, pass `Content-Type: text/plain; + charset=utf-8`. + in: header + name: Content-Type + schema: + default: text/plain; charset=utf-8 + description: > + `text/plain` is the content type for line protocol. `UTF-8` is the + default character set. + enum: + - text/plain + - text/plain; charset=utf-8 + type: string + - description: | + The size of the entity-body, in bytes, sent to InfluxDB. + If the length is greater than the `max body` configuration option, + the server responds with status code `413`. + in: header + name: Content-Length + schema: + description: The length in decimal number of octets. + type: integer + - description: | + The content type that the client can understand. + Writes only return a response body if they fail--for example, + due to a formatting problem or quota limit. + + #### InfluxDB Cloud + + - Returns only `application/json` for format and limit errors. + - Returns only `text/html` for some quota limit errors. + + #### InfluxDB OSS + + - Returns only `application/json` for format and limit errors. + + #### Related guides + - [Troubleshoot issues writing data](https://docs.influxdata.com/influxdb/v2.2/write-data/troubleshoot/). + in: header + name: Accept + schema: + default: application/json + description: Error content type. + enum: + - application/json + type: string + - description: > + The destination organization for writes. + + The database writes all points in the batch to this organization. + + If you provide both `orgID` and `org` parameters, `org` takes + precedence. + in: query + name: org + required: true + schema: + description: The organization name or ID. + type: string + - description: | + The ID of the destination organization for writes. + If both `orgID` and `org` are specified, `org` takes precedence. + in: query + name: orgID + schema: + type: string + - description: The destination bucket for writes. + in: query + name: bucket + required: true + schema: + description: InfluxDB writes all points in the batch to this bucket. + type: string + - description: The precision for unix timestamps in the line protocol batch. + in: query + name: precision + schema: + $ref: '#/components/schemas/WritePrecision' + requestBody: + content: + text/plain: + examples: + plain-utf8: + value: > + airSensors,sensor_id=TLM0201 + temperature=73.97038159354763,humidity=35.23103248356096,co=0.48445310567793615 + 1630424257000000000 + + airSensors,sensor_id=TLM0202 + temperature=75.30007505999716,humidity=35.651929918691714,co=0.5141876544505826 + 1630424257000000000 + schema: + format: byte + type: string + description: > + Data in line protocol format. + + + To send compressed data, do the following: + + 1. Use [GZIP](https://www.gzip.org/) to compress the line protocol data. + 2. In your request, send the compressed data and the + `Content-Encoding: gzip` header. 
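For example, a sketch of a compressed write, assuming a local instance at `http://localhost:8086`, an `INFLUX_TOKEN` environment variable, and placeholder `ORG` and `BUCKET` values:

```sh
# Compress line protocol with gzip, then send it with the matching
# Content-Encoding header.
echo "airSensors,sensor_id=TLM0201 temperature=73.97 1630424257000000000" | gzip > data.gz

curl --request POST "http://localhost:8086/api/v2/write?org=ORG&bucket=BUCKET&precision=ns" \
  --header "Authorization: Token ${INFLUX_TOKEN}" \
  --header "Content-Encoding: gzip" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --data-binary @data.gz
```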
+ + #### Related guides + + + - [Best practices for optimizing + writes](https://docs.influxdata.com/influxdb/v2.2/write-data/best-practices/optimize-writes/). + required: true + responses: + '204': + description: > + Success. InfluxDB validated the request and the data format and + accepted the data for writing to the bucket. + + Because data is written to InfluxDB asynchronously, data may not yet + be written to a bucket. + + + #### Related guides + + + - [How to check for write + errors](https://docs.influxdata.com/influxdb/v2.2/write-data/troubleshoot/). + '400': + content: + application/json: + examples: + measurementSchemaFieldTypeConflict: + summary: >- + InfluxDB Cloud field type conflict thrown by an explicit + bucket schema + value: + code: invalid + message: >- + partial write error (2 written): unable to parse + 'air_sensor,service=S1,sensor=L1 + temperature="90.5",humidity=70.0 1632850122': schema: + field type for field "temperature" not permitted by + schema; got String but expected Float + schema: + $ref: '#/components/schemas/LineProtocolError' + description: | + Bad request. The line protocol data in the request is malformed. + The response body contains the first malformed line in the data. + InfluxDB rejected the batch and did not write any data. + '401': + content: + application/json: + examples: + tokenNotAuthorized: + summary: >- + Token is not authorized to access the organization or + resource + value: + code: unauthorized + message: unauthorized access + schema: + $ref: '#/components/schemas/Error' + description: | + Unauthorized. The error may indicate one of the following: + * The `Authorization: Token` header is missing or malformed. + * The API token value is missing from the header. + * The token does not have sufficient permissions to write to this organization and bucket. + '404': + content: + application/json: + examples: + resource-not-found: + summary: Not found error + value: + code: not found + message: bucket "air_sensor" not found + schema: + $ref: '#/components/schemas/Error' + description: >- + Not found. A requested resource was not found. The response body + contains the requested resource type, e.g. `organization name` or + `bucket`, and name. + '413': + content: + application/json: + examples: + dataExceedsSizeLimitOSS: + summary: InfluxDB OSS response + value: > + {"code":"request too large","message":"unable to read data: + points batch is too large"} + schema: + $ref: '#/components/schemas/LineProtocolLengthError' + text/html: + examples: + dataExceedsSizeLimit: + summary: InfluxDB Cloud response + value: | + <html> + <head><title>413 Request Entity Too Large</title></head> + <body> + <center><h1>413 Request Entity Too Large</h1></center> + <hr> + <center>nginx</center> + </body> + </html>
    + + + schema: + type: string + description: | + The request payload is too large. + InfluxDB rejected the batch and did not write any data. + + #### InfluxDB Cloud: + + - Returns this error if the payload exceeds the 50MB size limit. + - Returns `Content-Type: text/html` for this error. + + #### InfluxDB OSS: + + - Returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error. + - Returns `Content-Type: application/json` for this error. + '429': + description: | + Too many requests. + + #### InfluxDB Cloud + + - Returns this error if a **read** or **write** request exceeds your + plan's [adjustable service quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#adjustable-service-quotas) + or if a **delete** request exceeds the maximum + [global limit](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#global-limits). + - Returns `Retry-After` header that describes when to try the write again. + + #### InfluxDB OSS + + - Doesn't return this error. + headers: + Retry-After: + description: >- + Non-negative decimal integer indicating seconds to wait before + retrying the request. + schema: + format: int32 + type: integer + '500': + content: + application/json: + examples: + internalError: + summary: Internal error example + value: + code: internal error + schema: + $ref: '#/components/schemas/Error' + description: Internal server error. + '503': + description: | + Service unavailable. + + #### InfluxDB Cloud + + - Returns this error if series cardinality exceeds your plan's + [adjustable service quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/#adjustable-service-quotas). + See [how to resolve high series cardinality](https://docs.influxdata.com/influxdb/v2.2/write-data/best-practices/resolve-high-cardinality/). + + #### InfluxDB OSS + + - Returns this error if + the server is temporarily unavailable to accept writes. + - Returns `Retry-After` header that describes when to try the write again. + headers: + Retry-After: + description: >- + Non-negative decimal integer indicating seconds to wait before + retrying the request. + schema: + format: int32 + type: integer + default: + $ref: '#/components/responses/ServerError' + summary: Write data + tags: + - Write +security: + - TokenAuthentication: [] +servers: + - url: / +tags: + - description: > + Use one of the following schemes to authenticate to the InfluxDB API: + + + - [Token authentication](#section/Authentication/TokenAuthentication) + + - [Basic authentication](#section/Authentication/BasicAuthentication) + + - [Querystring + authentication](#section/Authentication/QuerystringAuthentication) + + + name: Authentication + x-traitTag: true + - description: > + Create and manage API tokens. + + An **authorization** associates a list of permissions to an + + **organization** and provides a token for API access. + + Optionally, you can restrict an authorization and its token to a specific + user. + + + ### Related guides + + - [Authorize API requests](/influxdb/v2.2/api-guide/api_intro/#authentication). + - [Manage API tokens](/influxdb/v2.2/security/tokens/). + - [Assign a token to a specific user](/influxdb/v2.2/security/tokens/create-token/). + name: Authorizations + - name: Backup + - name: Buckets + - name: Cells + - name: Checks + - name: Config + - name: Dashboards + - name: DBRPs + - description: > + Generates profiling and trace reports. 
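For example, a sketch of collecting a CPU profile, assuming a local OSS instance, an `INFLUX_TOKEN` environment variable, and the standard Go pprof route described below:

```sh
# Collect a 30-second CPU profile, then inspect it with the Go pprof tool.
curl --request GET "http://localhost:8086/debug/pprof/profile?seconds=30" \
  --header "Authorization: Token ${INFLUX_TOKEN}" \
  --output profile.pb.gz

go tool pprof profile.pb.gz
```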
+ + + Use routes under `/debug/pprof` to analyze the Go runtime of InfluxDB. + + These endpoints generate [Go runtime + profiles](https://pkg.go.dev/runtime/pprof) + + and **trace** reports. + + **Profiles** are collections of stack traces that show call sequences + + leading to instances of a particular event, such as allocation. + + + For more information about **pprof profile** and **trace** reports, + + see the following resources: + - [Google pprof tool](https://github.com/google/pprof) + - [Golang diagnostics](https://go.dev/doc/diagnostics) + name: Debug + - name: Delete + - description: > + InfluxDB API endpoints use standard HTTP request and response headers. + + + **Note**: Not all operations support all headers. + + + ### Request headers + + + | Header | Value type | + Description | + + |:------------------------ |:--------------------- + |:-------------------------------------------| + + | `Accept` | string | The content type that + the client can understand. | + + | `Authorization` | string | The authorization + scheme and credential. | + + | `Content-Encoding` | string | The compression + applied to the line protocol in the request payload. | + + | `Content-Length` | integer | The size of the + entity-body, in bytes, sent to the database. | + + | `Content-Type` | string | The format of the + data in the request body. | + name: Headers + x-traitTag: true + - name: Health + - name: Labels + - name: Legacy Authorizations + - name: Metrics + - name: NotificationEndpoints + - name: NotificationRules + - name: Organizations + - name: Ping + - description: | + Retrieve data, analyze queries, and get query suggestions. + name: Query + - description: > + See the [**API Quick Start**](/influxdb/v2.2/api-guide/api_intro/) + + to get up and running authenticating with tokens, writing to buckets, and + querying data. + + + [**InfluxDB API client + libraries**](/influxdb/v2.2/api-guide/client-libraries/) + + are available for popular languages and ready to import into your + application. + name: Quick start + x-traitTag: true + - name: Ready + - name: RemoteConnections + - name: Replications + - name: Resources + - description: > + InfluxDB API endpoints use standard HTTP status codes for success and + failure responses. + + The response body may include additional details. + + For details about a specific operation's response, + + see **Responses** and **Response Samples** for that operation. + + + API operations may return the following HTTP status codes: + + + |  Code  | Status | Description | + + |:-----------:|:------------------------ |:--------------------- | + + | `200` | Success | | + + | `204` | No content | For a `POST` request, `204` + indicates that InfluxDB accepted the request and request data is valid. + Asynchronous operations, such as `write`, might not have completed yet. | + + | `400` | Bad request | `Authorization` header is + missing or malformed or the API token does not have permission for the + operation. | + + | `401` | Unauthorized | May indicate one of the + following:
<br>• `Authorization: Token` header is missing or + malformed <br>• API token value is missing from the header <br>• API + token does not have permission. For more information about token types and + permissions, see [Manage API tokens](/influxdb/v2.1/security/tokens/) + | + + | `404` | Not found | Requested resource was not + found. `message` in the response body provides details about the requested + resource. | + + | `413` | Request entity too large | Request payload exceeds the + size limit. | + + | `422` | Unprocessable entity | Request data is invalid. `code` + and `message` in the response body provide details about the problem. | + + | `429` | Too many requests | API token is temporarily over + the request quota. The `Retry-After` header describes when to try the + request again. | + + | `500` | Internal server error | | + + | `503` | Service unavailable | Server is temporarily + unavailable to process the request. The `Retry-After` header describes + when to try the request again. | + name: Response codes + x-traitTag: true + - name: Restore + - name: Routes + - name: Rules + - name: Scraper Targets + - name: Secrets + - name: Setup + - name: Signin + - name: Signout + - name: Sources + - name: Tasks + - name: Telegraf Plugins + - name: Telegrafs + - name: Templates + - name: Users + - name: Variables + - name: Views + - description: | + Write time series data to buckets. + name: Write +x-tagGroups: + - name: Overview + tags: + - Quick start + - Authentication + - Headers + - Response codes + - name: Data I/O endpoints + tags: + - Write + - Query + - Tasks + - name: Resource endpoints + tags: + - Buckets + - Dashboards + - Tasks + - Resources + - name: Security and access endpoints + tags: + - Authorizations + - Organizations + - Users + - name: System information endpoints + tags: + - Config + - Debug + - Health + - Metrics + - Ping + - Ready + - Routes + - name: All endpoints + tags: + - Authorizations + - Backup + - Buckets + - Cells + - Checks + - Config + - Dashboards + - DBRPs + - Debug + - Delete + - Health + - Labels + - Legacy Authorizations + - Metrics + - NotificationEndpoints + - NotificationRules + - Organizations + - Ping + - Query + - Ready + - RemoteConnections + - Replications + - Resources + - Restore + - Routes + - Rules + - Scraper Targets + - Secrets + - Setup + - Signin + - Signout + - Sources + - Tasks + - Telegraf Plugins + - Telegrafs + - Templates + - Users + - Variables + - Views + - Write diff --git a/api-docs/v2.2/swaggerV1Compat.yml b/api-docs/v2.2/swaggerV1Compat.yml new file mode 100644 index 000000000..3b8926c1a --- /dev/null +++ b/api-docs/v2.2/swaggerV1Compat.yml @@ -0,0 +1,502 @@ +openapi: 3.0.0 +info: + title: InfluxDB OSS v1 compatibility API documentation + version: 0.1.0 + description: | + The InfluxDB 1.x compatibility `/write` and `/query` endpoints work with + InfluxDB 1.x client libraries and third-party integrations like Grafana + and others. + + + If you want to use the latest InfluxDB `/api/v2` API instead, + see the [InfluxDB v2 API documentation](/influxdb/cloud/api/). +servers: + - url: / +paths: + /write: + post: + operationId: PostWriteV1 + tags: + - Write + summary: Write time series data into InfluxDB in a V1-compatible format + requestBody: + description: Line protocol body + required: true + content: + text/plain: + schema: + type: string + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/AuthUserV1' + - $ref: '#/components/parameters/AuthPassV1' + - in: query + name: db + schema: + type: string + required: true + description: >- + Bucket to write to. If none exists, InfluxDB creates a bucket with a + default 3-day retention policy. + - in: query + name: rp + schema: + type: string + description: Retention policy name.
+ - in: query + name: precision + schema: + type: string + description: Write precision. + - in: header + name: Content-Encoding + description: >- + When present, its value indicates to the database that compression + is applied to the line protocol body. + schema: + type: string + description: >- + Specifies that the line protocol in the body is encoded with gzip + or not encoded with identity. + default: identity + enum: + - gzip + - identity + responses: + '204': + description: >- + Write data is correctly formatted and accepted for writing to the + bucket. + '400': + description: >- + The line protocol was poorly formed, and no points were written. + The response can be used to determine the first malformed line in + the line protocol body. All data in the body was rejected and not written. + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolError' + '401': + description: >- + The token does not have sufficient permissions to write to this + organization and bucket, or the organization and bucket do not exist. + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + '403': + description: No token was sent, but one is required. + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + '413': + description: >- + The write was rejected because the payload is too large. The error + message returns the max size supported. All data in the body was + rejected and not written. + content: + application/json: + schema: + $ref: '#/components/schemas/LineProtocolLengthError' + '429': + description: >- + The token is temporarily over quota. The Retry-After header describes + when to try the write again. + headers: + Retry-After: + description: >- + A non-negative decimal integer indicating the seconds to delay + after the response is received. + schema: + type: integer + format: int32 + '503': + description: >- + The server is temporarily unavailable to accept writes. The Retry-After + header describes when to try the write again. + headers: + Retry-After: + description: >- + A non-negative decimal integer indicating the seconds to delay + after the response is received. + schema: + type: integer + format: int32 + default: + description: Internal server error + content: + application/json: + schema: + $ref: '#/components/schemas/Error' + /query: + post: + operationId: PostQueryV1 + tags: + - Query + summary: Query InfluxDB in a V1-compatible format + requestBody: + description: InfluxQL query to execute. + content: + text/plain: + schema: + type: string + parameters: + - $ref: '#/components/parameters/TraceSpan' + - $ref: '#/components/parameters/AuthUserV1' + - $ref: '#/components/parameters/AuthPassV1' + - in: header + name: Accept + schema: + type: string + description: >- + Specifies how query results should be encoded in the response. + **Note:** With `application/csv`, query results include epoch + timestamps instead of RFC3339 timestamps. + default: application/json + enum: + - application/json + - application/csv + - text/csv + - application/x-msgpack + - in: header + name: Accept-Encoding + description: >- + The Accept-Encoding request HTTP header advertises which content + encoding, usually a compression algorithm, the client is able to + understand. + schema: + type: string + description: >- + Specifies that the query response in the body should be encoded + with gzip or not encoded with identity.
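For example, a sketch of a 1.x-compatibility query with CSV-encoded results, assuming a local instance and placeholder database and credentials:

```sh
# POST an InfluxQL query; the Accept header selects CSV encoding
# (epoch timestamps instead of RFC3339).
curl --request POST "http://localhost:8086/query?db=mydb&u=USERNAME&p=API_TOKEN" \
  --header "Accept: application/csv" \
  --data-urlencode "q=SELECT * FROM cpu WHERE time > now() - 1h"
```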
+ default: identity + enum: + - gzip + - identity + - in: header + name: Content-Type + schema: + type: string + enum: + - application/vnd.influxql + - in: query + name: db + schema: + type: string + required: true + description: Bucket to query. + - in: query + name: rp + schema: + type: string + description: Retention policy name. + - in: query + name: q + description: Defines the InfluxQL query to run. + schema: + type: string + responses: + '200': + description: Query results + headers: + Content-Encoding: + description: >- + The Content-Encoding entity header is used to compress the + media-type. When present, its value indicates which encodings + were applied to the entity-body. + schema: + type: string + description: >- + Specifies that the response in the body is encoded with gzip + or not encoded with identity. + default: identity + enum: + - gzip + - identity + Trace-Id: + description: >- + The Trace-Id header reports the request's trace ID, if one was + generated. + schema: + type: string + description: Specifies the request's trace ID. + content: + application/csv: + schema: + $ref: '#/components/schemas/InfluxQLCSVResponse' + text/csv: + schema: + $ref: '#/components/schemas/InfluxQLCSVResponse' + application/json: + schema: + $ref: '#/components/schemas/InfluxQLResponse' + application/x-msgpack: + schema: + type: string + format: binary + '429': + description: >- + The token is temporarily over quota. The Retry-After header describes + when to try the read again. + headers: + Retry-After: + description: >- + A non-negative decimal integer indicating the seconds to delay + after the response is received. + schema: + type: integer + format: int32 + default: + description: Error processing query + content: + application/json: + schema: + $ref: '#/components/schemas/Error' +components: + parameters: + TraceSpan: + in: header + name: Zap-Trace-Span + description: OpenTracing span context + example: + trace_id: '1' + span_id: '1' + baggage: + key: value + required: false + schema: + type: string + AuthUserV1: + in: query + name: u + required: false + schema: + type: string + description: Username. + AuthPassV1: + in: query + name: p + required: false + schema: + type: string + description: User token. + schemas: + InfluxQLResponse: + properties: + results: + type: array + items: + type: object + properties: + statement_id: + type: integer + series: + type: array + items: + type: object + properties: + name: + type: string + columns: + type: array + items: + type: string + values: + type: array + items: + type: array + items: {} + InfluxQLCSVResponse: + type: string + example: > + name,tags,time,test_field,test_tag + test_measurement,,1603740794286107366,1,tag_value + test_measurement,,1603740870053205649,2,tag_value + test_measurement,,1603741221085428881,3,tag_value + Error: + properties: + code: + description: Code is the machine-readable error code. + readOnly: true + type: string + enum: + - internal error + - not found + - conflict + - invalid + - unprocessable entity + - empty value + - unavailable + - forbidden + - too many requests + - unauthorized + - method not allowed + message: + readOnly: true + description: Message is a human-readable message. + type: string + required: + - code + - message + LineProtocolError: + properties: + code: + description: Code is the machine-readable error code.
+ readOnly: true + type: string + enum: + - internal error + - not found + - conflict + - invalid + - empty value + - unavailable + message: + readOnly: true + description: Message is a human-readable message. + type: string + op: + readOnly: true + description: >- + Op describes the logical code operation during error. Useful for + debugging. + type: string + err: + readOnly: true + description: >- + Err is a stack of errors that occurred during processing of the + request. Useful for debugging. + type: string + line: + readOnly: true + description: First line within sent body containing malformed data + type: integer + format: int32 + required: + - code + - message + - op + - err + LineProtocolLengthError: + properties: + code: + description: Code is the machine-readable error code. + readOnly: true + type: string + enum: + - invalid + message: + readOnly: true + description: Message is a human-readable message. + type: string + maxLength: + readOnly: true + description: Max length in bytes for a body of line-protocol. + type: integer + format: int32 + required: + - code + - message + - maxLength + securitySchemes: + TokenAuthentication: + type: apiKey + name: Authorization + in: header + description: > + Use the [Token + authentication](#section/Authentication/TokenAuthentication) + + scheme to authenticate to the InfluxDB API. + + + + In your API requests, send an `Authorization` header. + + For the header value, provide the word `Token` followed by a space and + an InfluxDB API token. + + The word `Token` is case-sensitive. + + + + ### Syntax + + + `Authorization: Token YOUR_INFLUX_TOKEN` + + + + For examples and more information, see the following: + - [`/authorizations`](#tag/Authorizations) endpoint. + - [Authorize API requests](/influxdb/cloud/api-guide/api_intro/#authentication). + - [Manage API tokens](/influxdb/cloud/security/tokens/). + BasicAuthentication: + type: http + scheme: basic + description: > + Use the HTTP [Basic + authentication](#section/Authentication/BasicAuthentication) + + scheme with clients that support the InfluxDB 1.x convention of username + and password (that don't support the `Authorization: Token` scheme): + + + + For examples and more information, see how to [authenticate with a + username and password](/influxdb/cloud/reference/api/influxdb-1x/). + QuerystringAuthentication: + type: apiKey + in: query + name: u=&p= + description: > + Use the [Querystring + authentication](#section/Authentication/QuerystringAuthentication) + + scheme with InfluxDB 1.x API parameters to provide credentials through + the query string. + + + + For examples and more information, see how to [authenticate with a + username and password](/influxdb/cloud/reference/api/influxdb-1x/). +security: + - TokenAuthentication: [] + - BasicAuthentication: [] + - QuerystringAuthentication: [] +tags: + - name: Authentication + description: > + The InfluxDB 1.x API requires authentication for all requests. + + InfluxDB Cloud uses InfluxDB API tokens to authenticate requests. 
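For example, sketches of the same request under each scheme, with placeholder credentials (with InfluxDB 2.x, the password value is an API token):

```sh
# Token scheme; the word "Token" is case-sensitive.
curl --request POST "http://localhost:8086/query" \
  --header "Authorization: Token ${INFLUX_TOKEN}" \
  --data-urlencode "q=SHOW DATABASES"

# Basic scheme: the 1.x username-and-password convention.
curl --request POST "http://localhost:8086/query" \
  --user "USERNAME:PASSWORD" \
  --data-urlencode "q=SHOW DATABASES"

# Querystring scheme: credentials in the u and p parameters.
curl --request POST "http://localhost:8086/query?u=USERNAME&p=PASSWORD" \
  --data-urlencode "q=SHOW DATABASES"
```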
+ + + + For more information, see the following: + + - [Token authentication](#section/Authentication/TokenAuthentication) + + - [Basic authentication](#section/Authentication/BasicAuthentication) + + - [Querystring + authentication](#section/Authentication/QuerystringAuthentication) + + + + x-traitTag: true + - name: Query + - name: Write +x-tagGroups: + - name: Overview + tags: + - Authentication + - name: Data I/O endpoints + tags: + - Write + - Query + - name: All endpoints + tags: + - Query + - Write diff --git a/assets/js/content-interactions.js b/assets/js/content-interactions.js index 22e50b7d3..04662df32 100644 --- a/assets/js/content-interactions.js +++ b/assets/js/content-interactions.js @@ -17,7 +17,8 @@ var elementWhiteList = [ ".truncate-toggle", ".children-links a", ".list-links a", - "a.url-trigger" + "a.url-trigger", + "a.fullscreen-close" ] function scrollToAnchor(target) { @@ -28,6 +29,15 @@ function scrollToAnchor(target) { }, 400, 'swing', function () { window.location.hash = target; }); + + // Unique accordion functionality + // If the target is an accordion element, open the accordion after scrolling + if ($target.hasClass('expand')) { + if ($(target + ' .expand-label .expand-toggle').hasClass('open')) {} + else { + $(target + '> .expand-label').trigger('click'); + }; + }; } } @@ -85,7 +95,7 @@ tabbedContent('.tabs-wrapper', '.tabs p a', '.tab-content'); // Retrieve the user's programming language (client library) preference. function getApiLibPreference() { - return Cookies.get('influx-docs-api-lib'); + return Cookies.get('influx-docs-api-lib') || ''; } function getTabQueryParam() { @@ -139,13 +149,36 @@ $(".truncate-toggle").click(function(e) { $(this).closest('.truncate').toggleClass('closed'); }) -////////////////////////////// Expand Accordians /////////////////////////////// +////////////////////////////// Expand Accordions /////////////////////////////// $('.expand-label').click(function() { $(this).children('.expand-toggle').toggleClass('open') $(this).next('.expand-content').slideToggle(200) }) +// Expand accordions on load based on URL anchor +function openAccordionByHash() { + var anchor = window.location.hash; + + function expandElement() { + if ($(anchor).parents('.expand').length > 0) { + return $(anchor).closest('.expand').children('.expand-label'); + } else if ($(anchor).hasClass('expand')){ + return $(anchor).children('.expand-label'); + } + }; + + if (expandElement() != null) { + if (expandElement().children('.expand-toggle').hasClass('open')) {} + else { + expandElement().children('.expand-toggle').trigger('click'); + }; + }; +}; + +// Open accordions by hash on page load. +openAccordionByHash() + ////////////////////////// Inject tooltips on load ////////////////////////////// $('.tooltip').each( function(){ diff --git a/assets/js/fullscreen-code.js b/assets/js/fullscreen-code.js new file mode 100644 index 000000000..7d15dd6f2 --- /dev/null +++ b/assets/js/fullscreen-code.js @@ -0,0 +1,48 @@ +var codeBlockSelector = ".article--content pre"; +var codeBlocks = $(codeBlockSelector); + +// Check if codeblock content requires scrolling (overflow) +function hasOverflow(element) { + if (element.offsetHeight < element.scrollHeight || element.offsetWidth < element.scrollWidth) { + return true + } else { + return false + } +} + +// Wrap codeblocks that overflow with a new 'codeblock' div +$(codeBlocks).each(function() { + if (hasOverflow( $(this)[0] )) { + $(this).wrap("
    "); + } else {} +}); + +// Append a clickable fullscreen toggle button to all codeblock divs +$('.codeblock').append(""); + +/* +On click, open the fullscreen code modal and append a clone of the selected codeblock. +Disable scrolling on the body. +Disable user selection on everything but the fullscreen codeblock. +*/ +$('.fullscreen-toggle').click(function() { + var code = $(this).prev('pre').clone(); + + $('#fullscreen-code-placeholder').replaceWith(code[0]); + $('body').css('overflow', 'hidden'); + $('body > div:not(.fullscreen-code)').css('user-select', 'none'); + $('.fullscreen-code').fadeIn(); +}) + +/* +On click, close the fullscreen code block. +Reenable scrolling on the body. +Reenable user selection on everything. +Close the modal and replace the code block with the placeholder element. +*/ +$('.fullscreen-close').click(function() { + $('body').css('overflow', 'auto'); + $('body > div:not(.fullscreen-code)').css('user-select', ''); + $('.fullscreen-code').fadeOut(); + $('.fullscreen-code pre').replaceWith('
    ') +}); diff --git a/assets/js/home-interactions.js b/assets/js/home-interactions.js new file mode 100644 index 000000000..a90df14cd --- /dev/null +++ b/assets/js/home-interactions.js @@ -0,0 +1,22 @@ +$('.exp-btn').click(function() { + var targetBtnElement = $(this).parent() + $('.exp-btn > p', targetBtnElement).fadeOut(100); + setTimeout(function() { + $('.exp-btn-links', targetBtnElement).fadeIn(200) + $('.exp-btn', targetBtnElement).addClass('open'); + $('.close-btn', targetBtnElement).fadeIn(200); + }, 100); +}) + +$('.close-btn').click(function() { + var targetBtnElement = $(this).parent().parent() + $('.exp-btn-links', targetBtnElement).fadeOut(100) + $('.exp-btn', targetBtnElement).removeClass('open'); + $(this).fadeOut(100); + setTimeout(function() { + $('p', targetBtnElement).fadeIn(100); + }, 100); +}) + +/////////////////////////////// EXPANDING BUTTONS ////////////////////////////// + diff --git a/assets/js/influxdb-url.js b/assets/js/influxdb-url.js index cc09b030c..81be1af65 100644 --- a/assets/js/influxdb-url.js +++ b/assets/js/influxdb-url.js @@ -139,21 +139,21 @@ function updateUrls(prevUrls, newUrls) { oss: {} } - Object.keys(prevUrls).forEach(function(k) { - try { - prevUrlsParsed[k] = new URL(prevUrls[k]) - } catch { - prevUrlsParsed[k] = { host: prevUrls[k] } - } - }) + Object.keys(prevUrls).forEach(function(k) { + try { + prevUrlsParsed[k] = new URL(prevUrls[k]) + } catch { + prevUrlsParsed[k] = { host: prevUrls[k] } + } + }) - Object.keys(newUrls).forEach(function(k) { - try { - newUrlsParsed[k] = new URL(newUrls[k]) - } catch { - newUrlsParsed[k] = { host: newUrls[k] } - } - }) + Object.keys(newUrls).forEach(function(k) { + try { + newUrlsParsed[k] = new URL(newUrls[k]) + } catch { + newUrlsParsed[k] = { host: newUrls[k] } + } + }) /** * Match and replace host with host @@ -175,37 +175,43 @@ function updateUrls(prevUrls, newUrls) { replacements.forEach(function (o) { if (o.replace.origin != o.with.origin) { + var fuzzyOrigin = new RegExp(o.replace.origin + "(:[0-9]+)?", "g"); $(elementSelector).each(function() { $(this).html( - $(this).html().replace(RegExp(o.replace.origin, "g"), function(match){ - return o.with.origin || match; + $(this).html().replace(fuzzyOrigin, function(m){ + return o.with.origin || m; }) ); }) } }); + + function replaceWholename(startStr, endStr, replacement) { + var startsWithSeparator = new RegExp('[/.]'); + var endsWithSeparator = new RegExp('[-.:]'); + if(!startsWithSeparator.test(startStr) && !endsWithSeparator.test(endStr)) { + var newHost = startStr + replacement + endStr + return startStr + replacement + endStr; + } + } + replacements .map(function(o) { return {replace: o.replace.host, with: o.with.host} }) .forEach(function (o) { - if (o.replace != o.with) { + if (o.replace != o.with) { + var fuzzyHost = new RegExp("(.?)" + o.replace + "(.?)", "g"); $(elementSelector).each(function() { - /** - * Hostname pattern - * 1. Lookbehind (?\/\?]/ - var protocol = url.match(/http(s?):\/\//) ? url.match(/http(s?):\/\//)[0] : ""; - var domain = url.replace(protocol, "") - - if (validProtocol.test(protocol) == false) { - return {valid: false, error: "Invalid protocol, use http[s]"} - } else if (domain.length == 0 || invalidDomain.test(domain) == true) { - return {valid: false, error: "Invalid domain"} - } else { + try { + new URL(url); return {valid: true, error: ""} + } catch(e) { + var validProtocol = /^http(s?)/ + var protocol = url.match(/http(s?):\/\//) ? 
url.match(/http(s?):\/\//)[0] : ""; + var domain = url.replace(protocol, "") + /** validDomain = (Named host | IPv6 host | IPvFuture host)(:Port)? **/ + var validDomain = new RegExp(`([a-z0-9\-._~%]+` + + `|\[[a-f0-9:.]+\]` + + `|\[v[a-f0-9][a-z0-9\-._~%!$&'()*+,;=:]+\])` + + `(:[0-9]+)?`); + if (validProtocol.test(protocol) == false) { + return {valid: false, error: "Invalid protocol, use http[s]"} + } else if (validDomain.test(domain) == false) { + return {valid: false, error: "Invalid domain"} + } else if (e) { + return {valid: false, error: "Invalid URL"} + } } } @@ -396,17 +433,28 @@ $('#custom-url-field').blur(function() { applyCustomUrl() }) +/** Delay execution of a function `fn` for a number of milliseconds `ms` + * e.g., delay a validation handler to avoid annoying the user. + */ +function delay(fn, ms) { + let timer = 0 + return function(...args) { + clearTimeout(timer) + timer = setTimeout(fn.bind(this, ...args), ms || 0) + } +} + +function handleUrlValidation() { + let url = $('#custom-url-field').val() + let urlValidation = validateUrl(url) + if (urlValidation.valid) { + hideValidationMessage() + } else { + showValidationMessage(urlValidation) + } +} // When in erred state, revalidate custom URL on keyup -$(document).on("keyup", ".error #custom-url-field", function() { - console.log("keyed up") - let url = $('#custom-url-field').val() - let urlValidation = validateUrl(url) - if (urlValidation.valid) { - hideValidationMessage() - } else { - showValidationMessage(urlValidation) - } -}) +$(document).on("keyup", "#custom-url-field", delay(handleUrlValidation, 500)); // Populate the custom InfluxDB URL field on page load if ( Cookies.get('influxdb_custom_url') != undefined ) { diff --git a/assets/js/search-interactions.js b/assets/js/search-interactions.js index 4628111bd..4f8fdd8ac 100644 --- a/assets/js/search-interactions.js +++ b/assets/js/search-interactions.js @@ -1,10 +1,10 @@ // Fade content wrapper when focusing on search input $('#algolia-search-input').focus(function() { - $('.content-wrapper, .group-wrapper').fadeTo(300, .35); + $('.content-wrapper').fadeTo(300, .35); }) // Hide search dropdown when leaving search input $('#algolia-search-input').blur(function() { - $('.content-wrapper, .group-wrapper').fadeTo(200, 1); + $('.content-wrapper').fadeTo(200, 1); $('.ds-dropdown-menu').hide(); }) diff --git a/assets/styles/layouts/_algolia-search-overrides.scss b/assets/styles/layouts/_algolia-search-overrides.scss index 1603da86c..e9e7e7dfa 100644 --- a/assets/styles/layouts/_algolia-search-overrides.scss +++ b/assets/styles/layouts/_algolia-search-overrides.scss @@ -1,6 +1,11 @@ .algolia-autocomplete { width: 100%; + /* Search input field */ + #algolia-search-input { + background: $sidebar-search-bg !important; + } + /* Main dropdown wrapper */ .ds-dropdown-menu { width: 74vw; diff --git a/assets/styles/layouts/_api-overrides.scss b/assets/styles/layouts/_api-overrides.scss index ac99fc5d4..a1e9ed530 100644 --- a/assets/styles/layouts/_api-overrides.scss +++ b/assets/styles/layouts/_api-overrides.scss @@ -1,5 +1,5 @@ @import "tools/color-palette"; -@import "tools/icomoon-v2"; +@import "tools/fonts"; // Fonts $rubik: 'Rubik', sans-serif; @@ -103,7 +103,7 @@ $bold: 700; } #redoc { - h1,h2,h3,h4,h5,h6 { + h1,h2,h3 { font-weight: $medium !important; } } diff --git a/assets/styles/layouts/_article.scss b/assets/styles/layouts/_article.scss index a6ff04ce6..cab6b380e 100644 --- a/assets/styles/layouts/_article.scss +++ b/assets/styles/layouts/_article.scss @@ -5,7 +5,9 @@ } 
.article--content{ - max-width: 820px; + max-width: 850px; + font-size: 1.1rem; + h1,h2,h3,h4,h5,h6 { color: $article-heading; a { @@ -24,24 +26,24 @@ } h1 { font-weight: normal; - font-size: 2.65rem; + font-size: 2.75rem; margin: .4em 0 .2em; } h2 { - font-size: 2rem; + font-size: 2.1rem; margin: -.25rem 0 .5rem; padding-top: 1.75rem; font-weight: $medium; color: $article-heading-alt; } h3 { - font-size: 1.65rem; + font-size: 1.75rem; font-weight: $medium; margin: -1rem 0 .5rem; padding-top: 1.75rem; } h4 { - font-size: 1.25rem; + font-size: 1.35rem; font-style: italic; font-weight: $medium; margin: -1.25rem 0 .5rem; @@ -49,12 +51,12 @@ color: $article-heading-alt; } h5 { - font-size: 1rem; + font-size: 1.1rem; margin: -1.25rem 0 .25rem; padding-top: 1.75rem; } h6 { - font-size: 1rem; + font-size: 1.1rem; font-style: italic; margin: -1.25rem 0 .25rem; padding-top: 1.75rem; @@ -62,7 +64,7 @@ p,li { color: $article-text; - line-height: 1.7rem; + line-height: 1.8rem; } p { @@ -187,8 +189,8 @@ .nowrap { white-space: nowrap } .all-caps { text-transform: uppercase; - font-size: .95rem; - letter-spacing: .07em; + font-size: 1.05rem; + letter-spacing: .1em; font-weight: $medium !important; } diff --git a/assets/styles/layouts/_fullscreen-code.scss b/assets/styles/layouts/_fullscreen-code.scss new file mode 100644 index 000000000..23dab82db --- /dev/null +++ b/assets/styles/layouts/_fullscreen-code.scss @@ -0,0 +1,40 @@ +////////////////////////// Fullscreen codeblock styles ///////////////////////// +.fullscreen-code { + display: none; + z-index: 1000; + position: fixed; + top: 0; + left: 0; + height: 100vh; + width: 100vw; + padding: 2rem; + background: $article-code-bg; + overflow: scroll !important; + + .fullscreen-close { + position: fixed; + padding: .1rem; + right: .75rem; + top: .5rem; + display: block; + color: $article-code; + font-size: 2rem; + text-decoration: none; + background: $article-code-bg; + border-radius: $radius; + + span { + opacity: 0.5; + transition: opacity 0.2s; + } + + &:hover span {opacity: 1}; + } + + pre { + display: block; + line-height: 1.75rem; + + @import "article/code-api-methods"; + } +} diff --git a/assets/styles/layouts/_global.scss b/assets/styles/layouts/_global.scss index 786a5adb9..f7c642605 100644 --- a/assets/styles/layouts/_global.scss +++ b/assets/styles/layouts/_global.scss @@ -1,21 +1,11 @@ -$rubik: 'Rubik', sans-serif; -$code: 'IBM Plex Mono', monospace;; - -// Font weights -$medium: 500; -$bold: 700; - - html { height: 100%; } body { min-height: 100%; - font-family: 'Rubik', sans-serif; + font-family: $proxima; background: $body-bg; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; } * { diff --git a/assets/styles/layouts/_homepage.scss b/assets/styles/layouts/_homepage.scss index 10e73ed14..09f76efa9 100644 --- a/assets/styles/layouts/_homepage.scss +++ b/assets/styles/layouts/_homepage.scss @@ -1,120 +1,67 @@ +////////////////////////////// HOMEPAGE VARIABLES ////////////////////////////// + +$home-body-width: 1300px; + //////////////////////////////// HOMEPAGE STYLES /////////////////////////////// +body.home { + background-image: url('/img/hero-bg-light-1-diamond.png'); + background-size: 65%; + background-repeat: no-repeat; +} + .home { - display: flex; - flex-direction: column; - align-items: flex-start; + color: $article-bold; - .section { - display: flex; + .section{ width: 100%; - padding: 0rem 3rem; - flex-grow: 1; - - .row{ - flex-direction: row; - } - - .half { width: 50%; } - .third { width: 33.33%; } - 
.quarter { width: 25%; } - .two-thirds { width: 66.67%; } - .three-quarters { width: 75%; } - } - - ///////////////////////////// HERO SECTION STYLES //////////////////////////// - - .hero { + margin: 0 auto; + padding: 2rem 2rem 0; + max-width: $home-body-width; + display: block; position: relative; - padding-top: 3.5rem; - padding-bottom: 4.5rem; - @include gradient($grad-WarpSpeed) - color: $g20-white; - z-index: 0; - // overflow: hidden; - - h2 { - margin: 1.25rem 0 .5rem; - font-weight: 300; - font-size: 3rem; - } - p { - font-size: 1.1rem; - line-height: 1.85rem; - } - - #hero-img { - position: absolute; - max-width: 50%; - max-height: 135%; - bottom: -25%; - right: 0; - } - - .actions { - display: flex; - margin: 1.25rem 0 .75rem 0; - } - - a.btn { - position: relative; - display: inline-block; - flex: 1 0; - margin: 0 .5rem .5rem 0; - padding: 1.25rem 2.25rem; - color: $article-btn-text !important; - border-radius: $radius; - font-weight: $medium; - font-size: 1.1rem; - text-decoration: none; - text-align: center; - z-index: 1; - @include gradient($home-btn-gradient); - - &:after { - content: ""; - position: absolute; - display: block; - top: 0; - right: 0; - width: 100%; - height: 100%; - border-radius: $radius; - @include gradient($home-btn-gradient-hover, 270deg); - opacity: 0; - transition: opacity .2s; - z-index: -1; - } - - &:hover { - cursor: pointer; - - &:after { - opacity: 1; - } - } - } } //////////////////////////// SEARCH SECTION STYLES /////////////////////////// .search { - padding-top: 2rem; - padding-bottom: 2.25rem; - .sidebar--search { - max-width: 55%; font-size: 1.1rem; input { - padding: .75em 2.35rem .75rem 1rem + padding: .75em 2.35rem .75rem 1rem; + border-radius: 6px; + position: relative; + box-shadow: none; + + &::placeholder {color: rgba($sidebar-search-text, .65);} } &:after { font-size: 2rem; - top: .35rem; + top: .45rem; right: .45rem; } + + .algolia-autocomplete { + position: relative; + + &:after { + content: ""; + position: absolute; + display: block; + border-radius: 6px; + top: 0; + left: 0; + box-shadow: 2px 2px 6px $sidebar-search-shadow; + height: 100%; + width: 100%; + mix-blend-mode: multiply; + z-index: -1; + } + } + .algolia-autocomplete.algolia-autocomplete-left, .algolia-autocomplete.algolia-autocomplete-right { + .ds-dropdown-menu { top: auto !important; left: 0 !important; @@ -122,7 +69,7 @@ &:after { content: ""; - box-shadow: 2px 2px 10px $sidebar-search-shadow; + box-shadow: 2px 2px 6px $sidebar-search-shadow; height: 100%; width: 100%; mix-blend-mode: multiply; @@ -132,283 +79,547 @@ } } - /////////////////////////////// PRODUCT CARDS //////////////////////////////// + ///////////////////////////////// SPAN STYLES //////////////////////////////// - .group-wrapper { - display: flex; - flex-grow: 1; - width: 100%; - border-radius: $radius; - background-color: $sidebar-search-bg; - box-shadow: 2px 2px 10px $sidebar-search-shadow; - overflow: hidden; - color: $article-text; - - h2 { - margin-top: .5rem; - font-weight: $medium; - color: $article-heading-alt; - a {color: inherit; &:hover{color: inherit;}} - } - p { - line-height: 1.45rem; - max-width: 700px; - } - a { - color: $article-link; - text-decoration: none; - font-weight: $medium; - &:hover{ color: $article-link-hover; } - } + span { + &.magenta {color: $br-new-magenta;} + &.orange {color: $r-dreamsicle;} + &.blue {color: $b-pool;} } - .card { - display: flex; - flex-direction: column; - justify-content: space-between; - padding: 1.25rem 1.5rem .75rem; - flex: 1 0; + 
///////////////////////////// EXPANDABLE BUTTONS ///////////////////////////// - .card-content p { margin-bottom: 0; } + .exp-btn-wrapper { + position: relative; + display: block; - span.version { - font-size: 1.15rem; - opacity: .65; - } - } - - #flux, #resources { - padding-bottom: 2.5rem; - .card .card-content { - h2 {margin-bottom: .5rem;} - p {margin: 0 0 1rem;} - } - .card.cta-btns { - flex-direction: row; - margin-bottom: .75rem; - justify-content: flex-end; + .exp-btn { + background: $br-dark-blue; + border-radius: 6px; + color: $br-teal; + padding: 1.5rem 2rem; + font-weight: $medium; align-items: center; - a.btn { - position: relative; - display: inline-block; + min-width: 340px; + min-height: 70px; + cursor: pointer; + transition: color .2s, background .2s, padding .2s; + + &:hover, &.open { + background: $br-teal; + color: $br-dark-blue; + } + + p { + margin: 0 !important; text-align: center; - margin: .5rem .5rem .5rem 0; - padding: .75rem 1.5rem; - color: $article-btn-text !important; - border-radius: $radius; - font-size: .95rem; - z-index: 1; - @include gradient($article-btn-gradient); + } + & > * {width: 100%;} - &:after { - content: ""; - position: absolute; - display: block; - top: 0; - right: 0; - width: 100%; - height: 100%; - border-radius: $radius; - @include gradient($article-btn-gradient-hover); - opacity: 0; - transition: opacity .2s; - z-index: -1; + &.open { + padding: 0; + li a {padding: 1rem 2rem;} + } + } + + .exp-btn-links { + border-radius: 6px; + color: $br-dark-blue; + margin: 0; + padding: 0; + list-style: none; + width: 100%; + height: 100%; + display: none; + + li { + background: $br-teal; + + &:first-child { + border-radius: 6px 6px 0 0; + border-bottom: 1px solid rgba($br-dark-blue, .5); } + &:last-child { + border-radius: 0 0 6px 6px; + border-bottom: none; + } + a { + display: block; + padding: 0rem 2rem; + color: $br-dark-blue; + text-decoration: none; + text-align: center; + transition: padding .2s; + } + } + } + + .close-btn { + position: absolute; + top: 38%; + right: -32px; + color: rgba($br-teal, .6); + font-size: 1.5rem; + display: none; + cursor: pointer; + transition: color .2s; + + &:hover { + opacity: 1; + color: $br-teal; + } + } + } - &:hover { - cursor: pointer; + //////////////////////////////// PRODUCT CARDS /////////////////////////////// + .product-cards { + display: flex; + flex-direction: row; + + .card { + padding: 3rem; + background: $sidebar-search-bg; + border-radius: 30px; + box-shadow: 1px 1px 7px $sidebar-search-shadow; + flex: 1 1 0; + display: flex; + flex-direction: column; + + &:first-child {margin-right: 1rem} + &:last-child {margin-left: 1rem} + + h3 { + margin: 0 0 1.5rem; + line-height: 1.1em; + font-size: 2.75rem; + } + + p { + margin-bottom: 2rem; + }; + + .card-links { + margin-top: auto; + + a { + position: relative; + display: block; + color: $article-text; + font-weight: $medium; + text-decoration: none; + margin-bottom: .3rem; + &:after { - opacity: 1; + content: ""; + display: block; + margin-top: .15rem; + border-top: 2px solid $br-new-magenta; + width: 0; + transition: width .2s; + } + + &:hover{ + color: $br-new-magenta; + &:after {width: 30%} } } } } } - #tick-cards { + ///////////////////////// GENERAL BLUE SECTION STYLES //////////////////////// + + .section.blue { + + h2, h3, h4 { + color: $br-teal; + } + + .padding-wrapper { + width: 100%; + max-width: $home-body-width; + color: $g20-white; + background: $br-dark-blue; + background-size: cover; + border-radius: 30px; + } + + &.flush-left 
.padding-wrapper { + padding: 2rem; + background-image: url('/svgs/home-bg-circle-right.svg')} + &.flush-right .padding-wrapper { + padding: 2rem; + background-image: url('/svgs/home-bg-circle-left.svg'); + } + } + + ////////////////////////////// INFLUXDB SECTION ////////////////////////////// + + #influxdb { + + padding-top: .5rem; display: flex; - flex-wrap: wrap; - position: relative; - color: $g20-white; - @include gradient($home-tick-bg-gradient, 45deg) - h2 { color: $article-heading-alt; } + .actions { + display: flex; + justify-content: center; + align-items: center; + max-width: 50%; + padding: 0 3rem 3rem; + flex: 1 1 0; + } - a { - display: inline-block; - position: relative; - &:after{ - content: ""; - margin-top: .15rem; - width: 0; - display: block; - height: 2px; - @include gradient($grad-whiteFade) - transition: width .2s; - } - &:hover{ - &:after { width: 100%; } + h2 { + margin: 0; + font-size: 3.5rem; + line-height: 1.1em; + & + p { + font-size: 1.2rem; + margin: .5rem 0 2rem; } } - .card { + h3 { + margin: 0; + color: $g20-white; + font-size: 2.25rem; + + & + p {margin: .5rem 0;} + } + + .hero-img { + background-image: url('/img/wind-turbine.jpg'); + background-size: cover; + margin: -.5rem .75rem 0 1rem; + z-index: -1; + min-height: 600px; + border-radius: 0 0 30px 30px; + flex: 1 1 0; + } + } + + #influxdb-btn { + .exp-btn { + @include gradient($grad-burningDusk, 270deg); + color: $g20-white; position: relative; z-index: 1; - color: $article-text; - transition: color .2s; &:after { content: ""; - position: absolute; - display: block; - bottom: 0; + top: 0; left: 0; + position: absolute; width: 100%; height: 100%; - opacity: 0; + @include gradient($grad-coolDusk); + border-radius: 6px; z-index: -1; - transition: opacity .4s; + opacity: 0; + transition: opacity .2s; } - &.telegraf:after {@include gradient($telegraf-home-card-gradient);} - &.influxdb:after {@include gradient($default-home-card-gradient);} - &.chronograf:after {@include gradient($chronograf-home-card-gradient);} - &.kapacitor:after {@include gradient($kapacitor-home-card-gradient);} - - &:hover { - color: $g20-white; - h2, a {color: $g20-white;} - &:after { opacity: 1; } + &:hover, &.open { + &:after {opacity: 1;} } + &.open { + padding: 0; + li a {padding: 1rem 2rem;} + } + } + .exp-btn-links { + color: $g20-white; + li { + @include gradient($grad-coolDusk); + + &:first-child { + border-bottom: 1px solid rgba($body-bg, .5); + a:after {border-radius: 6px 6px 0 0;} + } + + &:last-child a:after {border-radius: 0 0 6px 6px;} + + a { + color: $g20-white; + position: relative; + z-index: 1; + + &:after { + content: ""; + top: 0; + left: 0; + position: absolute; + width: 100%; + height: 100%; + @include gradient($grad-burningDusk, 270deg); + border-radius: 6px; + z-index: -1; + opacity: 0; + transition: opacity .2s; + } + + &:hover:after {opacity: 1} + } + } + } + + .close-btn { + color: rgba($br-new-magenta, .6); + &:hover {color: $br-new-magenta;} } } - #enterprise { - padding-top: 2.5rem; - padding-bottom: 2.5rem; - } + ///////////////////////////////// API SECTION //////////////////////////////// - //////////////////////////// HOMEPAGE MEDIA QUERIES //////////////////////////// + #api-guides { + .padding-wrapper { + display: flex; + justify-content: space-between; + align-items: center; + padding: 3.5rem; - @include media(large) { - overflow-x: hidden; - .hero #hero-img{ - max-height: 130%; - max-width: 70%; - right: -20%; - bottom: -30%; + .text {margin-right: 2rem;} + + h3 { + margin: 0; + font-size: 
1.8rem; + } + + p { + margin: .5rem 0; + line-height: 1.5rem; + } } } - @media (max-width: 1020px) { - #tick-stack #tick-cards .card { width: 50%; flex: none;} - .section { - .quarter { width: 33.33%; } - .three-quarters { width: 66.64%; } + ///////////////////////////////// LEARN ITEMS //////////////////////////////// + + #learn-more { + margin-bottom: 2rem; + + h4 { + font-size: 1.8rem; + margin: 1rem 0 2rem; } - #flux .card.flux-btns { + + .learn-items { + display: flex; + flex-direction: row; + justify-content: flex-start; + + .item { + max-width: 25%; + flex: 1 1 0; + display: flex; + flex-direction: column; + margin: 0 .75rem; + + .icon { + svg {max-height: 60px; max-width: 60px} + .c1 {fill: $home-icon-c1;} + .c2 {fill: $home-icon-c2;} + .magenta {fill: $br-new-magenta;} + } + + h5 { + font-size: 1.4rem; + margin: 1rem 0 0; + } + + p { + margin: .5rem 0 1.5rem; + line-height: 1.7rem; + } + a { + position: relative; + color: $br-new-magenta; + font-weight: $medium; + text-decoration: none; + + &:after { + content: ""; + display: block; + margin-top: .25rem; + border-top: 2px solid $br-new-magenta; + width: 0; + transition: width .2s; + } + + &:hover:after {width: 30%} + } + + & > *:last-child {margin-top: auto} + } + } + } + + ////////////////////////////// TICK STACK STYLES ///////////////////////////// + + #tick { + padding-bottom: 0; + + .padding-wrapper { + display: flex; + flex-direction: row; + align-items: center; + padding: 2rem 3rem; + + h4 { + margin: 0; + font-size: 1.5rem; + & > a {color: inherit; text-decoration: none;} + & + p {margin: .5rem 0;} + } + h5 { + margin: 0 0 .5rem; + text-transform: uppercase; + letter-spacing: .06rem; + // font-weight: $medium; + } + + .tick-title { + padding-right: 3rem; + } + + .tick-links { + border-left: 1px solid rgba($br-teal, .3); + padding-left: 3rem; + display: flex; + + ul { + padding: 0; + margin-right: 4rem; + list-style: none; + + &:last-child {margin-right: 0;} + + li a { + color: $g20-white; + line-height: 1.6rem; + text-decoration: none; + + &:hover {color: $br-teal;} + span { + font-size: .75em; + opacity: .5; + } + } + } + } + } + } + + #copyright { + width: 100vw; + max-width: $home-body-width; + padding: 1rem 3rem; + color: rgba($article-text, .5); + + p { + margin: 0; + text-align: right; + font-size: .9rem; + } + } + + /////////////////////////// HOMEPAGE MEDIA QUERIES /////////////////////////// + + @media (max-width: 900px) { + #tick .padding-wrapper{ flex-direction: column; - align-items: flex-end; - } - } + align-items: flex-start; + padding-top: 3rem; - @media (max-width: 920px) { - .section { - padding-left: 1.5rem; - padding-right: 1.5rem; - &.hero { padding-top: 2rem; padding-bottom: 3rem;} - &.search, - &#enterprise, &#flux { padding-top: 1.5rem; padding-bottom: 1.5rem; } - } - .hero { - #hero-img{ display: none; } - .half { width: 100%; } - } - .search { - .sidebar--search { max-width: 100%; } + .tick-links { + padding-left: 0; + border: none; + } } } @include media(medium) { - .search .algolia-autocomplete.algolia-autocomplete-right, .algolia-autocomplete.algolia-autocomplete-right { - .ds-dropdown-menu { + #influxdb { + .actions { + max-width: 100%; + padding-top: 3rem; + text-align: center; + } + .hero-img {display: none} + } + #api-guides { + .padding-wrapper{ + flex-direction: column; + align-items: flex-start; + padding: 3rem; + } + .exp-btn-wrapper {width: 100%;} + .exp-btn { + margin-top: 2rem; width: 100%; + background: $br-teal; + color: $br-dark-blue; } } + .product-cards { + flex-direction: 
column; + .card { + margin-bottom: 2rem; + &:first-child {margin-right: 0;} + &:last-child {margin-left: 0;} + } + } + #learn-more { + margin-bottom: 0; + + h4 {margin-top: 0;} + + .learn-items { + flex-wrap: wrap; + .item { + max-width: 45%; + flex: 1 1 50%; + margin-bottom: 2rem; + } + } + } + } @include media(small) { - .section { + .section {padding: 1rem 1rem 0;} + .search.section {padding-top: .25rem;} + .exp-btn-wrapper .exp-btn {min-width: revert;} + .product-cards .card { + padding: 2rem; + margin-bottom: 1rem; - .quarter, .three-quarters { width: 100%; } - - &.hero { - order: 2; - padding-top: 1.5rem; - padding-bottom: 2rem; - h2 { font-size: 2rem; margin-top: .5rem; } - p { font-size: 1rem; line-height: 1.5rem; } - .actions { flex-direction: column; } - } - &.search { - order: 1; - padding: 0 1rem .5rem; - width: 100%; - - .sidebar--search { - max-width: 100%; - font-size: 1rem; - - input { - padding: .5em 2.15rem .5rem .75rem - } - - &:after { - top: .15rem; - right: .25rem; - font-size: 1.75rem; - } - .algolia-autocomplete.algolia-autocomplete-right, .algolia-autocomplete.algolia-autocomplete-right { - .ds-dropdown-menu { - width: 100vw; - left: -1rem !important; - right: inherit; - } - } - } - } - &#flux, &#resources { - order: 3; padding-left: 0; padding-right: 0; - .card.cta-btns { - padding-top: 0; - a.btn { - display: block; - width: 100%; - margin: 0 0 .5rem; - } - } - } - &#tick-stack { - order: 4; - padding-left: 0; - padding-right: 0; - #tick-cards { - flex-direction: column; - .card { - width: 100%; - border-top: 1px solid rgba($article-text, .15); - } - } - } - &#enterprise { order: 5; padding-left: 0; padding-right: 0; } + h3 {font-size: 2rem;} } - .group-wrapper {flex-direction: column;} - .row { - flex-direction: column; + #influxdb { + .actions {padding: 2rem;} + h2 {font-size: 2.65rem;} } + #api-guides { + .padding-wrapper { + padding: 2rem; + h3 {font-size: 1.6rem;} + p {font-size: 1.1rem;} + } + } + #learn-more { + h4 {margin-left: 1rem;} + .learn-items { + flex-direction: column; + .item {max-width: 100%;} + } + } + #tick .padding-wrapper {padding: 2rem;} } + @media (max-width: 480px) { + #tick .padding-wrapper .tick-links {flex-direction: column;} + } } diff --git a/assets/styles/layouts/_inline-icons.scss b/assets/styles/layouts/_inline-icons.scss index 2ad92f07b..125450623 100644 --- a/assets/styles/layouts/_inline-icons.scss +++ b/assets/styles/layouts/_inline-icons.scss @@ -311,6 +311,7 @@ display: inline-block; margin: 0; padding: 0; + font-family: $rubik; font-weight: 500; font-size: 1.15rem; min-width: 225px; diff --git a/assets/styles/layouts/_sidebar.scss b/assets/styles/layouts/_sidebar.scss index 85cbeb8b3..5df83d687 100644 --- a/assets/styles/layouts/_sidebar.scss +++ b/assets/styles/layouts/_sidebar.scss @@ -13,15 +13,16 @@ display: block; font-family: 'icomoon-v2'; position: absolute; - top: .15rem; + top: .25rem; right: .25rem; color: $article-text; - font-size: 1.75rem; + font-size: 1.8rem; } input { - font-family: $rubik; - font-weight: $medium; + font-family: $proxima; + font-weight: $medium; + font-size: 1.1rem; background: $sidebar-search-bg; border-radius: $radius; border: 1px solid $sidebar-search-bg; @@ -38,9 +39,8 @@ border-radius: $radius; } &::placeholder { - color: rgba($sidebar-search-text, .45); + color: rgba($sidebar-search-text, .35); font-weight: normal; - font-style: italic; } } } @@ -117,7 +117,7 @@ ul { list-style: none; - padding-left: 2rem; + padding-left: 2.3rem; border-left: 2px solid $nav-border; } @@ -171,7 +171,7 @@ 
.nav-category > a { color: $nav-category; - font-size: 1.1rem; + font-size: 1.2rem; &:hover { color: $nav-category-hover; } @@ -192,11 +192,11 @@ } .children-toggle { - width: 1rem; - height: 1rem; + width: 1.12rem; + height: 1.12rem; position: absolute; - top: .05rem; - left: -1.4rem; + top: .1rem; + left: -1.6rem; display: block; background: $nav-border; border-radius: 50%; @@ -210,15 +210,15 @@ } &:before { top: 4px; - left: 7px; - height: 8px; + left: 8px; + height: 10px; width: 2px; } &:after { - top: 7px; + top: 8px; left: 4px; height: 2px; - width: 8px; + width: 10px; } &:hover { @@ -239,10 +239,9 @@ h4 { margin: 2rem 0 0 -1rem; color: rgba($article-heading-alt, .5); - font-style: italic; font-weight: 700; text-transform: uppercase; - font-size: .85rem; + font-size: .95rem; letter-spacing: .08rem; &.platform, &.flux { diff --git a/assets/styles/layouts/_top-nav.scss b/assets/styles/layouts/_top-nav.scss index 2e65bde15..fe2299ff3 100644 --- a/assets/styles/layouts/_top-nav.scss +++ b/assets/styles/layouts/_top-nav.scss @@ -8,7 +8,7 @@ .influx-home { font-family: 'icomoon-v2'; - font-size: 1.9rem; + font-size: 1.4rem; color: $topnav-link; text-decoration: none; vertical-align: middle; @@ -16,7 +16,7 @@ color: $topnav-link-hover; } .icon-influx-logotype { - margin-left: .15rem; + margin-left: .6rem; } } @@ -28,7 +28,6 @@ .docs-home { display: inline-block; vertical-align: text-top; - font-style: italic; font-weight: $medium; font-size: 1.1rem; color: $topnav-link; @@ -39,6 +38,8 @@ } .topnav-left { + margin-right: .15rem; + padding: .25rem .15rem; z-index: 1; } @@ -65,6 +66,7 @@ @include gradient($default-dropdown-gradient); background-attachment: local !important; font-weight: $medium; + font-size: 1.05rem; border-radius: $radius; overflow: hidden; cursor: pointer; @@ -103,10 +105,11 @@ li { &:before { display: inline-block; - font-size: .8rem; + font-size: .85rem; color: $g2-kevlar; - font-style: italic; + text-transform: uppercase; font-weight: bold; + letter-spacing: .04rem; opacity: .65; mix-blend-mode: multiply; } diff --git a/assets/styles/layouts/_url-selector.scss b/assets/styles/layouts/_url-selector.scss index fcbd9e83e..3f5a53c04 100644 --- a/assets/styles/layouts/_url-selector.scss +++ b/assets/styles/layouts/_url-selector.scss @@ -131,6 +131,45 @@ margin: .5rem .5rem .5rem 0; padding: 0; list-style: none; + + &.clusters { + padding-left: 1.75rem; + } + } + + p.region { + + .fake-radio { + position: relative; + display: inline-block; + height: 1.15em; + width: 1.15em; + margin: 0 0.3rem 0 0.1rem; + border-radius: $radius; + border: 1.5px solid transparent; + background: rgba($article-text, 0.05); + border: 1.5px solid rgba($article-text, 0.2); + vertical-align: text-top; + cursor: pointer; + + &:after { + content: ""; + position: absolute; + display: block; + height: .5rem; + width: .5rem; + top: .23rem; + left: .23rem; + border-radius: 50%; + background: rgba($article-text, .3); + opacity: 0; + transition: opacity .2s; + } + + &.checked:after { + opacity: 1; + } + } } } } diff --git a/assets/styles/layouts/article/_blocks.scss b/assets/styles/layouts/article/_blocks.scss index dc522c755..4992b7014 100644 --- a/assets/styles/layouts/article/_blocks.scss +++ b/assets/styles/layouts/article/_blocks.scss @@ -7,7 +7,7 @@ blockquote, border-width: 0 0 0 4px; border-style: solid; border-radius: 0 $radius $radius 0; - font-size: .95rem; + font-size: 1.05rem; ul,ol { &:last-child { margin-bottom: 1.85rem; } diff --git a/assets/styles/layouts/article/_buttons.scss 
b/assets/styles/layouts/article/_buttons.scss index bb99b0872..3c2f0e27b 100644 --- a/assets/styles/layouts/article/_buttons.scss +++ b/assets/styles/layouts/article/_buttons.scss @@ -7,7 +7,7 @@ a.btn { padding: 0.85rem 1.5rem; color: $article-btn-text !important; border-radius: $radius; - font-size: .95rem; + font-size: 1.05rem; z-index: 1; @include gradient($article-btn-gradient); diff --git a/assets/styles/layouts/article/_captions.scss b/assets/styles/layouts/article/_captions.scss index e01397b0d..4c8276ead 100644 --- a/assets/styles/layouts/article/_captions.scss +++ b/assets/styles/layouts/article/_captions.scss @@ -1,19 +1,14 @@ .caption { margin: -2rem 0 2rem; padding-left: .25rem; - font-size: .85rem; + font-size: .95rem; font-style: italic; opacity: .8; + color: $article-text; p { line-height: 1.25rem; } } -.code-tabs-wrapper, .code-tab-content { - & + .caption { - margin-top: -2.75rem; - } -} - p { & + .caption { padding: 0; diff --git a/assets/styles/layouts/article/_code-api-methods.scss b/assets/styles/layouts/article/_code-api-methods.scss new file mode 100644 index 000000000..67d30d0e9 --- /dev/null +++ b/assets/styles/layouts/article/_code-api-methods.scss @@ -0,0 +1,14 @@ +.api { + margin-right: .35rem; + padding: .15rem .5rem .25rem; + border-radius: $radius; + color: $g20-white; + font-weight: bold; + font-size: 1rem; + + &.get { background: $gr-viridian; } + &.post { background: $b-ocean; } + &.patch { background: $y-topaz; } + &.delete { background: $r-ruby; } + &.put {background: $br-pulsar; } +} \ No newline at end of file diff --git a/assets/styles/layouts/article/_code.scss b/assets/styles/layouts/article/_code.scss index b88fe081e..8bfd7b09c 100644 --- a/assets/styles/layouts/article/_code.scss +++ b/assets/styles/layouts/article/_code.scss @@ -12,7 +12,7 @@ p,li,table { border-radius: $radius; color: $article-code; white-space: nowrap; - font-size: .95rem; + font-size: 1rem; font-style: normal; } } @@ -66,25 +66,33 @@ pre { overflow-y: hidden; code { padding: 0; - font-size: .95rem; - line-height: 1.5rem; + font-size: 1rem; + line-height: 1.7rem; white-space: pre; } + + @import "code-api-methods"; } -pre .api { - margin-right: .35rem; - padding: .15rem .5rem .25rem; - border-radius: $radius; - color: $g20-white; - font-weight: bold; - font-size: .9rem; +///////////////////////// Codeblocks fullscreen toggle ///////////////////////// + +.codeblock { + position: relative; + + .fullscreen-toggle { + cursor: pointer; + position: absolute; + top: .5rem; + right: .5rem; + line-height: 0; + font-size: 1.15rem; + color: $article-code; + opacity: .5; + transition: opacity .2s; + + &:hover {opacity: 1} + } - &.get { background: $gr-viridian; } - &.post { background: $b-ocean; } - &.patch { background: $y-topaz; } - &.delete { background: $r-ruby; } - &.put {background: $br-pulsar; } } //////////////////////////////////////////////////////////////////////////////// diff --git a/assets/styles/layouts/article/_feedback.scss b/assets/styles/layouts/article/_feedback.scss index 7d9f5c1bc..771f268da 100644 --- a/assets/styles/layouts/article/_feedback.scss +++ b/assets/styles/layouts/article/_feedback.scss @@ -31,8 +31,8 @@ &.community:before { content: "\e900"; color: $article-heading-alt; - margin: 0 .25rem 0 -.25rem; - font-size: 1.65rem; + margin: 0 .5rem 0 -.25rem; + font-size: 1.2rem; font-family: 'icomoon-v2'; vertical-align: middle; } @@ -55,7 +55,7 @@ a { display: block; padding-left: 1rem; - font-size: .85rem; + font-size: .95rem; &.btn { color: $article-text 
!important; @@ -75,12 +75,11 @@ &.edit:before { content: "\e92f"; - font-size: .75rem; - vertical-align: top; + font-size: .85rem; } &.issue:before { content: "\e934"; - font-size: .95rem; + font-size: 1rem; } } } diff --git a/assets/styles/layouts/article/_html-diagrams.scss b/assets/styles/layouts/article/_html-diagrams.scss index c467e1f7f..af17df69b 100644 --- a/assets/styles/layouts/article/_html-diagrams.scss +++ b/assets/styles/layouts/article/_html-diagrams.scss @@ -43,7 +43,7 @@ } ///////////////////////////////// Shard diagram //////////////////////////////// -#shard-diagram { +#shard-diagram, #data-retention { display: flex; flex-direction: column; max-width: 550px; @@ -71,6 +71,26 @@ border-left: 1px solid $article-text; } } + + .one-quarter {width: 25%; height: .75rem;} + .three-quarters {width: 75%; height: .75rem;} + .border-left {border-left: 1px solid $article-text;} + .retention-label { + position: relative; + &:before { + content: ""; + display: inline-block; + width: .65rem; + margin-right: .5rem; + border-top: 1px solid $article-text; + vertical-align: middle; + } + } + .deleted-label { + color: $r-ruby; + text-align: center; + font-size: .9rem; + } } .shard-groups { display: flex; @@ -96,9 +116,14 @@ padding: .65rem 1rem; color: #fff; border-radius: .25rem; - @include gradient($article-table-header, 90deg) + @include gradient($article-table-header, 90deg); background-attachment: fixed; } + + &.deleted { + opacity: .3; + .shard {@include gradient($grad-red-dark)} + } } } } diff --git a/assets/styles/layouts/article/_tabbed-content.scss b/assets/styles/layouts/article/_tabbed-content.scss index 3be7ebb48..f47001bb5 100644 --- a/assets/styles/layouts/article/_tabbed-content.scss +++ b/assets/styles/layouts/article/_tabbed-content.scss @@ -12,7 +12,7 @@ flex-grow: 1; margin: 2px; position: relative; - font-size: 0.875rem; + font-size: 1rem; font-weight: $medium; padding: .65rem 1.25rem; display: inline-block; @@ -72,7 +72,7 @@ margin: 0; border-radius: $radius $radius 0 0; display: inline-block; - font-size: 0.875rem; + font-size: 1rem; background: $article-bg; color: rgba($article-tab-code-text, .5); &:hover { @@ -97,7 +97,7 @@ margin: .75rem 0 3rem; width: 100%; - & > :not(table, .fs-diagram) { + & > :not(table, .fs-diagram, img) { width: 100%; margin-left: 0; } diff --git a/assets/styles/layouts/article/_tables.scss b/assets/styles/layouts/article/_tables.scss index 6d043b0ca..a52b34008 100644 --- a/assets/styles/layouts/article/_tables.scss +++ b/assets/styles/layouts/article/_tables.scss @@ -31,8 +31,9 @@ table { } td { - font-size: .95rem; + font-size: 1.05rem; line-height: 1.5em; + code {font-size: .95rem;} } tr{ @@ -49,6 +50,15 @@ table { } img { margin-bottom: 0; } + + &.cloud-urls { + a { white-space: nowrap; } + p { + margin: 0 0 .5rem 0; + &:last-child { margin-bottom: 0 } + } + .cluster-name { font-weight: $medium; color: $article-bold; } + } } table + table { @@ -64,5 +74,5 @@ table + table { p.table-group-key { margin: 1rem 0 -.75rem; font-weight: $medium; - font-size: .87rem; + font-size: .95rem; } diff --git a/assets/styles/layouts/article/_tags.scss b/assets/styles/layouts/article/_tags.scss index cb5e96bfd..6f4235e82 100644 --- a/assets/styles/layouts/article/_tags.scss +++ b/assets/styles/layouts/article/_tags.scss @@ -8,11 +8,11 @@ .tag { background: $body-bg; margin: .12rem 0; - padding: .35rem .6rem; + padding: .4rem .65rem; font-style: italic; font-weight: $medium; color: rgba($article-text, .75) !important; - font-size: .8rem; + font-size: .9rem; 
border-radius: 1rem; &:after { diff --git a/assets/styles/layouts/article/_title.scss b/assets/styles/layouts/article/_title.scss index 37c575719..b54a64008 100644 --- a/assets/styles/layouts/article/_title.scss +++ b/assets/styles/layouts/article/_title.scss @@ -13,7 +13,7 @@ padding: 0 .65em 0 .75em; color: $article-heading; background: rgba($article-heading, .07); - font-size: .9rem; + font-size: .95rem; font-weight: $medium; border-radius: 1em; display: inline-block; diff --git a/assets/styles/styles-default.scss b/assets/styles/styles-default.scss index 7be4055d3..899646c58 100644 --- a/assets/styles/styles-default.scss +++ b/assets/styles/styles-default.scss @@ -1,8 +1,7 @@ // InfluxData Docs Default Theme (Light) // Import Tools -@import "tools/icomoon-v2", - "tools/icon", +@import "tools/fonts", "tools/media-queries.scss", "tools/mixins.scss", "tools/tooltips", @@ -27,7 +26,8 @@ "layouts/url-selector", "layouts/feature-callouts", "layouts/v1-overrides", - "layouts/notifications"; + "layouts/notifications", + "layouts/fullscreen-code"; // Import Product-specifc color schemes @import "product-overrides/telegraf", diff --git a/assets/styles/themes/_theme-dark.scss b/assets/styles/themes/_theme-dark.scss index df9c6ec21..3d0b42b26 100644 --- a/assets/styles/themes/_theme-dark.scss +++ b/assets/styles/themes/_theme-dark.scss @@ -25,7 +25,7 @@ $theme-switch-dark: none; // Search $sidebar-search-bg: $grey15; -$sidebar-search-shadow: rgba($g0-obsidian, .05); +$sidebar-search-shadow: rgba($g0-obsidian, .5); $sidebar-search-highlight: $b-pool; $sidebar-search-text: $g20-white; @@ -179,10 +179,8 @@ $landing-btn-grad: $grad-blue; $landing-btn-grad-hover: $grad-blue-light; // Home page colors -$home-btn-gradient: $grad-NineteenEightyFour; -$home-btn-gradient-hover: $grad-PastelGothic; -$home-tick-bg-gradient: $grad-cool-grey-abyss; -$default-home-card-gradient: $grad-Miyazakisky; +$home-icon-c1: $g20-white; +$home-icon-c2: $body-bg; // Tooltip colors $tooltip-color: $br-chartreuse; diff --git a/assets/styles/themes/_theme-light.scss b/assets/styles/themes/_theme-light.scss index 06dfe4497..16c4a116f 100644 --- a/assets/styles/themes/_theme-light.scss +++ b/assets/styles/themes/_theme-light.scss @@ -18,7 +18,7 @@ $body-bg: #f3f4fb !default; $radius: 2px !default; // TopNav Colors -$topnav-link: $g8-storm !default; +$topnav-link: #020a47 !default; $topnav-link-hover: $b-dodger !default; $default-dropdown-gradient: $grad-PastelGothic !default; $theme-switch-light: none !default; @@ -28,7 +28,7 @@ $theme-switch-dark: inline-block !default; $sidebar-search-bg: $g20-white !default; $sidebar-search-shadow: #cfd1e5 !default; $sidebar-search-highlight: $b-pool !default; -$sidebar-search-text: $g8-storm !default; +$sidebar-search-text: $g6-smoke !default; // Left Navigation $nav-category: $b-dodger !default; @@ -50,8 +50,8 @@ $product-enterprise: $br-pulsar !default; $article-bg: $g20-white !default; $article-heading: $br-pulsar !default; $article-heading-alt: $g5-pepper !default; -$article-text: $g6-smoke !default; -$article-bold: $g6-smoke !default; +$article-text: $br-dark-blue !default; +$article-bold: $br-dark-blue !default; $article-link: $b-pool !default; $article-link-hover: $br-magenta !default; $article-shadow: #cfd1e5 !default; @@ -179,10 +179,8 @@ $landing-btn-grad: $grad-blue !default; $landing-btn-grad-hover: $grad-blue-light !default; // Home page colors -$home-btn-gradient: $grad-NineteenEightyFour !default; -$home-btn-gradient-hover: $grad-PastelGothic !default; -$home-tick-bg-gradient: 
$grad-grey-mist !default; -$default-home-card-gradient: $grad-PastelGothic !default; +$home-icon-c1: $br-dark-blue !default; +$home-icon-c2: $g20-white !default; // Tooltip colors $tooltip-color: $p-amethyst !default; diff --git a/assets/styles/tools/_color-palette.scss b/assets/styles/tools/_color-palette.scss index 15f437560..5eca3ad52 100644 --- a/assets/styles/tools/_color-palette.scss +++ b/assets/styles/tools/_color-palette.scss @@ -1,5 +1,11 @@ // Influx Color Palette +// Brand Colors 2022 +$br-dark-blue: #020a47; +$br-new-magenta: #d30971; +$br-new-purple: #9b2aff; +$br-teal: #5ee4e4; + // Brand Colors $br-chartreuse: #D6F622; $br-deeppurple: #13002D; @@ -165,4 +171,10 @@ $grey85: #D5D5DD; $grey95: #F1F1F3; $white: #FFFFFF; -$grad-cool-grey-abyss: $grey10, $grey15; \ No newline at end of file +$grad-cool-grey-abyss: $grey10, $grey15; + +////////////////////////////// NEW BRAND GRADIENTS ///////////////////////////// +$grad-burningDusk: $br-new-magenta, $br-new-purple; +$grad-coolDusk: #771cc7, #b2025b; +$grad-tealDream: $b-pool, $br-teal; +$grad-tealDeepSleep: $b-ocean, #0ab8b8; diff --git a/assets/styles/tools/_fonts.scss b/assets/styles/tools/_fonts.scss new file mode 100644 index 000000000..71a44c09e --- /dev/null +++ b/assets/styles/tools/_fonts.scss @@ -0,0 +1,38 @@ +$rubik: 'Rubik', sans-serif; +$proxima: 'Proxima Nova', sans-serif; +$code: 'IBM Plex Mono', monospace; + +// Font weights +$medium: 500; +$bold: 700; + +// Global font size and rendering +body { + font-size: 18px; + -webkit-font-smoothing: antialiased; + -moz-osx-font-smoothing: grayscale; +} + +@font-face { + font-family: "Proxima Nova"; + src: url("fonts/proxima-nova.otf") format("opentype"); + font-weight: 300; +} +@font-face { + font-family: 'Proxima Nova'; + src: url('fonts/proxima-nova-medium.otf') format('opentype'); + font-weight: 400; +} +@font-face { + font-family: 'Proxima Nova'; + src: url('fonts/proxima-nova-semibold.otf') format('opentype'); + font-weight: 500 600; +} +@font-face { + font-family: 'Proxima Nova'; + src: url('fonts/proxima-nova-bold.otf') format('opentype'); + font-weight: 700; +} + +@import "tools/icon-fonts/icomoon-v2"; +@import "tools/icon-fonts/icon"; \ No newline at end of file diff --git a/assets/styles/tools/_icomoon-v2.scss b/assets/styles/tools/icon-fonts/_icomoon-v2.scss similarity index 100% rename from assets/styles/tools/_icomoon-v2.scss rename to assets/styles/tools/icon-fonts/_icomoon-v2.scss diff --git a/assets/styles/tools/_icon.scss b/assets/styles/tools/icon-fonts/_icon.scss similarity index 100% rename from assets/styles/tools/_icon.scss rename to assets/styles/tools/icon-fonts/_icon.scss diff --git a/config.staging.toml b/config.staging.toml index dca5748bd..927cbdd71 100644 --- a/config.staging.toml +++ b/config.staging.toml @@ -25,6 +25,7 @@ hrefTargetBlank = true smartDashes = false [taxonomies] + "influxdb/v2.2/tag" = "influxdb/v2.2/tags" "influxdb/v2.1/tag" = "influxdb/v2.1/tags" "influxdb/v2.0/tag" = "influxdb/v2.0/tags" "influxdb/cloud/tag" = "influxdb/cloud/tags" diff --git a/config.toml b/config.toml index 0867ca62f..49cd97863 100644 --- a/config.toml +++ b/config.toml @@ -21,6 +21,7 @@ hrefTargetBlank = true smartDashes = false [taxonomies] + "influxdb/v2.2/tag" = "influxdb/v2.2/tags" "influxdb/v2.1/tag" = "influxdb/v2.1/tags" "influxdb/v2.0/tag" = "influxdb/v2.0/tags" "influxdb/cloud/tag" = "influxdb/cloud/tags" diff --git a/content/chronograf/v1.6/about_the_project/cla.md b/content/chronograf/v1.6/about_the_project/cla.md index e8329ed42..6805d670b
100644 --- a/content/chronograf/v1.6/about_the_project/cla.md +++ b/content/chronograf/v1.6/about_the_project/cla.md @@ -5,7 +5,8 @@ menu: chronograf_1_6: weight: 30 parent: About the project - url: https://www.influxdata.com/legal/cla/ + params: + url: https://www.influxdata.com/legal/cla/ --- Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site. diff --git a/content/chronograf/v1.6/about_the_project/contributing.md b/content/chronograf/v1.6/about_the_project/contributing.md index 5cdfc6ec5..f6f143d89 100644 --- a/content/chronograf/v1.6/about_the_project/contributing.md +++ b/content/chronograf/v1.6/about_the_project/contributing.md @@ -5,7 +5,8 @@ menu: name: Contributing weight: 20 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md + params: + url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md --- See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project. diff --git a/content/chronograf/v1.6/about_the_project/licenses.md b/content/chronograf/v1.6/about_the_project/licenses.md index bbc830bd7..951057d94 100644 --- a/content/chronograf/v1.6/about_the_project/licenses.md +++ b/content/chronograf/v1.6/about_the_project/licenses.md @@ -5,7 +5,8 @@ menu: Name: Open source license weight: 40 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/LICENSE + params: + url: https://github.com/influxdata/chronograf/blob/master/LICENSE --- The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project. diff --git a/content/chronograf/v1.7/about_the_project/cla.md b/content/chronograf/v1.7/about_the_project/cla.md index b5cc29262..bfec0fe28 100644 --- a/content/chronograf/v1.7/about_the_project/cla.md +++ b/content/chronograf/v1.7/about_the_project/cla.md @@ -5,7 +5,8 @@ menu: chronograf_1_7: weight: 30 parent: About the project - url: https://www.influxdata.com/legal/cla/ + params: + url: https://www.influxdata.com/legal/cla/ --- Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site. diff --git a/content/chronograf/v1.7/about_the_project/contributing.md b/content/chronograf/v1.7/about_the_project/contributing.md index 5ac5823f4..5fa0aa9b6 100644 --- a/content/chronograf/v1.7/about_the_project/contributing.md +++ b/content/chronograf/v1.7/about_the_project/contributing.md @@ -5,7 +5,8 @@ menu: name: Contribute weight: 20 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md + params: + url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md --- See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project. 
diff --git a/content/chronograf/v1.7/about_the_project/licenses.md b/content/chronograf/v1.7/about_the_project/licenses.md index 714c93116..c9ced67ae 100644 --- a/content/chronograf/v1.7/about_the_project/licenses.md +++ b/content/chronograf/v1.7/about_the_project/licenses.md @@ -5,7 +5,8 @@ menu: Name: Open source license weight: 40 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/LICENSE + params: + url: https://github.com/influxdata/chronograf/blob/master/LICENSE --- The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project. diff --git a/content/chronograf/v1.7/administration/prebuilt-dashboards.md b/content/chronograf/v1.7/administration/prebuilt-dashboards.md index eefd25fc3..0503e5927 100644 --- a/content/chronograf/v1.7/administration/prebuilt-dashboards.md +++ b/content/chronograf/v1.7/administration/prebuilt-dashboards.md @@ -30,12 +30,12 @@ The Docker dashboard displays the following information: ### Plugins -- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#docker) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) ## Kubernetes Node The Kubernetes Node dashboard displays the following information: @@ -53,7 +53,7 @@ The Kubernetes Node dashboard displays the following information: - K8s - Kubelet Memory Bytes ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Overview The Kubernetes Node dashboard displays the following information: @@ -72,7 +72,7 @@ The Kubernetes Node dashboard displays the following information: ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Pod The Kubernetes Pod dashboard displays the following information: @@ -87,7 +87,7 @@ The Kubernetes Pod dashboard displays the following information: - K8s - Pod TX Bytes/Second ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Riak The Riak dashboard displays the following information: @@ -101,7 +101,7 @@ The Riak dashboard displays the following information: - Riak - Read Repairs/Minute ### Plugins -- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#riak) +- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) ## Consul The Consul dashboard displays the following information: @@ -110,7 +110,7 @@ The Consul dashboard displays the following information: - Consul - Number of Warning Health Checks ### Plugins -- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Consul Telemetry The Consul Telemetry dashboard displays the 
following information: @@ -125,7 +125,7 @@ The Consul Telemetry dashboard displays the following information: - Consul - Number of Serf Events ### Plugins -[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Mesos @@ -140,7 +140,7 @@ The Mesos dashboard displays the following information: - Mesos Master Uptime ### Plugins -- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) ## RabbitMQ The RabbitMQ dashboard displays the following information: @@ -151,7 +151,7 @@ The RabbitMQ dashboard displays the following information: ### Plugins -- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) ## System @@ -170,14 +170,14 @@ The System dashboard displays the following information: ### Plugins -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#processes) -- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#swap) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) +- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#input-swap) @@ -198,7 +198,7 @@ The VMware vSphere Overview dashboard gives an overview of your VMware vSphere C - VM CPU % Ready for :clustername: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vmware-vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vmware-vsphere) ## Apache The Apache dashboard displays the following information: @@ -221,12 +221,12 @@ The Apache dashboard displays the following information: ### Plugins -- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#apache) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#logparser) +- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#input-logparser) ## ElasticSearch The ElasticSearch dashboard displays the following information: @@ -243,7 +243,7 @@ The ElasticSearch dashboard displays the following information: - ElasticSearch - JVM Heap Usage ### Plugins -- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +- 
[`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) ## InfluxDB @@ -272,12 +272,12 @@ The InfluxDB dashboard displays the following information: ### Plugins -- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) +- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) @@ -299,7 +299,7 @@ The Memcached dashboard displays the following information: - Memcached - Evictions/10 Seconds ### Plugins -- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) ## NSQ @@ -315,7 +315,7 @@ The NSQ dashboard displays the following information: - NSQ - Topic Egress ### Plugins -- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) ## PostgreSQL The PostgreSQL dashboard displays the following information: @@ -340,11 +340,11 @@ The PostgreSQL dashboard displays the following information: ### Plugins -- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) ## HAProxy @@ -367,7 +367,7 @@ The HAProxy dashboard displays the following information: - HAProxy - Backend Error Responses/Second ### Plugins -- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#haproxy) +- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) ## NGINX @@ -379,7 +379,7 @@ The NGINX dashboard displays the following information: - NGINX - Active Client State ### Plugins -- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) ## Redis The Redis dashboard displays the following information: @@ -390,7 +390,7 @@ The Redis dashboard displays the following information: - Redis - Memory ### Plugins -- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#redis) +- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) ## VMware vSphere VMs @@ -406,7 +406,7 @@ The VMWare vSphere VMs dashboard gives an overview of your VMware vSphere virtua - Total Disk Latency for :vmname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## VMware vSphere Hosts @@ -422,7 +422,7 @@ The VMWare vSphere Hosts 
dashboard displays the following information: - Total Disk Latency for :esxhostname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## PHPfpm The PHPfpm dashboard displays the following information: @@ -433,7 +433,7 @@ The PHPfpm dashboard displays the following information: - PHPfpm - Max Children Reached ### Plugins -- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#input-phpfpm) ## Win System The Win System dashboard displays the following information: @@ -445,7 +445,7 @@ The Win System dashboard displays the following information: - System - Load ### Plugins -- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#windows-services) +- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-services) ## MySQL @@ -472,9 +472,9 @@ The MySQL dashboard displays the following information: - InnoDB Data ### Plugins -- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#mysql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) +- [`mysql` plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) ## Ping The Ping dashboard displays the following information: @@ -483,4 +483,4 @@ The Ping dashboard displays the following information: - Ping - Response Times (ms) ### Plugins -- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#ping) +- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) diff --git a/content/chronograf/v1.7/guides/using-precreated-dashboards.md b/content/chronograf/v1.7/guides/using-precreated-dashboards.md index 5c1724843..828c5c009 100644 --- a/content/chronograf/v1.7/guides/using-precreated-dashboards.md +++ b/content/chronograf/v1.7/guides/using-precreated-dashboards.md @@ -65,7 +65,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## apache -**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#apache) +**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) `apache.json` @@ -75,7 +75,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## consul -**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#consul) +**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) `consul_http.json` @@ -95,7 +95,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## docker -**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#docker) +**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) `docker.json` @@ -115,7 +115,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## elasticsearch -**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) `elasticsearch.json` @@ -132,7 +132,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## haproxy -**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest
"telegraf" >}}/plugins/#haproxy) +**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) `haproxy.json` @@ -154,7 +154,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## iis -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#win_perf_counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-win_perf_counters) `win_websvc.json` @@ -162,7 +162,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## influxdb -**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) +**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb) `influxdb_database.json` @@ -207,7 +207,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## Memcached (`memcached`) -**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) `memcached.json` @@ -227,7 +227,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## mesos -**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) `mesos.json` @@ -242,7 +242,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## mongodb -**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#mongodb) +**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-mongodb) `mongodb.json` @@ -254,7 +254,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## mysql -**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#mysql) +**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) `mysql.json` @@ -265,7 +265,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## nginx -**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) `nginx.json` @@ -276,7 +276,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## nsq -**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) `nsq_channel.json` @@ -297,7 +297,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## phpfpm -**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#phpfpm) +**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#input-phpfpm) `phpfpm.json` @@ -309,7 +309,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## ping -**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#ping) +**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) `ping.json` @@ -318,7 +318,7 @@ See [Telegraf 
configuration](https://github.com/influxdata/telegraf/blob/master/ ## postgresql -**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) +**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) `postgresql.json` @@ -329,7 +329,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## rabbitmq -**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) `rabbitmq.json` @@ -340,7 +340,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## redis -**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#redis) +**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) `redis.json` @@ -352,7 +352,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## riak -**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#riak) +**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) `riak.json` @@ -371,7 +371,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### cpu -**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) `cpu.json` @@ -381,13 +381,13 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ `disk.json` -**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#disk) +**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) * "System - Disk used %" ### diskio -**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) `diskio.json` @@ -396,7 +396,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### mem -**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#mem) +**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) `mem.json` @@ -404,7 +404,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### net -**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#net) +**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#input-net) `net.json` @@ -413,7 +413,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### netstat -**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#netstat) +**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-netstat) `netstat.json` @@ -422,7 +422,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### processes -**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#processes) +**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) `processes.json` @@ -430,7 +430,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### procstat 
-**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat) `procstat.json` @@ -439,7 +439,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ### system -**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [System input plugin](/{{< latest "telegraf" >}}/plugins/#input-system) `load.json` @@ -447,7 +447,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## varnish -**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#varnish) +**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#input-varnish) `varnish.json` @@ -456,7 +456,7 @@ See [Telegraf configuration](https://github.com/influxdata/telegraf/blob/master/ ## win_system -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#win_perf_counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-win_perf_counters) `win_cpu.json` diff --git a/content/chronograf/v1.8/about_the_project/cla.md b/content/chronograf/v1.8/about_the_project/cla.md index 4be7081b5..b921f30e4 100644 --- a/content/chronograf/v1.8/about_the_project/cla.md +++ b/content/chronograf/v1.8/about_the_project/cla.md @@ -6,7 +6,8 @@ menu: chronograf_1_8: weight: 30 parent: About the project - url: https://www.influxdata.com/legal/cla/ + params: + url: https://www.influxdata.com/legal/cla/ --- Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site. diff --git a/content/chronograf/v1.8/about_the_project/contributing.md b/content/chronograf/v1.8/about_the_project/contributing.md index efcb17698..8de232445 100644 --- a/content/chronograf/v1.8/about_the_project/contributing.md +++ b/content/chronograf/v1.8/about_the_project/contributing.md @@ -6,7 +6,8 @@ menu: name: Contribute weight: 20 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md + params: + url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md --- See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project. diff --git a/content/chronograf/v1.8/about_the_project/licenses.md b/content/chronograf/v1.8/about_the_project/licenses.md index 0db15ab52..886d7ade2 100644 --- a/content/chronograf/v1.8/about_the_project/licenses.md +++ b/content/chronograf/v1.8/about_the_project/licenses.md @@ -6,7 +6,8 @@ menu: Name: Open source license weight: 40 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/LICENSE + params: + url: https://github.com/influxdata/chronograf/blob/master/LICENSE --- The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project.
diff --git a/content/chronograf/v1.8/administration/prebuilt-dashboards.md b/content/chronograf/v1.8/administration/prebuilt-dashboards.md index f8a8c0b76..6eec669bb 100644 --- a/content/chronograf/v1.8/administration/prebuilt-dashboards.md +++ b/content/chronograf/v1.8/administration/prebuilt-dashboards.md @@ -30,12 +30,12 @@ The Docker dashboard displays the following information: ### Plugins -- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#docker) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) ## Kubernetes Node The Kubernetes Node dashboard displays the following information: @@ -53,7 +53,7 @@ The Kubernetes Node dashboard displays the following information: - K8s - Kubelet Memory Bytes ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Overview The Kubernetes Node dashboard displays the following information: @@ -72,7 +72,7 @@ The Kubernetes Node dashboard displays the following information: ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Pod The Kubernetes Pod dashboard displays the following information: @@ -87,7 +87,7 @@ The Kubernetes Pod dashboard displays the following information: - K8s - Pod TX Bytes/Second ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Riak The Riak dashboard displays the following information: @@ -101,7 +101,7 @@ The Riak dashboard displays the following information: - Riak - Read Repairs/Minute ### Plugins -- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#riak) +- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) ## Consul The Consul dashboard displays the following information: @@ -110,7 +110,7 @@ The Consul dashboard displays the following information: - Consul - Number of Warning Health Checks ### Plugins -- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Consul Telemetry The Consul Telemetry dashboard displays the following information: @@ -125,7 +125,7 @@ The Consul Telemetry dashboard displays the following information: - Consul - Number of Serf Events ### Plugins -[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Mesos @@ -140,7 +140,7 @@ The Mesos dashboard displays the following information: - Mesos Master Uptime ### Plugins -- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) ## RabbitMQ The RabbitMQ dashboard displays the following information: @@ -151,7 +151,7 @@ The RabbitMQ dashboard displays 
the following information: ### Plugins -- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) ## System @@ -170,14 +170,14 @@ The System dashboard displays the following information: ### Plugins -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#processes) -- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#swap) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) +- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#input-swap) @@ -198,7 +198,7 @@ The VMware vSphere Overview dashboard gives an overview of your VMware vSphere C - VM CPU % Ready for :clustername: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vmware-vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vmware-vsphere) ## Apache The Apache dashboard displays the following information: @@ -221,12 +221,12 @@ The Apache dashboard displays the following information: ### Plugins -- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#apache) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#logparser) +- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#input-logparser) ## ElasticSearch The ElasticSearch dashboard displays the following information: @@ -243,7 +243,7 @@ The ElasticSearch dashboard displays the following information: - ElasticSearch - JVM Heap Usage ### Plugins -- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) ## InfluxDB @@ -272,12 +272,12 @@ The InfluxDB dashboard displays the following information: ### Plugins -- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) +- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb) +- [`cpu` plugin](/{{< 
latest "telegraf" >}}/plugins/#input-cpu) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) @@ -299,7 +299,7 @@ The Memcached dashboard displays the following information: - Memcached - Evictions/10 Seconds ### Plugins -- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) ## NSQ @@ -315,7 +315,7 @@ The NSQ dashboard displays the following information: - NSQ - Topic Egress ### Plugins -- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) ## PostgreSQL The PostgreSQL dashboard displays the following information: @@ -340,11 +340,11 @@ The PostgreSQL dashboard displays the following information: ### Plugins -- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) ## HAProxy @@ -367,7 +367,7 @@ The HAProxy dashboard displays the following information: - HAProxy - Backend Error Responses/Second ### Plugins -- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#haproxy) +- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) ## NGINX @@ -379,7 +379,7 @@ The NGINX dashboard displays the following information: - NGINX - Active Client State ### Plugins -- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) ## Redis The Redis dashboard displays the following information: @@ -390,7 +390,7 @@ The Redis dashboard displays the following information: - Redis - Memory ### Plugins -- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#redis) +- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) ## VMware vSphere VMs @@ -406,7 +406,7 @@ The VMWare vSphere VMs dashboard gives an overview of your VMware vSphere virtua - Total Disk Latency for :vmname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## VMware vSphere Hosts @@ -422,7 +422,7 @@ The VMWare vSphere Hosts dashboard displays the following information: - Total Disk Latency for :esxhostname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## PHPfpm The PHPfpm dashboard displays the following information: @@ -433,7 +433,7 @@ The PHPfpm dashboard displays the following information: - PHPfpm - Max Children Reached ### Plugins -- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) ## Win System The Win System dashboard displays the following information: @@ -445,7 +445,7 @@ The Win 
System dashboard displays the following information: - System - Load ### Plugins -- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#windows-services) +- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-services) ## MySQL @@ -472,9 +472,9 @@ The MySQL dashboard displays the following information: - InnoDB Data ### Plugins -- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#mysql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) +- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) ## Ping The Ping dashboard displays the following information: @@ -483,4 +483,4 @@ The Ping dashboard displays the following information: - Ping - Response Times (ms) ### Plugins -- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#ping) +- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) diff --git a/content/chronograf/v1.8/guides/using-precreated-dashboards.md b/content/chronograf/v1.8/guides/using-precreated-dashboards.md index 779ba37ff..3b2118f4e 100644 --- a/content/chronograf/v1.8/guides/using-precreated-dashboards.md +++ b/content/chronograf/v1.8/guides/using-precreated-dashboards.md @@ -69,7 +69,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## apache -**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#apache-http-server) +**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) `apache.json` @@ -79,7 +79,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## consul -**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#consul) +**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) `consul_http.json` @@ -99,7 +99,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## docker -**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#docker) +**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) `docker.json` @@ -119,7 +119,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## elasticsearch -**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) `elasticsearch.json` @@ -136,7 +136,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## haproxy -**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#haproxy) +**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) `haproxy.json` @@ -158,7 +158,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## iis -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters) `win_websvc.json` @@ -166,7 +166,7 @@ Enable and disable apps in your Telegraf configuration file (by 
default, `/etc/t ## influxdb -**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) +**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-influxdb) `influxdb_database.json` @@ -211,7 +211,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## Memcached (`memcached`) -**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) `memcached.json` @@ -231,7 +231,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mesos -**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) `mesos.json` @@ -246,7 +246,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mongodb -**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#mongodb) +**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-mongodb) `mongodb.json` @@ -258,7 +258,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mysql -**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#mysql) +**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) `mysql.json` @@ -269,7 +269,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## nginx -**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) `nginx.json` @@ -280,7 +280,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## nsq -**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) `nsq_channel.json` @@ -301,7 +301,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## phpfpm -**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#php-fpm) +**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#input-php-fpm) `phpfpm.json` @@ -313,7 +313,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## ping -**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#ping) +**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) `ping.json` @@ -322,7 +322,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## postgresql -**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) +**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) `postgresql.json` @@ -333,7 +333,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## rabbitmq -**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) `rabbitmq.json` @@ 
-344,7 +344,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## redis -**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#redis) +**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) `redis.json` @@ -356,7 +356,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## riak -**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#riak) +**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) `riak.json` @@ -375,7 +375,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### cpu -**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) `cpu.json` @@ -385,13 +385,13 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t `disk.json` -**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#disk) +**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) * "System - Disk used %" ### diskio -**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) `diskio.json` @@ -400,7 +400,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### mem -**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#mem) +**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) `mem.json` @@ -408,7 +408,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### net -**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#net) +**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#input-net) `net.json` @@ -417,7 +417,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### netstat -**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#netstat) +**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-netstat) `netstat.json` @@ -426,7 +426,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### processes -**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#processes) +**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) `processes.json` @@ -434,7 +434,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### procstat -**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat) `procstat.json` @@ -443,7 +443,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### system -**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat) `load.json` @@ -451,7 +451,7 @@ Enable and disable apps in your Telegraf configuration file (by default, 
`/etc/t ## varnish -**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#varnish) +**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#input-varnish) `varnish.json` @@ -460,7 +460,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## win_system -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters) `win_cpu.json` diff --git a/content/chronograf/v1.9/about_the_project/cla.md b/content/chronograf/v1.9/about_the_project/cla.md index 6e52e85d0..ea6d53e32 100644 --- a/content/chronograf/v1.9/about_the_project/cla.md +++ b/content/chronograf/v1.9/about_the_project/cla.md @@ -6,7 +6,8 @@ menu: chronograf_1_9: weight: 30 parent: About the project - url: https://www.influxdata.com/legal/cla/ + params: + url: https://www.influxdata.com/legal/cla/ --- Before you can contribute to the Chronograf project, you need to submit the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/) available on the InfluxData main site. diff --git a/content/chronograf/v1.9/about_the_project/contributing.md b/content/chronograf/v1.9/about_the_project/contributing.md index 506455300..44e9cc21b 100644 --- a/content/chronograf/v1.9/about_the_project/contributing.md +++ b/content/chronograf/v1.9/about_the_project/contributing.md @@ -6,7 +6,8 @@ menu: name: Contribute weight: 20 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md + params: + url: https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md --- See [Contributing to Chronograf](https://github.com/influxdata/chronograf/blob/master/CONTRIBUTING.md) in the Chronograf GitHub project to learn how you can contribute to the Chronograf project. diff --git a/content/chronograf/v1.9/about_the_project/licenses.md b/content/chronograf/v1.9/about_the_project/licenses.md index c5d14d69a..4215cbd96 100644 --- a/content/chronograf/v1.9/about_the_project/licenses.md +++ b/content/chronograf/v1.9/about_the_project/licenses.md @@ -6,7 +6,8 @@ menu: Name: Open source license weight: 40 parent: About the project - url: https://github.com/influxdata/chronograf/blob/master/LICENSE + params: + url: https://github.com/influxdata/chronograf/blob/master/LICENSE --- The [open source license for Chronograf](https://github.com/influxdata/chronograf/blob/master/LICENSE) is available in the Chronograf GitHub project. diff --git a/content/chronograf/v1.9/about_the_project/release-notes-changelog.md b/content/chronograf/v1.9/about_the_project/release-notes-changelog.md index e15d25d6b..806e4121d 100644 --- a/content/chronograf/v1.9/about_the_project/release-notes-changelog.md +++ b/content/chronograf/v1.9/about_the_project/release-notes-changelog.md @@ -8,6 +8,68 @@ menu: parent: About the project --- +## v1.9.4 [2022-03-28] + +### Features + +This release renames the Flux `Query Builder` to the Flux `Script Builder` (and adds improvements), and improves on Kapacitor integration. + +#### Flux Builder improvements + +- Rename the Flux `Query Builder` to the Flux `Script Builder`, and add new functionality including: + - Ability to load truncated tags and keys into the Flux Script Builder when connected to InfluxDB Cloud. + - Script Builder tag keys and tag values depend on a selected time range. 
+- Make aggregation function selection optional.
+- Autocomplete the built-in `v` object in the Flux editor.
+- Add a warning before overriding the existing Flux Editor script.
+
+#### Kapacitor integration improvements
+
+Improved pagination and performance of the UI when you have large numbers of TICKscripts and Flux tasks.
+
+- Move Flux Tasks to a separate page under the Alerting menu.
+- Add `TICKscripts Page` under the Alerting menu.
+- Optimize Alert Rules API.
+- Open `Alert Rule Builder` from the TICKscripts page.
+- Remove `Manage Tasks` page, add `Alert Rules` page.
+- Add alert rule options to not send an alert on state recovery and to send regardless of state change.
+
+### Bug fixes
+
+- Respect `BASE_PATH` when serving API docs.
+- Propagate InfluxQL errors to UI.
+- Rename Flux Query to Flux Script.
+- Repair time zone selector on Host page.
+- Report correct Chronograf version.
+- Show failure reason on Queries page.
+- Reorder Alerting side menu.
+
+## v1.9.3 [2022-02-02]
+
+{{% note %}} **NOTE:** We did not release version 1.9.2 due to a bug that impacted communication between the browser’s main thread and background workers. This bug has been fixed in the 1.9.3 release.
+{{% /note %}}
+
+### Features
+- Add ability to rename TICKscripts.
+- Add the following enhancements to the `InfluxDB Admin - Queries` tab:
+  - `CSV download` button.
+  - Rename `Running` column to `Duration`.
+  - Add `Status` column. When hovering over the `Duration` column, the status shows a `Kill` confirmation button.
+  - Modify the `CSV` export to include the `Status` column.
+- Upgrade to the new `google.golang.org/protobuf` library.
+
+### Bug fixes
+- Log the InfluxDB instance URL when a ping fails, making connection issues easier to identify.
+- Repair enforcement of one organization between multiple tabs.
+- Configure HTTP proxy from environment variables in HTTP clients. Improvements were made to:
+  - Token command within `chronoctl`
+  - OAuth client
+  - Kapacitor client
+  - Flux client
+
+#### Security
+- Upgrade `github.com/microcosm-cc/bluemonday` to resolve CVE-2021-42576.
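As a rough illustration of the HTTP proxy fix listed above: Chronograf's HTTP clients are Go clients, and Go's `net/http` honors the conventional proxy environment variables. A minimal sketch, assuming a proxy at `proxy.example.com:3128` (the proxy address and port are placeholders, not taken from the release notes):

```sh
# Conventional Go proxy environment variables (placeholder proxy address).
export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1

# With the variables exported, Chronograf's outbound clients
# (chronoctl token, OAuth, Kapacitor, Flux) pick up the proxy settings.
chronograf
```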
+ ## v1.9.1 [2021-10-08] ### Features diff --git a/content/chronograf/v1.9/administration/prebuilt-dashboards.md b/content/chronograf/v1.9/administration/prebuilt-dashboards.md index 2d7ed3000..606651a20 100644 --- a/content/chronograf/v1.9/administration/prebuilt-dashboards.md +++ b/content/chronograf/v1.9/administration/prebuilt-dashboards.md @@ -30,12 +30,12 @@ The Docker dashboard displays the following information: ### Plugins -- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#docker) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +- [`docker` plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) ## Kubernetes Node The Kubernetes Node dashboard displays the following information: @@ -53,7 +53,7 @@ The Kubernetes Node dashboard displays the following information: - K8s - Kubelet Memory Bytes ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Overview The Kubernetes Node dashboard displays the following information: @@ -72,7 +72,7 @@ The Kubernetes Node dashboard displays the following information: ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Kubernetes Pod The Kubernetes Pod dashboard displays the following information: @@ -87,7 +87,7 @@ The Kubernetes Pod dashboard displays the following information: - K8s - Pod TX Bytes/Second ### Plugins -- [kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes) +- [kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes) ## Riak The Riak dashboard displays the following information: @@ -101,7 +101,7 @@ The Riak dashboard displays the following information: - Riak - Read Repairs/Minute ### Plugins -- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#riak) +- [`riak` plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) ## Consul The Consul dashboard displays the following information: @@ -110,7 +110,7 @@ The Consul dashboard displays the following information: - Consul - Number of Warning Health Checks ### Plugins -- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +- [`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Consul Telemetry The Consul Telemetry dashboard displays the following information: @@ -125,7 +125,7 @@ The Consul Telemetry dashboard displays the following information: - Consul - Number of Serf Events ### Plugins -[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#consul) +[`consul` plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) ## Mesos @@ -140,7 +140,7 @@ The Mesos dashboard displays the following information: - Mesos Master Uptime ### Plugins -- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +- [`mesos` plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) ## RabbitMQ The RabbitMQ dashboard displays the following information: @@ -151,7 +151,7 
@@ The RabbitMQ dashboard displays the following information: ### Plugins -- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +- [`rabbitmq` plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) ## System @@ -170,14 +170,14 @@ The System dashboard displays the following information: ### Plugins -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#disk) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#processes) -- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#swap) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`disk` plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`processes` plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) +- [`swap` plugin](/{{< latest "telegraf" >}}/plugins/#input-swap) @@ -198,7 +198,7 @@ The VMware vSphere Overview dashboard gives an overview of your VMware vSphere C - VM CPU % Ready for :clustername: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vmware-vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vmware-vsphere) ## Apache The Apache dashboard displays the following information: @@ -221,12 +221,12 @@ The Apache dashboard displays the following information: ### Plugins -- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#apache) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) -- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#logparser) +- [`apache` plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) +- [`logparser` plugin](/{{< latest "telegraf" >}}/plugins/#input-logparser) ## ElasticSearch The ElasticSearch dashboard displays the following information: @@ -243,7 +243,7 @@ The ElasticSearch dashboard displays the following information: - ElasticSearch - JVM Heap Usage ### Plugins -- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +- [`elasticsearch` plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) ## InfluxDB @@ -272,12 +272,12 @@ The InfluxDB dashboard displays the following information: ### Plugins -- [`influxdb` plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) -- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#net) +- [`influxdb` plugin](/{{< latest "telegraf" 
>}}/plugins/#input-influxdb) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) +- [`net` plugin](/{{< latest "telegraf" >}}/plugins/#input-net) @@ -299,7 +299,7 @@ The Memcached dashboard displays the following information: - Memcached - Evictions/10 Seconds ### Plugins -- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +- [`memcached` plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) ## NSQ @@ -315,7 +315,7 @@ The NSQ dashboard displays the following information: - NSQ - Topic Egress ### Plugins -- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +- [`nsq` plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) ## PostgreSQL The PostgreSQL dashboard displays the following information: @@ -340,11 +340,11 @@ The PostgreSQL dashboard displays the following information: ### Plugins -- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) -- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#cpu) -- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +- [`postgresql` plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) +- [`cpu` plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) +- [`diskio` plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) ## HAProxy @@ -367,7 +367,7 @@ The HAProxy dashboard displays the following information: - HAProxy - Backend Error Responses/Second ### Plugins -- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#haproxy) +- [`haproxy` plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) ## NGINX @@ -379,7 +379,7 @@ The NGINX dashboard displays the following information: - NGINX - Active Client State ### Plugins -- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`nginx` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) ## Redis The Redis dashboard displays the following information: @@ -390,7 +390,7 @@ The Redis dashboard displays the following information: - Redis - Memory ### Plugins -- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#redis) +- [`redis` plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) ## VMware vSphere VMs @@ -406,7 +406,7 @@ The VMWare vSphere VMs dashboard gives an overview of your VMware vSphere virtua - Total Disk Latency for :vmname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## VMware vSphere Hosts @@ -422,7 +422,7 @@ The VMWare vSphere Hosts dashboard displays the following information: - Total Disk Latency for :esxhostname: ### Plugins -- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#vsphere) +- [`vsphere` plugin](/{{< latest "telegraf" >}}/plugins/#input-vsphere) ## PHPfpm The PHPfpm dashboard displays the following information: @@ -433,7 +433,7 @@ The PHPfpm dashboard displays the following information: - PHPfpm - Max Children Reached ### Plugins -- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +- [`phpfpm` plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) ## Win System The Win System dashboard displays the 
following information: @@ -445,7 +445,7 @@ The Win System dashboard displays the following information: - System - Load ### Plugins -- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#windows-services) +- [`win_services` plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-services) ## MySQL @@ -472,9 +472,9 @@ The MySQL dashboard displays the following information: - InnoDB Data ### Plugins -- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#mysql) -- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#system) -- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#mem) +- [`mySQL` plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) +- [`system` plugin](/{{< latest "telegraf" >}}/plugins/#input-system) +- [`mem` plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) ## Ping The Ping dashboard displays the following information: @@ -483,4 +483,4 @@ The Ping dashboard displays the following information: - Ping - Response Times (ms) ### Plugins -- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#ping) +- [`ping` plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) diff --git a/content/chronograf/v1.9/guides/advanced-kapacitor.md b/content/chronograf/v1.9/guides/advanced-kapacitor.md index dfc723f79..b49c4f9f0 100644 --- a/content/chronograf/v1.9/guides/advanced-kapacitor.md +++ b/content/chronograf/v1.9/guides/advanced-kapacitor.md @@ -7,7 +7,10 @@ menu: weight: 100 parent: Guides related: + - /{{< latest "kapacitor" >}}/introduction/getting-started/ + - /{{< latest "kapacitor" >}}/working/kapa-and-chrono/ - /{{< latest "kapacitor" >}}/working/flux/ + --- Chronograf provides a user interface for [Kapacitor](/{{< latest "kapacitor" >}}/), @@ -41,6 +44,7 @@ In the Databases tab: _See [supported duration units](/{{< latest "influxdb" "v1" >}}/query_language/spec/#duration-units)._ 4. Click **Save**. + If you set the retention policy's duration to one hour (`1h`), InfluxDB automatically deletes any alerts that occurred before the past hour. @@ -51,29 +55,33 @@ automatically deletes any alerts that occurred before the past hour. ### Manage Kapacitor TICKscripts -Chronograf lets you manage Kapacitor TICKscript tasks created in Kapacitor or in -Chronograf when [creating a Chronograf alert rule](/chronograf/v1.9/guides/create-alert-rules/). +Chronograf lets you view and manage all Kapacitor TICKscripts for a selected Kapacitor subscription using the **TICKscripts** page. -To manage Kapacitor TICKscript tasks in Chronograf, click -**{{< icon "alert" "v2">}} Alerts** in the left navigation bar. -On this page, you can: +1. To manage Kapacitor TICKscripts in Chronograf, click +**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **TICKscripts**. +Do one or more of the following: -- View Kapacitor TICKscript tasks. -- View TICKscript task activity. -- Create new TICKscript tasks. -- Update TICKscript tasks. -- Enable and disable TICKscript tasks. -- Delete TICKscript tasks. + - View Kapacitor TICKscript tasks. You can view up to 100 TICKscripts at a time. If you have more than 100 TICKscripts, the list will be paginated at the bottom of the page. You can also filter your TICKscripts by name. + - View TICKscript task type. + - Enable and disable TICKscript tasks. + - Create new TICKscript tasks. + - Update TICKscript tasks. + - Rename a TICKscript. Note, renaming a TICKscript updates the `var name` variable within the TICKscript. + - Delete TICKscript tasks. + - Create alerts using the Alert Rule Builder. 
See [Configure Chronograf alert rules](/chronograf/v1.9/guides/create-alert-rules/#configure-chronograf-alert-rules).
+
+2. Click **Exit** when finished.

### Manage Kapacitor Flux tasks

**Kapacitor 1.6+** supports Flux tasks.
-Chronograf lets you view and manage [Kapacitor Flux tasks](/{{< latest "kapacitor" >}}/working/flux/).
+Chronograf lets you view and manage Flux tasks for a selected Kapacitor subscription using the **Flux Tasks** page.

To manage Kapacitor Flux tasks in Chronograf, click
-**{{< icon "alert" "v2">}} Alerts** in the left navigation bar.
-On this page, you can:
+**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Flux Tasks**. Do one or more of the following:

-- View Kapacitor Flux tasks.
-- View Kapacitor Flux task activity.
-- Enable and disable Kapacitor Flux tasks.
-- Delete Kapacitor Flux tasks.
+  - View and filter Kapacitor Flux tasks by name.
+  - View Kapacitor Flux task activity.
+  - Enable and disable Kapacitor Flux tasks.
+  - Delete Kapacitor Flux tasks.
+
+For more information on Flux tasks and Kapacitor, see [Use Flux tasks with Kapacitor](/{{< latest "kapacitor" >}}/working/flux/).
\ No newline at end of file
diff --git a/content/chronograf/v1.9/guides/create-alert-rules.md b/content/chronograf/v1.9/guides/create-alert-rules.md
index 3a0e3e629..27fe05bc4 100644
--- a/content/chronograf/v1.9/guides/create-alert-rules.md
+++ b/content/chronograf/v1.9/guides/create-alert-rules.md
@@ -22,141 +22,74 @@ Common alerting use cases that can be managed using Chronograf include:
* Deadman switches.

Complex alerts and other tasks can be defined directly in Kapacitor as TICKscripts, but can be viewed and managed within Chronograf.
-
-This guide walks through creating a Chronograf alert rule that sends an alert message to an existing [Slack](https://slack.com/) channel whenever your idle CPU usage crosses the 80% threshold.
+To learn about managing Kapacitor TICKscripts in Chronograf, see [Manage Kapacitor TICKscripts](/{{< latest "chronograf" >}}/guides/advanced-kapacitor/#manage-kapacitor-tickscripts).

## Requirements

-[Getting started with Chronograf](/chronograf/v1.9/introduction/getting-started/) offers step-by-step instructions for each of the following requirements:
+[Get started with Chronograf](/{{< latest "chronograf" >}}/introduction/getting-started/) offers step-by-step instructions for each of the following requirements:

-* Downloaded and install the entire TICKstack (Telegraf, InfluxDB, Chronograf, and Kapacitor).
-* Configure Telegraf to collect data using the InfluxDB [system statistics](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin and write data to your InfluxDB instance.
-* [Create a Kapacitor connection in Chronograf](/chronograf/v1.9/introduction/installation/#connect-chronograf-to-kapacitor).
-* Slack is available and configured as an event handler in Chronograf. See [Configuring Chronograf alert endpoints](/chronograf/v1.9/guides/configuring-alert-endpoints/) for detailed configuration instructions.
+* Download and install the entire TICKstack (Telegraf, InfluxDB, Chronograf, and Kapacitor).
+* [Create a Kapacitor connection in Chronograf](/{{< latest "chronograf" >}}/introduction/installation/#connect-chronograf-to-kapacitor).

-## Configure Chronograf alert rules
+## Manage Chronograf alert rules

-Navigate to the **Manage Tasks** page under **Alerting** in the left navigation, then click **+ Build Alert Rule** in the top right corner.
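A side note on the TICKscript management described above: the same tasks are visible to the `kapacitor` CLI that ships with Kapacitor, which is a quick way to confirm that renaming a TICKscript in Chronograf rewrote its `var name` variable. A minimal sketch; the task ID is a placeholder:

```sh
# List every task Kapacitor knows about, including Chronograf-created ones.
kapacitor list tasks

# Print one task's definition; the TICKscript body includes the
# `var name` variable that Chronograf updates on rename.
kapacitor show <task-id>
```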
+Chronograf lets you create and manage Kapacitor alert rules. To manage alert rules:

-![Navigate to Manage Tasks](/img/chronograf/1-6-alerts-manage-tasks-nav.png)

+1. Click on **{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert Rules**.
+2. Do one of the following:
+   - [Create an alert rule](#create-an-alert-rule)
+   - [View alert history](#view-alert-history)
+   - [Enable and disable alert rules](#enable-and-disable-alert-rules)
+   - [Delete alert rules](#delete-alert-rules)

-The **Manage Tasks** page is used to create and edit your Chronograf alert rules.
-The steps below guide you through the process of creating a Chronograf alert rule.

-![Empty Rule Configuration](/img/chronograf/1-6-alerts-rule-builder.png)

-### Step 1: Name the alert rule

+## Create an alert rule

-Under **Name this Alert Rule** provide a name for the alert.
-For this example, use "Idle CPU Usage" as your alert name.

+From the **Alert Rules** page in Chronograf:

-### Step 2: Select the alert type

+1. Click **+ Build Alert Rule**.

-Choose from three alert types under the **Alert Types** section of the Rule Configuration page:

+2. Name the alert rule.

-_**Threshold**_
-Alert if data crosses a boundary.

+3. Choose the alert type:
+   - `Threshold` - alert if data crosses a boundary.
+   - `Relative` - alert if data changes relative to data in a different time range.
+   - `Deadman` - alert if InfluxDB receives no relevant data for a specified time duration.

-_**Relative**_
-Alert if data changes relative to data in a different time range.

+4. Select the time series data to use in the alert rule.
+   - Navigate through databases, measurements, tags, and fields to select all relevant data.

-_**Deadman**_
-Alert if InfluxDB receives no relevant data for a specified time duration.

+5. Define the rule conditions. Condition options are determined by the alert type.

-For this example, select the **Threshold** alert type.

+6. Select and configure the alert handler.
+   - The alert handler determines where the system sends the alert (the event handler).
+   - Chronograf supports several event handlers, and each handler has unique configurable options.
+   - Multiple alert handlers can be added to send alerts to multiple endpoints.

-### Step 3: Select the time series data

+7. Configure the alert message.
+   - The alert message is the text that accompanies an alert.
+   - Alert messages are templates that have access to alert data.
+   - Available templates appear below the message text field.
+   - As you type your alert message, clicking the data templates will insert them at the end of whatever text has been entered.

-Choose the time series data you want the Chronograf alert rule to use.
-Navigate through databases, measurements, fields, and tags to select the relevant data.

+8. Click **Save Rule**.

-In this example, select the `telegraf` [database](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#database), the `autogen` [retention policy](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#retention-policy-rp), the `cpu` [measurement](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#measurement), and the `usage_idle` [field](/{{< latest "influxdb" "v1" >}}/concepts/glossary/#field).
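Before saving a threshold rule like the one in the worked example above, it can help to eyeball the raw values you are alerting on. A minimal sketch with the `influx` CLI, reusing the `telegraf` database, `cpu` measurement, and `usage_idle` field named above (the 15-minute window mirrors the rule builder's default preview):

```sh
# Inspect recent idle-CPU values to pick a sensible threshold.
influx -database telegraf -execute \
  'SELECT mean("usage_idle") FROM "cpu" WHERE time > now() - 15m GROUP BY time(1m)'
```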
+## View alert history -![Select your data](/img/chronograf/1-6-alerts-time-series.png) +Chronograf lets you view your alert history on the **Alert History** page. -### Step 4: Define the rule condition +To view a history of your alerts, click on +**{{< icon "alert" "v2">}} Alerting** in the left navigation bar and select **Alert History**. +Do one of the following: -Define the threshold condition. -Condition options are determined by the [alert type](#step-2-select-the-alert-type). -For this example, the alert conditions are if `usage_idle` is less than `80`. - -![Create a condition](/img/chronograf/1-6-alerts-conditions.png) - -The graph shows a preview of the relevant data and the threshold number. -By default, the graph shows data from the past 15 minutes. -Adjusting the graph's time range is helpful when determining a reasonable threshold number based on your data. - -{{% note %}} -We set the threshold number to `80` for demonstration purposes. -Setting the threshold for idle CPU usage to a high number ensures that we'll be able to see the alert in action. -In practice, you'd set the threshold number to better match the patterns in your data and your alerting needs. -{{% /note %}} - -### Step 5: Select and configure the alert handler - -The **Alert Handler** section determines where the system sends the alert (the event handler) -Chronograf supports several event handlers. -Each handler has unique configurable options. - -For this example, choose the **slack** alert handler and enter the desired options. - -![Select the alert handler](/img/chronograf/1-6-alerts-configure-handlers.png) - -{{% note %}} -Multiple alert handlers can be added to send alerts to multiple endpoints. -{{% /note %}} - -### Step 6: Configure the alert message - -The alert message is the text that accompanies an alert. -Alert messages are templates that have access to alert data. -Available data templates appear below the message text field. -As you type your alert message, clicking the data templates will insert them at end of whatever text has been entered. - -In this example, use the alert message, `Your idle CPU usage is {{.Level}} at {{ index .Fields "value" }}.`. - -![Specify event handler and alert message](/img/chronograf/1-6-alerts-message.png) - -*View the Kapacitor documentation for more information about [message template data](/{{< latest "kapacitor" >}}/nodes/alert_node/#message).* - -### Step 7: Save the alert rule - -Click **Save Rule** in the top right corner and navigate to the **Manage Tasks** page to see your rule. -Notice that you can easily enable and disable the rule by toggling the checkbox in the **Enabled** column. - -![See the alert rule](/img/chronograf/1-6-alerts-view-rules.png) - -Next, move on to the section below to experience your alert rule in action. - -## View alerts in practice - -### Step 1: Create some load on your system - -The purpose of this step is to generate enough load on your system to trigger an alert. -More specifically, your idle CPU usage must dip below `80%`. -On the machine that's running Telegraf, enter the following command in the terminal to start some `while` loops: - -``` -while true; do i=0; done -``` - -Let it run for a few seconds or minutes before terminating it. -On most systems, kill the script by using `Ctrl+C`. - -### Step 2: View the alerts - -Go to the Slack channel that you specified in the previous section. -In this example, it's the `#chronocats` channel. 
-
-Assuming the first step was successful, `#ohnos` should reveal at least two alert messages:
-
-* The first alert message indicates that your idle CPU usage was `CRITICAL`, meaning it dipped below `80%`.
-* The second alert message indicates that your idle CPU usage returned to an `OK` level of `80%` or above.
-
-![See the alerts](/img/chronograf/1-6-alerts-slack-notifications.png)
-
-You can also see alerts on the **Alert History** page available under **Alerting** in the left navigation.
-
-![Chronograf alert history](/img/chronograf/1-6-alerts-history.png)
-
-That's it! You've successfully used Chronograf to configure an alert rule to monitor your idle CPU usage and send notifications to Slack.
+  - View a history of all triggered alerts.
+  - Filter alert history by type.
+  - View alert history for a specified time range.
diff --git a/content/chronograf/v1.9/guides/querying-data.md b/content/chronograf/v1.9/guides/querying-data.md
index 8e82c8bad..d6e63bfc0 100644
--- a/content/chronograf/v1.9/guides/querying-data.md
+++ b/content/chronograf/v1.9/guides/querying-data.md
@@ -48,33 +48,20 @@ For more information, see [InfluxQL support](/influxdb/cloud/query-data/influxql

Flux is InfluxData's new functional data scripting language designed for querying, analyzing, and acting on time series data. To learn more about Flux, see [Getting started with Flux](/{{< latest "influxdb" "v2" >}}/query-data/get-started).

-1. Open the Data Explorer and click **Add a Query**.
-2. To the right of the source dropdown above the graph placeholder, select **Flux** as the source type.
-   The **Schema**, **Functions**, and **Script** panes appear.
-3. Use the **Schema** pane to explore your available data. Click the **+** sign next to a bucket name to expand its content.
-4. Use the **Functions** pane to view details about the available Flux functions.
-5. Use the **Script** pane to enter your Flux query.
+1. Open the Data Explorer by clicking **Explore** in the left navigation bar.
+2. Select **Flux** as the source type.
+3. Click **Script Builder**. The **Schema**, **Script**, and **Flux Functions** panes appear.
+   - Use the **Schema** pane to explore your available data. Click the **{{< icon "plus" >}}** sign next to a bucket name to expand its content.
+   - Use the **Script** pane to enter and view your Flux script.
+   - Use the **Flux Functions** pane to view details about the available Flux functions.
+4. In the **Script Builder**, select a bucket, measurements and tags, fields, and an aggregate function. Click **{{< icon "plus" >}} Load More** to expand any truncated lists. You can also choose from a variety of time ranges for your schema data.
+5. When you are finished building your script, click **Submit**.
+6. Click **Script Editor** to view and edit your query.
+7. If you change the script in the **Script Builder** and click **Submit** again, Chronograf warns you that submitting will override the script in the Flux editor and that the overridden script cannot be recovered.

- * To get started with your query, click the **Script Wizard**. In the wizard, you can select a bucket, measurement, fields and an aggregate.
- - - - For example, if you make the above selections, the wizard inserts the following script: - - ```js - from(bucket: "telegraf/autogen") - |> range(start: dashboardTime) - |> filter(fn: (r) => r._measurement == "cpu" and (r._field == "usage_system")) - |> window(every: autoInterval) - |> toFloat() - |> percentile(percentile: 0.95) - |> group(except: ["_time", "_start", "_stop", "_value"]) - ``` - * Alternatively, you can enter your entire script manually. - -6. Click **Run Script** in the top bar of the **Script** pane. You can then preview your graph in the above pane. - -## Visualize your query + ## Visualize your query Select the **Visualization** tab at the top of the **Data Explorer**. For details about all of the available visualization options, see [Visualization types in Chronograf](/chronograf/v1.9/guides/visualization-types/). diff --git a/content/chronograf/v1.9/guides/using-precreated-dashboards.md b/content/chronograf/v1.9/guides/using-precreated-dashboards.md index 4826a4a88..956495a62 100644 --- a/content/chronograf/v1.9/guides/using-precreated-dashboards.md +++ b/content/chronograf/v1.9/guides/using-precreated-dashboards.md @@ -71,7 +71,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## apache -**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#apache-http-server) +**Required Telegraf plugin:** [Apache input plugin](/{{< latest "telegraf" >}}/plugins/#input-apache) `apache.json` @@ -81,7 +81,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## consul -**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#consul) +**Required Telegraf plugin:** [Consul input plugin](/{{< latest "telegraf" >}}/plugins/#input-consul) `consul_http.json` @@ -101,7 +101,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## docker -**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#docker) +**Required Telegraf plugin:** [Docker input plugin](/{{< latest "telegraf" >}}/plugins/#input-docker) `docker.json` @@ -121,7 +121,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## elasticsearch -**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#elasticsearch) +**Required Telegraf plugin:** [Elasticsearch input plugin](/{{< latest "telegraf" >}}/plugins/#input-elasticsearch) `elasticsearch.json` @@ -138,7 +138,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## haproxy -**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#haproxy) +**Required Telegraf plugin:** [HAProxy input plugin](/{{< latest "telegraf" >}}/plugins/#input-haproxy) `haproxy.json` @@ -160,7 +160,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## iis -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters) `win_websvc.json` @@ -168,7 +168,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## influxdb -**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" >}}/plugins/#influxdb) +**Required Telegraf plugin:** [InfluxDB input plugin](/{{< latest "telegraf" 
>}}/plugins/#input-influxdb) `influxdb_database.json` @@ -213,7 +213,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## Memcached (`memcached`) -**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#memcached) +**Required Telegraf plugin:** [Memcached input plugin](/{{< latest "telegraf" >}}/plugins/#input-memcached) `memcached.json` @@ -233,7 +233,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mesos -**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#mesos) +**Required Telegraf plugin:** [Mesos input plugin](/{{< latest "telegraf" >}}/plugins/#input-mesos) `mesos.json` @@ -248,7 +248,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mongodb -**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#mongodb) +**Required Telegraf plugin:** [MongoDB input plugin](/{{< latest "telegraf" >}}/plugins/#input-mongodb) `mongodb.json` @@ -260,7 +260,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## mysql -**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#mysql) +**Required Telegraf plugin:** [MySQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-mysql) `mysql.json` @@ -271,7 +271,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## nginx -**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#nginx) +**Required Telegraf plugin:** [NGINX input plugin](/{{< latest "telegraf" >}}/plugins/#input-nginx) `nginx.json` @@ -282,7 +282,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## nsq -**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#nsq) +**Required Telegraf plugin:** [NSQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-nsq) `nsq_channel.json` @@ -303,7 +303,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## phpfpm -**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#php-fpm) +**Required Telegraf plugin:** [PHPfpm input plugin](/{{< latest "telegraf" >}}/plugins/#input-php-fpm) `phpfpm.json` @@ -315,7 +315,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## ping -**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#ping) +**Required Telegraf plugin:** [Ping input plugin](/{{< latest "telegraf" >}}/plugins/#input-ping) `ping.json` @@ -324,7 +324,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## postgresql -**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#postgresql) +**Required Telegraf plugin:** [PostgreSQL input plugin](/{{< latest "telegraf" >}}/plugins/#input-postgresql) `postgresql.json` @@ -335,7 +335,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## rabbitmq -**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#rabbitmq) +**Required Telegraf plugin:** [RabbitMQ input plugin](/{{< latest "telegraf" >}}/plugins/#input-rabbitmq) `rabbitmq.json` @@ -346,7 +346,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## redis -**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#redis) 
+**Required Telegraf plugin:** [Redis input plugin](/{{< latest "telegraf" >}}/plugins/#input-redis) `redis.json` @@ -358,7 +358,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## riak -**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#riak) +**Required Telegraf plugin:** [Riak input plugin](/{{< latest "telegraf" >}}/plugins/#input-riak) `riak.json` @@ -377,7 +377,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### cpu -**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#cpu) +**Required Telegraf plugin:** [CPU input plugin](/{{< latest "telegraf" >}}/plugins/#input-cpu) `cpu.json` @@ -387,13 +387,13 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t `disk.json` -**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#disk) +**Required Telegraf plugin:** [Disk input plugin](/{{< latest "telegraf" >}}/plugins/#input-disk) * "System - Disk used %" ### diskio -**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#diskio) +**Required Telegraf plugin:** [DiskIO input plugin](/{{< latest "telegraf" >}}/plugins/#input-diskio) `diskio.json` @@ -402,7 +402,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### mem -**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#mem) +**Required Telegraf plugin:** [Mem input plugin](/{{< latest "telegraf" >}}/plugins/#input-mem) `mem.json` @@ -410,7 +410,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### net -**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#net) +**Required Telegraf plugin:** [Net input plugin](/{{< latest "telegraf" >}}/plugins/#input-net) `net.json` @@ -419,7 +419,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### netstat -**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#netstat) +**Required Telegraf plugin:** [Netstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-netstat) `netstat.json` @@ -428,7 +428,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### processes -**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#processes) +**Required Telegraf plugin:** [Processes input plugin](/{{< latest "telegraf" >}}/plugins/#input-processes) `processes.json` @@ -436,7 +436,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### procstat -**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat) `procstat.json` @@ -445,7 +445,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ### system -**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#procstat) +**Required Telegraf plugin:** [Procstat input plugin](/{{< latest "telegraf" >}}/plugins/#input-procstat) `load.json` @@ -453,7 +453,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## varnish -**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#varnish) +**Required Telegraf plugin:** [Varnish](/{{< latest "telegraf" >}}/plugins/#input-varnish) 
`varnish.json` @@ -462,7 +462,7 @@ Enable and disable apps in your Telegraf configuration file (by default, `/etc/t ## win_system -**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#windows-performance-counters) +**Required Telegraf plugin:** [Windows Performance Counters input plugin](/{{< latest "telegraf" >}}/plugins/#input-windows-performance-counters) `win_cpu.json` diff --git a/content/enterprise_influxdb/v1.7/administration/backup-and-restore.md b/content/enterprise_influxdb/v1.7/administration/backup-and-restore.md index cf57a1354..a6bee77ef 100644 --- a/content/enterprise_influxdb/v1.7/administration/backup-and-restore.md +++ b/content/enterprise_influxdb/v1.7/administration/backup-and-restore.md @@ -380,31 +380,35 @@ Use the InfluxDB `influx_inspect export` and `influx -import` commands to create ### Export data -Use the [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include: +Use the [`influx_inspect export` command](/influxdb/v1.7/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include: - Exporting all, or specific, databases - Filtering with starting and ending timestamps - Using gzip compression for smaller files and faster exports -For details on optional settings and usage, see [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export). +For details on optional settings and usage, see [`influx_inspect export` command](/influxdb/v1.7/tools/influx_inspect#export). In the following example, the database is exported filtered to include only one day and compressed for optimal speed and file size. ```bash -influx_inspect export -database myDB -compress -start 2019-05-19T00:00:00.000Z -end 2019-05-19T23:59:59.999Z +influx_inspect export \ + -database myDB \ + -compress \ + -start 2019-05-19T00:00:00.000Z \ + -end 2019-05-19T23:59:59.999Z ``` ### Import data -After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/{{< latest "influxdb" "v1" >}}/tools/shell/#import). +After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/influxdb/v1.7/tools/shell/#import). In the following example, the compressed data file is imported into the specified database. ```bash -influx -import -database myDB -compress +influx -import -database myDB -compressed ``` -For details on using the `influx -import` command, see [Import data from a file with -import](/{{< latest "influxdb" "v1" >}}/tools/shell/#import-data-from-a-file-with-import). +For details on using the `influx -import` command, see [Import data from a file with -import](/influxdb/v1.7/tools/shell/#import-data-from-a-file-with-import). 
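+
+For example, a fuller sketch of the import step (an illustration only: the file name `myDB_export.gz` is hypothetical, assuming the export above was written there with the `influx_inspect export -out` option; `-path` points the `influx` CLI at the file to import):
+
+```bash
+# Import a gzipped line-protocol export into the target database
+influx -import -path=myDB_export.gz -compressed -database myDB
+```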
## Take AWS snapshots as backup diff --git a/content/enterprise_influxdb/v1.8/administration/backup-and-restore.md b/content/enterprise_influxdb/v1.8/administration/backup-and-restore.md index c876579b6..49155ad01 100644 --- a/content/enterprise_influxdb/v1.8/administration/backup-and-restore.md +++ b/content/enterprise_influxdb/v1.8/administration/backup-and-restore.md @@ -27,7 +27,7 @@ Depending on the volume of data to be protected and your application requirement - [Backup and restore utilities](#backup-and-restore-utilities) — For most applications - [Exporting and importing data](#exporting-and-importing-data) — For large datasets -> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/{{< latest "influxdb" "v1" >}}/administration/backup_and_restore/) to: +> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/influxdb/v1.8/administration/backup_and_restore/) to: > > - Restore InfluxDB Enterprise backup files to InfluxDB OSS instances. > - Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters. @@ -434,31 +434,35 @@ As an alternative to the standard backup and restore utilities, use the InfluxDB ### Exporting data -Use the [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include: +Use the [`influx_inspect export` command](/influxdb/v1.8/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include: - Exporting all, or specific, databases - Filtering with starting and ending timestamps - Using gzip compression for smaller files and faster exports -For details on optional settings and usage, see [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export). +For details on optional settings and usage, see [`influx_inspect export` command](/influxdb/v1.8/tools/influx_inspect#export). In the following example, the database is exported filtered to include only one day and compressed for optimal speed and file size. ```bash -influx_inspect export -database myDB -compress -start 2019-05-19T00:00:00.000Z -end 2019-05-19T23:59:59.999Z +influx_inspect export \ + -database myDB \ + -compress \ + -start 2019-05-19T00:00:00.000Z \ + -end 2019-05-19T23:59:59.999Z ``` ### Importing data -After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/{{< latest "influxdb" "v1" >}}/tools/shell/#import). +After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/influxdb/v1.8/tools/shell/#import). In the following example, the compressed data file is imported into the specified database. ```bash -influx -import -database myDB -compress +influx -import -database myDB -compressed ``` -For details on using the `influx -import` command, see [Import data from a file with -import](/{{< latest "influxdb" "v1" >}}/tools/shell/#import-data-from-a-file-with-import). +For details on using the `influx -import` command, see [Import data from a file with -import](/influxdb/v1.8/tools/shell/#import-data-from-a-file-with-import). 
### Example diff --git a/content/enterprise_influxdb/v1.8/install-and-deploy/production_installation/_index.md b/content/enterprise_influxdb/v1.8/install-and-deploy/production_installation/_index.md index d10d3461f..bcab1bfb3 100644 --- a/content/enterprise_influxdb/v1.8/install-and-deploy/production_installation/_index.md +++ b/content/enterprise_influxdb/v1.8/install-and-deploy/production_installation/_index.md @@ -15,5 +15,3 @@ Complete the following steps to install an InfluxDB Enterprise cluster in your o 1. [Install InfluxDB Enterprise meta nodes](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/meta_node_installation/) 2. [Install InfluxDB data nodes](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/data_node_installation/) 3. [Install Chronograf](/enterprise_influxdb/v1.8/install-and-deploy/production_installation/chrono_install/) - -> **Note:** If you're looking for cloud infrastructure and services, check out how to deploy InfluxDB Enterprise (production-ready) on a cloud provider of your choice: [Azure](/enterprise_influxdb/v1.8/install-and-deploy/deploying/azure/), [GCP](/enterprise_influxdb/v1.8/install-and-deploy/deploying/google-cloud-platform/), or [AWS](/enterprise_influxdb/v1.8/install-and-deploy/deploying/aws/). diff --git a/content/enterprise_influxdb/v1.9/_index.md b/content/enterprise_influxdb/v1.9/_index.md index ba842d182..5bc11bfd9 100644 --- a/content/enterprise_influxdb/v1.9/_index.md +++ b/content/enterprise_influxdb/v1.9/_index.md @@ -30,7 +30,7 @@ and get started! ## Next steps -- [Install and deploy](/enterprise_influxdb/v1.9/install-and-deploy/) +- [Install and deploy](/enterprise_influxdb/v1.9/introduction/installation/) - Review key [concepts](/enterprise_influxdb/v1.9/concepts/) - [Get started](/enterprise_influxdb/v1.9/introduction/getting-started/) diff --git a/content/enterprise_influxdb/v1.9/about-the-project/release-notes-changelog.md b/content/enterprise_influxdb/v1.9/about-the-project/release-notes-changelog.md index 2c062dad3..4c8c16880 100644 --- a/content/enterprise_influxdb/v1.9/about-the-project/release-notes-changelog.md +++ b/content/enterprise_influxdb/v1.9/about-the-project/release-notes-changelog.md @@ -9,6 +9,52 @@ menu: parent: About the project --- +## 1.9.6 [2022-02-16] + +{{% note %}} InfluxDB Enterprise offerings are no longer available on AWS, Azure, and GCP marketplaces. Please [contact Sales](https://www.influxdata.com/contact-sales/) to request a license key to [install InfluxDB Enterprise in your own environment](/enterprise_influxdb/v1.9/introduction/installation/). +{{% /note %}} + +### Features + +#### Backup enhancements + +- **Revert damaged meta nodes to a previous state**: Add the `-meta-only-overwrite-force` option to [`influxd-ctl restore`](/enterprise_influxdb/v1.9/tools/influxd-ctl/#restore) to revert damaged meta nodes in an existing cluster to a previous state when restoring an InfluxDB Enterprise database. + +- **Estimate the size of a backup**: Add the `-estimate` option to [`influxd-ctl backup`](/enterprise_influxdb/v1.9/tools/influxd-ctl/#backup) to estimate the size of a full or incremental backup and provide progress messages (see the example below). The option prints the number of files to back up, the percentage of bytes transferred for each file (organized by shard), and the estimated time remaining to complete the backup.
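+
+  For example, a minimal sketch of a full backup with size estimation and progress messages (the `./mybackup` directory name is illustrative):
+
+  ```bash
+  influxd-ctl backup -estimate -strategy=full ./mybackup
+  ```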
+ +#### Logging enhancements + +- **Log active queries when a process is terminated**: Add the [`termination-query-log`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#termination-query-log) configuration option. When set to `true`, all running queries are printed to the log when a data node process receives a `SIGTERM` (for example, a Kubernetes process exceeds the container memory limit or the process is terminated). + +- **Log details of HTTP calls to meta nodes**. When [`cluster-tracing`](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#cluster-tracing) is enabled, all API calls to meta nodes are now logged with details that provide an audit trail, including the IP address of the caller, the specific API being invoked, the action being invoked, and more. + +### Maintenance updates + +- Update to [Flux v0.140](/flux/v0.x/release-notes/#v01400-2021-11-22). +- Upgrade to Go 1.17. +- Upgrade `protobuf` library. + +### Bug fixes + +#### Data + +- Adjust shard start and end times to avoid overlaps in existing shards. This resolves issues with existing shards (truncated or not) that have a different shard duration than the current default. +- `DROP SHARD` now successfully ignores "shard not found" errors. + +#### Errors + +- Fix panic when running `influxd config`. +- Ensure `influxd-ctl entropy` commands use the correct TLS settings. + +#### Profiling + +- Resolve issue to enable [mutex profiling](/enterprise_influxdb/v1.9/tools/api/#debugpprof-http-endpoint). + +#### `influxd-ctl` updates + +- Improve [`influxd-ctl join`](/enterprise_influxdb/v1.9/tools/influxd-ctl/#join) robustness and provide better error messages on failure. +- Add a user-friendly error message when accessing a TLS-enabled server without TLS enabled on the client. + ## v1.9.5 [2021-10-11] {{% note %}} @@ -67,7 +113,7 @@ Changes below are included in InfluxDB Enterprise 1.9.5. - Add [configurable password hashing](/enterprise_influxdb/v1.9/administration/configure-password-hashing/) with `bcrypt` and `pbkdf2` support. - Add retry with exponential back-off to anti-entropy repair. - Add logging to compaction. -- Add [`total-buffer-bytes`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#total-buffer-bytes--0) configuration parameter to subscriptions. +- Add [`total-buffer-bytes`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#total-buffer-bytes) configuration parameter to subscriptions. This option is intended to help alleviate out-of-memory errors. - Update to [Flux v0.120.1.](/influxdb/v2.0/reference/release-notes/flux/#v01201-2021-07-06) @@ -99,7 +145,7 @@ in that there is no corresponding InfluxDB OSS release. These queries now return a `cardinality estimation` column header where before they returned `count`. - Improve diagnostics for license problems. Add [license expiration date](/enterprise_influxdb/v1.9/features/clustering-features/#entitlements) to `debug/vars` metrics. -- Add improved [ingress metrics](/enterprise_influxdb/v1.9/administration/config-data-nodes/#ingress-metric-by-measurement-enabled--false) to track points written by measurement and by login. +- Add improved [ingress metrics](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#ingress-metric-by-measurement-enabled) to track points written by measurement and by login. Allow for collection of statistics regarding points, values, and new series written per measurement and by login. This data is collected and exposed at the data node level.
With these metrics you can, for example: @@ -107,7 +153,7 @@ in that there is no corresponding InfluxDB OSS release. monitor the growth of series within a measurement, and track what user credentials are being used to write data. - Support authentication for Kapacitor via LDAP. -- Support for [configuring Flux query resource usage](/enterprise_influxdb/v1.9/administration/config-data-nodes/#flux-controller) (concurrency, memory, etc.). +- Support for [configuring Flux query resource usage](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#flux-controller) (concurrency, memory, etc.). - Upgrade to [Flux v0.113.0](/influxdb/v2.0/reference/release-notes/flux/#v01130-2021-04-21). - Update Prometheus remote protocol to allow streamed reading. - Improve performance of sorted merge iterator. diff --git a/content/enterprise_influxdb/v1.9/administration/backup-and-restore.md b/content/enterprise_influxdb/v1.9/administration/backup-and-restore.md index 8e0441e1b..8d2d139d4 100644 --- a/content/enterprise_influxdb/v1.9/administration/backup-and-restore.md +++ b/content/enterprise_influxdb/v1.9/administration/backup-and-restore.md @@ -91,7 +91,7 @@ for a complete list of the global `influxd-ctl` options. databases, continuous queries, retention policies. Shards are not exported. - `-full`: perform a full backup. Deprecated in favour of `-strategy=full` - `-rp <rp_name>`: the name of the single retention policy to back up (must specify `-db` with `-rp`) -- `-shard <shard_id>`: the ID of the single shard to back up +- `-shard <shard_id>`: the ID of the single shard to back up (cannot be used with `-db`) ### Backup examples @@ -176,9 +176,9 @@ $ ls ./telegrafbackup 20160803T222811Z.manifest 20160803T222811Z.meta 20160803T222811Z.s4.tar.gz ``` -#### Perform a metastore only backup +#### Perform a metadata-only backup -Perform a meta store only backup into a specific directory with the command below. +Perform a metadata-only backup into a specific directory with the command below. The directory must already exist. ```bash @@ -316,8 +316,8 @@ Restored from my-incremental-backup/ in 83.892591ms, transferred 588800 bytes ##### Restore from a metadata backup -In this example, the `restore` command restores an metadata backup stored -in the `metadata-backup/` directory. +In this example, the `restore` command restores a [metadata backup](#perform-a-metadata-only-backup) +stored in the `metadata-backup/` directory. ```bash # Syntax @@ -402,6 +402,29 @@ time written 1970-01-01T00:00:00Z 471 ``` +##### Restore (overwrite) metadata from a full or incremental backup to fix damaged metadata + +1. Identify a backup with uncorrupted metadata from which to restore. +2. Restore from backup with `-meta-only-overwrite-force`. + + {{% warn %}} + Only use the `-meta-only-overwrite-force` flag to restore from backups of the target cluster. + If you use this flag with metadata from a different cluster, you will lose data + (since metadata includes shard assignments to data nodes). + {{% /warn %}} + + ```bash + # Syntax + influxd-ctl restore -meta-only-overwrite-force <backup-directory> + + # Example + $ influxd-ctl restore -meta-only-overwrite-force my-incremental-backup/ + Using backup directory: my-incremental-backup/ + Using meta backup: 20200101T000000Z.meta + Restoring meta data... Done.
Restored in 21.373019ms, 1 shards mapped + Restored from my-incremental-backup/ in 19.2311ms, transferred 588 bytes + ``` + #### Common issues with restore ##### Restore writes information not part of the original backup @@ -446,7 +469,11 @@ For details on optional settings and usage, see [`influx_inspect export` command In the following example, the database is exported filtered to include only one day and compressed for optimal speed and file size. ```bash -influx_inspect export -database myDB -compress -start 2019-05-19T00:00:00.000Z -end 2019-05-19T23:59:59.999Z +influx_inspect export \ + -database myDB \ + -compress \ + -start 2019-05-19T00:00:00.000Z \ + -end 2019-05-19T23:59:59.999Z ``` ### Importing data @@ -456,7 +483,7 @@ After exporting the data in line protocol format, you can import the data using In the following example, the compressed data file is imported into the specified database. ```bash -influx -import -database myDB -compress +influx -import -database myDB -compressed ``` For details on using the `influx -import` command, see [Import data from a file with -import](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/#import-data-from-a-file-with--import). diff --git a/content/enterprise_influxdb/v1.9/administration/configure/anti-entropy/_index.md b/content/enterprise_influxdb/v1.9/administration/configure/anti-entropy/_index.md index eab4bb7ef..3f8bb3b87 100644 --- a/content/enterprise_influxdb/v1.9/administration/configure/anti-entropy/_index.md +++ b/content/enterprise_influxdb/v1.9/administration/configure/anti-entropy/_index.md @@ -34,7 +34,7 @@ If data inconsistencies are detected among shards in a shard group, [invoke the In the repair process, the Anti-Entropy service will sync the necessary updates from other shards within a shard group. -By default, the service performs consistency checks every 5 minutes. This interval can be modified in the [`anti-entropy.check-interval`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#check-interval-5m) configuration setting. +By default, the service performs consistency checks every 5 minutes. This interval can be modified in the [`anti-entropy.check-interval`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#check-interval) configuration setting. The Anti-Entropy service can only address missing or inconsistent shards when there is at least one copy of the shard available. @@ -178,7 +178,7 @@ until it either shows as being in the queue, being repaired, or no longer in the ## Configuration -The configuration settings for the Anti-Entropy service are described in [Anti-Entropy settings](/enterprise_influxdb/v1.9/administration/config-data-nodes#anti-entropy) section of the data node configuration. +The configuration settings for the Anti-Entropy service are described in the [Anti-Entropy settings](/enterprise_influxdb/v1.9/administration/config-data-nodes/#anti-entropy-ae-settings) section of the data node configuration. To enable the Anti-Entropy service, change the default value of the `[anti-entropy].enabled = false` setting to `true` in the `influxdb.conf` file of each of your data nodes.
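+
+For example, a minimal sketch of the corresponding stanza in each data node's `influxdb.conf`:
+
+```toml
+[anti-entropy]
+  enabled = true
+```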
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/config-data-nodes.md b/content/enterprise_influxdb/v1.9/administration/configure/config-data-nodes.md index 1eb0dc409..1ed1d8fd7 100644 --- a/content/enterprise_influxdb/v1.9/administration/configure/config-data-nodes.md +++ b/content/enterprise_influxdb/v1.9/administration/configure/config-data-nodes.md @@ -49,25 +49,33 @@ All commented-out settings will be determined by the internal defaults. ## Global settings -#### `reporting-disabled = false` +#### `reporting-disabled` + +Default is `false`. Once every 24 hours InfluxDB Enterprise will report usage data to usage.influxdata.com. The data includes a random ID, os, arch, version, the number of series and other usage data. No data from user databases is ever transmitted. Change this option to true to disable reporting. -#### `bind-address = ":8088"` +#### `bind-address` + +Default is `":8088"`. The TCP bind address used by the RPC service for inter-node communication and [backup and restore](/enterprise_influxdb/v1.9/administration/backup-and-restore/). Environment variable: `INFLUXDB_BIND_ADDRESS` -#### `hostname = "localhost"` +#### `hostname` + +Default is `"localhost"`. The hostname of the [data node](/enterprise_influxdb/v1.9/concepts/glossary/#data-node). This must be resolvable by all other nodes in the cluster. Environment variable: `INFLUXDB_HOSTNAME` -#### `gossip-frequency = "3s"` +#### `gossip-frequency` + +Default is `"3s"`. How often to update the cluster with this node's internal status. @@ -81,7 +89,9 @@ Environment variable: `INFLUXDB_GOSSIP_FREQUENCY` The `[enterprise]` section contains the parameters for the meta node's registration with the [InfluxDB Enterprise License Portal](https://portal.influxdata.com/). -#### `license-key = ""` +#### `license-key` + +Default is `""`. The license key created for you on [InfluxPortal](https://portal.influxdata.com). The meta node transmits the license key to [portal.influxdata.com](https://portal.influxdata.com) over port 80 or port 443 and receives a temporary JSON license file in return. The server caches the license file locally. @@ -98,7 +108,9 @@ mutually exclusive and one must remain set to the empty string. Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY` -#### `license-path = ""` +#### `license-path` + +Default is `""`. The local path to the permanent JSON license file that you received from InfluxData for instances that do not have access to the internet. The data process will only function for a limited time without a valid license file. @@ -124,7 +136,9 @@ Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH` Settings related to how the data nodes interact with the meta nodes. -#### `dir = "/var/lib/influxdb/meta"` +#### `dir` + +Default is `"/var/lib/influxdb/meta"`. The directory where the cluster metadata is stored. @@ -132,39 +146,50 @@ The directory where the cluster metadata is stored. Environment variable: `INFLUXDB_META_DIR` -#### `meta-tls-enabled = false` +#### `meta-tls-enabled` + +Default is `false`. Whether to use TLS when connecting to meta nodes. -Set to `true` to if [`https-enabled`](#https-enabled-false) is set to `true`. +Set to `true` if [`https-enabled`](#https-enabled) is set to `true`. Environment variable: `INFLUXDB_META_META_TLS_ENABLED` -#### `meta-insecure-tls = false` +#### `meta-insecure-tls` + +Default is `false`. Allows insecure TLS connections to meta nodes. This is useful when testing with self-signed certificates.
-Set to `true` to allow the data node to accept self-signed certificates if [`https-enabled`](#https-enabled-false) is set to `true`. +Set to `true` to allow the data node to accept self-signed certificates if [`https-enabled`](#https-enabled) is set to `true`. Environment variable: `INFLUXDB_META_META_INSECURE_TLS` -#### `meta-auth-enabled = false` +#### `meta-auth-enabled` + +Default is `false`. This setting must have the same value as the meta nodes' `[meta] auth-enabled` configuration. -Set to `true` if [`auth-enabled`](#auth-enabled-false) is set to `true` in the meta node configuration files. +Set to `true` if [`auth-enabled`](#auth-enabled) is set to `true` in the meta node configuration files. For JWT authentication, also see the [`meta-internal-shared-secret`](#meta-internal-shared-secret) configuration option. Environment variable: `INFLUXDB_META_META_AUTH_ENABLED` -#### `meta-internal-shared-secret = ""` +#### `meta-internal-shared-secret` + +Default is `""`. The shared secret used by the internal API for JWT authentication between InfluxDB nodes. -This value must be the same as the [`internal-shared-secret`](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#internal-shared-secret) specified in the meta node configuration file. +This value must be the same as the [`internal-shared-secret`](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#internal-shared-secret) +specified in the meta node configuration file. Environment variable: `INFLUXDB_META_META_INTERNAL_SHARED_SECRET` -#### `retention-autocreate = true` +#### `retention-autocreate` + +Default is `true`. Automatically creates a default [retention policy](/enterprise_influxdb/v1.9/concepts/glossary/#retention-policy-rp) (RP) when the system creates a database. The default RP (`autogen`) has an infinite duration, a shard group duration of seven days, and a replication factor set to the number of data nodes in the cluster. @@ -173,28 +198,34 @@ Set this option to `false` to prevent the system from creating the `autogen` RP Environment variable: `INFLUXDB_META_RETENTION_AUTOCREATE` -#### `logging-enabled = true` +#### `logging-enabled` + +Default is `true`. Whether log messages are printed for the meta service. Environment variable: `INFLUXDB_META_LOGGING_ENABLED` -#### `password-hash = bcrypt` +#### `password-hash` + +Default is `bcrypt`. Configures password hashing algorithm. Supported options are: `bcrypt` (the default), `pbkdf2-sha256`, and `pbkdf2-sha512` -This setting must have the same value as the meta node option [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#password-hash--bcrypt). +This setting must have the same value as the meta node option [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#password-hash). -For detailed configuration information, see [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#password-hash--bcrypt). +For detailed configuration information, see [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#password-hash). Environment variable: `INFLUXDB_META_PASSWORD_HASH` -#### `ensure-fips = false` +#### `ensure-fips` + +Default is `false`. -When `true`, enables a FIPS-readiness check on startup. Default is `false`. +When `true`, enables a FIPS-readiness check on startup. -For detailed configuration information, see [`meta.ensure-fips`](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#ensure-fips--false).
+For detailed configuration information, see [`meta.ensure-fips`](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#ensure-fips). Environment variable: `INFLUXDB_META_ENSURE_FIPS` @@ -208,33 +239,43 @@ Controls where the actual shard data for InfluxDB lives and how it is compacted "dir" may need to be changed to a suitable place for your system. The defaults should work for most systems. -#### `dir = "/var/lib/influxdb/data"` +#### `dir` + +Default is `"/var/lib/influxdb/data"`. The directory where the TSM storage engine stores TSM (read-optimized) files. Environment variable: `INFLUXDB_DATA_DIR` -#### `wal-dir = "/var/lib/influxdb/wal"` +#### `wal-dir` + +Default is `"/var/lib/influxdb/wal"`. The directory where the TSM storage engine stores WAL (write-optimized) files. Environment variable: `INFLUXDB_DATA_WAL_DIR` -#### `trace-logging-enabled = false` +#### `trace-logging-enabled` + +Default is `false`. Trace logging provides more verbose output around the TSM engine. Turning this on can provide more useful output for debugging TSM engine issues. Environmental variable: `INFLUXDB_DATA_TRACE_LOGGING_ENABLED` -#### `query-log-enabled = true` +#### `query-log-enabled` + +Default is `true`. Whether queries should be logged before execution. Very useful for troubleshooting, but will log any sensitive data contained within a query. Environment variable: `INFLUXDB_DATA_QUERY_LOG_ENABLED` -#### `wal-fsync-delay = "0s"` +#### `wal-fsync-delay` + +Default is `"0s"`. The amount of time that a write waits before fsyncing. Use a duration greater than 0 to batch up multiple fsync calls. @@ -244,14 +285,18 @@ InfluxData recommends values ranging from `0ms` to `100ms` for non-SSD disks. Environment variable: `INFLUXDB_DATA_WAL_FSYNC_DELAY` -#### `ingress-metric-by-measurement-enabled = false` +#### `ingress-metric-by-measurement-enabled` + +Default is `false`. When `true`, collect statistics of points, values and new series written per-measurement. Metrics are gathered per data node. These can be accessed via the `/debug/vars` endpoint and in the `_internal` database if enabled. Environment variable: `INFLUXDB_DATA_INGRESS_METRIC_BY_MEASUREMENT_ENABLED` -#### `ingress-metric-by-login-enabled = false` +#### `ingress-metric-by-login-enabled` + +Default is `false`. When `true`, collect statistics of points, values and new series written per-login. Metrics are gathered per data node. These can be accessed via the `/debug/vars` endpoint and in the `_internal` database if enabled. @@ -260,25 +305,35 @@ Environment variable: `INFLUXDB_DATA_INGRESS_METRIC_BY_LOGIN_ENABLED` ### Data settings for the TSM engine -#### `cache-max-memory-size = "1g"` +#### `cache-max-memory-size` -The maximum size a shard cache can reach before it starts rejecting writes. +Default is `1000000000`. + +The maximum size in bytes that a shard cache can reach before it starts rejecting writes. + +Consider increasing this value if encountering `cache maximum memory size exceeded` errors. Environment variable: `INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE` -#### `cache-snapshot-memory-size = "25m"` +#### `cache-snapshot-memory-size` -The size at which the TSM engine will snapshot the cache and write it to a TSM file, freeing up memory. +Default is `26214400`. + +The size in bytes at which the TSM engine will snapshot the cache and write it to a TSM file, freeing up memory. 
Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_MEMORY_SIZE` -#### `cache-snapshot-write-cold-duration = "10m"` +#### `cache-snapshot-write-cold-duration` + +Default is `"10m"`. The length of time at which the TSM engine will snapshot the cache and write it to a new TSM file if the shard hasn't received writes or deletes. Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_WRITE_COLD_DURATION` -#### `max-concurrent-compactions = 0` +#### `max-concurrent-compactions` + +Default is `0`. The maximum number of concurrent full and level compactions that can run at one time. A value of `0` (unlimited compactions) results in 50% of `runtime.GOMAXPROCS(0)` used at runtime, @@ -288,26 +343,35 @@ This setting does not apply to cache snapshotting. Environmental variable: `INFLUXDB_DATA_CACHE_MAX_CONCURRENT_COMPACTIONS` -#### `compact-throughput = "48m"` +#### `compact-throughput` + +Default is `50331648` (48m). -The maximum number of bytes per seconds TSM compactions write to disk. Default is `"48m"` (48 million). +The maximum number of bytes per second that TSM compactions write to disk. Note that short bursts are allowed to happen at a possibly larger value, set by `compact-throughput-burst`. Environment variable: `INFLUXDB_DATA_COMPACT_THROUGHPUT` -#### `compact-throughput-burst = "48m"` + +#### `compact-throughput-burst` + +Default is `50331648` (48m). -The maximum number of bytes per seconds TSM compactions write to disk during brief bursts. Default is `"48m"` (48 million). +The maximum number of bytes per second that TSM compactions write to disk during brief bursts. Environment variable: `INFLUXDB_DATA_COMPACT_THROUGHPUT_BURST` -#### `compact-full-write-cold-duration = "4h"` +#### `compact-full-write-cold-duration` + +Default is `"4h"`. The duration at which to compact all TSM and TSI files in a shard if it has not received a write or delete. Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION` -#### `index-version = "inmem"` +#### `index-version` + +Default is `"inmem"`. The type of shard index to use for new shards. The default (`inmem`) is to use an in-memory index that is recreated at startup. @@ -318,7 +382,9 @@ Environment variable: `INFLUXDB_DATA_INDEX_VERSION` ### In-memory (`inmem`) index settings -#### `max-series-per-database = 1000000` +#### `max-series-per-database` + +Default is `1000000`. The maximum number of [series](/enterprise_influxdb/v1.9/concepts/glossary/#series) allowed per database before writes are dropped. The default setting is `1000000` (one million). @@ -338,7 +404,9 @@ If a point causes the number of series in a database to exceed Environment variable: `INFLUXDB_DATA_MAX_SERIES_PER_DATABASE` -#### `max-values-per-tag = 100000` +#### `max-values-per-tag` + +Default is `100000`. The maximum number of [tag values](/enterprise_influxdb/v1.9/concepts/glossary/#tag-value) allowed per [tag key](/enterprise_influxdb/v1.9/concepts/glossary/#tag-key). The default value is `100000` (one hundred thousand). @@ -356,7 +424,9 @@ Environment variable: `INFLUXDB_DATA_MAX_VALUES_PER_TAG` ### TSI (`tsi1`) index settings -#### `max-index-log-file-size = "1m"` +#### `max-index-log-file-size` + +Default is `1048576`. The threshold, in bytes, when an index write-ahead log (WAL) file will compact into an index file. Lower sizes will cause log files to be compacted more @@ -368,7 +438,9 @@ Values without a size suffix are in bytes. Environment variable: `INFLUXDB_DATA_MAX_INDEX_LOG_FILE_SIZE` -#### `series-id-set-cache-size = 100` +#### `series-id-set-cache-size` + +Default is `100`. The size of the internal cache used in the TSI index to store previously calculated series results.
Cached results will be returned quickly from the cache rather @@ -400,14 +472,18 @@ a single-use TCP connection may be used. For information on InfluxDB `_internal` measurement statistics related to clusters, RPCs, and shards, see [Measurements for monitoring InfluxDB Enterprise (`_internal`)](/platform/monitoring/influxdata-platform/tools/measurements-internal/#cluster-enterprise-only). -#### `dial-timeout = "1s"` +#### `dial-timeout` + +Default is `"1s"`. The duration for which the meta node waits for a connection to a remote data node before the meta node attempts to connect to a different remote data node. This setting applies to queries only. Environment variable: `INFLUXDB_CLUSTER_DIAL_TIMEOUT` -#### `pool-max-idle-time = "60s"` +#### `pool-max-idle-time` + +Default is `"60s"`. The maximum time that a TCP connection to another data node remains idle in the connection pool. When the connection is idle longer than the specified duration, the inactive connection is reaped — reaping idle connections minimizes inactive connections, decreases system load, and prevents system failure. Environment variable: `INFLUXDB_CLUSTER_POOL_MAX_IDLE_TIME` -#### `pool-max-idle-streams = 100` +#### `pool-max-idle-streams` + +Default is `100`. The maximum number of idle RPC stream connections to retain in an idle pool between two nodes. When a new RPC request is issued, a connection is temporarily pulled from the idle pool, used, and then returned. @@ -428,7 +506,9 @@ so it is unlikely that changing this value will measurably improve performance b Environment variable: `INFLUXDB_CLUSTER_POOL_MAX_IDLE_STREAMS` -#### `allow-out-of-order-writes = false` +#### `allow-out-of-order-writes` + +Default is `false`. By default, this option is set to false and writes are processed in the order that they are received. This means if any points are in the hinted handoff (HH) queue for a shard, all incoming points must go into the HH queue. @@ -442,33 +522,45 @@ Point 1 (`cpu v=1.0 1234`) arrives at `node1`, attempts to replicate on `node2`, Environment variable: `INFLUXDB_CLUSTER_ALLOW_OUT_OF_ORDER` -#### `shard-reader-timeout = "0"` +#### `shard-reader-timeout` + +Default is `"0"`. The default timeout set on shard readers. The time in which a query connection must return its response after which the system returns an error. Environment variable: `INFLUXDB_CLUSTER_SHARD_READER_TIMEOUT` -#### `https-enabled = false` +#### `https-enabled` + +Default is `false`. Determines whether data nodes use HTTPS to communicate with each other. -#### `https-certificate = ""` +#### `https-certificate` + +Default is `""`. The SSL certificate to use when HTTPS is enabled. The certificate should be a PEM-encoded bundle of the certificate and key. If it is just the certificate, a key must be specified in `https-private-key`. -#### `https-private-key = ""` +#### `https-private-key` + +Default is `""`. Use a separate private key location. -#### `https-insecure-tls = false` +#### `https-insecure-tls` + +Default is `false`. Whether data nodes will skip certificate validation communicating with each other over HTTPS. This is useful when testing with self-signed certificates. -#### `cluster-tracing = false` +#### `cluster-tracing` + +Default is `false`. Enables cluster trace logging. Set to `true` to enable logging of cluster communications. @@ -476,13 +568,17 @@ Enable this setting to verify connectivity issues between data nodes.
Environment variable: `INFLUXDB_CLUSTER_CLUSTER_TRACING` -#### `write-timeout = "10s"` +#### `write-timeout` + +Default is `"10s"`. The duration a write request waits until a "timeout" error is returned to the caller. The default value is 10 seconds. Environment variable: `INFLUXDB_CLUSTER_WRITE_TIMEOUT` -#### `max-concurrent-queries = 0` +#### `max-concurrent-queries` + +Default is `0`. The maximum number of concurrent queries allowed to be executing at one time. If a query is executed and exceeds this limit, an error is returned to the caller. @@ -490,14 +586,26 @@ This limit can be disabled by setting it to `0`. Environment variable: `INFLUXDB_CLUSTER_MAX_CONCURRENT_QUERIES` -#### `query-timeout = "0s"` +#### `max-concurrent-deletes` + +Default is `1`. + +The maximum number of allowed simultaneous `DELETE` calls on a shard. + +Environment variable: `INFLUXDB_CLUSTER_MAX_CONCURRENT_DELETES` + +#### `query-timeout` + +Default is `"0s"`. -The maximum time a query is allowed to execute before being killed by the system. This limit can help prevent run away queries. Setting the value to `0` disables the limit. +The maximum time a query is allowed to execute before being killed by the system. This limit can help prevent runaway queries. Setting the value to `0` disables the limit. Environment variable: `INFLUXDB_CLUSTER_QUERY_TIMEOUT` -#### `log-queries-after = "0s"` +#### `log-queries-after` + +Default is `"0s"`. The time threshold when a query will be logged as a slow query. This limit can be set to help discover slow or resource intensive queries. @@ -505,27 +613,39 @@ Setting the value to `0` disables the slow query logging. Environment variable: `INFLUXDB_CLUSTER_LOG_QUERIES_AFTER` -#### `max-select-point = 0` +#### `max-select-point` + +Default is `0`. The maximum number of points a SELECT statement can process. A value of `0` will make the maximum point count unlimited. Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_POINT` -#### `max-select-series = 0` +#### `max-select-series` + +Default is `0`. The maximum number of series a SELECT can run. A value of `0` will make the maximum series count unlimited. Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_SERIES` -#### `max-select-buckets = 0` +#### `max-select-buckets` + +Default is `0`. The maximum number of group by time buckets a SELECT can create. A value of `0` will make the maximum number of buckets unlimited. Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_BUCKETS` +#### `termination-query-log` + +Default is `false`. + +Set to `true` to print all running queries to the log when a data node process receives a `SIGTERM` (for example, a Kubernetes process exceeds the container memory limit or the process is terminated). + +Environment variable: `INFLUXDB_CLUSTER_TERMINATION_QUERY_LOG` + ----- ## Hinted Handoff settings @@ -534,32 +654,42 @@ Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_BUCKETS` Controls the hinted handoff (HH) queue, which allows data nodes to temporarily cache writes destined for another data node when that data node is unreachable. -#### `batch-size = 512000` +#### `batch-size` + +Default is `512000`. The maximum number of bytes to write to a shard in a single request. Environment variable: `INFLUXDB_HINTED_HANDOFF_BATCH_SIZE` -#### `max-pending-writes = 1024` +#### `max-pending-writes` + +Default is `1024`. The maximum number of incoming pending writes allowed in the hinted handoff queue. Environment variable: `INFLUXDB_HINTED_HANDOFF_MAX_PENDING_WRITES` -#### `dir = "/var/lib/influxdb/hh"` +#### `dir` + +Default is `"/var/lib/influxdb/hh"`. The hinted handoff directory where the durable queue will be stored on disk.
Environment variable: `INFLUXDB_HINTED_HANDOFF_DIR` -#### `enabled = true` +#### `enabled` + +Default is `true`. Set to `false` to disable hinted handoff. Disabling hinted handoff is not recommended and can lead to data loss if another data node is unreachable for any length of time. Environment variable: `INFLUXDB_HINTED_HANDOFF_ENABLED` -#### `max-size = 10737418240` +#### `max-size` + +Default is `10737418240`. The maximum size of the hinted handoff queue in bytes. Each queue is for one and only one other data node in the cluster. @@ -567,7 +697,9 @@ If there are N data nodes in the cluster, each data node may have up to N-1 hint Environment variable: `INFLUXDB_HINTED_HANDOFF_MAX_SIZE` -#### `max-age = "168h0m0s"` +#### `max-age` + +Default is `"168h0m0s"`. The time interval that writes sit in the queue before they are purged. The time is determined by how long the batch has been in the queue, not by the timestamps in the data. If another data node is unreachable for more than the `max-age`, it can lead to data loss. Environment variable: `INFLUXDB_HINTED_HANDOFF_MAX_AGE` -#### `retry-concurrency = 20` +#### `retry-concurrency` + +Default is `20`. The maximum number of hinted handoff blocks that the source data node attempts to write to each destination data node. Hinted handoff blocks are sets of data that belong to the same shard and have the same destination data node. @@ -590,19 +724,25 @@ Note that increasing `retry-concurrency` also increases network traffic. Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_CONCURRENCY` -#### `retry-rate-limit = 0` +#### `retry-rate-limit` + +Default is `0`. The rate limit (in bytes per second) that hinted handoff retries hints. A value of `0` disables the rate limit. Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_RATE_LIMIT` -#### `retry-interval = "1s"` +#### `retry-interval` + +Default is `"1s"`. The time period after which the hinted handoff retries a write after the write fails. Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_INTERVAL` -#### `retry-max-interval = "10s"` +#### `retry-max-interval` + +Default is `"10s"`. The maximum interval after which the hinted handoff retries a write after the write fails. The `retry-max-interval` option is no longer in use and will be removed from the configuration file in a future release. @@ -610,7 +750,9 @@ Changing the `retry-max-interval` setting has no effect on your cluster. Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_MAX_INTERVAL` -#### `purge-interval = "1m0s"` +#### `purge-interval` + +Default is `"1m0s"`. The interval at which InfluxDB checks to purge data that are above `max-age`. @@ -626,20 +768,26 @@ For information about the Anti-Entropy service, see [Anti-entropy service in Inf Controls the copying and repairing of shards to ensure that data nodes contain the shard data they are supposed to. -#### `enabled = false` +#### `enabled` + +Default is `false`. -Enables the anti-entropy service. Default value is `false`. +Enables the anti-entropy service. Environment variable: `INFLUXDB_ANTI_ENTROPY_ENABLED` -#### `check-interval = "5m"` +#### `check-interval` + +Default is `"5m"`. The interval of time when anti-entropy checks run on each data node. Environment variable: `INFLUXDB_ANTI_ENTROPY_CHECK_INTERVAL` -#### `max-fetch = 10` +#### `max-fetch` + +Default is `10`. The maximum number of shards that a single data node will copy or repair in parallel. @@ -653,14 +801,18 @@ higher CPU load as new shard digest files are created.
The added load drops off after shard digests are completed for existing shards. {{% /note %}} -#### `max-sync = 1` +#### `max-sync` + +Default is `1`. The maximum number of concurrent sync operations that should be performed. Modify this setting only when requested by InfluxData support. Environment variable: `INFLUXDB_ANTI_ENTROPY_MAX_SYNC` -#### `auto-repair-missing = true` +#### `auto-repair-missing` + +Default is `true`. Enables missing shards to automatically be repaired. @@ -674,14 +826,18 @@ Environment variable: `INFLUXDB_ANTI_ENTROPY_AUTO_REPAIR_MISSING` Controls the enforcement of retention policies for evicting old data. -#### `enabled = true` +#### `enabled` + +Default is `true`. -Enables retention policy enforcement. Default value is `true`. +Enables retention policy enforcement. Environment variable: `INFLUXDB_RETENTION_ENABLED` -#### `check-interval = "30m0s"` +#### `check-interval` + +Default is `"30m0s"`. The interval of time when retention policy enforcement checks run. @@ -696,19 +852,25 @@ Environment variable: `INFLUXDB_RETENTION_CHECK_INTERVAL` Controls the precreation of shards, so they are available before data arrives. Only shards that, after creation, will have both a start- and end-time in the future, will ever be created. Shards are never precreated that would be wholly or partially in the past. -#### `enabled = true` +#### `enabled` + +Default is `true`. Enables the shard precreation service. Environment variable: `INFLUXDB_SHARD_PRECREATION_ENABLED` -#### `check-interval = "10m"` +#### `check-interval` + +Default is `"10m"`. The interval of time when the check to precreate new shards runs. Environment variable: `INFLUXDB_SHARD_PRECREATION_CHECK_INTERVAL` -#### `advance-period = "30m"` +#### `advance-period` + +Default is `"30m"`. The default period ahead of the end time of a shard group that its successor group is created. @@ -730,25 +892,33 @@ For InfluxDB Enterprise production systems, InfluxData recommends including a de * On the dedicated InfluxDB monitoring instance, set `store-enabled = false` to avoid potential performance and storage issues. * On each InfluxDB cluster node, install a Telegraf input plugin and Telegraf output plugin configured to report data to the dedicated InfluxDB monitoring instance. -#### `store-enabled = true` +#### `store-enabled` + +Default is `true`. Enables the internal storage of statistics. Environment variable: `INFLUXDB_MONITOR_STORE_ENABLED` -#### `store-database = "_internal"` +#### `store-database` + +Default is `"_internal"`. The destination database for recorded statistics. Environment variable: `INFLUXDB_MONITOR_STORE_DATABASE` -#### `store-interval = "10s"` +#### `store-interval` + +Default is `"10s"`. The interval at which to record statistics. Environment variable: `INFLUXDB_MONITOR_STORE_INTERVAL` -#### `remote-collect-interval = "10s"` +#### `remote-collect-interval` + +Default is `"10s"`. The time interval to poll other data nodes' stats when aggregating cluster stats. @@ -762,47 +932,63 @@ Environment variable: `INFLUXDB_MONITOR_REMOTE_COLLECT_INTERVAL` Controls how the HTTP endpoints are configured. These are the primary mechanism for getting data into and out of InfluxDB. -#### `enabled = true` +#### `enabled` + +Default is `true`. Enables HTTP endpoints. Environment variable: `INFLUXDB_HTTP_ENABLED` -#### `flux-enabled = false` +#### `flux-enabled` + +Default is `false`. Determines whether the Flux query endpoint is enabled. To enable the use of Flux queries, set the value to `true`.
Environment variable: `INFLUXDB_HTTP_FLUX_ENABLED` -#### `bind-address = ":8086"` +#### `bind-address` + +Default is `":8086"`. The bind address used by the HTTP service. Environment variable: `INFLUXDB_HTTP_BIND_ADDRESS` -#### `auth-enabled = false` +#### `auth-enabled` + +Default is `false`. Enables HTTP authentication. Environment variable: `INFLUXDB_HTTP_AUTH_ENABLED` -#### `realm = "InfluxDB"` +#### `realm` + +Default is `"InfluxDB"`. The default realm sent back when issuing a basic authorization challenge. Environment variable: `INFLUXDB_HTTP_REALM` -#### `log-enabled = true` +#### `log-enabled` + +Default is `true`. Enables HTTP request logging. Environment variable: `INFLUXDB_HTTP_LOG_ENABLED` -#### `suppress-write-log = false` +#### `suppress-write-log` + +Default is `false`. Determines whether the HTTP write request logs should be suppressed when the log is enabled. -#### `access-log-path = ""` +#### `access-log-path` + +Default is `""`. The path to the access log, which determines whether detailed write logging is enabled using `log-enabled = true`. Specifies whether HTTP request logging is written to the specified path when enabled. @@ -813,7 +999,9 @@ If `influxd` is unable to access the specified path, it will log an error and fa Environment variable: `INFLUXDB_HTTP_ACCESS_LOG_PATH` -#### `access-log-status-filters = []` +#### `access-log-status-filters` + +Default is `[]`. Filters which requests should be logged. Each filter is of the pattern `nnn`, `nnx`, or `nxx` where `n` is a number and `x` is the wildcard for any number. @@ -843,26 +1031,34 @@ When using environment variables, the values can be supplied as follows. The `_n` at the end of the environment variable represents the array position of the entry. -#### `write-tracing = false` +#### `write-tracing` + +Default is `false`. Enables detailed write logging. Environment variable: `INFLUXDB_HTTP_WRITE_TRACING` -#### `pprof-enabled = true` +#### `pprof-enabled` + +Default is `true`. Determines whether the `/pprof` endpoint is enabled. This endpoint is used for troubleshooting and monitoring. Environment variable: `INFLUXDB_HTTP_PPROF_ENABLED` -#### `https-enabled = false` +#### `https-enabled` + +Default is `false`. Enables HTTPS. Environment variable: `INFLUXDB_HTTP_HTTPS_ENABLED` -#### `https-certificate = "/etc/ssl/influxdb.pem"` +#### `https-certificate` + +Default is `"/etc/ssl/influxdb.pem"`. The SSL certificate to use when HTTPS is enabled. The certificate should be a PEM-encoded bundle of the certificate and key. @@ -870,19 +1066,25 @@ If it is just the certificate, a key must be specified in `https-private-key`. Environment variable: `INFLUXDB_HTTP_HTTPS_CERTIFICATE` -#### `https-private-key = ""` +#### `https-private-key` + +Default is `""`. The location of the separate private key. Environment variable: `INFLUXDB_HTTP_HTTPS_PRIVATE_KEY` -#### `shared-secret = ""` +#### `shared-secret` + +Default is `""`. The JWT authorization shared secret used to validate requests using JSON web tokens (JWTs). Environment variable: `INFLUXDB_HTTP_SHARED_SECRET` -#### `max-body-size = 25000000` +#### `max-body-size` + +Default is `25000000`. -The maximum size, in bytes, of a client request body. When a HTTP client sends data that exceeds the configured maximum size, a `413 Request Entity Too Large` HTTP response is returned. +The maximum size, in bytes, of a client request body. When an HTTP client sends data that exceeds the configured maximum size, a `413 Request Entity Too Large` HTTP response is returned. @@ -890,7 +1092,9 @@ To disable the limit, set the value to `0`. Environment variable: `INFLUXDB_HTTP_MAX_BODY_SIZE` -#### `max-row-limit = 0` +#### `max-row-limit` + +Default is `0`.
The default chunk size for result sets that should be chunked. The maximum number of rows that can be returned in a non-chunked query. @@ -899,7 +1103,9 @@ InfluxDB includes a `"partial":true` tag in the response body if query results e Environment variable: `INFLUXDB_HTTP_MAX_ROW_LIMIT` -#### `max-connection-limit = 0` +#### `max-connection-limit` + +Default is `0`. The maximum number of HTTP connections that may be open at once. New connections that would exceed this limit are dropped. @@ -907,33 +1113,43 @@ The default value of `0` disables the limit. Environment variable: `INFLUXDB_HTTP_MAX_CONNECTION_LIMIT` -#### `unix-socket-enabled = false` +#### `unix-socket-enabled` + +Default is `false`. Enables the HTTP service over the UNIX domain socket. Environment variable: `INFLUXDB_HTTP_UNIX_SOCKET_ENABLED` -#### `bind-socket = "/var/run/influxdb.sock"` +#### `bind-socket` + +Default is `"/var/run/influxdb.sock"`. The path of the UNIX domain socket. Environment variable: `INFLUXDB_HTTP_BIND_SOCKET` -#### `max-concurrent-write-limit = 0` +#### `max-concurrent-write-limit` + +Default is `0`. The maximum number of writes processed concurrently. The default value of `0` disables the limit. Environment variable: `INFLUXDB_HTTP_MAX_CONCURRENT_WRITE_LIMIT` -#### `max-enqueued-write-limit = 0` +#### `max-enqueued-write-limit` + +Default is `0`. The maximum number of writes queued for processing. The default value of `0` disables the limit. Environment variable: `INFLUXDB_HTTP_MAX_ENQUEUED_WRITE_LIMIT` -#### `enqueued-write-timeout = 0` +#### `enqueued-write-timeout` + +Default is `0`. The maximum duration for a write to wait in the queue to be processed. Setting this to `0` or setting `max-concurrent-write-limit` to `0` disables the limit. @@ -944,7 +1160,9 @@ Setting this to `0` or setting `max-concurrent-write-limit` to `0` disables the ### `[logging]` -#### `format = "logfmt"` +#### `format` + +Default is `"logfmt"`. Determines which log encoder to use for logs. Valid options are `auto`, `logfmt`, and `json`. @@ -953,13 +1171,17 @@ When the output is a non-TTY, `auto` will use `logfmt`. Environment variable: `INFLUXDB_LOGGING_FORMAT` -#### `level = "info"` +#### `level` + +Default is `"info"`. Determines which level of logs will be emitted. Environment variable: `INFLUXDB_LOGGING_LEVEL` -#### `suppress-logo = false` +#### `suppress-logo` + +Default is `false`. Suppresses the logo output that is printed when the program is started. @@ -973,45 +1195,59 @@ Environment variable: `INFLUXDB_LOGGING_SUPPRESS_LOGO` Controls the subscriptions, which can be used to fork a copy of all data received by the InfluxDB host. -#### `enabled = true` +#### `enabled` + +Default is `true`. Determines whether the subscriber service is enabled. Environment variable: `INFLUXDB_SUBSCRIBER_ENABLED` -#### `http-timeout = "30s"` +#### `http-timeout` + +Default is `"30s"`. The default timeout for HTTP writes to subscribers. Environment variable: `INFLUXDB_SUBSCRIBER_HTTP_TIMEOUT` -#### `insecure-skip-verify = false` +#### `insecure-skip-verify` + +Default is `false`. Allows insecure HTTPS connections to subscribers. This option is useful when testing with self-signed certificates. Environment variable: `INFLUXDB_SUBSCRIBER_INSECURE_SKIP_VERIFY` -#### `ca-certs = ""` +#### `ca-certs` + +Default is `""`. -The path to the PEM-encoded CA certs file. If the set to the empty string (`""`), the default system certs will used. +The path to the PEM-encoded CA certs file. If set to the empty string (`""`), the default system certs will be used.
Environment variable: `INFLUXDB_SUBSCRIBER_CA_CERTS` -#### `write-concurrency = 40` +#### `write-concurrency` + +Default is `40`. The number of writer Goroutines processing the write channel. Environment variable: `INFLUXDB_SUBSCRIBER_WRITE_CONCURRENCY` -#### `write-buffer-size = 1000` +#### `write-buffer-size` + +Default is `1000`. The number of in-flight writes buffered in the write channel. Environment variable: `INFLUXDB_SUBSCRIBER_WRITE_BUFFER_SIZE` -#### `total-buffer-bytes = 0` +#### `total-buffer-bytes` + +Default is `0`. Total number of bytes allocated to buffering across all subscriptions. Each named subscription receives an equal share of the total. @@ -1029,7 +1265,9 @@ Environment variable: `INFLUXDB_SUBSCRIBER_TOTAL_BUFFER_BYTES` This section controls one or many listeners for Graphite data. For more information, see [Graphite protocol support in InfluxDB](/enterprise_influxdb/v1.9/supported_protocols/graphite/). -#### `enabled = false` +#### `enabled` + +Default is `false`. Determines whether the graphite endpoint is enabled. @@ -1045,27 +1283,39 @@ Batching will buffer points in memory if you have many coming in. # consistency-level = "one" ``` -#### `batch-size = 5000` +#### `batch-size` + +Default is `5000`. Flush if this many points get buffered. -#### `batch-pending = 10` +#### `batch-pending` + +Default is `10`. The number of batches that may be pending in memory. -#### `batch-timeout = "1s"` +#### `batch-timeout` + +Default is `"1s"`. Flush at least this often even if we haven't hit buffer limit. -#### `udp-read-buffer = 0` +#### `udp-read-buffer` + +Default is `0`. UDP Read buffer size, `0` means OS default. UDP listener will fail if set above OS max. -#### `separator = "."` +#### `separator` + +Default is `"."`. This string joins multiple matching 'measurement' values providing more control over the final measurement name. -#### `tags = ["region=us-east", "zone=1c"]` +#### `tags` + +Default is `["region=us-east", "zone=1c"]`. Default tags that will be added to all metrics. These can be overridden at the template level or by tags extracted from metric. @@ -1096,18 +1346,24 @@ For more information, see [CollectD protocol support in InfluxDB](/enterprise_in ### `[[collectd]]` ```toml # enabled = false # bind-address = ":25826" # database = "collectd" # retention-policy = "" # typesdb = "/usr/share/collectd/types.db" ``` -#### `security-level = ""` +#### `security-level` + +Default is `""`. The collectd security level can be "" (or "none"), "sign", or "encrypt". -#### `auth-file = ""` +#### `auth-file` + +Default is `""`. The path to the `collectd` authorization file. Must be set if security level is sign or encrypt. @@ -1117,19 +1373,27 @@ These next lines control how batching works. You should have this enabled otherwise you could get dropped metrics or poor performance. Batching will buffer points in memory if you have many coming in. -#### `batch-size = 5000` +#### `batch-size` + +Default is `5000`. Flush if this many points get buffered. -#### `batch-pending = 10` +#### `batch-pending` + +Default is `10`. The number of batches that may be pending in memory. -#### `batch-timeout = "10s"` +#### `batch-timeout` + +Default is `"10s"`. Flush at least this often even if we haven't hit buffer limit. -#### `read-buffer = 0` +#### `read-buffer` + +Default is `0`. UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
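+
+For example, a minimal sketch of a `collectd` listener section in `influxdb.conf`, using the default batching values described above:
+
+```toml
+[[collectd]]
+  enabled = true
+  bind-address = ":25826"
+  database = "collectd"
+  batch-size = 5000
+  batch-pending = 10
+  batch-timeout = "10s"
+```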
@@ -1152,7 +1416,9 @@ For more information, see [OpenTSDB protocol support in InfluxDB](/enterprise_in

# certificate= "/etc/ssl/influxdb.pem"
```

-#### `log-point-errors = true`
+#### `log-point-errors`
+
+Default is `true`.

Log an error for every malformed point.

@@ -1162,15 +1428,21 @@ These next lines control how batching works.
You should have this enabled; otherwise, you could get dropped metrics or poor performance.
Only metrics received over the telnet protocol undergo batching.

-#### `batch-size = 1000`
+#### `batch-size`
+
+Default is `1000`.

Flush if this many points get buffered.

-#### `batch-pending = 5`
+#### `batch-pending`
+
+Default is `5`.

The number of batches that may be pending in memory.

-#### `batch-timeout = "1s"`
+#### `batch-timeout`
+
+Default is `"1s"`.

Flush at least this often even if the buffer limit hasn't been reached.

@@ -1190,26 +1462,36 @@ For more information, see [UDP protocol support in InfluxDB](/enterprise_influxd

# retention-policy = ""
```

-#### `precision = ""`
+#### `precision`
+
+Default is `""`.

InfluxDB precision for timestamps on received points (`""`, `"n"`, `"u"`, `"ms"`, `"s"`, `"m"`, or `"h"`).

These next lines control how batching works. You should have this enabled;
otherwise, you could get dropped metrics or poor performance.
Batching will buffer points in memory if you have many coming in.

-#### `batch-size = 5000`
+#### `batch-size`
+
+Default is `5000`.

Flush if this many points get buffered.

-#### `batch-pending = 10`
+#### `batch-pending`
+
+Default is `10`.

The number of batches that may be pending in memory.

-#### `batch-timeout = "1s"`
+#### `batch-timeout`
+
+Default is `"1s"`.

Will flush at least this often even if the buffer limit hasn't been reached.

-#### `read-buffer = 0`
+#### `read-buffer`
+
+Default is `0`.

UDP read buffer size (`0` means use the OS default). The UDP listener will fail if this is set above the OS maximum.

@@ -1221,25 +1503,33 @@ UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS

Controls how continuous queries are run within InfluxDB.

-#### `enabled = true`
+#### `enabled`
+
+Default is `true`.

Determines whether the continuous query service is enabled.

Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_ENABLED`

-#### `log-enabled = true`
+#### `log-enabled`
+
+Default is `true`.

Controls whether queries are logged when executed by the CQ service.

Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_LOG_ENABLED`

-#### `query-stats-enabled = false`
+#### `query-stats-enabled`
+
+Default is `false`.

Writes continuous query execution statistics to the default monitor store.

Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_QUERY_STATS_ENABLED`

-#### `run-interval = "1s"`
+#### `run-interval`
+
+Default is `"1s"`.

The interval at which InfluxDB checks whether continuous queries need to run.

@@ -1281,7 +1571,9 @@ max-version = "tls1.3"
```

-#### `min-version = "tls1.3"`
+#### `min-version`
+
+Default is `"tls1.3"`.

Minimum version of the TLS protocol that will be negotiated.
Valid values include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`.
@@ -1290,7 +1582,9 @@ In this example, `tls1.3` specifies the minimum version as TLS 1.3.

Environment variable: `INFLUXDB_TLS_MIN_VERSION`

-#### `max-version = "tls1.3"`
+#### `max-version`
+
+Default is `"tls1.3"`.

The maximum version of the TLS protocol that will be negotiated.
Valid values include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`.
@@ -1306,27 +1600,28 @@ Environment variable: `INFLUXDB_TLS_MAX_VERSION`

This section contains configuration settings for Flux query management.
For more on managing queries, see [Query Management](/enterprise_influxdb/v1.9/troubleshooting/query_management/).

-#### query-concurrency
+#### `query-concurrency`
+

Number of queries allowed to execute concurrently.
`0` means unlimited.
Default is `0`.

-#### query-initial-memory-bytes
+#### `query-initial-memory-bytes`

Initial bytes of memory allocated for a query.
`0` means unlimited.
Default is `0`.

-#### query-max-memory-bytes
+#### `query-max-memory-bytes`

Maximum total bytes of memory allowed for an individual query.
`0` means unlimited.
Default is `0`.

-#### total-max-memory-bytes
+#### `total-max-memory-bytes`

Maximum total bytes of memory allowed for all running Flux queries.
`0` means unlimited.
Default is `0`.

-#### query-queue-size
+#### `query-queue-size`

Maximum number of queries allowed in the execution queue.
When the queue limit is reached, new queries are rejected.
`0` means unlimited.
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes.md b/content/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes.md
index f1da803fe..0479edaeb 100644
--- a/content/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes.md
+++ b/content/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes.md
@@ -21,7 +21,9 @@ aliases:

### Global options

-#### `reporting-disabled = false`
+#### `reporting-disabled`
+
+Default is `false`.

InfluxData, the company, relies on reported data from running nodes
primarily to track the adoption rates of different InfluxDB versions.
@@ -35,12 +37,16 @@ To disable reporting, set this option to `true`.

> **Note:** No data from user databases are ever transmitted.

-#### `bind-address = ""`
+#### `bind-address`
+
+Default is `""`.

This setting is not intended for use.
It will be removed in future versions.

-#### `hostname = ""`
+#### `hostname`
+
+Default is `""`.

The hostname of the [meta node](/enterprise_influxdb/v1.9/concepts/glossary/#meta-node).
This must be resolvable and reachable by all other members of the cluster.
@@ -55,7 +61,9 @@ Environment variable: `INFLUXDB_HOSTNAME`

The `[enterprise]` section contains the parameters for the meta node's
registration with the [InfluxData portal](https://portal.influxdata.com/).

-#### `license-key = ""`
+#### `license-key`
+
+Default is `""`.

The license key created for you on [InfluxData portal](https://portal.influxdata.com).
The meta node transmits the license key to
@@ -72,7 +80,9 @@ Use the same key for all nodes in the same cluster.

Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`

-#### `license-path = ""`
+#### `license-path`
+
+Default is `""`.

The local path to the permanent JSON license file that you received from InfluxData
for instances that do not have access to the internet.
@@ -96,13 +106,17 @@ Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH`

#### `[meta]`

-#### `dir = "/var/lib/influxdb/meta"`
+#### `dir`
+
+Default is `"/var/lib/influxdb/meta"`.

The directory where cluster metadata is stored.

Environment variable: `INFLUXDB_META_DIR`

-#### `bind-address = ":8089"`
+#### `bind-address`
+
+Default is `":8089"`.

The bind address (port) for meta node communication.
For simplicity, InfluxData recommends using the same port on all meta nodes,
@@ -110,13 +124,17 @@ but this is not necessary.

Environment variable: `INFLUXDB_META_BIND_ADDRESS`

-#### `http-bind-address = ":8091"`
+#### `http-bind-address`
+
+Default is `":8091"`.

The default address to bind the API to.
Environment variable: `INFLUXDB_META_HTTP_BIND_ADDRESS`

-#### `https-enabled = false`
+#### `https-enabled`
+
+Default is `false`.

Determines whether meta nodes use HTTPS to communicate with each other. By default, HTTPS is disabled. We strongly recommend enabling HTTPS.
@@ -124,7 +142,9 @@ To enable HTTPS, set https-enabled to `true`, specify the path to the SSL certif

Environment variable: `INFLUXDB_META_HTTPS_ENABLED`

-#### `https-certificate = ""`
+#### `https-certificate`
+
+Default is `""`.

If HTTPS is enabled, specify the path to the SSL certificate.
Use either:
@@ -134,7 +154,9 @@ Use either:

Environment variable: `INFLUXDB_META_HTTPS_CERTIFICATE`

-#### `https-private-key = ""`
+#### `https-private-key`
+
+Default is `""`.

If HTTPS is enabled, specify the path to the SSL private key.
Use either:
@@ -144,43 +166,61 @@ Use either:

Environment variable: `INFLUXDB_META_HTTPS_PRIVATE_KEY`

-#### `https-insecure-tls = false`
+#### `https-insecure-tls`
+
+Default is `false`.

Whether meta nodes will skip certificate validation when communicating with each other over HTTPS.
This is useful when testing with self-signed certificates.

Environment variable: `INFLUXDB_META_HTTPS_INSECURE_TLS`

-#### `data-use-tls = false`
+#### `data-use-tls`
+
+Default is `false`.

Whether to use TLS to communicate with data nodes.

-#### `data-insecure-tls = false`
+#### `data-insecure-tls`
+
+Default is `false`.

Whether meta nodes will skip certificate validation when communicating with data nodes over TLS.
This is useful when testing with self-signed certificates.

-#### `gossip-frequency = "5s"`
+#### `gossip-frequency`
+
+Default is `"5s"`.

The default frequency with which the node will gossip its known announcements.

-#### `announcement-expiration = "30s"`
+#### `announcement-expiration`
+
+Default is `"30s"`.

The default length of time an announcement is kept before it is considered too old.

-#### `retention-autocreate = true`
+#### `retention-autocreate`
+
+Default is `true`.

Automatically create a default retention policy when creating a database.

-#### `election-timeout = "1s"`
+#### `election-timeout`
+
+Default is `"1s"`.

The amount of time in candidate state without a leader before we attempt an election.

-#### `heartbeat-timeout = "1s"`
+#### `heartbeat-timeout`
+
+Default is `"1s"`.

The amount of time in follower state without a leader before we attempt an election.

-#### `leader-lease-timeout = "500ms"`
+#### `leader-lease-timeout`
+
+Default is `"500ms"`.

The leader lease timeout is the amount of time a Raft leader will remain leader
if it does not hear from a majority of nodes.
@@ -190,7 +230,9 @@ Clusters with high latency between nodes may want to increase this parameter to

Environment variable: `INFLUXDB_META_LEADER_LEASE_TIMEOUT`

-#### `commit-timeout = "50ms"`
+#### `commit-timeout`
+
+Default is `"50ms"`.

The commit timeout is the amount of time a Raft node will tolerate between
commands before issuing a heartbeat to tell the leader it is alive.
@@ -198,33 +240,53 @@ The default setting should work for most systems.

Environment variable: `INFLUXDB_META_COMMIT_TIMEOUT`

-#### `consensus-timeout = "30s"`
+#### `consensus-timeout`
+
+Default is `"30s"`.

Timeout waiting for consensus before getting the latest Raft snapshot.

Environment variable: `INFLUXDB_META_CONSENSUS_TIMEOUT`

-#### `cluster-tracing = false`
+#### `cluster-tracing`

-Cluster tracing toggles the logging of Raft logs on Raft nodes.
-Enable this setting when debugging Raft consensus issues.
+Default is `false`.
+
+Log all HTTP requests made to meta nodes.
+Prints sanitized POST request information to show actual commands.
+
+**Sample log output:**
+
+```
+ts=2021-12-08T02:00:54.864731Z lvl=info msg=weblog log_id=0YHxBFZG001 service=meta-http host=172.18.0.1 user-id= username=admin method=POST uri=/user protocol=HTTP/1.1 command="{'{\"action\":\"create\",\"user\":{\"name\":\"fipple\",\"password\":[REDACTED]}}': ''}" status=307 size=0 referrer= user-agent=curl/7.68.0 request-id=ad87ce47-57ca-11ec-8026-0242ac120004 execution-time=63.571ms execution-time-readable=63.570738ms
+ts=2021-12-08T02:01:00.070137Z lvl=info msg=weblog log_id=0YHxBEhl001 service=meta-http host=172.18.0.1 user-id= username=admin method=POST uri=/user protocol=HTTP/1.1 command="{'{\"action\":\"create\",\"user\":{\"name\":\"fipple\",\"password\":[REDACTED]}}': ''}" status=200 size=0 referrer= user-agent=curl/7.68.0 request-id=b09eb13a-57ca-11ec-800d-0242ac120003 execution-time=85.823ms execution-time-readable=85.823406ms
+ts=2021-12-08T02:01:29.062313Z lvl=info msg=weblog log_id=0YHxBEhl001 service=meta-http host=172.18.0.1 user-id= username=admin method=POST uri=/user protocol=HTTP/1.1 command="{'{\"action\":\"create\",\"user\":{\"name\":\"gremch\",\"hash\":[REDACTED]}}': ''}" status=200 size=0 referrer= user-agent=curl/7.68.0 request-id=c1f3614a-57ca-11ec-8015-0242ac120003 execution-time=1.722ms execution-time-readable=1.722089ms
+ts=2021-12-08T02:01:47.457607Z lvl=info msg=weblog log_id=0YHxBEhl001 service=meta-http host=172.18.0.1 user-id= username=admin method=POST uri=/user protocol=HTTP/1.1 command="{'{\"action\":\"create\",\"user\":{\"name\":\"gremchy\",\"hash\":[REDACTED]}}': ''}" status=400 size=37 referrer= user-agent=curl/7.68.0 request-id=ccea84b7-57ca-11ec-8019-0242ac120003 execution-time=0.154ms execution-time-readable=154.417µs
+ts=2021-12-08T02:02:05.522571Z lvl=info msg=weblog log_id=0YHxBEhl001 service=meta-http host=172.18.0.1 user-id= username=admin method=POST uri=/user protocol=HTTP/1.1 command="{'{\"action\":\"create\",\"user\":{\"name\":\"thimble\",\"password\":[REDACTED]}}': ''}" status=400 size=37 referrer= user-agent=curl/7.68.0 request-id=d7af0082-57ca-11ec-801f-0242ac120003 execution-time=0.227ms execution-time-readable=227.853µs
+```

Environment variable: `INFLUXDB_META_CLUSTER_TRACING`

-#### `logging-enabled = true`
+#### `logging-enabled`
+
+Default is `true`.

Meta logging toggles the logging of messages from the meta service.

Environment variable: `INFLUXDB_META_LOGGING_ENABLED`

-#### `pprof-enabled = true`
+#### `pprof-enabled`
+
+Default is `true`.

Enables the `/debug/pprof` endpoint for troubleshooting.
To disable, set the value to `false`.

Environment variable: `INFLUXDB_META_PPROF_ENABLED`

-#### `lease-duration = "1m0s"`
+#### `lease-duration`
+
+Default is `"1m0s"`.

The default duration of the leases that data nodes acquire from the meta nodes.
Leases automatically expire after the `lease-duration` is met.
@@ -238,35 +300,45 @@ For more details about `lease-duration` and its impact on continuous queries, se

Environment variable: `INFLUXDB_META_LEASE_DURATION`

-#### `auth-enabled = false`
+#### `auth-enabled`
+
+Default is `false`.

If `true`, HTTP endpoints require authentication.
This setting must have the same value as the data nodes' `meta.meta-auth-enabled` configuration.

-#### `ldap-allowed = false`
+#### `ldap-allowed`
+
+Default is `false`.

Whether LDAP is allowed to be set.
If `true`, you will need to use `influxd ldap set-config` and set `enabled = true` to use LDAP authentication.
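
For example, a minimal sketch of how these two options might appear together in the `[meta]` section of a meta node configuration file (values are illustrative only, not recommendations):

```toml
[meta]
  # Require authentication on meta node HTTP endpoints.
  # Must match the data nodes' meta.meta-auth-enabled setting.
  auth-enabled = true
  # Allow an LDAP configuration to be set with `influxd ldap set-config`.
  ldap-allowed = true
```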
-#### `shared-secret = ""`
+#### `shared-secret`
+
+Default is `""`.

The shared secret to be used by the public API for creating custom JWT authentication.
-If you use this setting, set [`auth-enabled`](#auth-enabled-false) to `true`.
+If you use this setting, set [`auth-enabled`](#auth-enabled) to `true`.

Environment variable: `INFLUXDB_META_SHARED_SECRET`

-#### `internal-shared-secret = ""`
+#### `internal-shared-secret`
+
+Default is `""`.

The shared secret used by the internal API for JWT authentication for
inter-node communication within the cluster. Set this to a long passphrase.
This value must be the same value as the
[`[meta] meta-internal-shared-secret`](/enterprise_influxdb/v1.9/administration/config-data-nodes#meta-internal-shared-secret) in the data node configuration file.
-To use this option, set [`auth-enabled`](#auth-enabled-false) to `true`.
+To use this option, set [`auth-enabled`](#auth-enabled) to `true`.

Environment variable: `INFLUXDB_META_INTERNAL_SHARED_SECRET`

-#### `password-hash = "bcrypt"`
+#### `password-hash`
+
+Default is `"bcrypt"`.

Specifies the password hashing scheme and its configuration.

@@ -279,7 +351,7 @@ Optional sections after this are `key=value` password hash configuration options
Each scheme has its own set of options.
Any options not specified default to reasonable values as specified below.

-This setting must have the same value as the data node option [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#password-hash--bcrypt).
+This setting must have the same value as the data node option [`meta.password-hash`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#password-hash).

Environment variable: `INFLUXDB_META_PASSWORD_HASH`

@@ -352,7 +424,9 @@ when used with appropriate `rounds` and `salt_len` options.

* Must be greater than or equal to `16` for FIPS-readiness according to [NIST Special Publication 800-132] § 5.1.

-#### `ensure-fips = false`
+#### `ensure-fips`
+
+Default is `false`.

If `ensure-fips` is set to `true`, then `influxd` and `influxd-meta`
will refuse to start if they are not configured in a FIPS-ready manner.
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/ports.md b/content/enterprise_influxdb/v1.9/administration/configure/ports.md
index 407593027..5ae29a6b9 100644
--- a/content/enterprise_influxdb/v1.9/administration/configure/ports.md
+++ b/content/enterprise_influxdb/v1.9/administration/configure/ports.md
@@ -20,7 +20,7 @@ aliases:

The default port that runs the InfluxDB HTTP service. It is used for the primary public write and query API.
Clients include the CLI, Chronograf, InfluxDB client libraries, Grafana, curl, or anything that wants to write and read time series data to and from InfluxDB.
-[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address--8086)
+[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address)
in the data node configuration file.

_See also: [API Reference](/enterprise_influxdb/v1.9/tools/api/)._

@@ -34,7 +34,7 @@ It's also used by meta nodes for cluster-type operations (e.g., tell a data node

This is the default port used for RPC calls for inter-node communication and by the CLI for backup and restore operations
(`influxd backup` and `influxd restore`).
-[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address--8088)
+[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address)
in the configuration file.

This port should not be exposed outside the cluster.
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/security/authentication.md b/content/enterprise_influxdb/v1.9/administration/configure/security/authentication.md
index 7cef96fcd..e37ae1c39 100644
--- a/content/enterprise_influxdb/v1.9/administration/configure/security/authentication.md
+++ b/content/enterprise_influxdb/v1.9/administration/configure/security/authentication.md
@@ -47,7 +47,7 @@ For a more secure alternative to using passwords, include JWT tokens in requests

  InfluxDB Enterprise uses the shared secret to encode the JWT signature.
  By default, `shared-secret` is set to an empty string (no JWT authentication).
-  Add a custom shared secret in your [InfluxDB configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#shared-secret--)
+  Add a custom shared secret in your [InfluxDB configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#shared-secret)
  for each meta and data node.
  Longer strings are more secure:
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/security/configure-password-hashing.md b/content/enterprise_influxdb/v1.9/administration/configure/security/configure-password-hashing.md
index 2e27a86a1..8e1e7c343 100644
--- a/content/enterprise_influxdb/v1.9/administration/configure/security/configure-password-hashing.md
+++ b/content/enterprise_influxdb/v1.9/administration/configure/security/configure-password-hashing.md
@@ -83,5 +83,5 @@ run: create server: passwordhash: not FIPS-ready: config: 'bcrypt'
```

[FIPS]: https://csrc.nist.gov/publications/detail/fips/140/3/final
-[`password-hash`]: /enterprise_influxdb/v1.9/administration/config-meta-nodes/#password-hash--bcrypt
-[`ensure-fips`]: /enterprise_influxdb/v1.9/administration/config-meta-nodes/#ensure-fips--false
+[`password-hash`]: /enterprise_influxdb/v1.9/administration/config-meta-nodes/#password-hash
+[`ensure-fips`]: /enterprise_influxdb/v1.9/administration/config-meta-nodes/#ensure-fips
diff --git a/content/enterprise_influxdb/v1.9/administration/configure/security/enable_tls.md b/content/enterprise_influxdb/v1.9/administration/configure/security/enable_tls.md
index 783f36067..651af58cc 100644
--- a/content/enterprise_influxdb/v1.9/administration/configure/security/enable_tls.md
+++ b/content/enterprise_influxdb/v1.9/administration/configure/security/enable_tls.md
@@ -264,6 +264,7 @@ Also change `localhost` to the relevant domain name.

The best practice in terms of security is to transfer the certificate to the client and make it trusted
(either by putting it in the operating system's trusted certificate system or using the `ssl_ca` option).
The alternative is to sign the certificate using an internal CA and then trust the CA certificate.
+Provide the file paths of your key and certificate to the InfluxDB output plugin as shown below.

If you're using a self-signed certificate,
uncomment the `insecure_skip_verify` setting and set it to `true`.

@@ -284,7 +285,8 @@ uncomment the `insecure_skip_verify` setting and set it to `true`.

  [...]
  ## Optional SSL Config
-  [...]
+ tls_cert = "/etc/telegraf/cert.pem" + tls_key = "/etc/telegraf/key.pem" insecure_skip_verify = true # <-- Update only if you're using a self-signed certificate ``` diff --git a/content/enterprise_influxdb/v1.9/administration/monitor/_index.md b/content/enterprise_influxdb/v1.9/administration/monitor/_index.md index 13043f909..975fde693 100644 --- a/content/enterprise_influxdb/v1.9/administration/monitor/_index.md +++ b/content/enterprise_influxdb/v1.9/administration/monitor/_index.md @@ -24,7 +24,7 @@ InfluxDB Aware and Influx Insights is a free Enterprise service that sends your Aware assists you in monitoring your data by yourself. Insights assists you in monitoring your data with the help of the support team. -To apply for this service, please contact the [support team](support@influxdata.com). +To apply for this service, please contact the [support team](https://support.influxdata.com/s/login/). {{% /note %}} {{< children >}} diff --git a/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-cloud.md b/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-cloud.md index e9647bf86..73c19f5a4 100644 --- a/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-cloud.md +++ b/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-cloud.md @@ -32,7 +32,7 @@ Before you begin, make sure you have access to the following: - An InfluxDB Cloud account. ([Sign up for free here](https://cloud2.influxdata.com/signup)). - Command line access to a machine [running InfluxDB Enterprise 1.x](/enterprise_influxdb/v1.9/introduction/install-and-deploy/) and permissions to install Telegraf on this machine. - Internet connectivity from the machine running InfluxDB Enterprise 1.x and Telegraf to InfluxDB Cloud. - - Sufficient resource availability to install the template. (InfluxDB Cloud Free Plan accounts include [resource limits](/influxdb/cloud/account-management/pricing-plans/#resource-limits/influxdb/cloud/account-management/pricing-plans/#resource-limits).) + - Sufficient resource availability to install the template. (InfluxDB Cloud Free Plan accounts include a finite number of [available resources](/influxdb/cloud/account-management/limits/#free-plan-limits).) ## Install the InfluxDB Enterprise Monitoring template diff --git a/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-oss.md b/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-oss.md index add587b44..4290509f2 100644 --- a/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-oss.md +++ b/content/enterprise_influxdb/v1.9/administration/monitor/monitor-with-oss.md @@ -8,6 +8,8 @@ menu: name: Monitor with OSS parent: Monitor weight: 101 +related: + - /platform/monitoring/influxdata-platform/tools/measurements-internal aliases: - /enterprise_influxdb/v1.9/administration/monitor-enterprise/monitor-with-oss/ --- diff --git a/content/enterprise_influxdb/v1.9/concepts/file-system-layout.md b/content/enterprise_influxdb/v1.9/concepts/file-system-layout.md index 9ba97ed33..8ee209f71 100644 --- a/content/enterprise_influxdb/v1.9/concepts/file-system-layout.md +++ b/content/enterprise_influxdb/v1.9/concepts/file-system-layout.md @@ -28,19 +28,19 @@ The InfluxDB file structure includes the following: ### Data directory (**Data nodes only**) Directory path where InfluxDB Enterprise stores time series data (TSM files). 
-To customize this path, use the [`[data].dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir--varlibinfluxdbdata)
+To customize this path, use the [`[data].dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir)
configuration option.

### WAL directory (**Data nodes only**)
Directory path where InfluxDB Enterprise stores Write Ahead Log (WAL) files.

-To customize this path, use the [`[data].wal-dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#wal-dir--varlibinfluxdbwal)
+To customize this path, use the [`[data].wal-dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#wal-dir)
configuration option.

### Hinted handoff directory (**Data nodes only**)
Directory path where hinted handoff (HH) queues are stored.

-To customize this path, use the [`[hinted-handoff].dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir--varlibinfluxdbhh)
+To customize this path, use the [`[hinted-handoff].dir`](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir)
configuration option.

### Metastore directory
@@ -48,10 +48,10 @@ Directory path of the InfluxDB Enterprise metastore, which stores information ab
the cluster, users, databases, retention policies, shards, and continuous queries.

**On data nodes**, the metastore contains information about InfluxDB Enterprise meta nodes.
-To customize this path, use the [`[meta].dir` configuration option in your data node configuration file](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir--varlibinfluxdbmeta).
+To customize this path, use the [`[meta].dir` configuration option in your data node configuration file](/enterprise_influxdb/v1.9/administration/config-data-nodes/#dir).

**On meta nodes**, the metastore contains information about the InfluxDB Enterprise RAFT cluster.
-To customize this path, use the [`[meta].dir` configuration option in your meta node configuration file](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#dir--varlibinfluxdbmeta).
+To customize this path, use the [`[meta].dir` configuration option in your meta node configuration file](/enterprise_influxdb/v1.9/administration/config-meta-nodes/#dir).

### InfluxDB Enterprise configuration files
InfluxDB Enterprise stores default data and meta node configuration files on disk.
diff --git a/content/enterprise_influxdb/v1.9/concepts/schema_and_data_layout.md b/content/enterprise_influxdb/v1.9/concepts/schema_and_data_layout.md
index 0d8e2ef4b..7b68d0570 100644
--- a/content/enterprise_influxdb/v1.9/concepts/schema_and_data_layout.md
+++ b/content/enterprise_influxdb/v1.9/concepts/schema_and_data_layout.md
@@ -126,9 +126,9 @@ The [Flux](/{{< latest "flux" >}}/) queries calculate the average `temp` for blu

```js
// Query *Good Measurements*, data stored in separate tags (recommended)
from(bucket: "/")
-  |> range(start:2016-08-30T00:00:00Z)
-  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
-  |> mean()
+    |> range(start:2016-08-30T00:00:00Z)
+    |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
+    |> mean()
```

**Difficult to query**: [_Bad Measurements_](#bad-measurements-schema) requires regular expressions to extract `plot` and `region` from the measurement, as in the following example.
@@ -136,9 +136,9 @@ from(bucket: "/")

```js
// Query *Bad Measurements*, data encoded in the measurement (not recommended)
from(bucket: "/")
-  |> range(start:2016-08-30T00:00:00Z)
-  |> filter(fn: (r) => r._measurement =~ /\.north$/ and r._field == "temp")
-  |> mean()
+    |> range(start:2016-08-30T00:00:00Z)
+    |> filter(fn: (r) => r._measurement =~ /\.north$/ and r._field == "temp")
+    |> mean()
```

Complex measurements make some queries impossible. For example, calculating the average temperature of both plots is not possible with the [_Bad Measurements_](#bad-measurements-schema) schema.
@@ -188,15 +188,15 @@ Schema 2 is preferable because using multiple tags, you don't need a regular exp

```js
// Schema 1 - Query for multiple data encoded in a single tag
from(bucket:"/")
-  |> range(start:2016-08-30T00:00:00Z)
-  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.location =~ /\.north$/ and r._field == "temp")
-  |> mean()
+    |> range(start:2016-08-30T00:00:00Z)
+    |> filter(fn: (r) => r._measurement == "weather_sensor" and r.location =~ /\.north$/ and r._field == "temp")
+    |> mean()

// Schema 2 - Query for data encoded in multiple tags
from(bucket:"/")
-  |> range(start:2016-08-30T00:00:00Z)
-  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
-  |> mean()
+    |> range(start:2016-08-30T00:00:00Z)
+    |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
+    |> mean()
```

#### InfluxQL example to query schemas
diff --git a/content/enterprise_influxdb/v1.9/concepts/storage_engine.md b/content/enterprise_influxdb/v1.9/concepts/storage_engine.md
index f7cf01d5b..6c129482b 100644
--- a/content/enterprise_influxdb/v1.9/concepts/storage_engine.md
+++ b/content/enterprise_influxdb/v1.9/concepts/storage_engine.md
@@ -65,13 +65,13 @@ Deletes sent to the Cache will clear out the given key or the specific time rang

The Cache exposes a few controls for snapshotting behavior.
The two most important controls are the memory limits.
-There is a lower bound, [`cache-snapshot-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-snapshot-memory-size--25m), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
+There is a lower bound, [`cache-snapshot-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-snapshot-memory-size), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
-There is also an upper bound, [`cache-max-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-max-memory-size-1g), which when exceeded will cause the Cache to reject new writes.
+There is also an upper bound, [`cache-max-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-max-memory-size), which when exceeded will cause the Cache to reject new writes.
These configurations are useful to prevent out of memory situations and to apply back pressure to clients writing data faster than the instance can persist it.
The checks for memory thresholds occur on every write.

The other snapshot controls are time based.
-The idle threshold, [`cache-snapshot-write-cold-duration`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-snapshot-write-cold-duration--10m), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.
+The idle threshold, [`cache-snapshot-write-cold-duration`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-snapshot-write-cold-duration), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.

The in-memory Cache is recreated on restart by re-reading the WAL files on disk.
diff --git a/content/enterprise_influxdb/v1.9/features/clustering-features.md b/content/enterprise_influxdb/v1.9/features/clustering-features.md
index 714b41277..cbed7323c 100644
--- a/content/enterprise_influxdb/v1.9/features/clustering-features.md
+++ b/content/enterprise_influxdb/v1.9/features/clustering-features.md
@@ -14,8 +14,7 @@ menu:

_For an overview of InfluxDB Enterprise security features, see
["InfluxDB Enterprise features - Security"](/enterprise_influxdb/v1.9/features/#security).
To secure your InfluxDB Enterprise cluster, see
-["Configure security"](/enterprise_influxdb/v1.9/administration/configure/security/)
-and ["Manage security"](/enterprise_influxdb/v1.9/administration/manage/security/)_.
+["Configure security"](/enterprise_influxdb/v1.9/administration/configure/security/).
{{% /note %}}

## Entitlements
@@ -59,11 +58,11 @@ Subscriptions used by Kapacitor work in a cluster. Writes to any node will be fo

It is important to understand how to configure InfluxDB Enterprise and how this impacts the continuous queries (CQ) engine's behavior:

- **Data node configuration** `[continuous queries]`
-[run-interval](/enterprise_influxdb/v1.9/administration/config-data-nodes#run-interval-1s)
+[run-interval](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#run-interval)
-- The interval at which InfluxDB checks to see if a CQ needs to run. Set this option to the lowest interval at which your CQs run. For example, if your most frequent CQ runs every minute, set run-interval to 1m.

- **Meta node configuration** `[meta]`
-[lease-duration](/enterprise_influxdb/v1.9/administration/config-meta-nodes#lease-duration-1m0s)
+[lease-duration](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#lease-duration)
-- The default duration of the leases that data nodes acquire from the meta nodes. Leases automatically expire after the lease-duration is met.
Leases ensure that only one data node performs a given task at a time. For example, Continuous Queries use a lease so that all data nodes aren't running the same CQs at once.
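
For example, if your most frequent CQ runs every minute, the two options above might be set as in the following sketch (illustrative values only; the data node TOML section name is `[continuous_queries]` in the configuration file):

```toml
# Data node configuration file
[continuous_queries]
  # Set to the lowest interval at which your CQs run.
  run-interval = "1m"
```

```toml
# Meta node configuration file
[meta]
  # Leases expire automatically after this duration.
  lease-duration = "1m0s"
```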
diff --git a/content/enterprise_influxdb/v1.9/flux/_index.md b/content/enterprise_influxdb/v1.9/flux/_index.md index 2c74b958f..fa15e8668 100644 --- a/content/enterprise_influxdb/v1.9/flux/_index.md +++ b/content/enterprise_influxdb/v1.9/flux/_index.md @@ -26,13 +26,10 @@ filtering that data by the `cpu` measurement and the `cpu=cpu-total` tag, window and calculating the average of each window: ```js -from(bucket:"telegraf/autogen") - |> range(start:-1h) - |> filter(fn:(r) => - r._measurement == "cpu" and - r.cpu == "cpu-total" - ) - |> aggregateWindow(every: 1m, fn: mean) +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total") + |> aggregateWindow(every: 1m, fn: mean) ``` {{< children >}} diff --git a/content/enterprise_influxdb/v1.9/flux/flux-vs-influxql.md b/content/enterprise_influxdb/v1.9/flux/flux-vs-influxql.md index c7ee8915a..a449f95a6 100644 --- a/content/enterprise_influxdb/v1.9/flux/flux-vs-influxql.md +++ b/content/enterprise_influxdb/v1.9/flux/flux-vs-influxql.md @@ -41,23 +41,14 @@ This opens the door for really powerful and useful operations. ```js dataStream1 = from(bucket: "bucket1") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "network" and - r._field == "bytes-transferred" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "network" and r._field == "bytes-transferred") dataStream2 = from(bucket: "bucket1") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "httpd" and - r._field == "requests-per-sec" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "httpd" and r._field == "requests-per-sec") -join( - tables: {d1:dataStream1, d2:dataStream2}, - on: ["_time", "_stop", "_start", "host"] - ) +join(tables: {d1: dataStream1, d2: dataStream2}, on: ["_time", "_stop", "_start", "host"]) ``` @@ -76,31 +67,18 @@ joins them, then calculates the average amount of memory used per running proces ```js // Memory used (in bytes) memUsed = from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "mem" and - r._field == "used" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used") // Total processes running procTotal = from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "processes" and - r._field == "total" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "processes" and r._field == "total") // Join memory used with total processes and calculate // the average memory (in MB) used for running processes. -join( - tables: {mem:memUsed, proc:procTotal}, - on: ["_time", "_stop", "_start", "host"] - ) - |> map(fn: (r) => ({ - _time: r._time, - _value: (r._value_mem / r._value_proc) / 1000000 - }) -) +join(tables: {mem: memUsed, proc: procTotal}, on: ["_time", "_stop", "_start", "host"]) + |> map(fn: (r) => ({_time: r._time, _value: r._value_mem / r._value_proc / 1000000})) ``` ### Sort by tags @@ -110,13 +88,10 @@ Flux's [`sort()` function](/{{< latest "flux" >}}/stdlib/universe/sort) sorts re Depending on the column type, records are sorted lexicographically, numerically, or chronologically. 
```js
-from(bucket:"telegraf/autogen")
-  |> range(start:-12h)
-  |> filter(fn: (r) =>
-    r._measurement == "system" and
-    r._field == "uptime"
-  )
-  |> sort(columns:["region", "host", "_value"])
+from(bucket: "telegraf/autogen")
+    |> range(start: -12h)
+    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
+    |> sort(columns: ["region", "host", "_value"])
```

### Group by any column
@@ -127,9 +102,9 @@ to define which columns to group data by.

```js
from(bucket:"telegraf/autogen")
-  |> range(start:-12h)
-  |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime" )
-  |> group(columns:["host", "_value"])
+    |> range(start:-12h)
+    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime" )
+    |> group(columns:["host", "_value"])
```

### Window by calendar months and years
@@ -139,9 +114,9 @@ window and aggregate data by calendar month and year.

```js
from(bucket:"telegraf/autogen")
-  |> range(start:-1y)
-  |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
-  |> aggregateWindow(every: 1mo, fn: mean)
+    |> range(start:-1y)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" )
+    |> aggregateWindow(every: 1mo, fn: mean)
```

### Work with multiple data sources
@@ -160,19 +135,19 @@ import "sql"

csvData = csv.from(csv: rawCSV)
sqlData = sql.from(
-  driverName: "postgres",
-  dataSourceName: "postgresql://user:password@localhost",
-  query:"SELECT * FROM example_table"
+    driverName: "postgres",
+    dataSourceName: "postgresql://user:password@localhost",
+    query: "SELECT * FROM example_table",
)

data = from(bucket: "telegraf/autogen")
-  |> range(start: -24h)
-  |> filter(fn: (r) => r._measurement == "sensor")
+    |> range(start: -24h)
+    |> filter(fn: (r) => r._measurement == "sensor")

auxData = join(tables: {csv: csvData, sql: sqlData}, on: ["sensor_id"])
enrichedData = join(tables: {data: data, aux: auxData}, on: ["sensor_id"])

enrichedData
-  |> yield(name: "enriched_data")
+    |> yield(name: "enriched_data")
```

---

@@ -188,12 +163,9 @@ returns only data with time values in a specified hour range.

```js
from(bucket: "telegraf/autogen")
-  |> range(start: -1h)
-  |> filter(fn: (r) =>
-    r._measurement == "cpu" and
-    r.cpu == "cpu-total"
-  )
-  |> hourSelection(start: 9, stop: 17)
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total")
+    |> hourSelection(start: 9, stop: 17)
```

### Pivot
@@ -203,16 +175,9 @@ to pivot data tables by specifying `rowKey`, `columnKey`, and `valueColumn` para

```js
from(bucket: "telegraf/autogen")
-  |> range(start: -1h)
-  |> filter(fn: (r) =>
-    r._measurement == "cpu" and
-    r.cpu == "cpu-total"
-  )
-  |> pivot(
-    rowKey:["_time"],
-    columnKey: ["_field"],
-    valueColumn: "_value"
-  )
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total")
+    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
```

### Histograms
@@ -222,14 +187,9 @@ data to generate a cumulative histogram with support for other histogram types c

```js
from(bucket: "telegraf/autogen")
-  |> range(start: -1h)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "used_percent"
-  )
-  |> histogram(
-    buckets: [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
-  )
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
+    |> histogram(buckets: [10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
```

---

@@ -247,23 +207,19 @@ calculates the covariance between two data streams.
###### Covariance between two columns ```js from(bucket: "telegraf/autogen") - |> range(start:-5m) - |> covariance(columns: ["x", "y"]) + |> range(start: -5m) + |> covariance(columns: ["x", "y"]) ``` ###### Covariance between two streams of data ```js table1 = from(bucket: "telegraf/autogen") - |> range(start: -15m) - |> filter(fn: (r) => - r._measurement == "measurement_1" - ) + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "measurement_1") table2 = from(bucket: "telegraf/autogen") - |> range(start: -15m) - |> filter(fn: (r) => - r._measurement == "measurement_2" - ) + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "measurement_2") cov(x: table1, y: table2, on: ["_time", "_field"]) ``` @@ -277,12 +233,9 @@ operations like casting a boolean values to integers. ##### Cast boolean field values to integers ```js from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "m" and - r._field == "bool_field" - ) - |> toInt() + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "m" and r._field == "bool_field") + |> toInt() ``` ### String manipulation and data shaping @@ -295,17 +248,16 @@ functions in the string package allow for operations like string sanitization an import "strings" from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "weather" and - r._field == "temp" - ) - |> map(fn: (r) => ({ - r with - location: strings.toTitle(v: r.location), - sensor: strings.replaceAll(v: r.sensor, t: " ", u: "-"), - status: strings.substring(v: r.status, start: 0, end: 8) - })) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "weather" and r._field == "temp") + |> map( + fn: (r) => ({ + r with + location: strings.toTitle(v: r.location), + sensor: strings.replaceAll(v: r.sensor, t: " ", u: "-"), + status: strings.substring(v: r.status, start: 0, end: 8), + }) + ) ``` ### Work with geo-temporal data @@ -317,14 +269,11 @@ let you shape, filter, and group geo-temporal data. import "experimental/geo" from(bucket: "geo/autogen") - |> range(start: -1w) - |> filter(fn: (r) => r._measurement == "taxi") - |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 20) - |> geo.filterRows( - region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0}, - strict: true - ) - |> geo.asTracks(groupBy: ["fare-id"]) + |> range(start: -1w) + |> filter(fn: (r) => r._measurement == "taxi") + |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 20) + |> geo.filterRows(region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0}, strict: true) + |> geo.asTracks(groupBy: ["fare-id"]) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/get-started/query-influxdb.md b/content/enterprise_influxdb/v1.9/flux/get-started/query-influxdb.md index c113e09a7..6132de337 100644 --- a/content/enterprise_influxdb/v1.9/flux/get-started/query-influxdb.md +++ b/content/enterprise_influxdb/v1.9/flux/get-started/query-influxdb.md @@ -47,11 +47,11 @@ or **absolute** using [timestamps](/{{< latest "flux" >}}/spec/lexical-elements# ```js // Relative time range with start only. Stop defaults to now. from(bucket:"telegraf/autogen") - |> range(start: -1h) + |> range(start: -1h) // Relative time range with start and stop from(bucket:"telegraf/autogen") - |> range(start: -1h, stop: -10m) + |> range(start: -1h, stop: -10m) ``` > Relative ranges are relative to "now." 
@@ -59,7 +59,7 @@ from(bucket:"telegraf/autogen")

###### Example absolute time range
```js
from(bucket:"telegraf/autogen")
-  |> range(start: 2018-11-05T23:30:00Z, stop: 2018-11-06T00:00:00Z)
+    |> range(start: 2018-11-05T23:30:00Z, stop: 2018-11-06T00:00:00Z)
```

#### Use the following:
@@ -67,7 +67,7 @@ For this guide, use the relative time range, `-15m`, to limit query results to d

```js
from(bucket:"telegraf/autogen")
-  |> range(start: -15m)
+    |> range(start: -15m)
```

## 3. Filter your data
@@ -95,27 +95,19 @@ Use the `AND` relational operator to chain multiple filters.

For this example, filter by the `cpu` measurement, the `usage_system` field, and the `cpu-total` tag value:

```js
-from(bucket:"telegraf/autogen")
-  |> range(start: -15m)
-  |> filter(fn: (r) =>
-    r._measurement == "cpu" and
-    r._field == "usage_system" and
-    r.cpu == "cpu-total"
-  )
+from(bucket: "telegraf/autogen")
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
```

## 4. Yield your queried data
Use Flux's `yield()` function to output the filtered tables as the result of the query.

```js
-from(bucket:"telegraf/autogen")
-  |> range(start: -15m)
-  |> filter(fn: (r) =>
-    r._measurement == "cpu" and
-    r._field == "usage_system" and
-    r.cpu == "cpu-total"
-  )
-  |> yield()
+from(bucket: "telegraf/autogen")
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total")
+    |> yield()
```

> Chronograf and the `influx` CLI automatically assume a `yield()` function at
diff --git a/content/enterprise_influxdb/v1.9/flux/get-started/syntax-basics.md b/content/enterprise_influxdb/v1.9/flux/get-started/syntax-basics.md
index 36c983beb..c4d98048a 100644
--- a/content/enterprise_influxdb/v1.9/flux/get-started/syntax-basics.md
+++ b/content/enterprise_influxdb/v1.9/flux/get-started/syntax-basics.md
@@ -132,22 +132,13 @@ or more input data streams.

```js
timeRange = -1h

-cpuUsageUser =
-  from(bucket:"telegraf/autogen")
+cpuUsageUser = from(bucket: "telegraf/autogen")
    |> range(start: timeRange)
-    |> filter(fn: (r) =>
-      r._measurement == "cpu" and
-      r._field == "usage_user" and
-      r.cpu == "cpu-total"
-    )
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu == "cpu-total")

-memUsagePercent =
-  from(bucket:"telegraf/autogen")
+memUsagePercent = from(bucket: "telegraf/autogen")
    |> range(start: timeRange)
-    |> filter(fn: (r) =>
-      r._measurement == "mem" and
-      r._field == "used_percent"
-    )
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
```

These variables can be used in other functions, such as `join()`, while keeping the syntax minimal and flexible.
@@ -158,8 +149,7 @@ To do this, pass the input stream (`tables`) and the number of results to return
Then use Flux's `sort()` and `limit()` functions to find the top `n` results in the data set.

```js
-topN = (tables=<-, n) =>
-  tables
+topN = (tables=<-, n) => tables
    |> sort(desc: true)
    |> limit(n: n)
```

@@ -171,8 +161,8 @@ find the top five data points and yield the results.
```js cpuUsageUser - |> topN(n:5) - |> yield() + |> topN(n: 5) + |> yield() ``` {{% /tab-content %}} @@ -182,8 +172,14 @@ A common use case for variable assignments in Flux is creating variables for mul ```js timeRange = -1h -cpuUsageUser = from(bucket:"telegraf/autogen") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu == "cpu-total") -memUsagePercent = from(bucket:"telegraf/autogen") |> range(start: timeRange) |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + +cpuUsageUser = from(bucket: "telegraf/autogen") + |> range(start: timeRange) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu == "cpu-total") + +memUsagePercent = from(bucket: "telegraf/autogen") + |> range(start: timeRange) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") ``` These variables can be used in other functions, such as `join()`, while keeping the syntax minimal and flexible. diff --git a/content/enterprise_influxdb/v1.9/flux/get-started/transform-data.md b/content/enterprise_influxdb/v1.9/flux/get-started/transform-data.md index 810da2057..25f8bf766 100644 --- a/content/enterprise_influxdb/v1.9/flux/get-started/transform-data.md +++ b/content/enterprise_influxdb/v1.9/flux/get-started/transform-data.md @@ -27,13 +27,9 @@ Use the query built in the previous [Query data from InfluxDB](/enterprise_influ guide, but update the range to pull data from the last hour: ```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") ``` ## Flux functions @@ -65,14 +61,10 @@ including **calendar months (`1mo`)** and **years (`1y`)**. For this example, window data in five minute intervals (`5m`). ```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) - |> window(every: 5m) +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") + |> window(every: 5m) ``` As data is gathered into windows of time, each window is output as its own table. @@ -85,15 +77,11 @@ Flux aggregate functions take the `_value`s in each table and aggregate them in Use the [`mean()` function](/{{< latest "flux" >}}/stdlib/universe/mean) to average the `_value`s of each table. ```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) - |> window(every: 5m) - |> mean() +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") + |> window(every: 5m) + |> mean() ``` As rows in each window are aggregated, their output table contains only a single row with the aggregate value. @@ -112,16 +100,12 @@ To add one, use the [`duplicate()` function](/{{< latest "flux" >}}/stdlib/unive to duplicate the `_stop` column as the `_time` column for each windowed table. 
```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) - |> window(every: 5m) - |> mean() - |> duplicate(column: "_stop", as: "_time") +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") + |> window(every: 5m) + |> mean() + |> duplicate(column: "_stop", as: "_time") ``` ## Unwindow aggregate tables @@ -130,17 +114,13 @@ Use the `window()` function with the `every: inf` parameter to gather all points into a single, infinite window. ```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) - |> window(every: 5m) - |> mean() - |> duplicate(column: "_stop", as: "_time") - |> window(every: inf) +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") + |> window(every: 5m) + |> mean() + |> duplicate(column: "_stop", as: "_time") + |> window(every: inf) ``` Once ungrouped and combined into a single table, the aggregate data points will appear connected in your visualization. @@ -156,14 +136,10 @@ The same operation performed in this guide can be accomplished using the [`aggregateWindow()` function](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow). ```js -from(bucket:"telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) - |> aggregateWindow(every: 5m, fn: mean) +from(bucket: "telegraf/autogen") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") + |> aggregateWindow(every: 5m, fn: mean) ``` ## Congratulations! diff --git a/content/enterprise_influxdb/v1.9/flux/guides/_index.md b/content/enterprise_influxdb/v1.9/flux/guides/_index.md index 58127501c..0cb506e17 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/_index.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/_index.md @@ -23,11 +23,8 @@ which represents a basic query that filters data by measurement and field. 
```js data = from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") ``` {{% /note %}} diff --git a/content/enterprise_influxdb/v1.9/flux/guides/calculate-percentages.md b/content/enterprise_influxdb/v1.9/flux/guides/calculate-percentages.md index ece9acba0..25acb37c9 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/calculate-percentages.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/calculate-percentages.md @@ -36,10 +36,10 @@ _See [Pivot vs join](/enterprise_influxdb/v1.9/flux/guides/mathematic-operations ```js from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "m1" and r._field =~ /field[1-2]/ ) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with _value: r.field1 / r.field2 * 100.0 })) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "m1" and r._field =~ /field[1-2]/) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: r.field1 / r.field2 * 100.0})) ``` ## GPU monitoring example @@ -54,8 +54,8 @@ Data includes the following: ### Query mem_used and mem_total fields ```js from(bucket: "gpu-monitor") - |> range(start: 2020-01-01T00:00:00Z) - |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/) + |> range(start: 2020-01-01T00:00:00Z) + |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/) ``` ###### Returns the following stream of tables: @@ -86,7 +86,7 @@ Output includes `mem_used` and `mem_total` columns with values for each correspo ```js // ... - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") ``` ###### Returns the following: @@ -112,12 +112,14 @@ below casts integer field values to floats and multiplies by a float value (`100 ```js // ... 
- |> map(fn: (r) => ({ - _time: r._time, - _measurement: r._measurement, - _field: "mem_used_percent", - _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 - })) + |> map( + fn: (r) => ({ + _time: r._time, + _measurement: r._measurement, + _field: "mem_used_percent", + _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 + }) + ) ``` ##### Query results: @@ -133,15 +135,17 @@ below casts integer field values to floats and multiplies by a float value (`100 ### Full query ```js from(bucket: "gpu-monitor") - |> range(start: 2020-01-01T00:00:00Z) - |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/ ) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ - _time: r._time, - _measurement: r._measurement, - _field: "mem_used_percent", - _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 - })) + |> range(start: 2020-01-01T00:00:00Z) + |> filter(fn: (r) => r._measurement == "gpu" and r._field =~ /mem_/ ) + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map( + fn: (r) => ({ + _time: r._time, + _measurement: r._measurement, + _field: "mem_used_percent", + _value: float(v: r.mem_used) / float(v: r.mem_total) * 100.0 + }) + ) ``` ## Examples @@ -149,17 +153,11 @@ from(bucket: "gpu-monitor") #### Calculate percentages using multiple fields ```js from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> filter(fn: (r) => - r._field == "used_system" or - r._field == "used_user" or - r._field == "total" - ) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with - _value: float(v: r.used_system + r.used_user) / float(v: r.total) * 100.0 - })) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> filter(fn: (r) => r._field == "used_system" or r._field == "used_user" or r._field == "total") + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: float(v: r.used_system + r.used_user) / float(v: r.total) * 100.0})) ``` #### Calculate percentages using multiple measurements @@ -173,14 +171,11 @@ from(bucket: "db/rp") ```js from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => - (r._measurement == "m1" or r._measurement == "m2") and - (r._field == "field1" or r._field == "field2") - ) - |> group() - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with _value: r.field1 / r.field2 * 100.0 })) + |> range(start: -1h) + |> filter(fn: (r) => (r._measurement == "m1" or r._measurement == "m2") and (r._field == "field1" or r._field == "field2")) + |> group() + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: r.field1 / r.field2 * 100.0})) ``` #### Calculate percentages using multiple data sources @@ -193,18 +188,15 @@ pgPass = secrets.get(key: "POSTGRES_PASSWORD") pgHost = secrets.get(key: "POSTGRES_HOST") t1 = sql.from( - driverName: "postgres", - dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", - query:"SELECT id, name, available FROM exampleTable" + driverName: "postgres", + dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", + query: "SELECT id, name, available FROM exampleTable", ) t2 = from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) + |> range(start: -1h) + 
|> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") join(tables: {t1: t1, t2: t2}, on: ["id"]) - |> map(fn: (r) => ({ r with _value: r._value_t2 / r.available_t1 * 100.0 })) + |> map(fn: (r) => ({r with _value: r._value_t2 / r.available_t1 * 100.0})) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/conditional-logic.md b/content/enterprise_influxdb/v1.9/flux/guides/conditional-logic.md index 11bfbfc67..0f207b8b9 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/conditional-logic.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/conditional-logic.md @@ -44,10 +44,14 @@ Flux evaluates statements in order and stops evaluating once a condition matches For example, given the following statement: ```js -if r._value > 95.0000001 and r._value <= 100.0 then "critical" -else if r._value > 85.0000001 and r._value <= 95.0 then "warning" -else if r._value > 70.0000001 and r._value <= 85.0 then "high" -else "normal" +if r._value > 95.0000001 and r._value <= 100.0 then + "critical" +else if r._value > 85.0000001 and r._value <= 95.0 then + "warning" +else if r._value > 70.0000001 and r._value <= 85.0 then + "high" +else + "normal" ``` When `r._value` is 96, the output is "critical" and the remaining conditions are not evaluated. @@ -64,7 +68,7 @@ The following example sets the `overdue` variable based on the `dueDate` variable's relation to `now()`. ```js -dueDate = 2019-05-01 +dueDate = 2019-05-01T00:00:00Z overdue = if dueDate < now() then true else false ``` @@ -80,16 +84,17 @@ The following example uses an example `metric` variable to change how the query metric = "Memory" from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => - if v.metric == "Memory" - then r._measurement == "mem" and r._field == "used_percent" - else if v.metric == "CPU" - then r._measurement == "cpu" and r._field == "usage_user" - else if v.metric == "Disk" - then r._measurement == "disk" and r._field == "used_percent" - else r._measurement != "" - ) + |> range(start: -1h) + |> filter( + fn: (r) => if v.metric == "Memory" then + r._measurement == "mem" and r._field == "used_percent" + else if v.metric == "CPU" then + r._measurement == "cpu" and r._field == "usage_user" + else if v.metric == "Disk" then + r._measurement == "disk" and r._field == "used_percent" + else + r._measurement != "", + ) ``` ### Conditionally transform column values with map() @@ -105,35 +110,42 @@ It sets the `level` column to a specific string based on `_value` column. 
{{% code-tab-content %}} ```js from(bucket: "telegraf/autogen") - |> range(start: -5m) - |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" ) - |> map(fn: (r) => ({ - r with - level: - if r._value >= 95.0000001 and r._value <= 100.0 then "critical" - else if r._value >= 85.0000001 and r._value <= 95.0 then "warning" - else if r._value >= 70.0000001 and r._value <= 85.0 then "high" - else "normal" - }) - ) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> map( + fn: (r) => ({r with + level: if r._value >= 95.0000001 and r._value <= 100.0 then + "critical" + else if r._value >= 85.0000001 and r._value <= 95.0 then + "warning" + else if r._value >= 70.0000001 and r._value <= 85.0 then + "high" + else + "normal", + }), + ) ``` {{% /code-tab-content %}} {{% code-tab-content %}} ```js from(bucket: "telegraf/autogen") - |> range(start: -5m) - |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" ) - |> map(fn: (r) => ({ - // Retain all existing columns in the mapped row - r with - // Set the level column value based on the _value column - level: - if r._value >= 95.0000001 and r._value <= 100.0 then "critical" - else if r._value >= 85.0000001 and r._value <= 95.0 then "warning" - else if r._value >= 70.0000001 and r._value <= 85.0 then "high" - else "normal" - }) - ) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> map( + fn: (r) => ({ + // Retain all existing columns in the mapped row + r with + // Set the level column value based on the _value column + level: if r._value >= 95.0000001 and r._value <= 100.0 then + "critical" + else if r._value >= 85.0000001 and r._value <= 95.0 then + "warning" + else if r._value >= 70.0000001 and r._value <= 85.0 then + "high" + else + "normal", + }), + ) ``` {{% /code-tab-content %}} @@ -154,18 +166,20 @@ functions to count the number of records in every five minute window that exceed threshold = 65.0 from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" ) - |> aggregateWindow( - every: 5m, - fn: (column, tables=<-) => tables |> reduce( - identity: {above_threshold_count: 0.0}, - fn: (r, accumulator) => ({ - above_threshold_count: - if r._value >= threshold then accumulator.above_threshold_count + 1.0 - else accumulator.above_threshold_count + 0.0 - }) - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> aggregateWindow( + every: 5m, + fn: (column, tables=<-) => tables + |> reduce( + identity: {above_threshold_count: 0.0}, + fn: (r, accumulator) => ({ + above_threshold_count: if r._value >= threshold then + accumulator.above_threshold_count + 1.0 + else + accumulator.above_threshold_count + 0.0, + }), + ), ) ``` {{% /code-tab-content %}} @@ -174,23 +188,25 @@ from(bucket: "telegraf/autogen") threshold = 65.0 from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" ) - // Aggregate data into 5 minute windows using a custom reduce() function - |> aggregateWindow( - every: 5m, - // Use a custom function in the fn parameter. - // The aggregateWindow fn parameter requires 'column' and 'tables' parameters. 
- fn: (column, tables=<-) => tables |> reduce( - identity: {above_threshold_count: 0.0}, - fn: (r, accumulator) => ({ - // Conditionally increment above_threshold_count if - // r.value exceeds the threshold - above_threshold_count: - if r._value >= threshold then accumulator.above_threshold_count + 1.0 - else accumulator.above_threshold_count + 0.0 - }) - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + // Aggregate data into 5 minute windows using a custom reduce() function + |> aggregateWindow( + every: 5m, + // Use a custom function in the fn parameter. + // The aggregateWindow fn parameter requires 'column' and 'tables' parameters. + fn: (column, tables=<-) => tables + |> reduce( + identity: {above_threshold_count: 0.0}, + fn: (r, accumulator) => ({ + // Conditionally increment above_threshold_count if + // r.value exceeds the threshold + above_threshold_count: if r._value >= threshold then + accumulator.above_threshold_count + 1.0 + else + accumulator.above_threshold_count + 0.0, + }), + ), ) ``` {{% /code-tab-content %}} diff --git a/content/enterprise_influxdb/v1.9/flux/guides/cumulativesum.md b/content/enterprise_influxdb/v1.9/flux/guides/cumulativesum.md index 7bf66f48d..78e4872b8 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/cumulativesum.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/cumulativesum.md @@ -48,7 +48,7 @@ The examples below use the [example data variable](/enterprise_influxdb/v1.9/flu ##### Calculate the running total of values ```js data - |> cumulativeSum() + |> cumulativeSum() ``` ## Use cumulativeSum() with aggregateWindow() @@ -64,6 +64,6 @@ then calculate the running total of the aggregate values with `cumulativeSum()`. ```js data - |> aggregateWindow(every: 5m, fn: sum) - |> cumulativeSum() + |> aggregateWindow(every: 5m, fn: sum) + |> cumulativeSum() ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/execute-queries.md b/content/enterprise_influxdb/v1.9/flux/guides/execute-queries.md index 88f58b70f..bd9a5c030 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/execute-queries.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/execute-queries.md @@ -31,6 +31,69 @@ on your InfluxDB instance, use the `-username` flag to provide your InfluxDB use the `-password` flag to provide your password. {{% /note %}} +## Influx CLI +To start an interactive Flux read-eval-print-loop (REPL) with the InfluxDB Enterprise 1.9+ +`influx` CLI, run the `influx` command with the following flags: + +- `-type=flux` +- `-path-prefix=/api/v2/query` + +{{% note %}} +If [authentication is enabled](/enterprise_influxdb/v1.9/administration/authentication_and_authorization) +on your InfluxDB instance, use the `-username` flag to provide your InfluxDB username and +the `-password` flag to provide your password. +{{% /note %}} + +##### Enter an interactive Flux REPL +{{< code-tabs-wrapper >}} +{{% code-tabs %}} +[No Auth](#) +[Auth Enabled](#) +{{% /code-tabs %}} +{{% code-tab-content %}} +```bash +influx -type=flux -path-prefix=/api/v2/query +``` +{{% /code-tab-content %}} +{{% code-tab-content %}} +```bash +influx -type=flux \ + -path-prefix=/api/v2/query \ + -username myuser \ + -password PasSw0rd +``` +{{% /code-tab-content %}} +{{< /code-tabs-wrapper >}} + +Any Flux query can be executed within the REPL. + +### Submit a Flux query via parameter +Flux queries can also be passed to the Flux REPL as a parameter using the `influx` CLI's `-type=flux` option and the `-execute` parameter. 
+The accompanying string is executed as a Flux query and results are output in your terminal.
+
+{{< code-tabs-wrapper >}}
+{{% code-tabs %}}
+[No Auth](#)
+[Auth Enabled](#)
+{{% /code-tabs %}}
+{{% code-tab-content %}}
+```bash
+influx -type=flux \
+  -path-prefix=/api/v2/query \
+  -execute ''
+```
+{{% /code-tab-content %}}
+{{% code-tab-content %}}
+```bash
+influx -type=flux \
+  -path-prefix=/api/v2/query \
+  -username myuser \
+  -password PasSw0rd \
+  -execute ''
+```
+{{% /code-tab-content %}}
+{{< /code-tabs-wrapper >}}
+
 ### Submit a Flux query via STDIN
 Flux queries can be piped into the `influx` CLI via STDIN.
 Query results are output in your terminal.
@@ -42,12 +105,15 @@ Query results are output in your terminal.
 {{% /code-tabs %}}
 {{% code-tab-content %}}
 ```bash
-echo '' | influx -type=flux
+echo '' | influx -type=flux -path-prefix=/api/v2/query
 ```
 {{% /code-tab-content %}}
 {{% code-tab-content %}}
 ```bash
-echo '' | influx -type=flux -username myuser -password PasSw0rd
+echo '' | influx -type=flux \
+  -path-prefix=/api/v2/query \
+  -username myuser \
+  -password PasSw0rd
 ```
 {{% /code-tab-content %}}
 {{< /code-tabs-wrapper >}}
@@ -78,8 +144,8 @@ curl -XPOST localhost:8086/api/v2/query -sS \
   -H 'Accept:application/csv' \
   -H 'Content-type:application/vnd.flux' \
   -d 'from(bucket:"telegraf")
-      |> range(start:-5m)
-      |> filter(fn:(r) => r._measurement == "cpu")'
+    |> range(start:-5m)
+    |> filter(fn:(r) => r._measurement == "cpu")'
 ```
 {{% /code-tab-content %}}
 {{% code-tab-content %}}
@@ -89,8 +155,8 @@ curl -XPOST localhost:8086/api/v2/query -sS \
   -H 'Content-type:application/vnd.flux' \
   -H 'Authorization: Token :' \
   -d 'from(bucket:"telegraf")
-      |> range(start:-5m)
-      |> filter(fn:(r) => r._measurement == "cpu")'
+    |> range(start:-5m)
+    |> filter(fn:(r) => r._measurement == "cpu")'
 ```
 {{% /code-tab-content %}}
 {{< /code-tabs-wrapper >}}
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/exists.md b/content/enterprise_influxdb/v1.9/flux/guides/exists.md
index 24bcf4db0..3cd971ea6 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/exists.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/exists.md
@@ -47,36 +47,38 @@ to check if a row includes a column or if the value for that column is `null`.
 #### Filter null values
 ```js
 from(bucket: "db/rp")
-  |> range(start: -5m)
-  |> filter(fn: (r) => exists r._value)
+    |> range(start: -5m)
+    |> filter(fn: (r) => exists r._value)
 ```
 
 #### Map values based on existence
 ```js
 from(bucket: "default")
-  |> range(start: -30s)
-  |> map(fn: (r) => ({
-    r with
-    human_readable:
-      if exists r._value then "${r._field} is ${string(v:r._value)}."
-      else "${r._field} has no value."
-  }))
+    |> range(start: -30s)
+    |> map(
+        fn: (r) => ({r with
+            human_readable: if exists r._value then
+                "${r._field} is ${string(v: r._value)}."
+ else + "${r._field} has no value.", + }), + ) ``` #### Ignore null values in a custom aggregate function ```js -customSumProduct = (tables=<-) => - tables +customSumProduct = (tables=<-) => tables |> reduce( - identity: {sum: 0.0, product: 1.0}, - fn: (r, accumulator) => ({ - r with - sum: - if exists r._value then r._value + accumulator.sum - else accumulator.sum, - product: - if exists r._value then r.value * accumulator.product - else accumulator.product - }) + identity: {sum: 0.0, product: 1.0}, + fn: (r, accumulator) => ({r with + sum: if exists r._value then + r._value + accumulator.sum + else + accumulator.sum, + product: if exists r._value then + r.value * accumulator.product + else + accumulator.product, + }), ) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/fill.md b/content/enterprise_influxdb/v1.9/flux/guides/fill.md index 243e432da..f4f5cbba6 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/fill.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/fill.md @@ -23,12 +23,12 @@ to replace _null_ values with: ```js data - |> fill(usePrevious: true) + |> fill(usePrevious: true) // OR data - |> fill(value: 0.0) + |> fill(value: 0.0) ``` {{% note %}} @@ -48,7 +48,7 @@ Values remain _null_ if there is no previous non-null value in the table. ```js data - |> fill(usePrevious: true) + |> fill(usePrevious: true) ``` {{< flex >}} @@ -83,7 +83,7 @@ of the [column](/{{< latest "flux" >}}/stdlib/universe/fill/#column)._ ```js data - |> fill(value: 0.0) + |> fill(value: 0.0) ``` {{< flex >}} diff --git a/content/enterprise_influxdb/v1.9/flux/guides/first-last.md b/content/enterprise_influxdb/v1.9/flux/guides/first-last.md index 23b878d76..7a76078fd 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/first-last.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/first-last.md @@ -20,12 +20,12 @@ to return the first or last record in an input table. ```js data - |> first() + |> first() // OR data - |> last() + |> last() ``` {{% note %}} @@ -120,8 +120,8 @@ point using aggregate or selector functions, and then removes the time-based seg {{% code-tab-content %}} ```js |> aggregateWindow( - every: 1h, - fn: first + every: 1h, + fn: first, ) ``` | _time | _value | @@ -133,8 +133,8 @@ point using aggregate or selector functions, and then removes the time-based seg {{% code-tab-content %}} ```js |> aggregateWindow( - every: 1h, - fn: last + every: 1h, + fn: last, ) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/flux-in-dashboards.md b/content/enterprise_influxdb/v1.9/flux/guides/flux-in-dashboards.md index f3b8d7e19..712fcb943 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/flux-in-dashboards.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/flux-in-dashboards.md @@ -85,12 +85,9 @@ The following example uses Chronograf's [predefined template variables](#predefi ```js from(bucket: "telegraf/autogen") - |> filter(fn: (r) => r._measurement == "cpu") - |> range( - start: dashboardTime, - stop: upperDashboardTime - ) - window(every: autoInterval) + |> filter(fn: (r) => r._measurement == "cpu") + |> range(start: dashboardTime, stop: upperDashboardTime) + |> window(every: autoInterval) ``` ### Predefined template variables @@ -102,9 +99,7 @@ It should be used to define the `start` parameter of the `range()` function. ```js dataSet - |> range( - start: dashboardTime - ) + |> range(start: dashboardTime) ``` #### upperDashboardTime @@ -114,10 +109,7 @@ It should be used to define the `stop` parameter of the `range()` function. 
```js dataSet - |> range( - start: dashboardTime, - stop: upperDashboardTime - ) + |> range(start: dashboardTime, stop: upperDashboardTime) ``` > As a best practice, always set the `stop` parameter of the `range()` function to `upperDashboardTime` in cell queries. > Without it, `stop` defaults to "now" and the absolute upper range bound selected in the time dropdown is not honored, @@ -131,14 +123,8 @@ It's typically used to align window intervals created in ```js dataSet - |> range( - start: dashboardTime, - stop: upperDashboardTime - ) - |> aggregateWindow( - every: autoInterval, - fn: mean - ) + |> range(start: dashboardTime, stop: upperDashboardTime) + |> aggregateWindow(every: autoInterval, fn: mean) ``` ### Custom template variables diff --git a/content/enterprise_influxdb/v1.9/flux/guides/geo/_index.md b/content/enterprise_influxdb/v1.9/flux/guides/geo/_index.md index 43ed26e22..6cd580026 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/geo/_index.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/geo/_index.md @@ -86,6 +86,6 @@ Use Flux to query the bird migration data and assign it to the `sampleGeoData` v ```js sampleGeoData = from(bucket: "db/rp") - |> range(start: 2019-01-01T00:00:00Z, stop: 2019-12-31T23:59:59Z) - |> filter(fn: (r) => r._measurement == "migration") + |> range(start: 2019-01-01T00:00:00Z, stop: 2019-12-31T23:59:59Z) + |> filter(fn: (r) => r._measurement == "migration") ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/geo/filter-by-region.md b/content/enterprise_influxdb/v1.9/flux/guides/geo/filter-by-region.md index 9c62cf7dc..3868b6d51 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/geo/filter-by-region.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/geo/filter-by-region.md @@ -37,10 +37,7 @@ and queries data points **within 200km of Cairo, Egypt**: import "experimental/geo" sampleGeoData - |> geo.filterRows( - region: {lat: 30.04, lon: 31.23, radius: 200.0}, - strict: true - ) + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}, strict: true) ``` ## Define a geographic region @@ -62,10 +59,10 @@ Define a box-shaped region by specifying a record containing the following prope ##### Example box-shaped region ```js { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 + minLat: 40.51757813, + maxLat: 40.86914063, + minLon: -73.65234375, + maxLon: -72.94921875, } ``` @@ -79,9 +76,9 @@ Define a circular region by specifying a record containing the following propert ##### Example circular region ```js { - lat: 40.69335938, - lon: -73.30078125, - radius: 20.0 + lat: 40.69335938, + lon: -73.30078125, + radius: 20.0, } ``` @@ -99,11 +96,11 @@ each point in the polygon: ##### Example polygonal region ```js { - points: [ - {lat: 40.671659, lon: -73.936631}, - {lat: 40.706543, lon: -73.749177}, - {lat: 40.791333, lon: -73.880327} - ] + points: [ + {lat: 40.671659, lon: -73.936631}, + {lat: 40.706543, lon: -73.749177}, + {lat: 40.791333, lon: -73.880327}, + ] } ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/geo/group-geo-data.md b/content/enterprise_influxdb/v1.9/flux/guides/geo/group-geo-data.md index 46b5a86c5..6fc55b102 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/geo/group-geo-data.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/geo/group-geo-data.md @@ -45,11 +45,8 @@ to query data points within 200km of Cairo, Egypt and group them by geographic a import "experimental/geo" sampleGeoData - |> geo.filterRows(region: {lat: 30.04, lon: 
31.23, radius: 200.0}) - |> geo.groupByArea( - newColumn: "geoArea", - level: 5 - ) + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.groupByArea(newColumn: "geoArea", level: 5) ``` ### Group data by track or route @@ -68,9 +65,6 @@ to each bird: import "experimental/geo" sampleGeoData - |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) - |> geo.asTracks( - groupBy: ["id"], - sortBy: ["_time"] - ) + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.asTracks(groupBy: ["id"], sortBy: ["_time"]) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/geo/shape-geo-data.md b/content/enterprise_influxdb/v1.9/flux/guides/geo/shape-geo-data.md index eabb6e0c6..dfb937202 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/geo/shape-geo-data.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/geo/shape-geo-data.md @@ -39,13 +39,9 @@ into row-wise sets, and generate S2 cell ID tokens for each point. import "experimental/geo" from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.shapeData( - latField: "latitude", - lonField: "longitude", - level: 10 - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 10) ``` ## Generate S2 cell ID tokens @@ -108,12 +104,10 @@ to pivot **lat** and **lon** fields into row-wise sets: import "experimental/geo" from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.toRows() - |> map(fn: (r) => ({ r with - s2_cell_id: geo.s2CellIDToken(point: {lon: r.lon, lat: r.lat}, level: 10) - })) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.toRows() + |> map(fn: (r) => ({r with s2_cell_id: geo.s2CellIDToken(point: {lon: r.lon, lat: r.lat}, level: 10)})) ``` {{% note %}} diff --git a/content/enterprise_influxdb/v1.9/flux/guides/group-data.md b/content/enterprise_influxdb/v1.9/flux/guides/group-data.md index 5a9d9660b..bfded2f7b 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/group-data.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/group-data.md @@ -43,7 +43,7 @@ group key for output tables, i.e. grouping records based on values for specific ###### group() example ```js dataStream - |> group(columns: ["cpu", "host"]) + |> group(columns: ["cpu", "host"]) ``` ###### Resulting group key @@ -72,12 +72,9 @@ It uses a regular expression to filter only numbered cores. ```js dataSet = from(bucket: "db/rp") - |> range(start: -2m) - |> filter(fn: (r) => - r._field == "usage_system" and - r.cpu =~ /cpu[0-9*]/ - ) - |> drop(columns: ["host"]) + |> range(start: -2m) + |> filter(fn: (r) => r._field == "usage_system" and r.cpu =~ /cpu[0-9*]/) + |> drop(columns: ["host"]) ``` {{% note %}} @@ -167,7 +164,7 @@ Group the `dataSet` stream by the `cpu` column. ```js dataSet - |> group(columns: ["cpu"]) + |> group(columns: ["cpu"]) ``` This won't actually change the structure of the data since it already has `cpu` @@ -256,7 +253,7 @@ Grouping data by the `_time` column is a good illustration of how grouping chang ```js dataSet - |> group(columns: ["_time"]) + |> group(columns: ["_time"]) ``` When grouping by `_time`, all records that share a common `_time` value are grouped into individual tables. 
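A quick way to see this regrouping in action (a minimal sketch, assuming the same `dataSet` variable defined earlier in that guide) is to count how many records land in each per-timestamp table:

```js
dataSet
    |> group(columns: ["_time"])
    |> count()
```

Each per-timestamp table then collapses to a single row whose `_value` is the number of records that share that timestamp.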
@@ -383,9 +380,9 @@ If you're interested in running and visualizing this yourself, here's what the q
 
 ```js
 dataSet
-  |> group(columns: ["_time"])
-  |> mean()
-  |> group(columns: ["_value", "_time"], mode: "except")
+    |> group(columns: ["_time"])
+    |> mean()
+    |> group(columns: ["_value", "_time"], mode: "except")
 ```
 {{% /note %}}
 
@@ -394,7 +391,7 @@ Group by the `cpu` and `_time` columns.
 
 ```js
 dataSet
-  |> group(columns: ["cpu", "_time"])
+    |> group(columns: ["cpu", "_time"])
 ```
 
 This outputs a table for every unique `cpu` and `_time` combination:
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/histograms.md b/content/enterprise_influxdb/v1.9/flux/guides/histograms.md
index a0c3ac66e..47da6ee77 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/histograms.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/histograms.md
@@ -25,13 +25,10 @@ In the histogram output, a column is added (le) that represents the upper bounds
 Bin counts are cumulative.
 
 ```js
-from(bucket:"telegraf/autogen")
-  |> range(start: -5m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "used_percent"
-  )
-  |> histogram(bins: [0.0, 10.0, 20.0, 30.0])
+from(bucket: "telegraf/autogen")
+    |> range(start: -5m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
+    |> histogram(bins: [0.0, 10.0, 20.0, 30.0])
 ```
 
 > Values output by the `histogram` function represent points of data aggregated over time.
@@ -63,20 +60,10 @@ logarithmicBins(start: 1.0, factor: 2.0, count: 10, infinity: true)
 
 ### Generating a histogram with linear bins
 ```js
-from(bucket:"telegraf/autogen")
-  |> range(start: -5m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "used_percent"
-  )
-  |> histogram(
-    bins: linearBins(
-      start:65.5,
-      width: 0.5,
-      count: 20,
-      infinity:false
-    )
-  )
+from(bucket: "telegraf/autogen")
+    |> range(start: -5m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
+    |> histogram(bins: linearBins(start: 65.5, width: 0.5, count: 20, infinity: false))
 ```
 
 ###### Output table
@@ -108,20 +95,10 @@ Table: keys: [_start, _stop, _field, _measurement, host]
 
 ### Generating a histogram with logarithmic bins
 ```js
-from(bucket:"telegraf/autogen")
-  |> range(start: -5m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "used_percent"
-  )
-  |> histogram(
-    bins: logarithmicBins(
-      start:0.5,
-      factor: 2.0,
-      count: 10,
-      infinity:false
-    )
-  )
+from(bucket: "telegraf/autogen")
+    |> range(start: -5m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
+    |> histogram(bins: logarithmicBins(start: 0.5, factor: 2.0, count: 10, infinity: false))
 ```
 
 ###### Output table
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/increase.md b/content/enterprise_influxdb/v1.9/flux/guides/increase.md
index 233347915..e8cc5efd9 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/increase.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/increase.md
@@ -23,7 +23,7 @@ wrap over time or periodically reset.
 
 ```js
 data
-  |> increase()
+    |> increase()
 ```
 
 `increase()` returns a cumulative sum of **non-negative** differences between rows in a table.
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/join.md b/content/enterprise_influxdb/v1.9/flux/guides/join.md
index d99f1b84a..ce9fcc4e2 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/join.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/join.md
@@ -40,11 +40,8 @@ This returns the amount of memory (in bytes) used.
###### memUsed stream definition
 ```js
 memUsed = from(bucket: "db/rp")
-  |> range(start: -5m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "used"
-  )
+    |> range(start: -5m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used")
 ```
 
 {{% truncate %}}
@@ -94,11 +91,8 @@ This returns the number of running processes.
 ###### procTotal stream definition
 ```js
 procTotal = from(bucket: "db/rp")
-  |> range(start: -5m)
-  |> filter(fn: (r) =>
-    r._measurement == "processes" and
-    r._field == "total"
-  )
+    |> range(start: -5m)
+    |> filter(fn: (r) => r._measurement == "processes" and r._field == "total")
 ```
 
 {{% truncate %}}
@@ -154,10 +148,7 @@ An array of strings defining the columns on which the tables will be joined.
 _**Both tables must have all columns specified in this list.**_
 
 ```js
-join(
-  tables: {mem:memUsed, proc:procTotal},
-  on: ["_time", "_stop", "_start", "host"]
-)
+join(tables: {mem: memUsed, proc: procTotal}, on: ["_time", "_stop", "_start", "host"])
 ```
 
 {{% truncate %}}
@@ -219,12 +210,8 @@ column and dividing `_value_mem` by `_value_proc` and mapping it to a new
 `_value` column.
 
 ```js
-join(tables: {mem:memUsed, proc:procTotal}, on: ["_time", "_stop", "_start", "host"])
-  |> map(fn: (r) => ({
-    _time: r._time,
-    _value: r._value_mem / r._value_proc
-  })
-  )
+join(tables: {mem: memUsed, proc: procTotal}, on: ["_time", "_stop", "_start", "host"])
+    |> map(fn: (r) => ({_time: r._time, _value: r._value_mem / r._value_proc}))
 ```
 
 {{% truncate %}}
@@ -277,35 +264,21 @@ The results are grouped by cluster ID so you can make comparisons across cluster
 
 ```js
 batchSize = (cluster_id, start=-1m, interval=10s) => {
-    httpd = from(bucket:"telegraf")
-      |> range(start:start)
-      |> filter(fn:(r) =>
-        r._measurement == "influxdb_httpd" and
-        r._field == "writeReq" and
-        r.cluster_id == cluster_id
-      )
-      |> aggregateWindow(every: interval, fn: mean)
-      |> derivative(nonNegative:true,unit:60s)
+    httpd = from(bucket: "telegraf")
+        |> range(start: start)
+        |> filter(fn: (r) => r._measurement == "influxdb_httpd" and r._field == "writeReq" and r.cluster_id == cluster_id)
+        |> aggregateWindow(every: interval, fn: mean)
+        |> derivative(nonNegative: true, unit: 60s)
 
-    write = from(bucket:"telegraf")
-      |> range(start:start)
-      |> filter(fn:(r) =>
-        r._measurement == "influxdb_write" and
-        r._field == "pointReq" and
-        r.cluster_id == cluster_id
-      )
-      |> aggregateWindow(every: interval, fn: max)
-      |> derivative(nonNegative:true,unit:60s)
+    write = from(bucket: "telegraf")
+        |> range(start: start)
+        |> filter(fn: (r) => r._measurement == "influxdb_write" and r._field == "pointReq" and r.cluster_id == cluster_id)
+        |> aggregateWindow(every: interval, fn: max)
+        |> derivative(nonNegative: true, unit: 60s)
 
-    return join(
-      tables:{httpd:httpd, write:write},
-      on:["_time","_stop","_start","host"]
-    )
-    |> map(fn:(r) => ({
-      _time: r._time,
-      _value: r._value_httpd / r._value_write,
-    }))
-    |> group(columns: cluster_id)
+    return join(tables: {httpd: httpd, write: write}, on: ["_time", "_stop", "_start", "host"])
+        |> map(fn: (r) => ({_time: r._time, _value: r._value_httpd / r._value_write}))
+        |> group(columns: ["cluster_id"])
 }
 
 batchSize(cluster_id: "enter cluster id here")
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/manipulate-timestamps.md b/content/enterprise_influxdb/v1.9/flux/guides/manipulate-timestamps.md
index 96a046102..132b4740d 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/manipulate-timestamps.md
+++ 
b/content/enterprise_influxdb/v1.9/flux/guides/manipulate-timestamps.md
@@ -116,7 +116,7 @@ operations where points should align by time, but timestamps vary slightly.
 
 ```js
 data
-  |> truncateTimeColumn(unit: 1m)
+    |> truncateTimeColumn(unit: 1m)
 ```
 
 {{< flex >}}
@@ -160,10 +160,7 @@ By using `experimental.addDuration()`, you accept the
 ```js
 import "experimental"
 
-experimental.addDuration(
-  d: 6h,
-  to: 2019-09-16T12:00:00Z,
-)
+experimental.addDuration(d: 6h, to: 2019-09-16T12:00:00Z)
 
 // Returns 2019-09-16T18:00:00.000000000Z
 ```
@@ -180,10 +177,7 @@ By using `experimental.subDuration()`, you accept the
 ```js
 import "experimental"
 
-experimental.subDuration(
-  d: 6h,
-  from: 2019-09-16T12:00:00Z,
-)
+experimental.subDuration(d: 6h, from: 2019-09-16T12:00:00Z)
 
 // Returns 2019-09-16T06:00:00.000000000Z
 ```
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/mathematic-operations.md b/content/enterprise_influxdb/v1.9/flux/guides/mathematic-operations.md
index 1b2f421d8..489587bb4 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/mathematic-operations.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/mathematic-operations.md
@@ -102,16 +102,11 @@ The example `multiplyByX()` function below includes:
 It also multiplies the `_value` column by `x`.
 
 ```js
-multiplyByX = (x, tables=<-) =>
-  tables
-    |> map(fn: (r) => ({
-      r with
-      _value: r._value * x
-    })
-  )
+multiplyByX = (x, tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: r._value * x}))
 
 data
-  |> multiplyByX(x: 10)
+    |> multiplyByX(x: 10)
 ```
 
 ## Examples
@@ -125,31 +120,19 @@ a new `_value` by dividing the original `_value` by 1073741824.
 
 ```js
 from(bucket: "db/rp")
-  |> range(start: -10m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field == "active"
-  )
-  |> map(fn: (r) => ({
-    r with
-    _value: r._value / 1073741824
-  })
-  )
+    |> range(start: -10m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field == "active")
+    |> map(fn: (r) => ({r with _value: r._value / 1073741824}))
 ```
 
 You could turn that same calculation into a function:
 
 ```js
-bytesToGB = (tables=<-) =>
-  tables
-    |> map(fn: (r) => ({
-      r with
-      _value: r._value / 1073741824
-    })
-    )
+bytesToGB = (tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: r._value / 1073741824}))
 
 data
-  |> bytesToGB()
+    |> bytesToGB()
 ```
 
 #### Include partial gigabytes
@@ -159,13 +142,8 @@ To calculate partial GBs, convert the `_value` column and its values to floats u
 and format the denominator in the division operation as a float.
 
 ```js
-bytesToGB = (tables=<-) =>
-  tables
-    |> map(fn: (r) => ({
-      r with
-      _value: float(v: r._value) / 1073741824.0
-    })
-    )
+bytesToGB = (tables=<-) => tables
+    |> map(fn: (r) => ({r with _value: float(v: r._value) / 1073741824.0}))
 ```
 
 ### Calculate a percentage
@@ -194,10 +172,8 @@ Use `join()` when querying data from different buckets or data sources.
##### Pivot fields into columns for mathematic calculations ```js data - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with - _value: (r.field1 + r.field2) / r.field3 * 100.0 - })) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: (r.field1 + r.field2) / r.field3 * 100.0})) ``` ##### Join multiple data sources for mathematic calculations @@ -210,18 +186,15 @@ pgPass = secrets.get(key: "POSTGRES_PASSWORD") pgHost = secrets.get(key: "POSTGRES_HOST") t1 = sql.from( - driverName: "postgres", - dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", - query:"SELECT id, name, available FROM exampleTable" + driverName: "postgres", + dataSourceName: "postgresql://${pgUser}:${pgPass}@${pgHost}", + query: "SELECT id, name, available FROM exampleTable", ) t2 = from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") join(tables: {t1: t1, t2: t2}, on: ["id"]) - |> map(fn: (r) => ({ r with _value: r._value_t2 / r.available_t1 * 100.0 })) + |> map(fn: (r) => ({r with _value: r._value_t2 / r.available_t1 * 100.0})) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/median.md b/content/enterprise_influxdb/v1.9/flux/guides/median.md index 30b7d10d4..922bef8c7 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/median.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/median.md @@ -107,7 +107,7 @@ contain values in the 50th percentile of data in the table. ```js data - |> median() + |> median() ``` ## Find the average of values closest to the median @@ -116,7 +116,7 @@ average of the two values closest to the mathematical median of data in the tabl ```js data - |> median(method: "exact_mean") + |> median(method: "exact_mean") ``` ## Find the point with the median value @@ -125,7 +125,7 @@ value that 50% of values in the table are less than. ```js data - |> median(method: "exact_selector") + |> median(method: "exact_selector") ``` ## Use median() with aggregateWindow() @@ -139,8 +139,5 @@ To specify the [median calculation method](#select-a-method-for-calculating-the- ```js data - |> aggregateWindow( - every: 5m, - fn: (tables=<-, column) => tables |> median(method: "exact_selector") - ) + |> aggregateWindow(every: 5m, fn: (tables=<-, column) => tables |> median(method: "exact_selector")) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/monitor-states.md b/content/enterprise_influxdb/v1.9/flux/guides/monitor-states.md index bc96f3ba4..33a22916d 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/monitor-states.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/monitor-states.md @@ -33,12 +33,12 @@ If you're just getting started with Flux queries, check out the following: ```js - |> stateDuration( - fn: (r) => - r._column_to_search == "value_to_search_for", - column: "state_duration", - unit: 1s - ) + data + |> stateDuration( + fn: (r) => r._column_to_search == "value_to_search_for", + column: "state_duration", + unit: 1s, + ) ``` 2. 
Use `stateDuration()` to search each point for the specified value:
@@ -52,13 +52,12 @@ The following query searches the `doors` bucket over the past 5 minutes to find
 
 ```js
 from(bucket: "doors")
-  |> range(start: -5m)
-  |> stateDuration(
-    fn: (r) =>
-      r._value == "closed",
-    column: "door_closed",
-    unit: 1s
-  )
+    |> range(start: -5m)
+    |> stateDuration(
+        fn: (r) => r._value == "closed",
+        column: "door_closed",
+        unit: 1s,
+    )
 ```
 
 In this example, `door_closed` is the **State duration** column. If you write data to the `doors` bucket every minute, the state duration increases by `60s` for each consecutive point where `_value` is `closed`. If `_value` is not `closed`, the state duration is reset to `0`.
@@ -87,11 +86,11 @@ _time _value door_closed
 
 ```js
-    |> stateCount
-      (fn: (r) =>
-        r._column_to_search == "value_to_search_for",
-      column: "state_count"
-    )
+    data
+        |> stateCount(
+            fn: (r) => r._column_to_search == "value_to_search_for",
+            column: "state_count"
+        )
 ```
 
 2. Use `stateCount()` to search each point for the specified value:
@@ -106,11 +105,8 @@ calculates how many points have `closed` as their `_value`.
 
 ```js
 from(bucket: "doors")
-  |> range(start: -5m)
-  |> stateDuration(
-    fn: (r) =>
-      r._value == "closed",
-    column: "door_closed")
+    |> range(start: -5m)
+    |> stateCount(fn: (r) => r._value == "closed", column: "door_closed")
 ```
 
 This example stores the **state count** in the `door_closed` column.
@@ -139,13 +135,9 @@ InfluxDB searches the `servers` bucket over the past hour and counts records wit
 
 ```js
 from(bucket: "servers")
-  |> range(start: -1h)
-  |> filter(fn: (r) =>
-    r.machine_state == "idle" or
-    r.machine_state == "assigned" or
-    r.machine_state == "busy"
-  )
-  |> stateCount(fn: (r) => r.machine_state == "busy", column: "_count")
-  |> stateCount(fn: (r) => r.machine_state == "assigned", column: "_count")
-  |> stateCount(fn: (r) => r.machine_state == "idle", column: "_count")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r.machine_state == "idle" or r.machine_state == "assigned" or r.machine_state == "busy")
+    |> stateCount(fn: (r) => r.machine_state == "busy", column: "_count")
+    |> stateCount(fn: (r) => r.machine_state == "assigned", column: "_count")
+    |> stateCount(fn: (r) => r.machine_state == "idle", column: "_count")
 ```
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/moving-average.md b/content/enterprise_influxdb/v1.9/flux/guides/moving-average.md
index 893e9e91a..ad91dfb17 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/moving-average.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/moving-average.md
@@ -20,12 +20,12 @@ functions to return the moving average of data.
 
 ```js
 data
-  |> movingAverage(n: 5)
+    |> movingAverage(n: 5)
 
 // OR
 
 data
-  |> timedMovingAverage(every: 5m, period: 10m)
+    |> timedMovingAverage(every: 5m, period: 10m)
 ```
 
 ### movingAverage()
@@ -99,10 +99,7 @@ If `every = 30m` and `period = 1h`:
 **The following would return:**
 
 ```js
-|> timedMovingAverage(
-  every: 2m,
-  period: 4m
-)
+|> timedMovingAverage(every: 2m, period: 4m)
 ```
 
 | _time | _value |
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/percentile-quantile.md b/content/enterprise_influxdb/v1.9/flux/guides/percentile-quantile.md
index 18a264d02..01d122ded 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/percentile-quantile.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/percentile-quantile.md
@@ -120,7 +120,7 @@ contain values in the 99th percentile of data in the table.
```js data - |> quantile(q: 0.99) + |> quantile(q: 0.99) ``` ## Find the average of values closest to the quantile @@ -130,7 +130,7 @@ For example, to calculate the `0.99` quantile: ```js data - |> quantile(q: 0.99, method: "exact_mean") + |> quantile(q: 0.99, method: "exact_mean") ``` ## Find the point with the quantile value @@ -140,7 +140,7 @@ For example, to calculate the `0.99` quantile: ```js data - |> quantile(q: 0.99, method: "exact_selector") + |> quantile(q: 0.99, method: "exact_selector") ``` ## Use quantile() with aggregateWindow() @@ -154,10 +154,9 @@ To specify the [quantile calculation method](#select-a-method-for-calculating-th ```js data - |> aggregateWindow( - every: 5m, - fn: (tables=<-, column) => - tables - |> quantile(q: 0.99, method: "exact_selector") - ) + |> aggregateWindow( + every: 5m, + fn: (tables=<-, column) => tables + |> quantile(q: 0.99, method: "exact_selector"), + ) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/query-fields.md b/content/enterprise_influxdb/v1.9/flux/guides/query-fields.md index 74ca57f70..aec322d78 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/query-fields.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/query-fields.md @@ -37,7 +37,7 @@ Rows that evaluate to `false` are **excluded** from the output data. ```js // ... - |> filter(fn: (r) => r._measurement == "example-measurement" ) + |> filter(fn: (r) => r._measurement == "example-measurement" ) ``` The `fn` predicate function requires an `r` argument, which represents each row @@ -69,10 +69,7 @@ and `filter()` represent the most basic Flux query: ```js from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" and - r.tag == "example-tag" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r.tag == "example-tag") + |> filter(fn: (r) => r._field == "example-field") ``` diff --git a/content/enterprise_influxdb/v1.9/flux/guides/rate.md b/content/enterprise_influxdb/v1.9/flux/guides/rate.md index ca676f267..9c3de71b9 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/rate.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/rate.md @@ -34,7 +34,7 @@ to calculate the rate of change per unit of time between subsequent _non-null_ v ```js data - |> derivative(unit: 1s) + |> derivative(unit: 1s) ``` By default, `derivative()` returns only positive derivative values and replaces negative values with _null_. @@ -93,10 +93,7 @@ To return negative derivative values, set the `nonNegative` parameter to `false` **The following returns:** ```js -|> derivative( - unit: 1m, - nonNegative: false -) +|> derivative(unit: 1m, nonNegative: false) ``` | _time | _value | @@ -122,11 +119,7 @@ to calculate the average rate of change per window of time. import "experimental/aggregate" data - |> aggregate.rate( - every: 1m, - unit: 1s, - groupColumns: ["tag1", "tag2"] - ) + |> aggregate.rate(every: 1m, unit: 1s, groupColumns: ["tag1", "tag2"]) ``` `aggregate.rate()` returns the average rate of change (as a [float](/{{< latest "flux" >}}/language/types/#numeric-types)) @@ -155,10 +148,7 @@ Negative values are replaced with _null_. 
**The following returns:**
 
 ```js
-|> aggregate.rate(
-  every: 20m,
-  unit: 1m
-)
+|> aggregate.rate(every: 20m, unit: 1m)
 ```
 
 | _time | _value |
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/regular-expressions.md b/content/enterprise_influxdb/v1.9/flux/guides/regular-expressions.md
index 91d1e1865..c5a24dca0 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/regular-expressions.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/regular-expressions.md
@@ -52,12 +52,8 @@ It only keeps records for which the `cpu` is either `cpu0`, `cpu1`, or `cpu2`.
 
 ```js
 from(bucket: "db/rp")
-  |> range(start: -15m)
-  |> filter(fn: (r) =>
-    r._measurement == "cpu" and
-    r._field == "usage_user" and
-    r.cpu =~ /cpu[0-2]/
-  )
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu =~ /cpu[0-2]/)
 ```
 
 ### Use a regex to filter by field key
@@ -65,11 +61,8 @@ The following example excludes records that do not have `_percent` in a field ke
 
 ```js
 from(bucket: "db/rp")
-  |> range(start: -15m)
-  |> filter(fn: (r) =>
-    r._measurement == "mem" and
-    r._field =~ /_percent/
-  )
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "mem" and r._field =~ /_percent/)
 ```
 
 ### Drop columns matching a regex
 The following example drops columns whose names do not begin with `_`.
 
 ```js
 from(bucket: "db/rp")
-  |> range(start: -15m)
-  |> filter(fn: (r) => r._measurement == "mem")
-  |> drop(fn: (column) => column !~ /_.*/)
+    |> range(start: -15m)
+    |> filter(fn: (r) => r._measurement == "mem")
+    |> drop(fn: (column) => column !~ /_.*/)
 ```
 
 ## Helpful links
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/scalar-values.md b/content/enterprise_influxdb/v1.9/flux/guides/scalar-values.md
index f756260fb..f8e4a3c07 100644
--- a/content/enterprise_influxdb/v1.9/flux/guides/scalar-values.md
+++ b/content/enterprise_influxdb/v1.9/flux/guides/scalar-values.md
@@ -64,10 +64,7 @@ each table.
 
 ```js
 sampleData
-  |> tableFind(fn: (key) =>
-    key._field == "temp" and
-    key.location == "sfo"
-  )
+    |> tableFind(fn: (key) => key._field == "temp" and key.location == "sfo")
 ```
 
 The example above returns a single table:
@@ -95,11 +92,8 @@ to output an array of values from a specific column in the extracted table.
 
 ```js
 sampleData
-  |> tableFind(fn: (key) =>
-    key._field == "temp" and
-    key.location == "sfo"
-  )
-  |> getColumn(column: "_value")
+    |> tableFind(fn: (key) => key._field == "temp" and key.location == "sfo")
+    |> getColumn(column: "_value")
 
 // Returns [65.1, 66.2, 66.3, 66.8]
 ```
@@ -112,11 +106,8 @@ value at that index.
 
 ```js
 SFOTemps = sampleData
-  |> tableFind(fn: (key) =>
-    key._field == "temp" and
-    key.location == "sfo"
-  )
-  |> getColumn(column: "_value")
+    |> tableFind(fn: (key) => key._field == "temp" and key.location == "sfo")
+    |> getColumn(column: "_value")
 
 SFOTemps
 // Returns [65.1, 66.2, 66.3, 66.8]
@@ -136,11 +127,8 @@ The function outputs a record with key-value pairs for each column.
 
 ```js
 sampleData
-  |> tableFind(fn: (key) =>
-    key._field == "temp" and
-    key.location == "sfo"
-  )
-  |> getRecord(idx: 0)
+    |> tableFind(fn: (key) => key._field == "temp" and key.location == "sfo")
+    |> getRecord(idx: 0)
 
 // Returns {
 //   _time:2019-11-11T12:00:00Z,
@@ -158,11 +146,8 @@ keys in the record.
```js tempInfo = sampleData - |> tableFind(fn: (key) => - key._field == "temp" and - key.location == "sfo" - ) - |> getRecord(idx: 0) + |> tableFind(fn: (key) => key._field == "temp" and key.location == "sfo") + |> getRecord(idx: 0) tempInfo // Returns { @@ -186,17 +171,18 @@ Create custom helper functions to extract scalar values from query output. ```js // Define a helper function to extract field values getFieldValue = (tables=<-, field) => { - extract = tables - |> tableFind(fn: (key) => key._field == field) - |> getColumn(column: "_value") - return extract[0] + extract = tables + |> tableFind(fn: (key) => key._field == field) + |> getColumn(column: "_value") + + return extract[0] } // Use the helper function to define a variable lastJFKTemp = sampleData - |> filter(fn: (r) => r.location == "kjfk") - |> last() - |> getFieldValue(field: "temp") + |> filter(fn: (r) => r.location == "kjfk") + |> last() + |> getFieldValue(field: "temp") lastJFKTemp // Returns 71.2 @@ -206,16 +192,17 @@ lastJFKTemp ```js // Define a helper function to extract a row as a record getRow = (tables=<-, field, idx=0) => { - extract = tables - |> tableFind(fn: (key) => true) - |> getRecord(idx: idx) - return extract + extract = tables + |> tableFind(fn: (key) => true) + |> getRecord(idx: idx) + + return extract } // Use the helper function to define a variable lastReported = sampleData - |> last() - |> getRow(idx: 0) + |> last() + |> getRow(field: "temp") "The last location to report was ${lastReported.location}. The temperature was ${string(v: lastReported._value)}°F." diff --git a/content/enterprise_influxdb/v1.9/flux/guides/sort-limit.md b/content/enterprise_influxdb/v1.9/flux/guides/sort-limit.md index 8fab30494..1b7ba229d 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/sort-limit.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/sort-limit.md @@ -30,13 +30,10 @@ If you're just getting started with Flux queries, check out the following: The following example orders system uptime first by region, then host, then value. ```js -from(bucket:"db/rp") - |> range(start:-12h) - |> filter(fn: (r) => - r._measurement == "system" and - r._field == "uptime" - ) - |> sort(columns:["region", "host", "_value"]) +from(bucket: "db/rp") + |> range(start: -12h) + |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime") + |> sort(columns: ["region", "host", "_value"]) ``` The [`limit()` function](/{{< latest "flux" >}}/stdlib/universe/limit) @@ -45,8 +42,8 @@ The following example shows up to 10 records from the past hour. ```js from(bucket:"db/rp") - |> range(start:-1h) - |> limit(n:10) + |> range(start:-1h) + |> limit(n:10) ``` You can use `sort()` and `limit()` together to show the top N records. @@ -54,14 +51,11 @@ The example below returns the 10 top system uptime values sorted first by region, then host, then value. ```js -from(bucket:"db/rp") - |> range(start:-12h) - |> filter(fn: (r) => - r._measurement == "system" and - r._field == "uptime" - ) - |> sort(columns:["region", "host", "_value"]) - |> limit(n:10) +from(bucket: "db/rp") + |> range(start: -12h) + |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime") + |> sort(columns: ["region", "host", "_value"]) + |> limit(n: 10) ``` You now have created a Flux query that sorts and limits data. 
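A related sketch (assuming the same `db/rp` bucket, `system` measurement, and `uptime` field used above): Flux also provides `top()`, which combines the sort-and-limit pattern into a single call:

```js
from(bucket: "db/rp")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> top(n: 10, columns: ["_value"])
```

`top()` sorts on the given columns in descending order and keeps the first `n` records, so it is a compact alternative when you only need the highest values rather than a custom sort order.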
diff --git a/content/enterprise_influxdb/v1.9/flux/guides/sql.md b/content/enterprise_influxdb/v1.9/flux/guides/sql.md index 2e8c61935..f4857fc0f 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/sql.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/sql.md @@ -17,9 +17,9 @@ list_code_example: | import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://user:password@localhost", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query: "SELECT * FROM example_table", ) ``` --- @@ -53,9 +53,9 @@ To query a SQL data source: import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://user:password@localhost", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query: "SELECT * FROM example_table", ) ``` {{% /code-tab-content %}} @@ -65,9 +65,9 @@ sql.from( import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` {{% /code-tab-content %}} @@ -80,9 +80,9 @@ sql.from( import "sql" sql.from( - driverName: "sqlite3", - dataSourceName: "file:/path/to/test.db?cache=shared&mode=ro", - query: "SELECT * FROM example_table" + driverName: "sqlite3", + dataSourceName: "file:/path/to/test.db?cache=shared&mode=ro", + query: "SELECT * FROM example_table", ) ``` {{% /code-tab-content %}} @@ -106,15 +106,15 @@ import "sql" // Query data from PostgreSQL sensorInfo = sql.from( - driverName: "postgres", - dataSourceName: "postgresql://localhost?sslmode=disable", - query: "SELECT * FROM sensors" + driverName: "postgres", + dataSourceName: "postgresql://localhost?sslmode=disable", + query: "SELECT * FROM sensors", ) // Query data from InfluxDB sensorMetrics = from(bucket: "telegraf/autogen") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "airSensors") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "airSensors") // Join InfluxDB query results with PostgreSQL query results join(tables: {metric: sensorMetrics, info: sensorInfo}, on: ["sensor_id"]) diff --git a/content/enterprise_influxdb/v1.9/flux/guides/window-aggregate.md b/content/enterprise_influxdb/v1.9/flux/guides/window-aggregate.md index 25ef0fdd1..4fb3168e8 100644 --- a/content/enterprise_influxdb/v1.9/flux/guides/window-aggregate.md +++ b/content/enterprise_influxdb/v1.9/flux/guides/window-aggregate.md @@ -37,12 +37,9 @@ The following example queries the memory usage of the host machine. ```js dataSet = from(bucket: "db/rp") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "mem" and - r._field == "used_percent" - ) - |> drop(columns: ["host"]) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") + |> drop(columns: ["host"]) ``` {{% note %}} @@ -103,7 +100,7 @@ set into one minute windows. ```js dataSet - |> window(every: 1m) + |> window(every: 1m) ``` {{% note %}} @@ -195,8 +192,8 @@ to output the average of each window: ```js dataSet - |> window(every: 1m) - |> mean() + |> window(every: 1m) + |> mean() ``` {{% truncate %}} @@ -259,9 +256,9 @@ duplicate either the `_start` or `_stop` column as a new `_time` column. 
```js dataSet - |> window(every: 1m) - |> mean() - |> duplicate(column: "_stop", as: "_time") + |> window(every: 1m) + |> mean() + |> duplicate(column: "_stop", as: "_time") ``` {{% truncate %}} @@ -310,10 +307,10 @@ Use the `window()` function to "unwindow" your data into a single infinite (`inf ```js dataSet - |> window(every: 1m) - |> mean() - |> duplicate(column: "_stop", as: "_time") - |> window(every: inf) + |> window(every: 1m) + |> mean() + |> duplicate(column: "_stop", as: "_time") + |> window(every: inf) ``` {{% note %}} @@ -350,5 +347,5 @@ The following Flux query will return the same results: ###### aggregateWindow function ```js dataSet - |> aggregateWindow(every: 1m, fn: mean) + |> aggregateWindow(every: 1m, fn: mean) ``` diff --git a/content/enterprise_influxdb/v1.9/flux/optimize-queries.md b/content/enterprise_influxdb/v1.9/flux/optimize-queries.md index be44bb903..a0dfbd141 100644 --- a/content/enterprise_influxdb/v1.9/flux/optimize-queries.md +++ b/content/enterprise_influxdb/v1.9/flux/optimize-queries.md @@ -65,13 +65,13 @@ subsequent operations there. ##### Pushdown functions in use ```js from(bucket: "db/rp") - |> range(start: -1h) // - |> filter(fn: (r) => r.sensor == "abc123") // - |> group(columns: ["_field", "host"]) // Pushed to the data source - |> aggregateWindow(every: 5m, fn: max) // - |> filter(fn: (r) => r._value >= 90.0) // + |> range(start: -1h) // + |> filter(fn: (r) => r.sensor == "abc123") // + |> group(columns: ["_field", "host"]) // Pushed to the data source + |> aggregateWindow(every: 5m, fn: max) // + |> filter(fn: (r) => r._value >= 90.0) // - |> top(n: 10) // Run in memory + |> top(n: 10) // Run in memory ``` ### Avoid processing filters inline @@ -88,8 +88,8 @@ to the underlying data source and loads all data returned from `range()` into me ```js from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => r.region == v.provider + v.region) + |> range(start: -1h) + |> filter(fn: (r) => r.region == v.provider + v.region) ``` To dynamically set filters and maintain the pushdown ability of the `filter()` function, @@ -99,8 +99,8 @@ use variables to define filter values outside of `filter()`: region = v.provider + v.region from(bucket: "db/rp") - |> range(start: -1h) - |> filter(fn: (r) => r.region == region) + |> range(start: -1h) + |> filter(fn: (r) => r.region == region) ``` ## Avoid short window durations @@ -136,17 +136,17 @@ The following queries are functionally the same, but using `set()` is more perfo ```js data - |> map(fn: (r) => ({ r with foo: "bar" })) + |> map(fn: (r) => ({ r with foo: "bar" })) // Recommended data - |> set(key: "foo", value: "bar") + |> set(key: "foo", value: "bar") ``` #### Dynamically set a column value using existing row data ```js data - |> map(fn: (r) => ({ r with foo: r.bar })) + |> map(fn: (r) => ({ r with foo: r.bar })) ``` ## Balance time range and data precision diff --git a/content/enterprise_influxdb/v1.9/guides/authenticate.md b/content/enterprise_influxdb/v1.9/guides/authenticate.md index 49fad2a00..106cd990c 100644 --- a/content/enterprise_influxdb/v1.9/guides/authenticate.md +++ b/content/enterprise_influxdb/v1.9/guides/authenticate.md @@ -110,7 +110,7 @@ This is currently only possible through the [InfluxDB HTTP API](/enterprise_infl InfluxDB Enterprise uses the shared secret to encode the JWT signature. By default, `shared-secret` is set to an empty string, in which case no JWT authentication takes place. 
- Add a custom shared secret in your [InfluxDB configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#shared-secret--). + Add a custom shared secret in your [InfluxDB configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#shared-secret). The longer the secret string, the more secure it is: ```toml diff --git a/content/enterprise_influxdb/v1.9/guides/calculate_percentages.md b/content/enterprise_influxdb/v1.9/guides/calculate_percentages.md index 4cd821d4b..5e9c099f7 100644 --- a/content/enterprise_influxdb/v1.9/guides/calculate_percentages.md +++ b/content/enterprise_influxdb/v1.9/guides/calculate_percentages.md @@ -56,17 +56,17 @@ Here's how that looks in Flux: ```js // Query data from the past 15 minutes pivot fields into columns so each row // contains values for each field -data = from(bucket:"your_db/your_retention_policy") - |> range(start: -15m) - |> filter(fn: (r) => r._measurement == "measurement_name" and r._field =~ /field[1-2]/) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") +data = from(bucket: "your_db/your_retention_policy") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "measurement_name" and r._field =~ /field[1-2]/) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") ``` Each row now contains the values necessary to perform a math operation. For example, to add two field keys, start with the `data` variable created above, and then use `map()` to re-map values in each row. ```js data - |> map(fn: (r) => ({ r with _value: r.field1 + r.field2})) + |> map(fn: (r) => ({ r with _value: r.field1 + r.field2})) ``` > **Note:** Flux supports basic math operators such as `+`,`-`,`/`, `*`, and `()`. For example, to subtract `field2` from `field1`, change `+` to `-`. @@ -77,12 +77,14 @@ Use the `data` variable created above, and then use the [`map()` function](/{{< ```js data - |> map(fn: (r) => ({ - _time: r._time, - _measurement: r._measurement, - _field: "percent", - _value: field1 / field2 * 100.0 - })) + |> map( + fn: (r) => ({ + _time: r._time, + _measurement: r._measurement, + _field: "percent", + _value: field1 / field2 * 100.0 + }) + ) ``` >**Note:** In this example, `field1` and `field2` are float values, hence multiplied by 100.0. For integer values, multiply by 100 or use the `float()` function to cast integers to floats. @@ -92,12 +94,12 @@ data Use [`aggregateWindow()`](/{{< latest "flux" >}}/stdlib/universe/aggregatewindow) to window data by time and perform an aggregate function on each window. ```js -from(bucket:"/") - |> range(start: -15m) - |> filter(fn: (r) => r._measurement == "measurement_name" and r._field =~ /fieldkey[1-2]/) - |> aggregateWindow(every: 1m, fn:sum) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with _value: r.field1 / r.field2 * 100.0 })) +from(bucket: "/") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "measurement_name" and r._field =~ /fieldkey[1-2]/) + |> aggregateWindow(every: 1m, fn: sum) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: r.field1 / r.field2 * 100.0})) ``` ## Calculate the percentage of total weight per apple variety @@ -115,16 +117,19 @@ Use the following query to calculate the percentage of the total weight each var accounts for at each given point in time. 
```js -from(bucket:"apple_stand/autogen") +from(bucket: "apple_stand/autogen") |> range(start: 2018-06-18T12:00:00Z, stop: 2018-06-19T04:35:00Z) - |> filter(fn: (r) => r._measurement == "variety") - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with - granny_smith: r.granny_smith / r.total_weight * 100.0 , - golden_delicious: r.golden_delicious / r.total_weight * 100.0 , - fuji: r.fuji / r.total_weight * 100.0 , - gala: r.gala / r.total_weight * 100.0 , - braeburn: r.braeburn / r.total_weight * 100.0 ,})) + |> filter(fn: (r) => r._measurement == "variety") + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map( + fn: (r) => ({r with + granny_smith: r.granny_smith / r.total_weight * 100.0, + golden_delicious: r.golden_delicious / r.total_weight * 100.0, + fuji: r.fuji / r.total_weight * 100.0, + gala: r.gala / r.total_weight * 100.0, + braeburn: r.braeburn / r.total_weight * 100.0, + }), + ) ``` ## Calculate the average percentage of total weight per variety each hour @@ -132,18 +137,20 @@ from(bucket:"apple_stand/autogen") With the apple stand data from the prior example, use the following query to calculate the average percentage of the total weight each variety accounts for per hour. ```js -from(bucket:"apple_stand/autogen") - |> range(start: 2018-06-18T00:00:00.00Z, stop: 2018-06-19T16:35:00.00Z) - |> filter(fn: (r) => r._measurement == "variety") - |> aggregateWindow(every:1h, fn: mean) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with - granny_smith: r.granny_smith / r.total_weight * 100.0, - golden_delicious: r.golden_delicious / r.total_weight * 100.0, - fuji: r.fuji / r.total_weight * 100.0, - gala: r.gala / r.total_weight * 100.0, - braeburn: r.braeburn / r.total_weight * 100.0 - })) +from(bucket: "apple_stand/autogen") + |> range(start: 2018-06-18T00:00:00Z, stop: 2018-06-19T16:35:00Z) + |> filter(fn: (r) => r._measurement == "variety") + |> aggregateWindow(every: 1h, fn: mean) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map( + fn: (r) => ({r with + granny_smith: r.granny_smith / r.total_weight * 100.0, + golden_delicious: r.golden_delicious / r.total_weight * 100.0, + fuji: r.fuji / r.total_weight * 100.0, + gala: r.gala / r.total_weight * 100.0, + braeburn: r.braeburn / r.total_weight * 100.0, + }), + ) ``` {{% /tab-content %}} diff --git a/content/enterprise_influxdb/v1.9/guides/downsample_and_retain.md b/content/enterprise_influxdb/v1.9/guides/downsample_and_retain.md index 3fcd19824..4587fe898 100644 --- a/content/enterprise_influxdb/v1.9/guides/downsample_and_retain.md +++ b/content/enterprise_influxdb/v1.9/guides/downsample_and_retain.md @@ -214,7 +214,7 @@ data that reside in an RP other than the `DEFAULT` RP. Between checks, `orders` may have data that are older than two hours. The rate at which InfluxDB checks to enforce an RP is a configurable setting, see -[Database Configuration](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#check-interval--30m0s). +[Database Configuration](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#check-interval). 
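For reference, this interval is set in the `[retention]` section of the data node configuration file. A minimal sketch, assuming the default value:

```toml
[retention]
  # How often InfluxDB checks for and removes data that falls outside
  # a retention policy's DURATION. "30m0s" is the default.
  check-interval = "30m0s"
```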
Using a combination of RPs and CQs, we've successfully set up our database to automatically keep the high precision raw data for a limited time, create lower diff --git a/content/enterprise_influxdb/v1.9/introduction/download.md b/content/enterprise_influxdb/v1.9/introduction/download.md index e95d38028..c65977815 100644 --- a/content/enterprise_influxdb/v1.9/introduction/download.md +++ b/content/enterprise_influxdb/v1.9/introduction/download.md @@ -18,5 +18,5 @@ If you have purchased a license or already obtained a demo license, log in to the [InfluxDB Enterprise portal](https://portal.influxdata.com/users/sign_in) to get your license key and download URLs. -See the [installation documentation](/enterprise_influxdb/v1.9/install-and-deploy/) +See the [installation documentation](/enterprise_influxdb/v1.9/introduction/installation/) for more information about getting started. diff --git a/content/enterprise_influxdb/v1.9/introduction/getting-started.md b/content/enterprise_influxdb/v1.9/introduction/getting-started.md index 3af569497..b5fc29ac4 100644 --- a/content/enterprise_influxdb/v1.9/introduction/getting-started.md +++ b/content/enterprise_influxdb/v1.9/introduction/getting-started.md @@ -11,7 +11,7 @@ menu: parent: Introduction --- -After you successfully [install and set up](/enterprise_influxdb/v1.9/install-and-deploy/installation/) InfluxDB Enterprise, learn how to [monitor your InfluxDB Enterprise clusters](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-clusters) with Chronograf, InfluxDB, and Telegraf. +After you successfully [install and set up](/enterprise_influxdb/v1.9/introduction/installation/installation/) InfluxDB Enterprise, learn how to [monitor your InfluxDB Enterprise clusters](/{{< latest "chronograf" >}}/guides/monitoring-influxenterprise-clusters) with Chronograf, InfluxDB, and Telegraf. ### Where to from here? diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/_index.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/_index.md deleted file mode 100644 index e842680d4..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/_index.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Install and deploy InfluxDB Enterprise -description: Install InfluxDB Enterprise to on-premise or cloud providers, including Google Cloud Platform, Amazon Web Services, and Azure. -aliases: -- /enterprise_influxdb/v1.9/install-and-deploy/deploying/ -- /enterprise_influxdb/v1.9/install-and-deploy/ -- /enterprise_influxdb/v1.9/introduction/get-started/ -- /enterprise_influxdb/v1.9/production_installation/ -- /enterprise_influxdb/v1.9/introduction/installation/ - -menu: - enterprise_influxdb_1_9: - name: Install and deploy - weight: 30 - parent: Introduction ---- - -Install or deploy your InfluxDB Enterprise cluster in the environment of your choice: - -- Your own environment -- Your cloud provider - -## Your own environment - -Learn how to [install a cluster in your own environment](/enterprise_influxdb/v1.9/install-and-deploy/installation/). 
- -## Your cloud provider - -Learn how to deploy a cluster on the cloud provider of your choice: - - - [GCP](/enterprise_influxdb/v1.9/install-and-deploy/deploying/google-cloud-platform/) - - [AWS](/enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/) - - [Azure](/enterprise_influxdb/v1.9/install-and-deploy/deploying/azure/) diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/_index.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/_index.md deleted file mode 100644 index eee062c65..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/_index.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Deploy InfluxDB Enterprise clusters -description: > - Install InfluxDB Enterprise to a cloud provider of your choice, including Google Cloud Platform, Amazon Web Services, and Azure. -aliases: - - /enterprise_influxdb/v1.9/other-options/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/index/ -menu: - enterprise_influxdb_1_9: - name: Deploy in cloud - identifier: deploy-in-cloud-enterprise - weight: 30 - parent: Install and deploy ---- - -Deploy InfluxDB Enterprise clusters on the cloud provider of your choice. - -> **Note:** To install in your own environment, see [Install an InfluxDB Enterprise cluster in your own environment](/enterprise_influxdb/v1.9/install-and-deploy/installation/). - -{{< children hlevel="h2" >}} diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/_index.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/_index.md deleted file mode 100644 index 3031ff526..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/_index.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Deploy InfluxDB Enterprise clusters on Amazon Web Services -description: Deploy InfluxDB Enterprise clusters on Amazon Web Services (AWS). -aliases: - - /enterprise_influxdb/v1.9/other-options/ - - /enterprise_influxdb/v1.9/install-and-deploy/aws/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/ -menu: - enterprise_influxdb_1_9: - name: AWS - identifier: deploy-on-aws - weight: 30 - parent: deploy-in-cloud-enterprise ---- - -The following articles detail how to deploy InfluxDB clusters in AWS: - -- [Deploy an InfluxDB Enterprise cluster on Amazon Web Services](/enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/setting-up-template) -- [AWS configuration options](/enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/config-options) diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/config-options.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/config-options.md deleted file mode 100644 index 088faadce..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/config-options.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: AWS configuration options -description: > - Configuration options when deploying InfluxDB Enterprise on Amazon Web Services (AWS). 
-aliases: - - /enterprise_influxdb/v1.9/install-and-deploy/aws/config-options/ - - /enterprise_influxdb/v1.9/install-and-deploy/aws/config-options/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/config-options/ -menu: - enterprise_influxdb_1_9: - name: AWS configuration options - weight: 30 - parent: deploy-on-aws ---- -When deploying InfluxDB Enterprise on AWS using the template described in [Deploy an InfluxDB Enterprise cluster on Amazon Web Services](/enterprise_influxdb/v1.9/install-and-deploy/aws/setting-up-template), the following configuration options are available: - -- **VPC ID**: The VPC ID of your existing Virtual Private Cloud (VPC). -- **Subnets**: A list of SubnetIds in your Virtual Private Cloud (VPC) where nodes will be created. The subnets must be in the same order as the availability zones they reside in. For a list of which availability zones correspond to which subnets, see the [Subnets section of your VPC dashboard](https://console.aws.amazon.com/vpc/home?region=us-east-1#subnets:sort=SubnetId). -- **Availability Zones**: Availability zones to correspond with your subnets above. The availability zones must be in the same order as their related subnets. For a list of which availability zones correspond to which subnets, see the [Subnets section of your VPC dashboard](https://console.aws.amazon.com/vpc/home?region=us-east-1#subnets:sort=SubnetId). -- **SSH Key Name**: An existing key pair to enable SSH access for the instances. For details on how to create a key pair, see [Creating a Key Pair Using Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair). -- **InfluxDB ingress CIDR**: The IP address range that can be used to connect to the InfluxDB API endpoint. To allow all traffic, enter 0.0.0.0/0. -- **SSH Access CIDR**: The IP address range that can be used to SSH into the EC2 instances. To allow all traffic, enter 0.0.0.0/0. -- **InfluxDB Enterprise License Key**: Your InfluxDB Enterprise license key. Applies only to BYOL. -- **InfluxDB Administrator Username**: Your InfluxDB administrator username. Applies only to BYOL. -- **InfluxDB Administrator Password**: Your InfluxDB administrator password. Applies only to BYOL. -- **InfluxDB Enterprise Version**: The version of InfluxDB. Defaults to current version. -- **Telegraf Version**: The version of Telegraf. Defaults to current version. -- **InfluxDB Data Node Disk Size**: The size in GB of the EBS io1 volume each data node. Defaults to 250. -- **InfluxDB Data Node Disk IOPS**: The IOPS of the EBS io1 volume on each data node. Defaults to 1000. -- **DataNodeInstanceType**: The instance type of the data node. Defaults to m5.large. -- **MetaNodeInstanceType**: The instance type of the meta node. Defaults to t3.small. -- **MonitorInstanceType**: The instance type of the monitor node. Defaults to t3.large. diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/setting-up-template.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/setting-up-template.md deleted file mode 100644 index 6ff0f6583..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/aws/setting-up-template.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Deploy an InfluxDB Enterprise cluster on Amazon Web Services -description: Deploy an InfluxDB Enterprise cluster on Amazon Web Services (AWS). 
-aliases: - - /enterprise_influxdb/v1.9/install-and-deploy/aws/setting-up-template/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/setting-up-template/ -menu: - enterprise_influxdb_1_9: - name: Deploy on Amazon Web Services - weight: 20 - parent: deploy-on-aws ---- - -Follow these steps to deploy an InfluxDB Enterprise cluster on AWS. - -## Step 1: Specify template - -After you complete the marketplace flow, you'll be directed to the Cloud Formation Template. - -1. In the Prepare template section, select **Template is ready**. -2. In the Specify template section, the **Amazon S3 URL** field is automatically populated with either the BYOL or integrated billing template, depending on the option you selected in the marketplace. -3. Click **Next**. - -## Step 2: Specify stack details - -1. In the Stack name section, enter a name for your stack. -2. Complete the Network Configuration section: - - **VPC ID**: Click the dropdown menu to fill in your VPC. - - **Subnets**: Select three subnets. - - **Availability Zones**: Select three availability zones to correspond with your subnets above. The availability zones must be in the same order as their related subnets. For a list of which availability zones correspond to which subnets, see the [Subnets section of your VPC dashboard](https://console.aws.amazon.com/vpc/home?region=us-east-1#subnets:sort=SubnetId). - - **SSH Key Name**: Select an existing key pair to enable SSH access for the instances. For details on how to create a key pair, see [Creating a Key Pair Using Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair). - - **InfluxDB ingress CIDR**: Enter the IP address range that can be used to connect to the InfluxDB API endpoint. To allow all traffic, enter 0.0.0.0/0. - - **SSH Access CIDR**: Enter the IP address range that can be used to SSH into the EC2 instances. To allow all traffic, enter 0.0.0.0/0. -3. Complete the **InfluxDB Configuration** section: - - **InfluxDB Enterprise License Key**: Applies only to BYOL. Enter your InfluxDB Enterprise license key. - - **InfluxDB Administrator Username**: Applies only to BYOL. Enter your InfluxDB administrator username. - - **InfluxDB Administrator Password**: Applies only to BYOL. Enter your InfluxDB administrator password. - - **InfluxDB Enterprise Version**: Defaults to current version. - - **Telegraf Version**: Defaults to current version. - - **InfluxDB Data Node Disk Size**: The size in GB of the EBS io1 volume each data node. Defaults to 250. - - **InfluxDB Data Node Disk IOPS**: The IOPS of the EBS io1 volume on each data node. Defaults to 1000. -4. Review the **Other Parameters** section and modify if needed. The fields in this section are all automatically populated and shouldn't require changes. - - **DataNodeInstanceType**: Defaults to m5.large. - - **MetaNodeInstanceType**: Defaults to t3.small. - - **MonitorInstanceType**: Defaults to t3.large. -5. Click **Next**. - -## Step 3: Configure stack options - -1. In the **Tags** section, enter any key-value pairs you want to apply to resources in the stack. -2. Review the **Permissions** and **Advanced options** sections. In most cases, there's no need to modify anything in these sections. -3. Click **Next**. - -## Step 4: Review - -1. Review the configuration options for all of the above sections. -2. In the **Capabilities** section, check the box acknowledging that AWS CloudFormation might create IAM resources. -3. Click **Create stack**. 
diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/azure.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/azure.md deleted file mode 100644 index fecc7dff7..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/azure.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: Deploy an InfluxDB Enterprise cluster on Azure Cloud Platform -description: > - Deploy an InfluxDB Enterprise cluster on Microsoft Azure cloud computing service. -aliases: - - /enterprise_influxdb/v1.9/install-and-deploy/azure/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/azure/ -menu: - enterprise_influxdb_1_9: - name: Azure - weight: 20 - parent: deploy-in-cloud-enterprise ---- - -For deploying InfluxDB Enterprise clusters on Microsoft Azure cloud computing service, InfluxData provides an [InfluxDB Enterprise application](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/influxdata.influxdb-enterprise-cluster) on the [Azure Marketplace](https://azuremarketplace.microsoft.com/) that makes the installation and setup process easy and straightforward. Clusters are deployed through an Azure Marketplace subscription and are ready for production. Billing occurs through your Azure subscription. - -> Please submit issues and feature requests for the Azure Marketplace deployment [through the related GitHub repository](https://github.com/influxdata/azure-resource-manager-influxdb-enterprise/issues/new) (requires a GitHub account) or by contacting [InfluxData Support](mailto:support@influxdata.com). - -## Prerequisites - -This guide requires the following: - -- Microsoft Azure account with access to the [Azure Marketplace](https://azuremarketplace.microsoft.com/). -- SSH access to cluster instances. - -To deploy InfluxDB Enterprise clusters on platforms other than Azure, see [Deploy InfluxDB Enterprise](/enterprise_influxdb/v1.9/install-and-deploy/). - -## Deploy a cluster - -1. Log in to your Azure Cloud Platform account and navigate to [InfluxData's InfluxDB Enterprise (Official Version) application](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/influxdata.influxdb-enterprise-cluster) on Azure Marketplace. - -2. Click **Get It Now**, read and agree to the terms of use, and then click **Continue**. Once in the Azure Portal, click **Create**. - -3. Select the subscription to use for your InfluxDB Enterprise cluster. Then select a resource group and region where the cluster resources will be deployed. - - > **Tip:** If you do not know which resource group to use, we recommend creating a new one for the InfluxDB Enterprise cluster. - -4. In the Instance Details section, set the OS username and SSH authentication type you will use to access the cluster VMs. For password authentication, enter a username and password. For SSH public key authentication, copy an SSH public key. The cluster VMs are built from an Ubuntu base image. - -5. Click **Next: Cluster Configuration**, and then enter details including the InfluxDB admin username and password, the number of meta and data nodes, and the VM size for both meta and data nodes. We recommend using the default VM sizes and increasing the data node VM size if you anticipate needing more resources for your workload. - - > **Note:** Make sure to save the InfluxDB admin credentials. They will be required to access InfluxDB. - -6. 
Click **Next: External Access & Chronograf**, and then do the following: - - - To create a separate instance to monitor the cluster and run [Chronograf](https://www.influxdata.com/time-series-platform/chronograf/), select **Yes**. Otherwise, select **No**. - - > **Note:** Adding a Chronograf instance will also configure that instance as an SSH bastion. All cluster instances will only be accessible through the Chronograf instance. - - - Select the appropriate access for the InfluxDB load balancer: **External** to allow external Internet access; otherwise, select **Internal**. - - {{% warn %}}The cluster uses HTTP by default. You must configure HTTPS after the cluster has been deployed.{{% /warn %}} - -7. Click **Next: Review + create** to validate your cluster configuration details. If validation passes, your InfluxDB Enterprise cluster is deployed. - - > **Note:** Some Azure accounts may have vCPU quotas limited to 10 vCPUs available in certain regions. Selecting VM sizes larger than the default can cause a validation error for exceeding the vCPU limit for the region. - -## Access InfluxDB - -Once the cluster is created, access the InfluxDB API at the IP address associated with the load balancer resource (`lb-influxdb`). If external access was configured during setup, the load balancer is publicly accessible. Otherwise, the load balancer is only accessible to the cluster's virtual network. - -Use the load balancer IP address and the InfluxDB admin credentials entered during the cluster creation to interact with InfluxDB Enterprise via the [`influx` CLI](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/) or use the InfluxDB's [query](/enterprise_influxdb/v1.9/guides/query_data/) and [write](/enterprise_influxdb/v1.9/guides/write_data/) HTTP APIs. - -## Access the cluster - -The InfluxDB Enterprise cluster's VMs are only reachable within the virtual network using the SSH credentails provided during setup. - -If a Chronograf instance has been added to the cluster, the Chronograf instance is publically accessible via SSH. The other VMs can then be reached from the Chronograf VM. - -## Testing - -Azure Resource Manager (ARM) templates used in the InfluxDB Enterprise offering on Azure Marketplace are [available for testing purposes](https://github.com/influxdata/azure-resource-manager-influxdb-enterprise). **Please note, these templates are under active development and not recommended for production.** - -### Next steps - -For an introduction to the InfluxDB database and the InfluxData Platform, see [Getting started with InfluxDB](/platform/introduction/getting-started). diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/google-cloud-platform.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/google-cloud-platform.md deleted file mode 100644 index cd457f5c8..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/deploying/google-cloud-platform.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Deploy an InfluxDB Enterprise cluster on Google Cloud Platform -description: > - Deploy an InfluxDB Enterprise cluster on Google Cloud Platform (GCP). 
-aliases: - - /enterprise_influxdb/v1.9/other-options/google-cloud/ - - /enterprise_influxdb/v1.9/install-and-deploy/google-cloud-platform/ - - /enterprise_influxdb/v1.9/install-and-deploy/deploying/google-cloud-platform/ -menu: - enterprise_influxdb_1_9: - name: GCP - weight: 30 - parent: deploy-in-cloud-enterprise ---- - -Complete the following steps to deploy an InfluxDB Enterprise cluster on Google Cloud Platform (GCP): - -1. [Verify prerequistes](#verify-prerequisites). -2. [Deploy a cluster](#deploy-a-cluster). -3. [Access the cluster](#access-the-cluster). - -After deploying your cluster, see [Getting started with InfluxDB](/platform/introduction/getting-started) for an introduction to InfluxDB database and the InfluxData platform. - ->**Note:** InfluxDB Enterprise on GCP is a self-managed product. For a fully managed InfluxDB experience, check out [InfluxDB Cloud](/influxdb/cloud/sign-up/). - -## Verify prerequisites - -Before deploying an InfluxDB Enterprise cluster on GCP, verify you have the following prerequisites: - -- A [Google Cloud Platform (GCP)](https://cloud.google.com/) account with access to the [GCP Marketplace](https://cloud.google.com/marketplace/). -- Access to [GCP Cloud Shell](https://cloud.google.com/shell/) or the [`gcloud` SDK and command line tools](https://cloud.google.com/sdk/). - -## Deploy a cluster - -1. Log in to your Google Cloud Platform account and go to [InfluxDB Enterprise](https://console.cloud.google.com/marketplace/details/influxdata-public/influxdb-enterprise-vm). - -2. Click **Launch** to create or select a project to open up your cluster's configuration page. - -3. Adjust cluster fields as needed, including: - - - Deployment name: Enter a name for the InfluxDB Enterprise cluster. - - InfluxDB Enterprise admin username: Enter the username of your cluster administrator. - - Zone: Select a region for your cluster. - - Network: Select a network for your cluster. - - Subnetwork: Select a subnetwork for your cluster, if applicable. - - > **Note:** The cluster is only accessible within the network (or subnetwork, if specified) where it's deployed. - -4. Adjust data node fields as needed, including: - - - Data node instance count: Enter the number of data nodes to include in your cluster (we recommend starting with the default, 2). - - Data node machine type: Select the virtual machine type to use for data nodes (by default, 4 vCPUs). Use the down arrow to scroll through list. Notice the amount of memory available for the selected machine. To alter the number of cores and memory for your selected machine type, click the **Customize** link. - - - - - (Optional) By default, the data node disk type is SSD Persistent Disk and the disk size is 250 GB. To alter these defaults, click More and update if needed. - - > **Note:** Typically, fields in collapsed sections don't need to be altered. - -5. Adjust meta node fields as needed, including: - - - Meta node instance count: Enter the number of meta nodes to include in your cluster (we recommend using the default, 3). - - Meta node machine type: Select the virtual machine type to use for meta nodes (by default, 1 vCPUs). Use the down arrow to scroll through list. Notice the amount of memory available for the selected machine. To alter the number of cores and memory for your selected machine type, click the **Customize** link. - - By default, the meta node disk type is SSD Persistent Disk and the disk size is 10 GB. Alter these defaults if needed. - -6. (Optional) Adjust boot disk options fields is needed. 
By default the boot disk type is Standard Persistent disk and boot disk is 10 GB . - -7. Accept terms and conditions by selecting both check boxes, and then click **Deploy** to launch the InfluxDB Enterprise cluster. - -The cluster may take a few minutes to fully deploy. If the deployment does not complete or reports an error, read through the list of [common deployment errors](https://cloud.google.com/marketplace/docs/troubleshooting). - -> **Important:** Make sure you save the "Admin username", "Admin password", and "Connection internal IP" values displayed on the screen. They are required to access the cluster. - -## Access the cluster - -Access the cluster's IP address from the GCP network (or subnetwork) specified when you deployed the cluster. A cluster can only be reached from instances or services in the same GCP network or subnetwork. - -1. In the GCP Cloud Shell or `gcloud` CLI, create a new instance to access the InfluxDB Enterprise cluster. - - ``` - gcloud compute instances create influxdb-access --image-family ubuntu-1804-lts --image-project ubuntu-os-cloud - ``` - -2. SSH into the instance. - - ``` - gcloud compute ssh influxdb-access - ``` - -3. On the instance, install the `influx` command line tool via the InfluxDB open source package. - - ``` - wget https://dl.influxdata.com/influxdb/releases/influxdb_{{< latest-patch >}}_amd64.deb - sudo dpkg -i influxdb_{{< latest-patch >}}_amd64.deb - ``` - -4. Access the InfluxDB Enterprise cluster using the following command with "Admin username", "Admin password", and "Connection internal IP" values from the deployment screen substituted for ``. - - ``` - influx -username -password -host -execute "CREATE DATABASE test" - - influx -username -password -host -execute "SHOW DATABASES" - ``` diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/_index.md b/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/_index.md deleted file mode 100644 index 7fa21d69a..000000000 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/_index.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Install an InfluxDB Enterprise cluster in your own environment -description: Install InfluxDB Enterprise in your own on-premise environment. -aliases: - - /enterprise_influxdb/v1.9/installation/ - - /enterprise_influxdb/v1.9/install-and-deploy/installation/ -menu: - enterprise_influxdb_1_9: - name: Install in your environment - weight: 10 - parent: Install and deploy ---- - -Complete the following steps to install an InfluxDB Enterprise cluster in your own environment: - -1. [Install InfluxDB Enterprise meta nodes](/enterprise_influxdb/v1.9/install-and-deploy/installation/meta_node_installation/) -2. [Install InfluxDB data nodes](/enterprise_influxdb/v1.9/install-and-deploy/installation/data_node_installation/) -3. [Install Chronograf](/enterprise_influxdb/v1.9/install-and-deploy/installation/chrono_install/) - -> **Note:** If you're looking for cloud infrastructure and services, check out how to deploy InfluxDB Enterprise (production-ready) on a cloud provider of your choice: [Azure](/enterprise_influxdb/v1.9/install-and-deploy/deploying/azure/), [GCP](/enterprise_influxdb/v1.9/install-and-deploy/deploying/google-cloud-platform/), or [AWS](/enterprise_influxdb/v1.9/install-and-deploy/deploying/aws/). 
diff --git a/content/enterprise_influxdb/v1.9/introduction/installation/_index.md b/content/enterprise_influxdb/v1.9/introduction/installation/_index.md new file mode 100644 index 000000000..533102d72 --- /dev/null +++ b/content/enterprise_influxdb/v1.9/introduction/installation/_index.md @@ -0,0 +1,20 @@ +--- +title: Install an InfluxDB Enterprise cluster +description: Install InfluxDB Enterprise in your own on-premise environment. +aliases: + - /enterprise_influxdb/v1.9/installation/ + - /enterprise_influxdb/v1.9/introduction/installation/installation/ + - /enterprise_influxdb/v1.9/introduction/install-and-deploy/ + - /enterprise_influxdb/v1.9/install-and-deploy/ +menu: + enterprise_influxdb_1_9: + name: Install + weight: 103 + parent: Introduction +--- + +Complete the following steps to install an InfluxDB Enterprise cluster in your own environment: + +1. [Install InfluxDB Enterprise meta nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/) +2. [Install InfluxDB data nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/data_node_installation/) +3. [Install Chronograf](/enterprise_influxdb/v1.9/introduction/installation/installation/chrono_install/) diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/chrono_install.md b/content/enterprise_influxdb/v1.9/introduction/installation/chrono_install.md similarity index 80% rename from content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/chrono_install.md rename to content/enterprise_influxdb/v1.9/introduction/installation/chrono_install.md index 1c9eb2e0d..792297272 100644 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/chrono_install.md +++ b/content/enterprise_influxdb/v1.9/introduction/installation/chrono_install.md @@ -2,12 +2,12 @@ title: Install Chronograf aliases: - /enterprise_influxdb/v1.9/installation/chrono_install/ - - /enterprise_influxdb/v1.9/install-and-deploy/installation/chrono_install/ + - /enterprise_influxdb/v1.9/introduction/installation/installation/chrono_install/ menu: enterprise_influxdb_1_9: name: Install Chronograf weight: 30 - parent: Install in your environment + parent: Install identifier: chrono_install --- diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/data_node_installation.md b/content/enterprise_influxdb/v1.9/introduction/installation/data_node_installation.md similarity index 96% rename from content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/data_node_installation.md rename to content/enterprise_influxdb/v1.9/introduction/installation/data_node_installation.md index 98495bab0..534ecce7b 100644 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/data_node_installation.md +++ b/content/enterprise_influxdb/v1.9/introduction/installation/data_node_installation.md @@ -2,12 +2,12 @@ title: Install InfluxDB Enterprise data nodes aliases: - /enterprise_influxdb/v1.9/installation/data_node_installation/ - - /enterprise_influxdb/v1.9/install-and-deploy/installation/data_node_installation/ + - /enterprise_influxdb/v1.9/introduction/installation/installation/data_node_installation/ menu: enterprise_influxdb_1_9: name: Install data nodes weight: 20 - parent: Install in your environment + parent: Install --- InfluxDB Enterprise offers highly scalable clusters on your infrastructure @@ -17,7 +17,7 @@ your InfluxDB Enterprise cluster: the data nodes. 
{{% warn %}} If you have not set up your meta nodes, please visit -[Installing meta nodes](/enterprise_influxdb/v1.9/install-and-deploy/installation/meta_node_installation/). +[Installing meta nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/). Bad things can happen if you complete the following steps without meta nodes. {{% /warn %}} @@ -316,7 +316,7 @@ Once your data nodes are part of your cluster, do the following: - Set up [authentication](/enterprise_influxdb/v1.9/administration/configure/security/authentication/). Once you cluster is configured for authentication, if you want to add more users in addition to admin user, - see [Manage users and permissions](/enterprise_influxdb/v1.9/administration/manage/security/). + see [Manage users and permissions](/enterprise_influxdb/v1.9/administration/manage/users-and-permissions/). - [Enable TLS](/enterprise_influxdb/v1.9/guides/enable-tls/). -- [Set up Chronograf](/enterprise_influxdb/v1.9/install-and-deploy/installation/chrono_install) +- [Set up Chronograf](/enterprise_influxdb/v1.9/introduction/installation/installation/chrono_install) for UI visualization, dashboards, and management. diff --git a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/meta_node_installation.md b/content/enterprise_influxdb/v1.9/introduction/installation/meta_node_installation.md similarity index 97% rename from content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/meta_node_installation.md rename to content/enterprise_influxdb/v1.9/introduction/installation/meta_node_installation.md index 2affd5c2c..bd5362a12 100644 --- a/content/enterprise_influxdb/v1.9/introduction/install-and-deploy/installation/meta_node_installation.md +++ b/content/enterprise_influxdb/v1.9/introduction/installation/meta_node_installation.md @@ -2,12 +2,12 @@ title: Install InfluxDB Enterprise meta nodes aliases: - /enterprise_influxdb/v1.9/installation/meta_node_installation/ - - /enterprise_influxdb/v1.9/install-and-deploy/installation/meta_node_installation/ + - /enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/ menu: enterprise_influxdb_1_9: name: Install meta nodes weight: 10 - parent: Install in your environment + parent: Install --- InfluxDB Enterprise offers highly scalable clusters on your infrastructure @@ -257,4 +257,4 @@ Note that your cluster must have at least three meta nodes. If you do not see your meta nodes in the output, retry adding them to the cluster. -After your meta nodes are part of your cluster, [install data nodes](/enterprise_influxdb/v1.9/install-and-deploy/installation/data_node_installation/). +After your meta nodes are part of your cluster, [install data nodes](/enterprise_influxdb/v1.9/introduction/installation/installation/data_node_installation/). diff --git a/content/enterprise_influxdb/v1.9/introduction/installation_requirements.md b/content/enterprise_influxdb/v1.9/introduction/installation_requirements.md index d94c8a71d..8a26c0dab 100644 --- a/content/enterprise_influxdb/v1.9/introduction/installation_requirements.md +++ b/content/enterprise_influxdb/v1.9/introduction/installation_requirements.md @@ -12,7 +12,7 @@ menu: parent: Introduction --- -Review the installation requirements below, and then check out available options to [install and deploy InfluxDB Enterprise](/enterprise_influxdb/v1.9/install-and-deploy/). 
For an overview of the architecture and concepts in an InfluxDB Enterprise cluster, review [Clustering in InfluxDB Enterprise](/enterprise_influxdb/v1.9/concepts/clustering/). +Review the installation requirements below, and then check out available options to [install and deploy InfluxDB Enterprise](/enterprise_influxdb/v1.9/introduction/installation/). For an overview of the architecture and concepts in an InfluxDB Enterprise cluster, review [Clustering in InfluxDB Enterprise](/enterprise_influxdb/v1.9/concepts/clustering/). ## Requirements for InfluxDB Enterprise clusters diff --git a/content/enterprise_influxdb/v1.9/query_language/explore-schema.md b/content/enterprise_influxdb/v1.9/query_language/explore-schema.md index 0e37f47ac..5ec905aab 100644 --- a/content/enterprise_influxdb/v1.9/query_language/explore-schema.md +++ b/content/enterprise_influxdb/v1.9/query_language/explore-schema.md @@ -188,6 +188,8 @@ SHOW SERIES [ON ] [FROM_clause] [WHERE [ '`, you must specify the database with `USE ` in the [CLI](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/) or with the `db` query string parameter in the [InfluxDB API](/enterprise_influxdb/v1.9/tools/api/#query-string-parameters) request. +`SHOW SERIES` only returns series in the database's default retention policy, +and fails if there is no default retention policy. The `FROM`, `WHERE`, `LIMIT`, and `OFFSET` clauses are optional. The `WHERE` clause supports tag comparisons; field comparisons are not @@ -211,7 +213,7 @@ and on [Regular Expressions in Queries](/enterprise_influxdb/v1.9/query_language #### Run a `SHOW SERIES` query with the `ON` clause ```sql -// Returns series for all shards in the database +// Returns series for all shards in the database and default retention policy > SHOW SERIES ON NOAA_water_database key diff --git a/content/enterprise_influxdb/v1.9/query_language/manage-database.md b/content/enterprise_influxdb/v1.9/query_language/manage-database.md index ad52cf8d1..6b9c22ab0 100644 --- a/content/enterprise_influxdb/v1.9/query_language/manage-database.md +++ b/content/enterprise_influxdb/v1.9/query_language/manage-database.md @@ -87,7 +87,7 @@ If you attempt to create a database that already exists, InfluxDB does nothing a ``` The query creates a database called `NOAA_water_database`. -[By default](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate--true), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`. +[By default](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`. ##### Create a database with a specific retention policy @@ -229,7 +229,7 @@ exist. The following sections cover how to create, alter, and delete retention policies. Note that when you create a database, InfluxDB automatically creates a retention policy named `autogen` which has infinite retention. -You may disable its auto-creation in the [configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate--true). +You may disable its auto-creation in the [configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate). 
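For reference, a minimal sketch of the corresponding data node configuration setting, assuming the default value:

```toml
[meta]
  # When true (the default), InfluxDB creates the autogen retention
  # policy (infinite duration) for each newly created database.
  retention-autocreate = true
```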
### Create retention policies with CREATE RETENTION POLICY
diff --git a/content/enterprise_influxdb/v1.9/query_language/spec.md b/content/enterprise_influxdb/v1.9/query_language/spec.md
index 4b1608854..85935c4eb 100644
--- a/content/enterprise_influxdb/v1.9/query_language/spec.md
+++ b/content/enterprise_influxdb/v1.9/query_language/spec.md
@@ -988,8 +988,7 @@ Estimates or counts exactly the cardinality of the series for the current databa
 [Series cardinality](/enterprise_influxdb/v1.9/concepts/glossary/#series-cardinality) is the major factor that affects RAM requirements. For more information, see:
--
-
+- [When do I need more RAM?](/enterprise_influxdb/v1.9/guides/hardware_sizing/#when-do-i-need-more-ram) in [Hardware Sizing Guidelines](/enterprise_influxdb/v1.9/guides/hardware_sizing/)
 - [Don't have too many series](/enterprise_influxdb/v1.9/concepts/schema_and_data_layout/#avoid-too-many-series)
 > **Note:** `ON `, `FROM `, `WITH KEY = `, `WHERE `, `GROUP BY `, and `LIMIT/OFFSET` clauses are optional.
@@ -1037,8 +1036,24 @@ show_shards_stmt = "SHOW SHARDS" .
 ```sql
 SHOW SHARDS
+
+name: telegraf
+id database retention_policy shard_group start_time end_time expiry_time owners
+-- -------- ---------------- ----------- ---------- -------- ----------- ------
+16 telegraf autogen 6 2020-10-19T00:00:00Z 2020-10-26T00:00:00Z 2020-10-26T00:00:00Z 6,7,8
+17 telegraf autogen 6 2020-10-19T00:00:00Z 2020-10-26T00:00:00Z 2020-10-26T00:00:00Z 9,4,5
+21 telegraf autogen 8 2020-10-26T00:00:00Z 2020-11-02T00:00:00Z 2020-11-02T00:00:00Z 8,9,4
+22 telegraf autogen 8 2020-10-26T00:00:00Z 2020-11-02T00:00:00Z 2020-11-02T00:00:00Z 5,6,7
+26 telegraf autogen 10 2020-11-02T00:00:00Z 2020-11-09T00:00:00Z 2020-11-09T00:00:00Z 9,4,5
+27 telegraf autogen 10 2020-11-02T00:00:00Z 2020-11-09T00:00:00Z 2020-11-09T00:00:00Z 6,7,8
+31 telegraf autogen 12 2020-11-09T00:00:00Z 2020-11-16T00:00:00Z 2020-11-16T00:00:00Z 6,7,8
 ```
+`SHOW SHARDS` outputs the following data:
+- `id` column: Shard IDs that belong to the specified `database` and `retention policy`.
+- `shard_group` column: Group number that a shard belongs to. Shards in the same shard group have the same `start_time` and `end_time`. This interval indicates how long the shard is active, and the `expiry_time` column shows when the shard group expires. No timestamp appears under `expiry_time` if the retention policy duration is set to infinite.
+- `owners` column: Shows the data nodes that own a shard. The number of nodes that own a shard is equal to the replication factor. In this example, the replication factor is 3, so 3 nodes own each shard.
+
 ### SHOW STATS
 Returns detailed statistics on available components of an InfluxDB node and available (enabled) components.
diff --git a/content/enterprise_influxdb/v1.9/supported_protocols/prometheus.md b/content/enterprise_influxdb/v1.9/supported_protocols/prometheus.md
index 668def969..cb1dd915c 100644
--- a/content/enterprise_influxdb/v1.9/supported_protocols/prometheus.md
+++ b/content/enterprise_influxdb/v1.9/supported_protocols/prometheus.md
@@ -89,7 +89,7 @@ made to match the InfluxDB data structure:
 * Prometheus labels become InfluxDB tags.
 * All `# HELP` and `# TYPE` lines are ignored.
 * [v1.8.6 and later] Prometheus remote write endpoint drops unsupported Prometheus values (`NaN`,`-Inf`, and `+Inf`) rather than reject the entire batch.
- * If [write trace logging is enabled (`[http] write-tracing = true`)](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#write-tracing--false), then summaries of dropped values are logged.
+ * If [write trace logging is enabled (`[http] write-tracing = true`)](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#write-tracing), then summaries of dropped values are logged.
 * If a batch of values contains values that are subsequently dropped, HTTP status code `204` is returned.
 ### Example: Parse Prometheus to InfluxDB
diff --git a/content/enterprise_influxdb/v1.9/tools/api.md b/content/enterprise_influxdb/v1.9/tools/api.md
index af3027f4a..b6eacf8fb 100644
--- a/content/enterprise_influxdb/v1.9/tools/api.md
+++ b/content/enterprise_influxdb/v1.9/tools/api.md
@@ -427,7 +427,7 @@ A successful [`CREATE DATABASE` query](/enterprise_influxdb/v1.9/query_language/
 | u=\ | Optional if you haven't [enabled authentication](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#set-up-authentication). Required if you've enabled authentication.* | Sets the username for authentication if you've enabled authentication. The user must have read access to the database. Use with the query string parameter `p`. |
 \* InfluxDB does not truncate the number of rows returned for requests without the `chunked` parameter.
-That behavior is configurable; see the [`max-row-limit`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-row-limit--0)
+That behavior is configurable; see the [`max-row-limit`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-row-limit)
 configuration option for more information.
 \** The InfluxDB API also supports basic authentication.
@@ -951,7 +951,7 @@ Errors are returned in JSON.
 | 400 Bad Request | Unacceptable request. Can occur with an InfluxDB line protocol syntax error or if a user attempts to write values to a field that previously accepted a different value type. The returned JSON offers further information. |
 | 401 Unauthorized | Unacceptable request. Can occur with invalid authentication credentials. |
 | 404 Not Found | Unacceptable request. Can occur if a user attempts to write to a database that does not exist. The returned JSON offers further information. |
-| 413 Request Entity Too Large | Unaccetable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See [`max-body-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-body-size--25000000) parameter for more details.
+| 413 Request Entity Too Large | Unacceptable request. Occurs if the payload of the POST request is bigger than the maximum size allowed. See the [`max-body-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-body-size) parameter for more details.
 | 500 Internal Server Error | The system is overloaded or significantly impaired. Can occur if a user attempts to write to a retention policy that does not exist. The returned JSON offers further information. |
 #### Examples
diff --git a/content/enterprise_influxdb/v1.9/tools/grafana.md b/content/enterprise_influxdb/v1.9/tools/grafana.md
index c217c4434..3569d8c0b 100644
--- a/content/enterprise_influxdb/v1.9/tools/grafana.md
+++ b/content/enterprise_influxdb/v1.9/tools/grafana.md
@@ -23,7 +23,7 @@ to visualize data from your **InfluxDB Enterprise v1.8** instance.
 in your InfluxDB data node configuration file.
 {{% /note %}}
-1.
[Set up an InfluxDB Enterprise cluster](/enterprise_influxdb/v1.9/install-and-deploy/).
+1. [Set up an InfluxDB Enterprise cluster](/enterprise_influxdb/v1.9/introduction/installation/).
 2. [Sign up for Grafana Cloud](https://grafana.com/products/cloud/) or
 [download and install Grafana](https://grafana.com/grafana/download).
 3. Visit your **Grafana Cloud user interface** (UI) or, if running Grafana locally,
diff --git a/content/enterprise_influxdb/v1.9/tools/influxd-ctl.md b/content/enterprise_influxdb/v1.9/tools/influxd-ctl.md
index ff09fecde..15ed1ce05 100644
--- a/content/enterprise_influxdb/v1.9/tools/influxd-ctl.md
+++ b/content/enterprise_influxdb/v1.9/tools/influxd-ctl.md
@@ -110,7 +110,9 @@ influxd-ctl -auth-type jwt -secret oatclusters show
 The `influxd-ctl` utility uses JWT authentication with the shared secret `oatclusters`.
-If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.9/administration/config-meta-nodes#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.9/administration/config-data-nodes#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#auth-enabled)
+and [data node configuration files](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#meta-auth-enabled)
+and the `influxd-ctl` command does not include authentication details, the system returns:
 ```bash
 Error: unable to parse authentication credentials.
@@ -132,7 +134,10 @@ In the following example, the `influxd-ctl` utility uses basic authentication fo
 influxd-ctl -auth-type basic -user admini -pwd mouse show
 ```
-If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.9/administration/config-meta-nodes#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.9/administration/config-data-nodes#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+If authentication is enabled in the cluster's
+[meta node configuration files](/enterprise_influxdb/v1.9/administration/configure/config-meta-nodes/#auth-enabled)
+and [data node configuration files](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#meta-auth-enabled)
+and the `influxd-ctl` command does not include authentication details, the system returns:
 ```bash
 Error: unable to parse authentication credentials.
@@ -246,6 +251,20 @@ Optional arguments are in brackets.
 Name of the single database to back up.
+###### [ `-estimate` ]
+
+Provide estimated backup size and progress messages during backup.
+
+**Sample output:**
+
+```
+Backing up node backup_data_0_1:8088, db stress, rp autogen, shard 14
+Files: 8 / 9 Bytes: 189543424 / 231921095 Completed: 82% in 22s Estimated remaining: 3s
+Files: 8 / 9 Bytes: 189543424 / 231921095 Completed: 82% in 23s Estimated remaining: 2s
+Files: 9 / 9 Bytes: 231736320 / 231921095 Completed: 100% in 24s Estimated remaining: 447µs
+Done backing up node backup_data_0_1:8088, db stress, rp autogen, shard 14 in 67ms: 42192896 bytes transferred
+```
+
###### [ `-from ` ]

TCP address of the target data node.
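For example, a sketch of estimating progress while backing up a single database from a specific data node (the `stress` database, the node address, and the `./backups` directory are placeholders):

```sh
influxd-ctl backup -db stress -from cluster-data-node-01:8088 -estimate ./backups
```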
@@ -351,7 +370,10 @@ Copied shard 22 from cluster-data-node-01:8088 to cluster-data-node-02:8088 ### `copy-shard-status` -Shows all in-progress [copy shard](#copy-shard) operations, including the shard's source node, destination node, database, [retention policy](/enterprise_influxdb/v1.9/concepts/glossary/#retention-policy-rp), shard ID, total size, current size, and the operation's start time. +Shows all in-progress [copy shard](#copy-shard) operations, +including the shard's source node, destination node, database, +[retention policy](/enterprise_influxdb/v1.9/concepts/glossary/#retention-policy-rp), +shard ID, total size, current size, and the operation's start time. #### Syntax @@ -935,6 +957,19 @@ To restore metadata, [restore a metadata backup](#restore-from-a-metadata-backup Show the contents of the backup. +###### [ `-meta-only-overwrite-force` ] + +Restore *metadata only* from a backup. + +{{% warn %}} +Only use this flag to restore from backups of the target cluster. +If you use this flag with metadata from a different cluster, you will lose data +(since metadata includes shard assignments to data nodes). + +See ["Back up and restore"](/enterprise_influxdb/v1.9/administration/backup-and-restore/#restore-overwrite-metadata-from-a-full-or-incremental-backup-to-fix-damaged-metadata) +for instructions on using this flag. +{{% /warn %}} + ###### [ `-newdb ` ] Name of the new database to restore to. diff --git a/content/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions.md b/content/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions.md index a429ba366..6f1807971 100644 --- a/content/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions.md +++ b/content/enterprise_influxdb/v1.9/troubleshooting/frequently-asked-questions.md @@ -89,6 +89,7 @@ Where applicable, it links to outstanding issues on GitHub. * [Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?](#why-am-i-seeing-error-writing-count-stats--partial-write-errors-in-my-data-node-logs) * [Why am I seeing `queue is full` errors in my data node logs?](#why-am-i-seeing-queue-is-full-errors-in-my-data-node-logs) * [Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?](#why-am-i-seeing-unable-to-determine-if-hostname-is-a-meta-node-when-i-try-to-add-a-meta-node-with-influxd-ctl-join) +* [Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?](#why-is-influxdb-reporting-an-out-of-memory-oom-exception-when-my-system-has-free-memory) --- @@ -177,7 +178,7 @@ an RP every 30 minutes. You may need to wait for the next RP check for InfluxDB to drop data that are outside the RP's new `DURATION` setting. The 30 minute interval is -[configurable](/enterprise_influxdb/v1.9/administration/config-data-nodes/#check-interval--30m0s). +[configurable](/enterprise_influxdb/v1.9/administration/config-data-nodes/#check-interval). Second, altering both the `DURATION` and `SHARD DURATION` of an RP can result in unexpected data retention. @@ -1282,7 +1283,7 @@ The `journalctl` output can be redirected to print the logs to a text file. With This is the expected behavior if you haven't joined the meta node to the cluster. The `503` errors should stop showing up in the logs once you -[join the meta node to the cluster](/enterprise_influxdb/v1.9/install-and-deploy/installation/meta_node_installation/#step-3-join-the-meta-nodes-to-the-cluster). 
+[join the meta node to the cluster](/enterprise_influxdb/v1.9/introduction/installation/installation/meta_node_installation/#step-3-join-the-meta-nodes-to-the-cluster).
 ## Why am I seeing a `409` error in some of my data node logs?
@@ -1333,3 +1334,35 @@ Meta nodes use the `/status` endpoint to determine the current state of another
 `"nodeType":"meta","leader":"","httpAddr":":8091","raftAddr":":8089","peers":null}`
 If you are getting an error message while attempting to `influxd-ctl join` a new meta node, it means that the JSON string returned from the `/status` endpoint is incorrect. This generally indicates that the meta node configuration file is incomplete or incorrect. Inspect the HTTP response with `curl -v "http://:8091/status"` and make sure that the `hostname`, the `bind-address`, and the `http-bind-address` are correctly populated. Also check the `license-key` or `license-path` in the configuration file of the meta nodes. Finally, make sure that you specify the `http-bind-address` port in the join command, e.g. `influxd-ctl join hostname:8091`.
+
+## Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?
+
+`mmap` is a Unix system call that maps files into memory.
+As the number of shards in an InfluxDB Enterprise cluster increases, the number of memory maps increases.
+If the number of maps exceeds the configured maximum limit, the node reports that it is out of memory.
+
+To check the current number of maps the `influxd` process is using:
+
+```sh
+# Get the influxd process ID (PID); the [i] keeps awk from matching its own command line
+PID=$(ps aux | awk '/[i]nfluxd/ {print $2}')
+
+# Count the number of maps associated with the influxd process
+wc -l /proc/$PID/maps
+```
+
+The `max_map_count` file contains the maximum number of memory map areas a process may have.
+The default limit is `65536`.
+We recommend increasing this to `262144` (four times the default).
+To persist the new limit across reboots, run the following:
+
+```sh
+echo vm.max_map_count=262144 > /etc/sysctl.d/90-vm.max_map_count.conf
+```
+
+To apply the persisted setting without rebooting:
+
+```sh
+sysctl --system
+```
+
+Restart the `influxd` process and repeat on each node in your cluster.
diff --git a/content/flux/v0.x/data-types/basic/bool.md b/content/flux/v0.x/data-types/basic/bool.md
index f008c6458..4f91e8473 100644
--- a/content/flux/v0.x/data-types/basic/bool.md
+++ b/content/flux/v0.x/data-types/basic/bool.md
@@ -71,7 +71,7 @@ and convert columns to booleans.
 ```js
 data
- |> toBool()
+ |> toBool()
 ```
 {{< flex >}}
@@ -103,7 +103,7 @@ data
 ```js
 data
- |> map(fn: (r) => ({ r with running: bool(v: r.running) }))
+ |> map(fn: (r) => ({ r with running: bool(v: r.running) }))
 ```
 {{< flex >}}
diff --git a/content/flux/v0.x/data-types/basic/bytes.md b/content/flux/v0.x/data-types/basic/bytes.md
index e77eb6c5a..13a91949f 100644
--- a/content/flux/v0.x/data-types/basic/bytes.md
+++ b/content/flux/v0.x/data-types/basic/bytes.md
@@ -21,6 +21,7 @@ A **bytes** type represents a sequence of byte values.
 - [Bytes syntax](#bytes-syntax)
 - [Convert a column to bytes](#convert-a-column-to-bytes)
+- [Include the string representation of bytes in a table](#include-the-string-representation-of-bytes-in-a-table)
 ## Bytes syntax
 Flux does not provide a bytes literal syntax.
@@ -49,3 +50,37 @@ import "contrib/bonitoo-io/hex"
 hex.bytes(v: "FF5733")
 // Returns [255 87 51] (bytes)
 ```
+
+## Include the string representation of bytes in a table
+
+Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the string
+representation of bytes and include it as a column value.
+`display()` represents bytes types as a string of lowercase hexadecimal +characters prefixed with `0x`. + +```js +import "sampledata" + +sampledata.string() + |> map(fn: (r) => ({r with _value: display(v: bytes(v: r._value))})) +``` + +#### Output + +| tag | _time | _value (string) | +| --- | :------------------- | ------------------------------------------: | +| t1 | 2021-01-01T00:00:00Z | 0x736d706c5f673971637a73 | +| t1 | 2021-01-01T00:00:10Z | 0x736d706c5f306d6776396e | +| t1 | 2021-01-01T00:00:20Z | 0x736d706c5f706877363634 | +| t1 | 2021-01-01T00:00:30Z | 0x736d706c5f6775767a7934 | +| t1 | 2021-01-01T00:00:40Z | 0x736d706c5f357633636365 | +| t1 | 2021-01-01T00:00:50Z | 0x736d706c5f7339666d6779 | + +| tag | _time | _value (string) | +| --- | :------------------- | ------------------------------------------: | +| t2 | 2021-01-01T00:00:00Z | 0x736d706c5f623565696461 | +| t2 | 2021-01-01T00:00:10Z | 0x736d706c5f6575346f7870 | +| t2 | 2021-01-01T00:00:20Z | 0x736d706c5f356737747a34 | +| t2 | 2021-01-01T00:00:30Z | 0x736d706c5f736f78317574 | +| t2 | 2021-01-01T00:00:40Z | 0x736d706c5f77666d373537 | +| t2 | 2021-01-01T00:00:50Z | 0x736d706c5f64746e326276 | diff --git a/content/flux/v0.x/data-types/basic/duration.md b/content/flux/v0.x/data-types/basic/duration.md index d0a9a80c2..9020fc9a5 100644 --- a/content/flux/v0.x/data-types/basic/duration.md +++ b/content/flux/v0.x/data-types/basic/duration.md @@ -59,6 +59,18 @@ Flux supports the following unit specifiers: 3d12h4m25s // 3 days, 12 hours, 4 minutes, and 25 seconds ``` +{{% note %}} +#### Do not include leading zeros in duration literals +The integer part of a duration literal should not contain leading zeros. +Leading zeros are parsed as separate integer literals. +For example: + +```js +01m // parsed as 0 (integer literal) and 1m (duration literal) +02h05m // parsed as 0 (integer literal), 2h (duration literal), 0 (integer literal), and 5m (duration literal) +``` +{{% /note %}} + ## Convert data types to durations Use the [`duration()` function](/flux/v0.x/stdlib/universe/duration/) to convert the following [basic types](/flux/v0.x/data-types/basic/) to durations: diff --git a/content/flux/v0.x/data-types/basic/float.md b/content/flux/v0.x/data-types/basic/float.md index 6d9b1976c..71826dba4 100644 --- a/content/flux/v0.x/data-types/basic/float.md +++ b/content/flux/v0.x/data-types/basic/float.md @@ -115,7 +115,7 @@ and convert columns to floats. ```js data - |> toFloat() + |> toFloat() ``` {{< flex >}} @@ -147,7 +147,7 @@ data ```js data - |> map(fn: (r) => ({ r with index: float(v: r.index) })) + |> map(fn: (r) => ({ r with index: float(v: r.index) })) ``` {{< flex >}} diff --git a/content/flux/v0.x/data-types/basic/int.md b/content/flux/v0.x/data-types/basic/int.md index 74f70c532..048d4ae8c 100644 --- a/content/flux/v0.x/data-types/basic/int.md +++ b/content/flux/v0.x/data-types/basic/int.md @@ -119,7 +119,7 @@ and convert columns to integers. ```js data - |> toInt() + |> toInt() ``` {{< flex >}} @@ -151,7 +151,7 @@ data ```js data - |> map(fn: (r) => ({ r with uid: int(v: r.uid) })) + |> map(fn: (r) => ({ r with uid: int(v: r.uid) })) ``` {{< flex >}} diff --git a/content/flux/v0.x/data-types/basic/null.md b/content/flux/v0.x/data-types/basic/null.md index 2425d5fc3..e319a793b 100644 --- a/content/flux/v0.x/data-types/basic/null.md +++ b/content/flux/v0.x/data-types/basic/null.md @@ -37,7 +37,7 @@ if a column value is _null_. 
##### Filter out rows with null values ```js data - |> filter(fn: (r) => exists r._value) + |> filter(fn: (r) => exists r._value) ``` {{< flex >}} diff --git a/content/flux/v0.x/data-types/basic/string.md b/content/flux/v0.x/data-types/basic/string.md index 79a72fa34..687016891 100644 --- a/content/flux/v0.x/data-types/basic/string.md +++ b/content/flux/v0.x/data-types/basic/string.md @@ -111,7 +111,7 @@ use the [`toString()` function](/flux/v0.x/stdlib/universe/tostring/). ```js data - |> toString() + |> toString() ``` {{< flex >}} @@ -143,7 +143,7 @@ data ```js data - |> map(fn: (r) => ({ r with level: string(v: r.level) })) + |> map(fn: (r) => ({ r with level: string(v: r.level) })) ``` {{< flex >}} {{% flex-content %}} diff --git a/content/flux/v0.x/data-types/basic/time.md b/content/flux/v0.x/data-types/basic/time.md index 58cf39642..1135de4bb 100644 --- a/content/flux/v0.x/data-types/basic/time.md +++ b/content/flux/v0.x/data-types/basic/time.md @@ -72,7 +72,7 @@ and convert columns to time. ```js data - |> toTime() + |> toTime() ``` {{< flex >}} @@ -104,7 +104,7 @@ data ```js data - |> map(fn: (r) => ({ r with epoch_ns: time(v: r.epoch_ns) })) + |> map(fn: (r) => ({ r with epoch_ns: time(v: r.epoch_ns) })) ``` {{< flex >}} @@ -171,7 +171,7 @@ date.truncate(t: t0, unit: 1mo) ```js data - |> truncateTimeColumn(unit: 1m) + |> truncateTimeColumn(unit: 1m) ``` {{< flex >}} diff --git a/content/flux/v0.x/data-types/basic/uint.md b/content/flux/v0.x/data-types/basic/uint.md index 55970c4e1..adeaa22c4 100644 --- a/content/flux/v0.x/data-types/basic/uint.md +++ b/content/flux/v0.x/data-types/basic/uint.md @@ -117,7 +117,7 @@ and convert columns to uintegers. ```js data - |> toUInt() + |> toUInt() ``` {{< flex >}} @@ -149,7 +149,7 @@ data ```js data - |> map(fn: (r) => ({ r with uid: uint(v: r.uid) })) + |> map(fn: (r) => ({ r with uid: uint(v: r.uid) })) ``` {{< flex >}} diff --git a/content/flux/v0.x/data-types/composite/array.md b/content/flux/v0.x/data-types/composite/array.md index adb5a1453..60f84223e 100644 --- a/content/flux/v0.x/data-types/composite/array.md +++ b/content/flux/v0.x/data-types/composite/array.md @@ -66,11 +66,35 @@ arr[2] - [Get the length of an array](#get-the-length-of-an-array) - [Create a stream of tables from an array](#create-a-stream-of-tables-from-an-array) - [Compare arrays](#compare-arrays) +- [Filter an array](#filter-an-array) +- [Merge two arrays](#merge-two-arrays) +- [Return the string representation of an array](#return-the-string-representation-of-an-array) +- [Include the string representation of an array in a table](#include-the-string-representation-of-an-array-in-a-table) ### Iterate over an array -{{% note %}} -Flux currently does not provide a way to iterate over an array. -{{% /note %}} +1. Import the [`experimental/array` package](/flux/v0.x/stdlib/experimental/array/). +2. Use [`array.map`](/flux/v0.x/stdlib/experimental/array/map/) to iterate over + elements in an array, apply a function to each element, and then return a new + array. 
+
+```js
+import "experimental/array"
+
+a = [
+    {fname: "John", lname: "Doe", age: 42},
+    {fname: "Jane", lname: "Doe", age: 40},
+    {fname: "Jacob", lname: "Dozer", age: 21},
+]
+
+a |> array.map(fn: (x) => ({statement: "${x.fname} ${x.lname} is ${x.age} years old."}))
+
+// Returns
+// [
+//     {statement: "John Doe is 42 years old."},
+//     {statement: "Jane Doe is 40 years old."},
+//     {statement: "Jacob Dozer is 21 years old."}
+// ]
+```
 
 ### Check if a value exists in an array
 Use the [`contains` function](/flux/v0.x/stdlib/universe/contains/) to check if
@@ -105,9 +129,9 @@ length(arr: names)
 import "array"
 
 arr = [
-  {fname: "John", lname: "Doe", age: "37"},
-  {fname: "Jane", lname: "Doe", age: "32"},
-  {fname: "Jack", lname: "Smith", age: "56"}
+    {fname: "John", lname: "Doe", age: "37"},
+    {fname: "Jane", lname: "Doe", age: "32"},
+    {fname: "Jack", lname: "Smith", age: "56"},
 ]
 
 array.from(rows: arr)
@@ -120,7 +144,6 @@ array.from(rows: arr)
 | Jane  | Doe   | 32  |
 | Jack  | Smith | 56  |
 
-
 ### Compare arrays
 Use the `==` [comparison operator](/flux/v0.x/spec/operators/#comparison-operators)
 to check if two arrays are equal.
@@ -132,4 +155,73 @@ Equality is based on values, their type, and order.
 
 [12300.0, 34500.0] == [float(v: "1.23e+04"), float(v: "3.45e+04")]
 // Returns true
-```
\ No newline at end of file
+```
+
+### Filter an array
+1. Import the [`experimental/array` package](/flux/v0.x/stdlib/experimental/array/).
+2. Use [`array.filter`](/flux/v0.x/stdlib/experimental/array/filter/) to iterate
+   over and evaluate elements in an array with a predicate function and then
+   return a new array with only elements that match the predicate.
+
+```js
+import "experimental/array"
+
+a = [1, 2, 3, 4, 5]
+
+a |> array.filter(fn: (x) => x >= 3)
+// Returns [3, 4, 5]
+```
+
+### Merge two arrays
+1. Import the [`experimental/array` package](/flux/v0.x/stdlib/experimental/array/).
+2. Use [`array.concat`](/flux/v0.x/stdlib/experimental/array/concat/) to merge
+   two arrays.
+
+```js
+import "experimental/array"
+
+a = [1, 2, 3]
+b = [4, 5, 6]
+
+a |> array.concat(v: b)
+// Returns [1, 2, 3, 4, 5, 6]
+```
+
+### Return the string representation of an array
+Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal
+representation of an array as a string.
+
+```js
+arr = [1, 2, 3]
+
+display(v: arr)
+
+// Returns "[1, 2, 3]"
+```
+
+### Include the string representation of an array in a table
+Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal
+representation of an array as a string and include it as a column value.
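+
+As a standalone sketch of the pattern (the `row` record below is illustrative,
+not part of any package), build an array from record properties and display it:
+
+```js
+row = {tag: "t1", _value: "smpl_g9qczs"}
+
+display(v: [row.tag, row._value])
+
+// Returns "[t1, smpl_g9qczs]"
+```
+
+The following example applies the same pattern to each row of the `sampledata` stream: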
+
+```js
+import "sampledata"
+
+sampledata.string()
+    |> map(fn: (r) => ({_time: r._time, exampleArray: display(v: [r.tag, r._value])}))
+```
+
+#### Output
+| _time (time)         | exampleArray (string) |
+| :------------------- | :-------------------- |
+| 2021-01-01T00:00:00Z | [t1, smpl_g9qczs]     |
+| 2021-01-01T00:00:10Z | [t1, smpl_0mgv9n]     |
+| 2021-01-01T00:00:20Z | [t1, smpl_phw664]     |
+| 2021-01-01T00:00:30Z | [t1, smpl_guvzy4]     |
+| 2021-01-01T00:00:40Z | [t1, smpl_5v3cce]     |
+| 2021-01-01T00:00:50Z | [t1, smpl_s9fmgy]     |
+| 2021-01-01T00:00:00Z | [t2, smpl_b5eida]     |
+| 2021-01-01T00:00:10Z | [t2, smpl_eu4oxp]     |
+| 2021-01-01T00:00:20Z | [t2, smpl_5g7tz4]     |
+| 2021-01-01T00:00:30Z | [t2, smpl_sox1ut]     |
+| 2021-01-01T00:00:40Z | [t2, smpl_wfm757]     |
+| 2021-01-01T00:00:50Z | [t2, smpl_dtn2bv]     |
diff --git a/content/flux/v0.x/data-types/composite/dict.md b/content/flux/v0.x/data-types/composite/dict.md
index 28269d63e..09367932a 100644
--- a/content/flux/v0.x/data-types/composite/dict.md
+++ b/content/flux/v0.x/data-types/composite/dict.md
@@ -62,11 +62,12 @@ To reference values in a dictionary:
 
 ```js
 import "dict"
 
-positions = [
-  "Manager": "Jane Doe",
-  "Asst. Manager": "Jack Smith",
-  "Clerk": "John Doe"
-]
+positions =
+    [
+        "Manager": "Jane Doe",
+        "Asst. Manager": "Jack Smith",
+        "Clerk": "John Doe",
+    ]
 
 dict.get(dict: positions, key: "Manager", default: "Unknown position")
 // Returns Jane Doe
@@ -80,6 +81,8 @@ dict.get(dict: positions, key: "Teller", default: "Unknown position")
 
 - [Create a dictionary from a list](#create-a-dictionary-from-a-list)
 - [Insert a key-value pair into a dictionary](#insert-a-key-value-pair-into-a-dictionary)
 - [Remove a key-value pair from a dictionary](#remove-a-key-value-pair-from-a-dictionary)
+- [Return the string representation of a dictionary](#return-the-string-representation-of-a-dictionary)
+- [Include the string representation of a dictionary in a table](#include-the-string-representation-of-a-dictionary-in-a-table)
 
 ### Create a dictionary from a list
 1. Import the [`dict` package](/flux/v0.x/stdlib/dict/).
@@ -90,10 +93,7 @@ dict.get(dict: positions, key: "Teller", default: "Unknown position")
 ```js
 import "dict"
 
-list = [
-  {key: "k1", value: "v1"},
-  {key: "k2", value: "v2"}
-]
+list = [{key: "k1", value: "v1"}, {key: "k2", value: "v2"}]
 
 dict.fromList(pairs: list)
 // Returns [k1: v1, k2: v2]
@@ -109,11 +109,7 @@ import "dict"
 
 exampleDict = ["k1": "v1", "k2": "v2"]
 
-dict.insert(
-  dict: exampleDict,
-  key: "k3",
-  value: "v3"
-)
+dict.insert(dict: exampleDict, key: "k3", value: "v3")
 
 // Returns [k1: v1, k2: v2, k3: v3]
 ```
@@ -127,9 +123,46 @@ import "dict"
 
 exampleDict = ["k1": "v1", "k2": "v2"]
 
-dict.remove(
-  dict: exampleDict,
-  key: "k2"
-)
+dict.remove(dict: exampleDict, key: "k2")
 
 // Returns [k1: v1]
 ```
+
+### Return the string representation of a dictionary
+Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal
+representation of a dictionary as a string.
+
+```js
+x = ["a": 1, "b": 2, "c": 3]
+
+display(v: x)
+
+// Returns "[a: 1, b: 2, c: 3]"
+```
+
+### Include the string representation of a dictionary in a table
+Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal
+representation of a dictionary as a string and include it as a column value.
+
+```js
+import "sampledata"
+
+sampledata.string()
+    |> map(fn: (r) => ({_time: r._time, exampleDict: display(v: ["tag": r.tag, "value": r._value])}))
+```
+
+#### Output
+
+| \_time (time)        | exampleDict (string)          |
+| :------------------- | :---------------------------- |
+| 2021-01-01T00:00:00Z | [tag: t1, value: smpl_g9qczs] |
+| 2021-01-01T00:00:10Z | [tag: t1, value: smpl_0mgv9n] |
+| 2021-01-01T00:00:20Z | [tag: t1, value: smpl_phw664] |
+| 2021-01-01T00:00:30Z | [tag: t1, value: smpl_guvzy4] |
+| 2021-01-01T00:00:40Z | [tag: t1, value: smpl_5v3cce] |
+| 2021-01-01T00:00:50Z | [tag: t1, value: smpl_s9fmgy] |
+| 2021-01-01T00:00:00Z | [tag: t2, value: smpl_b5eida] |
+| 2021-01-01T00:00:10Z | [tag: t2, value: smpl_eu4oxp] |
+| 2021-01-01T00:00:20Z | [tag: t2, value: smpl_5g7tz4] |
+| 2021-01-01T00:00:30Z | [tag: t2, value: smpl_sox1ut] |
+| 2021-01-01T00:00:40Z | [tag: t2, value: smpl_wfm757] |
+| 2021-01-01T00:00:50Z | [tag: t2, value: smpl_dtn2bv] |
diff --git a/content/flux/v0.x/data-types/composite/function.md b/content/flux/v0.x/data-types/composite/function.md
index c06ade7c9..171075af2 100644
--- a/content/flux/v0.x/data-types/composite/function.md
+++ b/content/flux/v0.x/data-types/composite/function.md
@@ -63,6 +63,3 @@ A Flux **function** literal contains the following:
 
 ## Define functions
 _For information about defining custom functions, see [Define custom functions](/flux/v0.x/define-functions/)._
-
-
-
diff --git a/content/flux/v0.x/data-types/composite/record.md b/content/flux/v0.x/data-types/composite/record.md
index 3b7767533..9e02b96dd 100644
--- a/content/flux/v0.x/data-types/composite/record.md
+++ b/content/flux/v0.x/data-types/composite/record.md
@@ -58,11 +58,7 @@ and specify the key to reference.
 Specify the record to access followed by a period (`.`) and the property key.
 
 ```js
-c = {
-  name: "John Doe",
-  address: "123 Main St.",
-  id: 1123445
-}
+c = {name: "John Doe", address: "123 Main St.", id: 1123445}
 
 c.name
 // Returns John Doe
@@ -80,11 +76,7 @@ Use bracket notation to access keys with special or whitespace characters.
 {{% /note %}}
 
 ```js
-c = {
-  "Company Name": "ACME",
-  "Street Address": "123 Main St.",
-  id: 1123445
-}
+c = {"Company Name": "ACME", "Street Address": "123 Main St.", id: 1123445}
 
 c["Company Name"]
 // Returns ACME
@@ -97,14 +89,15 @@ c["id"]
 To reference nested records, use chained dot or bracket notation for each nested level.
 
 ```js
-customer = {
-  name: "John Doe",
-  address: {
-    street: "123 Main St.",
-    city: "Pleasantville",
-    state: "New York"
-  }
-}
+customer =
+    {
+        name: "John Doe",
+        address: {
+            street: "123 Main St.",
+            city: "Pleasantville",
+            state: "New York"
+        }
+    }
 
 customer.address.street
 // Returns 123 Main St.
@@ -136,6 +129,8 @@ _To dynamically reference keys in a composite type, consider using a
 
 - [Extend a record](#extend-a-record)
 - [List keys in a record](#list-keys-in-a-record)
 - [Compare records](#compare-records)
+- [Return the string representation of a record](#return-the-string-representation-of-a-record)
+- [Include the string representation of a record in a table](#include-the-string-representation-of-a-record-in-a-table)
 
 ### Extend a record
 Use the **`with` operator** to extend a record.
 The `with` operator overwrites record properties if the specified keys exist
 or adds the new properties if the keys do not exist.
```js -c = { - name: "John Doe", - id: 1123445 -} +c = {name: "John Doe", id: 1123445} {c with spouse: "Jane Doe", pet: "Spot"} // Returns {id: 1123445, name: John Doe, pet: Spot, spouse: Jane Doe} @@ -160,10 +152,7 @@ c = { ```js import "experimental" -c = { - name: "John Doe", - id: 1123445 -} +c = {name: "John Doe", id: 1123445} experimental.objectKeys(o: c) // Returns [name, id] @@ -181,3 +170,43 @@ Equality is based on keys, their values, and types. {foo: 12300.0, bar: 34500.0} == {bar: float(v: "3.45e+04"), foo: float(v: "1.23e+04")} // Returns true ``` + +### Return the string representation of a record +Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal +representation of a record as a string. + +```js +x = {a: 1, b: 2, c: 3} + +display(v: x) + +// Returns "{a: 1, b: 2, c: 3}" +``` + +### Include the string representation of a record in a table +Use [`display()`](/flux/v0.x/stdlib/universe/display/) to return the Flux literal +representation of a record as a string and include it as a column value. + +```js +import "sampledata" + +sampledata.string() + |> map(fn: (r) => ({_time: r._time, exampleRecord: display(v: {tag: r.tag, value:r._value})})) +``` + +#### Output + +| \_time (time) | exampleRecord (string) | +| :---------------------------------------- | :----------------------------------------------- | +| 2021-01-01T00:00:00Z | {tag: t1, value: smpl_g9qczs} | +| 2021-01-01T00:00:10Z | {tag: t1, value: smpl_0mgv9n} | +| 2021-01-01T00:00:20Z | {tag: t1, value: smpl_phw664} | +| 2021-01-01T00:00:30Z | {tag: t1, value: smpl_guvzy4} | +| 2021-01-01T00:00:40Z | {tag: t1, value: smpl_5v3cce} | +| 2021-01-01T00:00:50Z | {tag: t1, value: smpl_s9fmgy} | +| 2021-01-01T00:00:00Z | {tag: t2, value: smpl_b5eida} | +| 2021-01-01T00:00:10Z | {tag: t2, value: smpl_eu4oxp} | +| 2021-01-01T00:00:20Z | {tag: t2, value: smpl_5g7tz4} | +| 2021-01-01T00:00:30Z | {tag: t2, value: smpl_sox1ut} | +| 2021-01-01T00:00:40Z | {tag: t2, value: smpl_wfm757} | +| 2021-01-01T00:00:50Z | {tag: t2, value: smpl_dtn2bv} | diff --git a/content/flux/v0.x/data-types/regexp.md b/content/flux/v0.x/data-types/regexp.md index 95b77ba92..8bf4ce0ea 100644 --- a/content/flux/v0.x/data-types/regexp.md +++ b/content/flux/v0.x/data-types/regexp.md @@ -100,11 +100,7 @@ regexp.compile(v: "^- [a-z0-9]{7}") ```js import "regexp" -regexp.replaceAllString( - r: /a(x*)b/, - v: "-ab-axxb-", - t: "T" -) +regexp.replaceAllString(r: /a(x*)b/, v: "-ab-axxb-", t: "T") // Returns "-T-T-" ``` @@ -120,10 +116,7 @@ regexp.replaceAllString( ```js import "regexp" -regexp.findString( - r: /foo.?/, - v: "seafood fool" -) +regexp.findString(r: /foo.?/, v: "seafood fool") // Returns "food" ``` diff --git a/content/flux/v0.x/define-functions.md b/content/flux/v0.x/define-functions.md index 438c445d8..1e082710a 100644 --- a/content/flux/v0.x/define-functions.md +++ b/content/flux/v0.x/define-functions.md @@ -107,14 +107,14 @@ each row, modify the `_value`, and then return the updated row. ##### Function definition ```js multByX = (tables=<-, x) => - tables - |> map(fn: (r) => ({ r with _value: r._value * x })) + tables + |> map(fn: (r) => ({r with _value: r._value * x})) ``` ##### Example usage ```js data - |> multByX(x: 2.0) + |> multByX(x: 2.0) ``` {{< flex >}} @@ -162,20 +162,22 @@ the updated row with a new `speed` column. 
##### Function definition ```js speed = (tables=<-, unit="m") => - tables - |> map(fn: (r) => { - elapsedHours = float(v: int(v: duration(v: r.elapsed))) / float(v: int(v: 1h)) - distance = float(v: r.distance) - speed = distance / elapsedHours - - return { r with speed: "${speed} ${unit}ph" } - }) + tables + |> map( + fn: (r) => { + elapsedHours = float(v: int(v: duration(v: r.elapsed))) / float(v: int(v: 1h)) + distance = float(v: r.distance) + speed = distance / elapsedHours + + return {r with speed: "${speed} ${unit}ph"} + }, + ) ``` ##### Example usage ```js data - |> speed() + |> speed() ``` {{< flex >}} @@ -212,9 +214,9 @@ To create custom functions with variables scoped to the function, ```js functionName = (param) => { - exampleVar = "foo" + exampleVar = "foo" - return exampleVar + return exampleVar } ``` @@ -229,12 +231,17 @@ a numeric input value: ```js alertLevel = (v) => { - level = if float(v:v) >= 90.0 then "crit" - else if float(v:v) >= 80.0 then "warn" - else if float(v:v) >= 65.0 then "info" - else "ok" - - return level + level = + if float(v: v) >= 90.0 then + "crit" + else if float(v: v) >= 80.0 then + "warn" + else if float(v: v) >= 65.0 then + "info" + else + "ok" + + return level } alertLevel(v: 87.3) @@ -252,27 +259,30 @@ to create a dictionary of HEX codes and their corresponding names. import "dict" hexName = (hex) => { - hexNames = dict.fromList(pairs: [ - {key: "#00ffff", value: "Aqua"}, - {key: "#000000", value: "Black"}, - {key: "#0000ff", value: "Blue"}, - {key: "#ff00ff", value: "Fuchsia"}, - {key: "#808080", value: "Gray"}, - {key: "#008000", value: "Green"}, - {key: "#00ff00", value: "Lime"}, - {key: "#800000", value: "Maroon"}, - {key: "#000080", value: "Navy"}, - {key: "#808000", value: "Olive"}, - {key: "#800080", value: "Purple"}, - {key: "#ff0000", value: "Red"}, - {key: "#c0c0c0", value: "Silver"}, - {key: "#008080", value: "Teal"}, - {key: "#ffffff", value: "White"}, - {key: "#ffff00", value: "Yellow"}, - ]) - name = dict.get(dict: hexNames, key: hex, default: "No known name") - - return name + hexNames = + dict.fromList( + pairs: [ + {key: "#00ffff", value: "Aqua"}, + {key: "#000000", value: "Black"}, + {key: "#0000ff", value: "Blue"}, + {key: "#ff00ff", value: "Fuchsia"}, + {key: "#808080", value: "Gray"}, + {key: "#008000", value: "Green"}, + {key: "#00ff00", value: "Lime"}, + {key: "#800000", value: "Maroon"}, + {key: "#000080", value: "Navy"}, + {key: "#808000", value: "Olive"}, + {key: "#800080", value: "Purple"}, + {key: "#ff0000", value: "Red"}, + {key: "#c0c0c0", value: "Silver"}, + {key: "#008080", value: "Teal"}, + {key: "#ffffff", value: "White"}, + {key: "#ffff00", value: "Yellow"}, + ], + ) + name = dict.get(dict: hexNames, key: hex, default: "No known name") + + return name } hexName(hex: "#000000") diff --git a/content/flux/v0.x/get-started/_index.md b/content/flux/v0.x/get-started/_index.md index a74165c95..5d32d96e3 100644 --- a/content/flux/v0.x/get-started/_index.md +++ b/content/flux/v0.x/get-started/_index.md @@ -39,10 +39,10 @@ To see how to retrieve data from a source, select the data source: InfluxDB, CSV {{% code-tab-content %}} ```js from(bucket: "example-bucket") - |> range(start: -1d) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> mean() - |> yield(name: "_results") + |> range(start: -1d) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> mean() + |> yield(name: "_results") ``` {{% /code-tab-content %}} {{% code-tab-content %}} @@ -50,10 +50,10 @@ from(bucket: "example-bucket") 
import "csv" csv.from(file: "path/to/example/data.csv") - |> range(start: -1d) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> mean() - |> yield(name: "_results") + |> range(start: -1d) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> mean() + |> yield(name: "_results") ``` {{% /code-tab-content %}} {{% code-tab-content %}} @@ -63,11 +63,11 @@ import "sql" sql.from( driverName: "postgres", dataSourceName: "postgresql://user:password@localhost", - query:"SELECT * FROM TestTable" - ) - |> filter(fn: (r) => r.UserID == "123ABC456DEF") - |> mean(column: "purchase_total") - |> yield(name: "_results") + query: "SELECT * FROM TestTable", +) + |> filter(fn: (r) => r.UserID == "123ABC456DEF") + |> mean(column: "purchase_total") + |> yield(name: "_results") ``` {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} diff --git a/content/flux/v0.x/get-started/data-model.md b/content/flux/v0.x/get-started/data-model.md index 2b23d642a..a528b047e 100644 --- a/content/flux/v0.x/get-started/data-model.md +++ b/content/flux/v0.x/get-started/data-model.md @@ -89,7 +89,7 @@ to modify group keys in a stream of tables. ```js data - |> group(columns: ["foo", "bar"], mode: "by") + |> group(columns: ["foo", "bar"], mode: "by") ``` ### Table grouping example diff --git a/content/flux/v0.x/get-started/query-basics.md b/content/flux/v0.x/get-started/query-basics.md index 2cd1aabb1..48c256775 100644 --- a/content/flux/v0.x/get-started/query-basics.md +++ b/content/flux/v0.x/get-started/query-basics.md @@ -27,11 +27,11 @@ The majority of basic Flux queries include the following steps: - [Process](#process) ```js -from(bucket: "example-bucket") // ── Source - |> range(start: -1d) // ── Filter on time - |> filter(fn: (r) => r._field == "foo") // ── Filter on column values - |> group(columns: ["sensorID"]) // ── Shape - |> mean() // ── Process +from(bucket: "example-bucket") // ── Source + |> range(start: -1d) // ── Filter on time + |> filter(fn: (r) => r._field == "foo") // ── Filter on column values + |> group(columns: ["sensorID"]) // ── Shape + |> mean() // ── Process ``` ### Source @@ -135,7 +135,7 @@ To actually query data from InfluxDB, replace `sample.data()` with the import "influxdata/influxdb/sample" sample.data(set: "airSensor") - |> range(start: -1h) + |> range(start: -1h) ``` 3. Use [`filter()`](/flux/v0.x/stdlib/universe/filter/) to filter rows based on @@ -147,8 +147,8 @@ To actually query data from InfluxDB, replace `sample.data()` with the import "influxdata/influxdb/sample" sample.data(set: "airSensor") - |> range(start: -1h) - |> filter(fn: (r) => r._field == "co") + |> range(start: -1h) + |> filter(fn: (r) => r._field == "co") ``` 4. Use [`mean()`](/flux/v0.x/stdlib/universe/mean/) to calculate the average value @@ -161,9 +161,9 @@ To actually query data from InfluxDB, replace `sample.data()` with the import "influxdata/influxdb/sample" sample.data(set: "airSensor") - |> range(start: -1h) - |> filter(fn: (r) => r._field == "co") - |> mean() + |> range(start: -1h) + |> filter(fn: (r) => r._field == "co") + |> mean() ``` 5. 
Use [`group()`](/flux/v0.x/stdlib/universe/group) to [restructure tables](/flux/v0.x/get-started/data-model/#restructure-tables) @@ -173,10 +173,10 @@ To actually query data from InfluxDB, replace `sample.data()` with the import "influxdata/influxdb/sample" sample.data(set: "airSensor") - |> range(start: -1h) - |> filter(fn: (r) => r._field == "co") - |> mean() - |> group() + |> range(start: -1h) + |> filter(fn: (r) => r._field == "co") + |> mean() + |> group() ``` Results from this basic query should be similar to the following: diff --git a/content/flux/v0.x/get-started/syntax-basics.md b/content/flux/v0.x/get-started/syntax-basics.md index e3e5c595c..8f4b59d3b 100644 --- a/content/flux/v0.x/get-started/syntax-basics.md +++ b/content/flux/v0.x/get-started/syntax-basics.md @@ -28,6 +28,7 @@ This guide walks through how Flux handles a few simple expressions. - [Dictionaries](#dictionaries) - [Functions](#functions) - [Regular expression types](#regular-expression-types) + - [View the string representation of any Flux type](#view-the-string-representation-of-any-flux-type) - [Packages](#packages) - [Examples of basic syntax](#examples-of-basic-syntax) - [Define data stream variables](#define-data-stream-variables) @@ -149,6 +150,7 @@ The following basic types do not have a literal syntax, but can be created in ot ### Composite Types Flux [composite types](/flux/v0.x/data-types/composite/) are constructed from Flux [basic types](#basic-types). +All composite types have a Flux literal representation. - [Records](#records) - [Arrays](#arrays) @@ -281,6 +283,17 @@ regex = /^foo/ // Returns false ``` +### View the string representation of any Flux type +Use [`display()`](/flux/v0.x/stdlib/universe/display) to output the Flux literal +representation of any value as a string. + +```js +x = bytes(v: "foo") + +display(v: x) +// Returns "0x666f6f" +``` + ## Packages The [Flux standard library](/flux/v0.x/stdlib/) is organized into [packages](/flux/v0.x/spec/packages/) that contain functions and package-specific options. @@ -310,17 +323,18 @@ to query sample air sensor data and assigns different streams of data to unique ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> range(start: -15m) - |> filter(fn: (r) => r._measurement == "airSensors") +data = + sample.data(set: "airSensor") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "airSensors") temperature = - data - |> filter(fn: (r) => r._field == "temperature") + data + |> filter(fn: (r) => r._field == "temperature") humidity = - data - |> filter(fn: (r) => r._field == "humidity") + data + |> filter(fn: (r) => r._field == "humidity") ``` These variables can be used in other functions, such as `join()`, while keeping @@ -334,9 +348,9 @@ to find the top `n` results in the data set. ```js topN = (tables=<-, n) => - tables - |> sort(desc: true) - |> limit(n: n) + tables + |> sort(desc: true) + |> limit(n: n) ``` Use the custom function `topN` and the `humidity` data stream variable defined @@ -344,7 +358,7 @@ above to return the top three data points in each input table. 
```js humidity - |> topN(n:3) + |> topN(n:3) ``` _For more information about creating custom functions, see [Define custom functions](/flux/v0.x/define-functions)._ diff --git a/content/flux/v0.x/prometheus/metric-types/counter.md b/content/flux/v0.x/prometheus/metric-types/counter.md index c179c51d2..95c3bafcd 100644 --- a/content/flux/v0.x/prometheus/metric-types/counter.md +++ b/content/flux/v0.x/prometheus/metric-types/counter.md @@ -71,12 +71,9 @@ On counter reset, `increase()` assumes no increase. ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "prometheus" and - r._field == "http_query_request_bytes" - ) - |> increase() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "prometheus" and r._field == "http_query_request_bytes") + |> increase() ``` {{< flex >}} @@ -130,12 +127,9 @@ On counter reset, `increase()` assumes no increase. ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "http_query_request_bytes" and - r._field == "counter" - ) - |> increase() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "http_query_request_bytes" and r._field == "counter") + |> increase() ``` {{< flex >}} @@ -191,13 +185,10 @@ between subsequent values. ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "prometheus" and - r._field == "http_query_request_bytes" - ) - |> increase() - |> difference() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "prometheus" and r._field == "http_query_request_bytes") + |> increase() + |> difference() ``` {{< flex >}} @@ -240,13 +231,10 @@ from(bucket: "example-bucket") ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "http_query_request_bytes" and - r._field == "counter" - ) - |> increase() - |> difference() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "http_query_request_bytes" and r._field == "counter") + |> increase() + |> difference() ``` {{< flex >}} @@ -303,13 +291,10 @@ customize the rate unit. 
```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "prometheus" and - r._field == "http_query_request_bytes" - ) - |> increase() - |> derivative() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "prometheus" and r._field == "http_query_request_bytes") + |> increase() + |> derivative() ``` {{< flex >}} @@ -352,13 +337,10 @@ from(bucket: "example-bucket") ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "http_query_request_bytes" and - r._field == "counter" - ) - |> increase() - |> derivative() + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "http_query_request_bytes" and r._field == "counter") + |> increase() + |> derivative() ``` {{< flex >}} @@ -427,13 +409,10 @@ in specified time windows: import "experimental/aggregate" from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "prometheus" and - r._field == "http_query_request_bytes" - ) - |> increase() - |> aggregate.rate(every: 15s, unit: 1s) + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "prometheus" and r._field == "http_query_request_bytes") + |> increase() + |> aggregate.rate(every: 15s, unit: 1s) ``` {{< flex >}} @@ -479,13 +458,10 @@ from(bucket: "example-bucket") import "experimental/aggregate" from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "http_query_request_bytes" and - r._field == "counter" - ) - |> increase() - |> aggregate.rate(every: 15s, unit: 1s) + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "http_query_request_bytes" and r._field == "counter") + |> increase() + |> aggregate.rate(every: 15s, unit: 1s) ``` {{< flex >}} diff --git a/content/flux/v0.x/prometheus/metric-types/gauge.md b/content/flux/v0.x/prometheus/metric-types/gauge.md index c83e1237e..2faf6fda2 100644 --- a/content/flux/v0.x/prometheus/metric-types/gauge.md +++ b/content/flux/v0.x/prometheus/metric-types/gauge.md @@ -65,12 +65,9 @@ Select the appropriate metric format version below. 
```js
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) =>
-    r._measurement == "go_goroutines" and
-    r._field == "gauge"
-  )
-  |> derivative(nonNegative: true)
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "go_goroutines" and r._field == "gauge")
+    |> derivative(nonNegative: true)
 ```
 
 {{< flex >}}
@@ -194,12 +188,9 @@ from(bucket: "example-bucket")
 import "experimental/aggregate"
 
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) =>
-    r._measurement == "prometheus" and
-    r._field == "go_goroutines"
-  )
-  |> aggregate.rate(every: 10s, unit: 1s)
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "prometheus" and r._field == "go_goroutines")
+    |> aggregate.rate(every: 10s, unit: 1s)
 ```
 
 {{< flex >}}
@@ -262,12 +253,9 @@ from(bucket: "example-bucket")
 import "experimental/aggregate"
 
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) =>
-    r._measurement == "go_goroutines" and
-    r._field == "gauge"
-  )
-  |> aggregate.rate(every: 10s, unit: 1s)
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "go_goroutines" and r._field == "gauge")
+    |> aggregate.rate(every: 10s, unit: 1s)
 ```
 
 {{< flex >}}
diff --git a/content/flux/v0.x/prometheus/metric-types/histogram.md b/content/flux/v0.x/prometheus/metric-types/histogram.md
index 0507d959c..a1f260629 100644
--- a/content/flux/v0.x/prometheus/metric-types/histogram.md
+++ b/content/flux/v0.x/prometheus/metric-types/histogram.md
@@ -78,11 +78,11 @@ is **not compatible** with the format of Prometheus histogram data stored in Inf
 import "experimental/prometheus"
 
 from(bucket: "example-bucket")
-  |> start(range: -1h)
-  |> filter(fn: (r) => r._measurement == "prometheus")
-  |> filter(fn: (r) => r._field == "qc_all_duration_seconds")
-  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
-  |> prometheus.histogramQuantile(quantile: 0.99)
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "prometheus")
+    |> filter(fn: (r) => r._field == "qc_all_duration_seconds")
+    |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
+    |> prometheus.histogramQuantile(quantile: 0.99)
 ```
 
 {{% /tab-content %}}
@@ -99,10 +99,10 @@ from(bucket: "example-bucket")
 import "experimental/prometheus"
 
 from(bucket: "example-bucket")
-  |> start(range: -1h)
-  |> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
-  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
-  |> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1)
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
+    |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
+    |> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1)
 {{% /tab-content %}}
 {{< /tabs-wrapper >}}
@@ -135,17 +135,20 @@ histogramQuantile: unexpected null in the countColumn
 
 ```js
 import "experimental/prometheus"
 
-data = from(bucket: "example-bucket")
-  |> start(range: -1h)
-  |> filter(fn: (r) => r._measurement == "prometheus")
-  |> filter(fn: (r) => r._field == "qc_all_duration_seconds")
-  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
-
-union(tables: [
-  data |> prometheus.histogramQuantile(quantile: 0.99),
-  data |> prometheus.histogramQuantile(quantile: 0.5),
-  data |> prometheus.histogramQuantile(quantile: 0.25)
-])
+data =
+    from(bucket: "example-bucket")
+        |> range(start: -1h)
+        |> filter(fn: (r) => r._measurement == "prometheus")
+        |> filter(fn: (r) => r._field == "qc_all_duration_seconds")
+        |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
+
+union(
+    tables: [
+        data |> prometheus.histogramQuantile(quantile: 0.99),
+        data |> prometheus.histogramQuantile(quantile: 0.5),
+        data |> prometheus.histogramQuantile(quantile: 0.25),
+    ],
+)
 ```
 
{{% /code-tab-content %}}
 ```js
 import "experimental/prometheus"
 
-data = from(bucket: "example-bucket")
-  |> start(range: -1h)
-  |> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
-  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
+data =
+    from(bucket: "example-bucket")
+        |> range(start: -1h)
+        |> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
+        |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
 
-union(tables: [
-  data |> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1),
-  data |> prometheus.histogramQuantile(quantile: 0.5, metricVersion: 1),
-  data |> prometheus.histogramQuantile(quantile: 0.25, metricVersion: 1)
-])
-
+union(
+    tables: [
+        data |> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1),
+        data |> prometheus.histogramQuantile(quantile: 0.5, metricVersion: 1),
+        data |> prometheus.histogramQuantile(quantile: 0.25, metricVersion: 1),
+    ],
+)
 ```
 {{% /code-tab-content %}}
 {{< /code-tabs-wrapper >}}
diff --git a/content/flux/v0.x/prometheus/metric-types/summary.md b/content/flux/v0.x/prometheus/metric-types/summary.md
index 857d828c5..d79f4cf1d 100644
--- a/content/flux/v0.x/prometheus/metric-types/summary.md
+++ b/content/flux/v0.x/prometheus/metric-types/summary.md
@@ -64,9 +64,9 @@ Prometheus summary metrics provide quantile values that can be visualized withou
 
 ```js
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) => r._measurement == "prometheus")
-  |> filter(fn: (r) => r._field == "go_gc_duration_seconds")
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "prometheus")
+    |> filter(fn: (r) => r._field == "go_gc_duration_seconds")
 ```
 {{% /tab-content %}}
 {{% tab-content %}}
@@ -76,9 +76,9 @@ from(bucket: "example-bucket")
 
 ```js
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds")
-  |> filter(fn: (r) => r._field != "count" and r._field != "sum")
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds")
+    |> filter(fn: (r) => r._field != "count" and r._field != "sum")
 ```
 {{% /tab-content %}}
 {{< /tabs-wrapper >}}
@@ -106,16 +106,11 @@ derive an average summary value.
```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "prometheus") - |> filter(fn: (r) => - r._field == "go_gc_duration_seconds_count" or - r._field == "go_gc_duration_seconds_sum" - ) - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with - _value: r.go_gc_duration_seconds_sum / r.go_gc_duration_seconds_count - })) + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "prometheus") + |> filter(fn: (r) => r._field == "go_gc_duration_seconds_count" or r._field == "go_gc_duration_seconds_sum") + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({r with _value: r.go_gc_duration_seconds_sum / r.go_gc_duration_seconds_count})) ``` {{% /tab-content %}} {{% tab-content %}} @@ -128,11 +123,11 @@ from(bucket: "example-bucket") ```js from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds") - |> filter(fn: (r) => r._field == "count" or r._field == "sum") - |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") - |> map(fn: (r) => ({ r with _value: r.sum / r.count })) + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds") + |> filter(fn: (r) => r._field == "count" or r._field == "sum") + |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value") + |> map(fn: (r) => ({ r with _value: r.sum / r.count })) ``` {{% /tab-content %}} {{< /tabs-wrapper >}} \ No newline at end of file diff --git a/content/flux/v0.x/prometheus/scrape-prometheus.md b/content/flux/v0.x/prometheus/scrape-prometheus.md index 40d5722c1..023c89515 100644 --- a/content/flux/v0.x/prometheus/scrape-prometheus.md +++ b/content/flux/v0.x/prometheus/scrape-prometheus.md @@ -166,14 +166,9 @@ To write scraped Prometheus metrics to InfluxDB: ```js import "experimental/prometheus" - + prometheus.scrape(url: "http://example.com/metrics") - |> to( - bucket: "example-bucket", - host: "http://localhost:8086", - org: "example-org", - token: "mYsuP3R5eCR37t0K3n" - ) + |> to(bucket: "example-bucket", host: "http://localhost:8086", org: "example-org", token: "mYsuP3R5eCR37t0K3n") ``` ### Write Prometheus metrics to InfluxDB at regular intervals @@ -183,11 +178,8 @@ scrape Prometheus metrics in an [InfluxDB task](/influxdb/cloud/process-data/get ```js import "experimental/prometheus" -option task = { - name: "Scrape Prometheus metrics", - every: 10s -} - +option task = {name: "Scrape Prometheus metrics", every: 10s} + prometheus.scrape(url: "http://example.com/metrics") - |> to(bucket: "example-bucket") + |> to(bucket: "example-bucket") ``` diff --git a/content/flux/v0.x/query-data/bigtable.md b/content/flux/v0.x/query-data/bigtable.md index 25f4d4442..54c637748 100644 --- a/content/flux/v0.x/query-data/bigtable.md +++ b/content/flux/v0.x/query-data/bigtable.md @@ -37,10 +37,10 @@ To query [Google Cloud Bigtable](https://cloud.google.com/bigtable/) with Flux: import "experimental/bigtable" bigtable.from( - token: "mySuPeRseCretTokEn", - project: "exampleProjectID", - instance: "exampleInstanceID", - table: "example-table" + token: "mySuPeRseCretTokEn", + project: "exampleProjectID", + instance: "exampleInstanceID", + table: "example-table", ) ``` @@ -65,9 +65,9 @@ bigtable_project = secrets.get(key: "BIGTABLE_PROJECT_ID") bigtable_instance = secrets.get(key: "BIGTABLE_INSTANCE_ID") bigtable.from( - token: bigtable_token, - project: bigtable_project, - instance: 
bigtable_instance, - table: "example-table" + token: bigtable_token, + project: bigtable_project, + instance: bigtable_instance, + table: "example-table" ) ``` diff --git a/content/flux/v0.x/query-data/csv.md b/content/flux/v0.x/query-data/csv.md index bda267776..588fac421 100644 --- a/content/flux/v0.x/query-data/csv.md +++ b/content/flux/v0.x/query-data/csv.md @@ -19,7 +19,8 @@ list_code_example: | ```js import "csv" - csvData = " + csvData = + " #group,false,false,true,true,true,false,false #datatype,string,long,string,string,string,long,double #default,_result,,,,,, @@ -29,7 +30,7 @@ list_code_example: | ,,1,air-sensors,humidity,TLM0200,1627049400000000000,35.64 ,,1,air-sensors,humidity,TLM0200,1627049700000000000,35.67 " - + csv.from(csv: csvData) ``` --- @@ -91,7 +92,8 @@ to execute Flux queries._ ```js import "csv" -csvData = " +csvData = + " #group,false,false,true,true,true,false,false #datatype,string,long,string,string,string,long,double #default,_result,,,,,, @@ -143,7 +145,8 @@ csv.from(csv: csvData) ```js import "csv" -csvData = " +csvData = + " dataset,metric,sensorID,timestamp,value air-sensors,humidity,TLM0100,1627049400000000000,34.79 air-sensors,humidity,TLM0100,1627049700000000000,34.65 diff --git a/content/flux/v0.x/query-data/influxdb.md b/content/flux/v0.x/query-data/influxdb.md index 1db11ad6e..be9f5b46a 100644 --- a/content/flux/v0.x/query-data/influxdb.md +++ b/content/flux/v0.x/query-data/influxdb.md @@ -34,11 +34,9 @@ InfluxDB requires queries to be time-bound, so `from()` must always be followed ```js from(bucket: "example-bucket") - |> range(start: -1h) + |> range(start: -1h) ``` - - ## Query InfluxDB Cloud or 2.x remotely To query InfluxDB Cloud or 2.x remotely, provide the following parameters in addition to **bucket** or **bucketID**. @@ -50,10 +48,10 @@ in addition to **bucket** or **bucketID**. ```js from( - bucket: "example-bucket", - host: "http://localhost:8086", - org: "example-org", - token: "mYSup3r5Ecr3T70keN" + bucket: "example-bucket", + host: "http://localhost:8086", + org: "example-org", + token: "mYSup3r5Ecr3T70keN", ) ``` @@ -64,7 +62,7 @@ For example, to query data from the `autogen` retention policy in the `telegraf` ```js from(bucket: "telegraf/autogen") - |> range(start: -30m) + |> range(start: -30m) ``` To query the [default retention policy](/{{< latest "influxdb" "v1" >}}/query_language/manage-database/#create-a-retention-policy) in a database, use the same bucket naming @@ -72,7 +70,7 @@ convention, but do not provide a retention policy: ```js from(bucket: "telegraf/") - |> range(start: -30m) + |> range(start: -30m) ``` diff --git a/content/flux/v0.x/query-data/prometheus.md b/content/flux/v0.x/query-data/prometheus.md deleted file mode 100644 index da643a946..000000000 --- a/content/flux/v0.x/query-data/prometheus.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: Scrape Prometheus metrics -list_title: Prometheus -description: > - Use [`prometheus.scrape`](/flux/v0.x/stdlib/experimental/prometheus/scrape) to - scrape Prometheus-formatted metrics from an HTTP-accessible endpoint using Flux. -menu: - flux_0_x: - name: Prometheus - parent: Query data sources -weight: 104 -list_code_example: | - ```js - import "experimental/prometheus" - - prometheus.scrape(url: "http://example.com/metrics") - ``` -draft: true ---- - -To scrape Prometheus-formatted metrics from an HTTP-accessible endpoint using Flux: - -1. Import the [`experimental/prometheus` package](/flux/v0.x/stdlib/experimental/prometheus/). -2. 
Use [`prometheus.scrape`](/flux/v0.x/stdlib/experimental/prometheus/scrape). - Use the **url** parameter to provide the URL to scrape metrics from. - -{{< keep-url >}} -```js -import "experimental/prometheus" - -prometheus.scrape(url: "http://localhost:8086/metrics") -``` - -## Results structure -`prometheus.scrape()` returns a [stream of tables](/flux/v0.x/get-started/data-model/#stream-of-tables) -with the following columns: - -- **_time**: Data timestamp -- **_measurement**: `prometheus` -- **_field**: [Prometheus metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) -- **_value**: [Prometheus metric value](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) -- **url**: URL metrics were scraped from -- **Label columns**: A column for each [Prometheus label](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). - The column label is the label name and the column value is the label value. - -Tables are grouped by **_measurement**, **_field**, and **Label columns**. - -{{% note %}} -#### Columns with the underscore prefix -Columns with the underscore (`_`) prefix are considered "system" columns. -Some Flux functions require these columns to function properly. -{{% /note %}} - -### Example Prometheus query results -The following are example Prometheus metrics scraped from the InfluxDB OSS 2.x `/metrics` endpoint: - -```sh -# HELP go_goroutines Number of goroutines that currently exist. -# TYPE go_goroutines gauge -go_goroutines 1263 -# HELP go_info Information about the Go environment. -# TYPE go_info gauge -go_info{version="go1.16.3"} 1 -# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use. -# TYPE go_memstats_alloc_bytes gauge -go_memstats_alloc_bytes 2.6598832e+07 -# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed. -# TYPE go_memstats_alloc_bytes_total counter -go_memstats_alloc_bytes_total 1.42276424e+09 -# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. 
-# TYPE go_memstats_buck_hash_sys_bytes gauge -go_memstats_buck_hash_sys_bytes 5.259247e+06 -``` - -When scraped by Flux, these metrics return the following stream of tables: - -| _time | _measurement | _field | _value | url | -| :------------------- | :----------- | :------------ | -----: | :---------------------------- | -| 2021-01-01T00:00:00Z | prometheus | go_goroutines | 1263 | http://localhost:8086/metrics | - -| _time | _measurement | _field | _value | url | version | -| :------------------- | :----------- | :------ | -----: | :---------------------------- | -------- | -| 2021-01-01T00:00:00Z | prometheus | go_info | 1 | http://localhost:8086/metrics | go1.16.3 | - -| _time | _measurement | _field | _value | url | -| :------------------- | :----------- | :---------------------- | -------: | :---------------------------- | -| 2021-01-01T00:00:00Z | prometheus | go_memstats_alloc_bytes | 26598832 | http://localhost:8086/metrics | - -| _time | _measurement | _field | _value | url | -| :------------------- | :----------- | :---------------------------- | ---------: | :---------------------------- | -| 2021-01-01T00:00:00Z | prometheus | go_memstats_alloc_bytes_total | 1422764240 | http://localhost:8086/metrics | - -| _time | _measurement | _field | _value | url | -| :------------------- | :----------- | :------------------------------ | ------: | :---------------------------- | -| 2021-01-01T00:00:00Z | prometheus | go_memstats_buck_hash_sys_bytes | 5259247 | http://localhost:8086/metrics | - -{{% note %}} -#### Write Prometheus metrics to InfluxDB -To write scraped Prometheus metrics to InfluxDB: - -1. Use `prometheus.scrape()` to scrape Prometheus metrics. -2. Use [`to()`](/flux/v0.x/stdlib/influxdata/influxdb/to/) to write the scraped - metrics to InfluxDB. - -```js -import "experimental/prometheus" - -prometheus.scrape(url: "http://example.com/metrics") - |> to( - bucket: "example-bucket", - host: "http://localhost:8086", - org: "example-org", - token: "mYsuP3R5eCR37t0K3n" - ) -``` - -To scrape Prometheus metrics and write them to InfluxDB at regular intervals, -use the example above to [create an InfluxDB task](/influxdb/cloud/process-data/get-started/). 
-{{% /note %}}
diff --git a/content/flux/v0.x/query-data/sql/_index.md b/content/flux/v0.x/query-data/sql/_index.md
index 56ce7b438..a32b9081c 100644
--- a/content/flux/v0.x/query-data/sql/_index.md
+++ b/content/flux/v0.x/query-data/sql/_index.md
@@ -15,9 +15,9 @@ list_code_example: |
   import "sql"
 
   sql.from(
-    driverName: "postgres",
-    dataSourceName: "postgresql://user:password@localhost",
-    query:"SELECT * FROM TestTable"
+      driverName: "postgres",
+      dataSourceName: "postgresql://user:password@localhost",
+      query: "SELECT * FROM TestTable",
   )
   ```
---
@@ -70,9 +70,9 @@ username = secrets.get(key: "POSTGRES_USER")
 password = secrets.get(key: "POSTGRES_PASS")
 
 sql.from(
-  driverName: "postgres",
-  dataSourceName: "postgresql://${username}:${password}@localhost:5432",
-  query: "SELECT * FROM example_table"
+    driverName: "postgres",
+    dataSourceName: "postgresql://${username}:${password}@localhost:5432",
+    query: "SELECT * FROM example_table",
 )
 ```
 
@@ -105,9 +105,9 @@ Given the following **example_table** in a MySQL database:
 import "sql"
 
 sql.from(
-  driver: "mysql",
-  dataSourceName: "username:passwOrd@tcp(localhost:3306)/db",
-  query: "SELECT ID, Name FROM example_table"
+    driverName: "mysql",
+    dataSourceName: "username:passwOrd@tcp(localhost:3306)/db",
+    query: "SELECT ID, Name FROM example_table",
 )
 ```
diff --git a/content/flux/v0.x/query-data/sql/amazon-rds.md b/content/flux/v0.x/query-data/sql/amazon-rds.md
index 6bee10cc3..6c35911b1 100644
--- a/content/flux/v0.x/query-data/sql/amazon-rds.md
+++ b/content/flux/v0.x/query-data/sql/amazon-rds.md
@@ -14,11 +14,11 @@ related:
 list_code_example: |
   ```js
   import "sql"
-  
+
   sql.from(
-    driverName: "postgres",
-    dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432",
-    query: "SELECT * FROM example_table"
+      driverName: "postgres",
+      dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432",
+      query: "SELECT * FROM example_table",
   )
   ```
---
@@ -39,9 +39,9 @@ with Flux:
 import "sql"
 
 sql.from(
-  driverName: "postgres",
-  dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432",
-  query: "SELECT * FROM example_table"
+    driverName: "postgres",
+    dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432",
+    query: "SELECT * FROM example_table",
 )
 ```
diff --git a/content/flux/v0.x/query-data/sql/athena.md b/content/flux/v0.x/query-data/sql/athena.md
index 1975daf76..8a408fb38 100644
--- a/content/flux/v0.x/query-data/sql/athena.md
+++ b/content/flux/v0.x/query-data/sql/athena.md
@@ -15,9 +15,10 @@ list_code_example: |
   import "sql"
 
   sql.from(
-    driverName: "awsathena",
-    dataSourceName: "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
-    query: "GO SELECT * FROM Example.Table"
+      driverName: "awsathena",
+      dataSourceName:
+          "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
+      query: "GO SELECT * FROM Example.Table",
   )
   ```
---
@@ -35,9 +36,10 @@ To query [Amazon Athena](https://aws.amazon.com/athena) with Flux:
 import "sql"
 
 sql.from(
-  driverName: "awsathena",
-  dataSourceName: "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
-  query: "GO SELECT * FROM Example.Table"
+    driverName: "awsathena",
+    dataSourceName:
+        "s3://myorgqueryresults/?accessID=12ab34cd56ef&region=region-name&secretAccessKey=y0urSup3rs3crEtT0k3n",
+    query: "GO SELECT * FROM Example.Table",
 )
 ```
diff --git
a/content/flux/v0.x/query-data/sql/bigquery.md b/content/flux/v0.x/query-data/sql/bigquery.md index 30f62fa90..b7d7debd3 100644 --- a/content/flux/v0.x/query-data/sql/bigquery.md +++ b/content/flux/v0.x/query-data/sql/bigquery.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "bigquery", - dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", - query: "SELECT * FROM exampleTable" + driverName: "bigquery", + dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", + query: "SELECT * FROM exampleTable", ) ``` --- @@ -35,9 +35,9 @@ To query [Google BigQuery](https://cloud.google.com/bigquery) with Flux: import "sql" sql.from( - driverName: "bigquery", - dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", - query: "SELECT * FROM exampleTable" + driverName: "bigquery", + dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", + query: "SELECT * FROM exampleTable", ) ``` diff --git a/content/flux/v0.x/query-data/sql/cockroachdb.md b/content/flux/v0.x/query-data/sql/cockroachdb.md index 803cf7879..fdccf4bfe 100644 --- a/content/flux/v0.x/query-data/sql/cockroachdb.md +++ b/content/flux/v0.x/query-data/sql/cockroachdb.md @@ -13,11 +13,12 @@ related: list_code_example: | ```js import "sql" - + sql.from( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: + "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +36,10 @@ To query [CockroachDB](https://www.cockroachlabs.com/) with Flux: import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: + "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/mariadb.md b/content/flux/v0.x/query-data/sql/mariadb.md index 3d9937ed7..2e18f4024 100644 --- a/content/flux/v0.x/query-data/sql/mariadb.md +++ b/content/flux/v0.x/query-data/sql/mariadb.md @@ -13,11 +13,11 @@ related: list_code_example: | ```js import "sql" - + sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [MariaDB](https://mariadb.org/) with Flux: import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/mysql.md b/content/flux/v0.x/query-data/sql/mysql.md index d7279c148..00c609d3d 100644 --- a/content/flux/v0.x/query-data/sql/mysql.md +++ b/content/flux/v0.x/query-data/sql/mysql.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: 
"SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [MySQL](https://www.mysql.com/) with Flux: import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/percona.md b/content/flux/v0.x/query-data/sql/percona.md index 39f8d12d3..1bbff3ff1 100644 --- a/content/flux/v0.x/query-data/sql/percona.md +++ b/content/flux/v0.x/query-data/sql/percona.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [Percona](https://www.percona.com/) with Flux: import "sql" sql.from( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/postgresql.md b/content/flux/v0.x/query-data/sql/postgresql.md index 581362e73..6c17b7c0c 100644 --- a/content/flux/v0.x/query-data/sql/postgresql.md +++ b/content/flux/v0.x/query-data/sql/postgresql.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:5432", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [PostgreSQL](https://www.postgresql.org/) with Flux: import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:5432", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/sap-hana.md b/content/flux/v0.x/query-data/sql/sap-hana.md index 9f3664966..a55aca492 100644 --- a/content/flux/v0.x/query-data/sql/sap-hana.md +++ b/content/flux/v0.x/query-data/sql/sap-hana.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "hdb", - dataSourceName: "hdb://username:password@myserver:30015", - query: "SELECT * FROM SCHEMA.TABLE" + driverName: "hdb", + dataSourceName: "hdb://username:password@myserver:30015", + query: "SELECT * FROM SCHEMA.TABLE", ) ``` --- @@ -35,9 +35,9 @@ To query [SAP HANA](https://www.sap.com/products/hana.html) with Flux: import "sql" sql.from( - driverName: "hdb", - dataSourceName: "hdb://username:password@myserver:30015", - query: "SELECT * FROM SCHEMA.TABLE" + driverName: "hdb", + dataSourceName: "hdb://username:password@myserver:30015", + query: "SELECT * FROM SCHEMA.TABLE", ) ``` diff --git a/content/flux/v0.x/query-data/sql/snowflake.md b/content/flux/v0.x/query-data/sql/snowflake.md index 282413f2b..466abd418 100644 --- a/content/flux/v0.x/query-data/sql/snowflake.md +++ b/content/flux/v0.x/query-data/sql/snowflake.md @@ -15,9 +15,9 @@ list_code_example: | import 
"sql" sql.from( - driverName: "snowflake", - dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", - query: "SELECT * FROM example_table" + driverName: "snowflake", + dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [Snowflake](https://www.snowflake.com/) with Flux: import "sql" sql.from( - driverName: "snowflake", - dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", - query: "SELECT * FROM example_table" + driverName: "snowflake", + dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/sql-server.md b/content/flux/v0.x/query-data/sql/sql-server.md index 417eb328c..15f0f6457 100644 --- a/content/flux/v0.x/query-data/sql/sql-server.md +++ b/content/flux/v0.x/query-data/sql/sql-server.md @@ -16,9 +16,9 @@ list_code_example: | import "sql" sql.from( - driverName: "sqlserver", - dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", - query: "GO SELECT * FROM Example.Table" + driverName: "sqlserver", + dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", + query: "GO SELECT * FROM Example.Table", ) ``` --- @@ -36,9 +36,9 @@ To query [Microsoft SQL Server](https://www.microsoft.com/sql-server/) with Flux import "sql" sql.from( - driverName: "sqlserver", - dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", - query: "GO SELECT * FROM Example.Table" + driverName: "sqlserver", + dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", + query: "GO SELECT * FROM Example.Table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/sqlite.md b/content/flux/v0.x/query-data/sql/sqlite.md index 8b557c9b6..5c3d7e17f 100644 --- a/content/flux/v0.x/query-data/sql/sqlite.md +++ b/content/flux/v0.x/query-data/sql/sqlite.md @@ -15,9 +15,9 @@ list_code_example: | import "sql" sql.from( - driverName: "sqlite3", - dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", - query: "SELECT * FROM example_table" + driverName: "sqlite3", + dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", + query: "SELECT * FROM example_table", ) ``` --- @@ -35,9 +35,9 @@ To query [SQLite](https://www.sqlite.org/index.html) with Flux: import "sql" sql.from( - driverName: "sqlite3", - dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", - query: "SELECT * FROM example_table" + driverName: "sqlite3", + dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", + query: "SELECT * FROM example_table", ) ``` diff --git a/content/flux/v0.x/query-data/sql/vertica.md b/content/flux/v0.x/query-data/sql/vertica.md index 6b973d24a..42e00a1c5 100644 --- a/content/flux/v0.x/query-data/sql/vertica.md +++ b/content/flux/v0.x/query-data/sql/vertica.md @@ -16,9 +16,9 @@ list_code_example: | import "sql" sql.from( - driverName: "vertica", - dataSourceName: "vertica://username:password@localhost:5432", - query: "SELECT * FROM public.example_table" + driverName: "vertica", + dataSourceName: "vertica://username:password@localhost:5432", + query: "SELECT * FROM public.example_table", ) ``` --- @@ -36,9 +36,9 @@ To query [Vertica](https://www.vertica.com/) with Flux: import "sql" sql.from( - driverName: "vertica", - dataSourceName: "vertica://username:password@localhost:5433/dbname", - query: "SELECT * FROM public.example_table" + driverName: "vertica", + dataSourceName: 
"vertica://username:password@localhost:5433/dbname", + query: "SELECT * FROM public.example_table", ) ``` diff --git a/content/flux/v0.x/release-notes.md b/content/flux/v0.x/release-notes.md index 6fb15a0f3..5cb77ed60 100644 --- a/content/flux/v0.x/release-notes.md +++ b/content/flux/v0.x/release-notes.md @@ -10,6 +10,269 @@ aliases: - /influxdb/cloud/reference/release-notes/flux/ --- +## v0.165.0 [2022-04-25] + +### Features +- Add support for options in the `testcase` extension. +- Vectorize addition operations in `map()`. +- Add location support to `date.truncate()`. +- Accept string literals in properties of a record type. +- Add trace option to the `flux` CLI. +- Add `EquiJoinPredicateRule`. + +### Bug fixes +- Update `map()` test case to include a range. +- Don't set `BaseLocation.file` to `Some("")`. +- Fix `strings.joinStr` panic when it receives a null value. +- Remove 64bit misalignment. +- Fix memory releases and add checked allocator to the end of tests. + +--- + +## v0.164.1 [2022-04-18] + +### Bug fixes +- Remove an extraneous `go generate` statement. + +--- + +## v0.164.0 [2022-04-13] + +### Features +- Allow Go to pass compilation options to Rust. + +### Bug fixes +- Do not assume integers are 64bit integers. +- Update `prometheus.scrape` type signature to correctly return a stream. + +--- + +## v0.163.0 [2022-04-07] + +### Features +- Report skipped tests. + +### Bug fixes +- Update transformation transport adapter to always invoke `finish`. +- Add support for "soft paragraphs" (paragraphs that contain single newline + characters) in inline Flux documentation. + +--- + +## v0.162.0 [2022-04-05] + +### Features +- Add [OpenTracing spans](https://opentracing.io/docs/overview/spans/) to the Flux runtime. +- Add the `cffi` feature to reduce WASM binary size. +- Replace the main `flux` CLI with a new `flux` CLI that starts a Flux REPL by + default or executes a Flux script via stdin. +- Track freed memory with `SetFinalizer`. +- Move [`addDuration()`](/flux/v0.x/stdlib/date/addduration/) and + [`subDuration()`](/flux/v0.x/stdlib/date/subduration/) from the `experimental` + package to the `date` package. + +### Bug fixes +- Improve error messages for column conflicts in pivot operations. +- Create OpenTracing spans for transformations using the proper context. +- Add errors to OpenTracing spans created for transformations. +- Restore required features hidden behind the `cffi` feature. + +--- + +## v0.161.0 [2022-03-24] + +### Features +- Re-enable the dialer pool and update dependency injection. + +### Bug fixes +- Check length boundary for lower bound of [`strings.substring()`](/flux/v0.x/stdlib/strings/substring/). + +--- + +## v0.160.0 [2022-03-22] + +### Features +- Remove the `concurrencyLimit` feature flag and keep it in the dependencies. +- Add MQTT Docker integration test. +- Enable dialer pool. +- Add an IOx-specific unpivot function to the `internal` package. + +### Bug fixes +- Update [`join()`](/flux/v0.x/stdlib/universe/join/) to properly handle divergent schemas. +- Fix line endings in the `testcase` format to prevent unnecessarily nesting the + body of a test case. +- Make [`strings.substring()`](/flux/v0.x/stdlib/strings/substring/) check bounds correctly. +- Fix duration and integer literal scanning. +- Make `testcase` a semantic check instead of an error. +- Skip parallel merge when selecting the result name based on side effects. +- Add metadata headers to inline documentation. 
+ +--- + +## v0.159.0 [2022-03-14] + +### Features +- Add a `finish` state to parallel merge and always protect it with a mutex lock. + +### Bug fixes +- Use a fork of the `gosnowflake` library to prevent file transfers. +- When encoding Flux types as JSON, encode dictionary types as JSON objects. +- Upgrade Apache Arrow to v7. + +--- + +## v0.158.0 [2022-03-09] + +### Features +- Add inline documentation to the `universe` package. +- Factor parallel execution into the concurrency quota calculation. + +### Bug fixes +- Add parallel merges with no successors to the results set. +- Correctly use range in an updated `map()` test. + +--- + +## v0.157.0 [2022-03-03] + +### Features +- Update `fill()` to use narrow transformation. +- Add an attribute-based instantiation of parallel execution nodes. +- Expose the `Record::fields` iterator. +- Allow the `estimate_tdigest` method in `quantile()` to process any numeric value. +- Optimize `aggregateWindow()` for specific aggregate transformations. + +### Bug fixes +- Update vectorized `map()` to handle missing columns. +- Remove duplicate line in `Makefile`. +- Fix `cargo doc` build errors. +- Reclassify CSV-decoding errors as user errors. +- Update `iox.from()` and `generate.from()` to use proper stream annotation. + +--- + +## v0.156.0 [2022-02-22] + +### Features +- Add second pass to physical planner for parallelization rules. +- Separate streams from arrays in the type system. +- Add function to internal/debug to check feature flag values. +- Allow feature flags to record metrics if configured. +- Add extra verbose level to dump AST of test. +- Explain what `[A], [A:B]` etc. means in errors. + +### Bug fixes +- Make the `buckets()` function return a stream. +- Remove unnecessary `TableObject` guards. +- Copy `TagColumns` in `to()` that may get modified into the transformation. +- Update tests to use explicit yields. + +--- + +## v0.155.1 [2022-02-15] + +### Bug fixes +- Update tests to use an explicit yield. + +--- + +## v0.155.0 [2022-02-14] + +### Features +- Add new [experimental array functions](/flux/v0.x/stdlib/experimental/array/) + for operating on arrays. + +### Bug fixes +- Add `stop` parameter to [InfluxDB schema functions](/flux/v0.x/stdlib/influxdata/influxdb/schema/). +- Remove `os.Exit` calls and allow `defer executor.Close` to run. +- Properly handle time zone transitions when there is no daylight saving time + in the specified time zone. + +--- + +## v0.154.0 [2022-02-09] + +### Features +- Add [`requests.peek()`](/flux/v0.x/stdlib/experimental/http/requests/peek/) to + return HTTP response data in a table. +- Add [`display()`](/flux/v0.x/stdlib/universe/display/) to represent any value as a string. +- Create a version of `map()` that is columnar and supports vectorization. +- Support vectorized functions. + +### Bug fixes +- Add time vector to the `values` package. +- Set the correct type for vectorized functions. + +--- + +## v0.153.0 [2022-02-07] + +### Features +- Connect language server protocol (LSP) features through the Flux crate. +- Add conversion from `flux.Bounds` to `plan/execute.Bounds`. +- Re-index all bound variables to start from 0. + +### Bug fixes +- Fix int feature flags to work properly when returned as floats. + +--- + +## v0.152.0 [2022-01-31] + +### Features +- Add the [`experimental/http/requests` package](/flux/v0.x/stdlib/experimental/http/requests/) + to support generic HTTP requests. +- Add [`experimental/iox` package](/flux/v0.x/stdlib/experimental/iox/) and a + placeholder for the `iox.from()` function.
+- Add dependency hooks to the dependency subsystem. +- Remove unneeded feature flags. + +### Bug fixes +- Revert update to the dependencies package. +- Return `false` if `contains()` receives an invalid value. + +--- + +## v0.151.1 [2022-01-24] + +### Features +- Update to Rust 1.58.1. + +--- + +## v0.151.0 [2022-01-20] + +### Features +- Expose `MonoType::parameter` and `MonoType::field`. + +### Bug fixes +- Support writing unsigned integers with the `http` provider. + +--- + +## v0.150.1 [2022-01-19] + +### Bug fixes +- Remove duplicate `die` builtin in the `universe` package. + +--- + +## v0.150.0 [2022-01-19] + +### Features +- Update inline documentation in the following packages: + - date + - experimental + - testing + - timezone + - types + +### Bug fixes +- Make iterating the hashmap deterministic. +- Quote SQL identifiers to mitigate the risk of SQL injection. + +--- + ## v0.149.0 [2022-01-12] ### Features diff --git a/content/flux/v0.x/spec/lexical-elements.md b/content/flux/v0.x/spec/lexical-elements.md index 65c2630a5..be90705c1 100644 --- a/content/flux/v0.x/spec/lexical-elements.md +++ b/content/flux/v0.x/spec/lexical-elements.md @@ -144,6 +144,7 @@ decimals = decimal_digit { decimal_digit } . A _duration literal_ is a representation of a length of time. It has an integer part and a duration unit part. +The integer part must be a valid Flux integer and should not contain leading zeros. Multiple durations may be specified together and the resulting duration is the sum of each smaller part. When several durations are specified together, larger units must appear before smaller ones, and there can be no repeated units. diff --git a/content/flux/v0.x/stdlib/array/_index.md b/content/flux/v0.x/stdlib/array/_index.md index 781b633a8..67664fe54 100644 --- a/content/flux/v0.x/stdlib/array/_index.md +++ b/content/flux/v0.x/stdlib/array/_index.md @@ -9,7 +9,6 @@ aliases: - /influxdb/cloud/reference/flux/stdlib/experimental/array/ - /influxdb/v2.0/reference/flux/stdlib/array/ - /influxdb/cloud/reference/flux/stdlib/array/ - - /flux/v0.x/stdlib/experimental/array/ menu: flux_0_x_ref: name: array diff --git a/content/flux/v0.x/stdlib/array/from.md b/content/flux/v0.x/stdlib/array/from.md index ad442af6c..8d87e4bb9 100644 --- a/content/flux/v0.x/stdlib/array/from.md +++ b/content/flux/v0.x/stdlib/array/from.md @@ -25,11 +25,13 @@ All records must have the same keys and data types. ```js import "array" -array.from(rows: [ - {_time: 2020-01-01T00:00:00Z, _field: "exampleField", _value: 3, foo: "bar"}, - {_time: 2020-01-01T00:01:00Z, _field: "exampleField", _value: 4, foo: "bar"}, - {_time: 2020-01-01T00:02:00Z, _field: "exampleField", _value: 1, foo: "bar"} -]) +array.from( + rows: [ + {_time: 2020-01-01T00:00:00Z, _field: "exampleField", _value: 3, foo: "bar"}, + {_time: 2020-01-01T00:01:00Z, _field: "exampleField", _value: 4, foo: "bar"}, + {_time: 2020-01-01T00:02:00Z, _field: "exampleField", _value: 1, foo: "bar"}, + ], +) ``` ## Parameters @@ -43,10 +45,7 @@ Array of records to construct a table with.
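The duration-literal rules described in the `lexical-elements.md` change above are easiest to see in code. A brief sketch (illustrative assignments; the commented-out forms violate the rules and fail to parse):

```js
valid_a = 1h30m    // larger unit (h) before smaller unit (m); sum is 90 minutes
valid_b = 2d12h    // 2 days + 12 hours

// invalid = 30m1h    // smaller unit before larger unit
// invalid = 1h1h     // repeated unit
// invalid = 01h      // leading zero in the integer part
```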
```js import "array" -rows = [ - {foo: "bar", baz: 21.2}, - {foo: "bar", baz: 23.8} -] +rows = [{foo: "bar", baz: 21.2}, {foo: "bar", baz: 23.8}] array.from(rows: rows) ``` @@ -56,12 +55,9 @@ array.from(rows: rows) import "influxdata/influxdb/v1" import "array" -tags = v1.tagValues( - bucket: "example-bucket", - tag: "host" -) - +tags = v1.tagValues(bucket: "example-bucket", tag: "host") wildcard_tag = array.from(rows: [{_value: "*"}]) union(tables: [tags, wildcard_tag]) + ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/alert.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/alert.md index ec599402c..45132337f 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/alert.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/alert.md @@ -19,21 +19,21 @@ The `alerta.alert()` function sends an alert to [Alerta](https://www.alerta.io/) import "contrib/bonitoo-io/alerta" alerta.alert( - url: "https://alerta.io:8080/alert", - apiKey: "0Xx00xxXx00Xxx0x0X", - resource: "example-resource", - event: "Example event", - environment: "", - severity: "critical", - service: [], - group: "", - value: "", - text: "", - tags: [], - attributes: {}, - origin: "InfluxDB", - type: "", - timestamp: now(), + url: "https://alerta.io:8080/alert", + apiKey: "0Xx00xxXx00Xxx0x0X", + resource: "example-resource", + event: "Example event", + environment: "", + severity: "critical", + service: [], + group: "", + value: "", + text: "", + tags: [], + attributes: {}, + origin: "InfluxDB", + type: "", + timestamp: now(), ) ``` @@ -116,32 +116,29 @@ import "influxdata/influxdb/secrets" apiKey = secrets.get(key: "ALERTA_API_KEY") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "level" - ) - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "level") + |> last() + |> findRecord(fn: (key) => true, idx: 0) severity = if lastReported._value > 50 then "warning" else "ok" alerta.alert( - url: "https://alerta.io:8080/alert", - apiKey: apiKey, - resource: "example-resource", - event: "Example event", - environment: "Production", - severity: severity, - service: ["example-service"], - group: "example-group", - value: string(v: lastReported._value), - text: "Service is ${severity}. The last reported value was ${string(v: lastReported._value)}.", - tags: ["ex1", "ex2"], - attributes: {}, - origin: "InfluxDB", - type: "exampleAlertType", - timestamp: now(), + url: "https://alerta.io:8080/alert", + apiKey: apiKey, + resource: "example-resource", + event: "Example event", + environment: "Production", + severity: severity, + service: ["example-service"], + group: "example-group", + value: string(v: lastReported._value), + text: "Service is ${severity}. 
The last reported value was ${string(v: lastReported._value)}.", + tags: ["ex1", "ex2"], + attributes: {}, + origin: "InfluxDB", + type: "exampleAlertType", + timestamp: now(), ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/endpoint.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/endpoint.md index 080b1f2d5..01ec85e8e 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/alerta/endpoint.md @@ -23,10 +23,10 @@ _**Function type:** Output_ import "contrib/bonitoo-io/alerta" alerta.endpoint( - url: "https://alerta.io:8080/alert, - apiKey: "0Xx00xxXx00Xxx0x0X", - environment: "", - origin: "InfluxDB" + url: "https://alerta.io:8080/alert", + apiKey: "0Xx00xxXx00Xxx0x0X", + environment: "", + origin: "InfluxDB" ) ``` @@ -87,31 +87,30 @@ import "contrib/bonitoo-io/alerta" import "influxdata/influxdb/secrets" apiKey = secrets.get(key: "ALERTA_API_KEY") -endpoint = alerta.endpoint( - url: "https://alerta.io:8080/alert", - apiKey: apiKey, - environment: "Production", - origin: "InfluxDB" -) +endpoint = + alerta.endpoint(url: "https://alerta.io:8080/alert", apiKey: apiKey, environment: "Production", origin: "InfluxDB") -crit_events = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_events = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_events - |> endpoint(mapFn: (r) => { - return { r with - resource: "example-resource", - event: "example-event", - severity: "critical", - service: r.service, - group: "example-group", - value: r.status, - text: "Status is critical.", - tags: ["ex1", "ex2"], - attributes: {}, - type: "exampleAlertType", - timestamp: now(), - } - })() + |> endpoint( + mapFn: (r) => { + return {r with + resource: "example-resource", + event: "example-event", + severity: "critical", + service: r.service, + group: "example-group", + value: r.status, + text: "Status is critical.", + tags: ["ex1", "ex2"], + attributes: {}, + type: "exampleAlertType", + timestamp: now(), + } + }, + )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/int.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/int.md index 446087173..0e82f9a03 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/int.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/int.md @@ -51,11 +51,13 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 ```js +import "contrib/bonitoo-io/hex" import "sampledata" -data = sampledata.int() - |> map(fn: (r) => ({ r with _value: hex.string(v: r._value) })) +data = + sampledata.int() + |> map(fn: (r) => ({r with _value: hex.string(v: r._value)})) data - |> map(fn:(r) => ({ r with _value: hex.int(v: r._value) })) + |> map(fn: (r) => ({r with _value: hex.int(v: r._value)})) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/string.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/string.md index 02d92ac3a..5a265d8c2 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/string.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/string.md @@ -114,11 +114,12 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 import "sampledata" import "contrib/bonitoo-io/hex" -data = sampledata.int() - |> map(fn: (r) => ({ r with _value: r._value * 1000 })) +data = + sampledata.int() + |> map(fn: (r) => ({r with _value: r._value * 1000}))
data - |> map(fn:(r) => ({ r with _value: hex.string(v: r.foo) })) + |> map(fn: (r) => ({r with _value: hex.string(v: r._value)})) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/uint.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/uint.md index 4cc90faf0..135f44530 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/uint.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/hex/uint.md @@ -49,13 +49,15 @@ hex.uint(v: "-d431") _The following example uses data provided by the [`sampledata` package](/flux/v0.x/stdlib/sampledata/)._ ```js +import "contrib/bonitoo-io/hex" import "sampledata" -data = sampledata.uint() - |> map(fn: (r) => ({ r with _value: hex.string(v: r._value) })) +data = + sampledata.uint() + |> map(fn: (r) => ({r with _value: hex.string(v: r._value)})) data - |> map(fn:(r) => ({ r with _value: hex.uint(v: r._value) })) + |> map(fn: (r) => ({r with _value: hex.uint(v: r._value)})) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/endpoint.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/endpoint.md index 0fadf90a1..46ca80942 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/endpoint.md @@ -71,24 +71,28 @@ import "influxdata/influxdb/secrets" username = secrets.get(key: "SERVICENOW_USERNAME") password = secrets.get(key: "SERVICENOW_PASSWORD") -endpoint = servicenow.endpoint( - url: "https://example-tenant.service-now.com/api/global/em/jsonv2", - username: username, - password: password -) +endpoint = + servicenow.endpoint( + url: "https://example-tenant.service-now.com/api/global/em/jsonv2", + username: username, + password: password, + ) -crit_events = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_events = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_events - |> endpoint(mapFn: (r) => ({ - node: r.host, - metricType: r._measurement, - resource: r.instance, - metricName: r._field, - severity: "critical", - additionalInfo: { "devId": r.dev_id } - }) + |> endpoint( + mapFn: (r) => + ({ + node: r.host, + metricType: r._measurement, + resource: r.instance, + metricName: r._field, + severity: "critical", + additionalInfo: {"devId": r.dev_id}, + }), )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/event.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/event.md index fe7dc0361..60cd242c8 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/event.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/servicenow/event.md @@ -27,7 +27,7 @@ servicenow.event( resource: "", metricName: "", messageKey: "", - additionalInfo: {} + additionalInfo: {}, ) ``` @@ -103,11 +103,11 @@ username = secrets.get(key: "SERVICENOW_USERNAME") password = secrets.get(key: "SERVICENOW_PASSWORD") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle") + |> last() + |> findRecord(fn: (key) => true, idx: 0) servicenow.event( url: "https://tenant.service-now.com/api/global/em/jsonv2", @@ -118,9 +118,9 @@ 
servicenow.event( resource: lastReported.instance, metricName: lastReported._field, severity: - if lastReported._value < 1.0 then "critical" - else if lastReported._value < 5.0 then "warning" - else "info", - additionalInfo: {"devId": r.dev_id} + if lastReported._value < 1.0 then + "critical" + else if lastReported._value < 5.0 then + "warning" + else + "info", + additionalInfo: {"devId": lastReported.dev_id}, ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/alert.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/alert.md index a533c5b40..370afa650 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/alert.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/alert.md @@ -36,7 +36,7 @@ tickscript.alert( warn: (r) => false, info: (r) => false, ok: (r) => true, - topic: "" + topic: "", ) ``` @@ -92,28 +92,24 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi [TICKscript](#) {{% /code-tabs %}} {{% code-tab-content %}} -```javascript +```js import "contrib/bonitoo-io/tickscript" -option task = {name: "Example task", every: 1m;} +option task = {name: "Example task", every: 1m} -check = tickscript.defineCheck( - id: "000000000000", - name: "Errors", - type: "threshold" -) +check = tickscript.defineCheck(id: "000000000000", name: "Errors", type: "threshold") from(bucket: "example-bucket") - |> range(start: -task.every) - |> filter(fn: (r) => r._measurement == "errors" and r._field == "value") - |> count() - |> tickscript.alert( - check: {check with _check_id: "task/${r.service}"}, - message: "task/${r.service} is ${r._level} value: ${r._value}", - crit: (r) => r._value > 30, - warn: (r) => r._value > 20, - info: (r) => r._value > 10 - ) + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "errors" and r._field == "value") + |> count() + |> tickscript.alert( + check: {check with _check_id: "task/${r.service}"}, + message: "task/${r.service} is ${r._level} value: ${r._value}", + crit: (r) => r._value > 30, + warn: (r) => r._value > 20, + info: (r) => r._value > 10, + ) ``` {{% /code-tab-content %}} {{% code-tab-content %}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/compute.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/compute.md index 3586fe5d2..4635209c5 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/compute.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/compute.md @@ -26,9 +26,9 @@ that changes a column's name and optionally applies an aggregate or selector fun import "contrib/bonitoo-io/tickscript" tickscript.compute( - column: "_value", - fn: sum, - as: "example-name" + column: "_value", + fn: sum, + as: "example-name", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/deadman.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/deadman.md index bcffa61ca..d0547910e 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/deadman.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/deadman.md @@ -28,12 +28,12 @@ _This function is comparable to the [Kapacitor AlertNode deadman](/{{< latest "k import "contrib/bonitoo-io/tickscript" tickscript.deadman( - check: {}, - measurement: "example-measurement", - threshold: 0, - id: (r)=>"${r._check_id}", - message: (r) => "Deadman Check: ${r._check_name} is: " + (if r.dead then "dead" else "alive"), - topic: "" + check: {}, + measurement: "example-measurement", + threshold: 0, + id: (r) => "${r._check_id}", + message: (r) => 
"Deadman Check: ${r._check_name} is: " + (if r.dead then "dead" else "alive"), + topic: "", ) ``` @@ -82,16 +82,16 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ```javascript import "contrib/bonitoo-io/tickscript" -option task = {name: "Example task", every: 1m;} +option task = {name: "Example task", every: 1m} from(bucket: "example-bucket") - |> range(start: -task.every) - |> filter(fn: (r) => r._measurement == "pulse" and r._field == "value") - |> tickscript.deadman( - check: tickscript.defineCheck(id: "000000000000", name: "task/${r.service}"), - measurement: "pulse", - threshold: 2 - ) + |> range(start: -task.every) + |> filter(fn: (r) => r._measurement == "pulse" and r._field == "value") + |> tickscript.deadman( + check: tickscript.defineCheck(id: "000000000000", name: "task/${r.service}"), + measurement: "pulse", + threshold: 2, + ) ``` {{% /code-tab-content %}} {{% code-tab-content %}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/definecheck.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/definecheck.md index 48b23e83a..3c4ca35e3 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/definecheck.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/definecheck.md @@ -23,9 +23,9 @@ This check data specifies information about the check in the InfluxDB monitoring import "contrib/bonitoo-io/tickscript" tickscript.defineCheck( - id: "000000000000", - name: "Example check name", - type: "custom" + id: "000000000000", + name: "Example check name", + type: "custom", ) ``` @@ -55,10 +55,7 @@ Default is `custom`. ```javascript import "contrib/bonitoo-io/tickscript" -tickscript.defineCheck( - id: "000000000000", - name: "Example check name", -) +tickscript.defineCheck(id: "000000000000", name: "Example check name") // The function above returns: { // _check_id: "000000000000", diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/groupby.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/groupby.md index 44f256092..791e6e553 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/groupby.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/groupby.md @@ -29,9 +29,7 @@ or [`tickscript.selectWindow()`](/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript ```js import "contrib/bonitoo-io/tickscript" -tickscript.groupBy( - columns: ["exampleColumn"] -) +tickscript.groupBy(columns: ["exampleColumn"]) ``` ## Parameters @@ -51,7 +49,5 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "contrib/bonitoo-io/tickscript" data - |> tickscript.groupBy( - columns: ["host", "region"] - ) + |> tickscript.groupBy(columns: ["host", "region"]) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/join.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/join.md index a81e2bc2f..1977c59c9 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/join.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/join.md @@ -26,9 +26,9 @@ _This function is comparable to the [Kapacitor JoinNode](/{{< latest "kapacitor" import "contrib/bonitoo-io/tickscript" tickscript.join( - tables: {t1: example1, t2: example2} - on: ["_time"], - measurement: "example-measurement" + tables: {t1: example1, t2: example2}, + on: ["_time"], + measurement: "example-measurement", ) ``` @@ -91,9 +91,9 @@ metrics = //... states = //... 
tickscript.join( - tables: {metric: metrics, state: states}, - on: ["_time", "host"], - measurement: "example-m" + tables: {metric: metrics, state: states}, + on: ["_time", "host"], + measurement: "example-m", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/select.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/select.md index 19bc145cd..f1a8af1b4 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/select.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/select.md @@ -25,9 +25,9 @@ an aggregate or selector function to values in the column. import "contrib/bonitoo-io/tickscript" tickscript.select( - column: "_value", - fn: sum, - as: "example-name" + column: "_value", + fn: sum, + as: "example-name", ) ``` @@ -74,7 +74,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "contrib/bonitoo-io/tickscript" data - |> tickscript.select(as: "example-name") + |> tickscript.select(as: "example-name") ``` {{< flex >}} @@ -103,10 +103,7 @@ data import "contrib/bonitoo-io/tickscript" data - |> tickscript.select( - as: "sum", - fn: sum - ) + |> tickscript.select(as: "sum", fn: sum) ``` {{< flex >}} @@ -133,10 +130,7 @@ data import "contrib/bonitoo-io/tickscript" data - |> tickscript.select( - as: "max", - fn: max - ) + |> tickscript.select(as: "max", fn: max) ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/selectwindow.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/selectwindow.md index 5f4606800..0587fc1da 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/selectwindow.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/tickscript/selectwindow.md @@ -25,11 +25,11 @@ and applies an aggregate or selector function the specified column for each wind import "contrib/bonitoo-io/tickscript" tickscript.selectWindow( - column: "_value", - fn: sum, - as: "example-name", - every: 1m, - defaultValue: 0.0, + column: "_value", + fn: sum, + as: "example-name", + every: 1m, + defaultValue: 0.0, ) ``` @@ -81,12 +81,12 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "contrib/bonitoo-io/tickscript" data - |> tickscript.selectWindow( - fn: sum, - as: "example-name", - every: 1h, - defaultValue: 0.0 - ) + |> tickscript.selectWindow( + fn: sum, + as: "example-name", + every: 1h, + defaultValue: 0.0, + ) ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/alert.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/alert.md index 6bc3e1380..a5cbf0dcd 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/alert.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/alert.md @@ -26,13 +26,13 @@ Splunk acquired VictorOps and VictorOps is now import "contrib/bonitoo-io/victorops" victorops.alert( - url: "https://alert.victorops.com/integrations/generic/00000000/alert/${api_key}/${routing_key}", - monitoringTool: "", - messageType: "CRITICAL", - entityID: "", - entityDisplayName: "", - stateMessage: "", - timestamp: now() + url: "https://alert.victorops.com/integrations/generic/00000000/alert/${api_key}/${routing_key}", + monitoringTool: "", + messageType: "CRITICAL", + entityID: "", + entityDisplayName: "", + stateMessage: "", + timestamp: now(), ) ``` @@ -91,20 +91,23 @@ apiKey = secrets.get(key: "VICTOROPS_API_KEY") routingKey = secrets.get(key: "VICTOROPS_ROUTING_KEY") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: 
(r) => r._measurement == "cpu" and r._field == "usage_idle") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle") + |> last() + |> findRecord(fn: (key) => true, idx: 0) victorops.alert( - url: "https://alert.victorops.com/integrations/generic/00000000/alert/${apiKey}/${routingKey}", - messageType: - if lastReported._value < 1.0 then "CRITICAL" - else if lastReported._value < 5.0 then "WARNING" - else "INFO", - entityID: "example-alert-1", - entityDisplayName: "Example Alert 1", - stateMessage: "Last reported cpu_idle was ${string(v: r._value)}." + url: "https://alert.victorops.com/integrations/generic/00000000/alert/${apiKey}/${routingKey}", + messageType: + if lastReported._value < 1.0 then + "CRITICAL" + else if lastReported._value < 5.0 then + "WARNING" + else + "INFO", + entityID: "example-alert-1", + entityDisplayName: "Example Alert 1", + stateMessage: "Last reported cpu_idle was ${string(v: lastReported._value)}.", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/endpoint.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/endpoint.md index c9edc2ae4..7b139b8e2 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/victorops/endpoint.md @@ -26,7 +26,7 @@ Splunk acquired VictorOps and VictorOps is now import "contrib/bonitoo-io/victorops" victorops.endpoint( - url: "https://alert.victorops.com/integrations/generic/00000000/alert${apiKey}/${routingKey}", + url: "https://alert.victorops.com/integrations/generic/00000000/alert/${apiKey}/${routingKey}", ) ``` @@ -75,18 +75,21 @@ routingKey = secrets.get(key: "VICTOROPS_ROUTING_KEY") url = "https://alert.victorops.com/integrations/generic/00000000/alert/${apiKey}/${routingKey}" endpoint = victorops.endpoint(url: url) -crit_events = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_events = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_events - |> endpoint(mapFn: (r) => ({ - monitoringTool: "InfluxDB" - messageType: "CRITICAL", - entityID: "${r.host}-${r._field)-critical", - entityDisplayName: "Critical alert for ${r.host}", - stateMessage: "${r.host} is in a critical state. ${r._field} is ${string(v: r._value)}.", - timestamp: now() - }) - )() + |> endpoint( + mapFn: (r) => + ({ + monitoringTool: "InfluxDB", + messageType: "CRITICAL", + entityID: "${r.host}-${r._field}-critical", + entityDisplayName: "Critical alert for ${r.host}", + stateMessage: "${r.host} is in a critical state. 
${r._field} is ${string(v: r._value)}.", + timestamp: now(), + }), + )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/endpoint.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/endpoint.md index aa02b0a9d..a893d368e 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/endpoint.md @@ -20,13 +20,13 @@ The `zenoss.endpoint()` function sends events to Zenoss using data from input ro import "contrib/bonitoo-io/zenoss" zenoss.endpoint( - url: "https://example.zenoss.io:8080/zport/dmd/evconsole_router", - username: "example-user", - password: "example-password", - action: "EventsRouter", - method: "add_event", - type: "rpc", - tid: 1 + url: "https://example.zenoss.io:8080/zport/dmd/evconsole_router", + username: "example-user", + password: "example-password", + action: "EventsRouter", + method: "add_event", + type: "rpc", + tid: 1, ) ``` @@ -94,26 +94,25 @@ import "influxdata/influxdb/secrets" url = "https://tenant.zenoss.io:8080/zport/dmd/evconsole_router" username = secrets.get(key: "ZENOSS_USERNAME") password = secrets.get(key: "ZENOSS_PASSWORD") -endpoint = zenoss.endpoint( - url: url, - username: username, - password: password -) +endpoint = zenoss.endpoint(url: url, username: username, password: password) -crit_events = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_events = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_events - |> endpoint(mapFn: (r) => ({ - summary: "Critical event for ${r.host}", - device: r.deviceID, - component: r.host, - severity: "Critical", - eventClass: "/App", - eventClassKey: "", - collector: "", - message: "${r.host} is in a critical state.", - }) - )() + |> endpoint( + mapFn: (r) => + ({ + summary: "Critical event for ${r.host}", + device: r.deviceID, + component: r.host, + severity: "Critical", + eventClass: "/App", + eventClassKey: "", + collector: "", + message: "${r.host} is in a critical state.", + }), + )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/event.md b/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/event.md index 1b7aacddd..4ca055cdc 100644 --- a/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/event.md +++ b/content/flux/v0.x/stdlib/contrib/bonitoo-io/zenoss/event.md @@ -19,21 +19,21 @@ The `zenoss.event()` function sends an event to [Zenoss](https://www.zenoss.com/ import "contrib/bonitoo-io/zenoss" zenoss.event( - url: "https://example.zenoss.io:8080/zport/dmd/evconsole_router", - username: "example-user", - password: "example-password", - action: "EventsRouter", - method: "add_event", - type: "rpc", - tid: 1, - summary: "", - device: "", - component: "", - severity: "Critical", - eventClass: "", - eventClassKey: "", - collector: "", - message: "" + url: "https://example.zenoss.io:8080/zport/dmd/evconsole_router", + username: "example-user", + password: "example-password", + action: "EventsRouter", + method: "add_event", + type: "rpc", + tid: 1, + summary: "", + device: "", + component: "", + severity: "Critical", + eventClass: "", + eventClassKey: "", + collector: "", + message: "", ) ``` @@ -119,11 +119,11 @@ username = secrets.get(key: "ZENOSS_USERNAME") password = secrets.get(key: "ZENOSS_PASSWORD") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "cpu" and r._field == 
"usage_idle") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle") + |> last() + |> findRecord(fn: (key) => true, idx: 0) zenoss.event( url: "https://tenant.zenoss.io:8080/zport/dmd/evconsole_router", @@ -133,9 +133,13 @@ zenoss.event( component: "CPU", eventClass: "/App", severity: - if lastReported._value < 1.0 then "Critical" - else if lastReported._value < 5.0 then "Warning" - else if lastReported._value < 20.0 then "Info" - else "Clear" + if lastReported._value < 1.0 then + "Critical" + else if lastReported._value < 5.0 then + "Warning" + else if lastReported._value < 20.0 then + "Info" + else + "Clear", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/chobbs/discord/endpoint.md b/content/flux/v0.x/stdlib/contrib/chobbs/discord/endpoint.md index fbd6de869..e970fd856 100644 --- a/content/flux/v0.x/stdlib/contrib/chobbs/discord/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/chobbs/discord/endpoint.md @@ -24,10 +24,10 @@ and data from table rows. import "contrib/chobbs/discord" discord.endpoint( - webhookToken: "mySuPerSecRetTokEn", - webhookID: "123456789", - username: "username", - avatar_url: "https://example.com/avatar_pic.jpg" + webhookToken: "mySuPerSecRetTokEn", + webhookID: "123456789", + username: "username", + avatar_url: "https://example.com/avatar_pic.jpg", ) ``` @@ -68,19 +68,13 @@ import "influxdata/influxdb/secrets" import "contrib/chobbs/discord" discordToken = secrets.get(key: "DISCORD_TOKEN") -endpoint = telegram.endpoint( - webhookToken: discordToken, - webhookID: "123456789", - username: "critBot" -) +endpoint = telegram.endpoint(webhookToken: discordToken, webhookID: "123456789", username: "critBot") -crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_statuses = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") crit_statuses - |> endpoint(mapFn: (r) => ({ - content: "The status is critical!", - }) - )() + |> endpoint(mapFn: (r) => ({content: "The status is critical!"}))() ``` diff --git a/content/flux/v0.x/stdlib/contrib/chobbs/discord/send.md b/content/flux/v0.x/stdlib/contrib/chobbs/discord/send.md index f302cd3e5..1df4846d7 100644 --- a/content/flux/v0.x/stdlib/contrib/chobbs/discord/send.md +++ b/content/flux/v0.x/stdlib/contrib/chobbs/discord/send.md @@ -21,11 +21,11 @@ a [Discord webhook](https://support.discord.com/hc/en-us/articles/228383668-Intr import "contrib/chobbs/discord" discord.send( - webhookToken: "mySuPerSecRetTokEn", - webhookID: "123456789", - username: "username", - content: "This is an example message", - avatar_url: "https://example.com/avatar_pic.jpg" + webhookToken: "mySuPerSecRetTokEn", + webhookID: "123456789", + username: "username", + content: "This is an example message", + avatar_url: "https://example.com/avatar_pic.jpg", ) ``` @@ -50,24 +50,23 @@ Override the Discord webhook's default avatar. 
##### Send the last reported status to Discord ```js import "contrib/chobbs/discord" import "influxdata/influxdb/secrets" token = secrets.get(key: "DISCORD_TOKEN") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) discord.send( - webhookToken:token, - webhookID: "1234567890", - username: "chobbs", - content: "The current status is \"${lastReported.status}\".", - avatar_url: "https://staff-photos.net/pic.jpg" + webhookToken: token, + webhookID: "1234567890", + username: "chobbs", + content: "The current status is \"${lastReported.status}\".", + avatar_url: "https://staff-photos.net/pic.jpg", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/jsternberg/influxdb/select.md b/content/flux/v0.x/stdlib/contrib/jsternberg/influxdb/select.md index 36bf6b4c1..9d5f7b754 100644 --- a/content/flux/v0.x/stdlib/contrib/jsternberg/influxdb/select.md +++ b/content/flux/v0.x/stdlib/contrib/jsternberg/influxdb/select.md @@ -30,15 +30,15 @@ Results are similar to those returned by InfluxQL `SELECT` statements. import "contrib/jsternberg/influxdb" influxdb.select( - from: "example-bucket", - start: -1d, - stop: now(), - m: "example-measurement", - fields: [], - where: (r) => true, - host: "https://example.com", - org: "example-org", - token: "MySuP3rSecr3Tt0k3n" + from: "example-bucket", + start: -1d, + stop: now(), + m: "example-measurement", + fields: [], + where: (r) => true, + host: "https://example.com", + org: "example-org", + token: "MySuP3rSecr3Tt0k3n", ) ``` @@ -109,24 +109,14 @@ InfluxDB [API token](/{{< latest "influxdb" >}}/security/tokens/). 
```js import "contrib/jsternberg/influxdb" -influxdb.select( - from: "example-bucket", - start: -1d, - m: "example-measurement", - fields: ["field1"] -) +influxdb.select(from: "example-bucket", start: -1d, m: "example-measurement", fields: ["field1"]) ``` ##### Query multiple fields ```js import "contrib/jsternberg/influxdb" -influxdb.select( - from: "example-bucket", - start: -1d, - m: "example-measurement", - fields: ["field1", "field2", "field3"] -) +influxdb.select(from: "example-bucket", start: -1d, m: "example-measurement", fields: ["field1", "field2", "field3"]) ``` ##### Query all fields and filter by tags @@ -134,10 +124,10 @@ influxdb.select( import "contrib/jsternberg/influxdb" influxdb.select( - from: "example-bucket", - start: -1d, - m: "example-measurement", - where: (r) => r.host == "host1" and r.region == "us-west" + from: "example-bucket", + start: -1d, + m: "example-measurement", + where: (r) => r.host == "host1" and r.region == "us-west", ) ``` @@ -149,12 +139,12 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "INFLUXDB_CLOUD_TOKEN") influxdb.select( - from: "example-bucket", - start: -1d, - m: "example-measurement", - fields: ["field1", "field2"], - host: "https://cloud2.influxdata.com", - org: "example-org", - token: token + from: "example-bucket", + start: -1d, + m: "example-measurement", + fields: ["field1", "field2"], + host: "https://cloud2.influxdata.com", + org: "example-org", + token: token, ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/jsternberg/rows/_index.md b/content/flux/v0.x/stdlib/contrib/jsternberg/rows/_index.md deleted file mode 100644 index 1dea372bf..000000000 --- a/content/flux/v0.x/stdlib/contrib/jsternberg/rows/_index.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Flux rows package -list_title: rows package -description: > - The Flux `rows` package provides additional functions for remapping values in rows. - Import the `contrib/jsternberg/rows` package. -aliases: - - /influxdb/v2.0/reference/flux/stdlib/contrib/rows/ - - /influxdb/cloud/reference/flux/stdlib/contrib/rows/ -menu: - flux_0_x_ref: - name: rows - parent: jsternberg -weight: 201 -flux/v0.x/tags: [functions, package] -introduced: 0.77.0 ---- - -The Flux `rows` package provides additional functions for remapping values in rows. -Import the `contrib/jsternberg/rows` package: - -```js -import "contrib/jsternberg/rows" -``` - -{{< children type="functions" show="pages" >}} diff --git a/content/flux/v0.x/stdlib/contrib/jsternberg/rows/map.md b/content/flux/v0.x/stdlib/contrib/jsternberg/rows/map.md deleted file mode 100644 index 9751d8a9e..000000000 --- a/content/flux/v0.x/stdlib/contrib/jsternberg/rows/map.md +++ /dev/null @@ -1,224 +0,0 @@ ---- -title: rows.map() function -description: > - The `rows.map()` function is an alternate implementation of [`map()`](/flux/v0.x/stdlib/universe/map/) - that is faster, but more limited than `map()`. -aliases: - - /influxdb/v2.0/reference/flux/stdlib/contrib/rows/map/ - - /influxdb/cloud/reference/flux/stdlib/contrib/rows/map/ -menu: - flux_0_x_ref: - name: rows.map - parent: rows -weight: 301 -flux/v0.x/tags: [transformations] -related: - - /flux/v0.x/stdlib/universe/map/ -introduced: 0.77.0 ---- - -The `rows.map()` function is an alternate implementation of [`map()`](/flux/v0.x/stdlib/universe/map/) -that is faster, but more limited than `map()`. -`rows.map()` cannot modify [groups keys](/flux/v0.x/get-started/data-model/#group-key) and, -therefore, does not need to regroup tables. 
-**Attempts to change columns in the group key are ignored.** - - -```js -import "contrib/jsternberg/rows" - -rows.map( fn: (r) => ({_value: r._value * 100.0})) -``` - -## Parameters - -### fn {data-type="function"} - -A single argument function to apply to each record. -The return value must be a record. - -{{% note %}} -Use the `with` operator to preserve columns **not** in the group and **not** -explicitly mapped in the operation. -{{% /note %}} - -### tables {data-type="stream of tables"} -Input data. -Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressions)). - -## Examples - -- [Perform mathemtical operations on column values](#perform-mathemtical-operations-on-column-values) -- [Preserve all columns in the operation](#preserve-all-columns-in-the-operation) -- [Attempt to remap columns in the group key](#attempt-to-remap-columns-in-the-group-key) - ---- - -### Perform mathemtical operations on column values -The following example returns the square of each value in the `_value` column: - -```js -import "contrib/jsternberg/rows" - -data - |> rows.map(fn: (r) => ({ _value: r._value * r._value })) -``` - -{{% note %}} -#### Important notes -The `_time` column is dropped because: - -- It's not in the group key. -- It's not explicitly mapped in the operation. -- The `with` operator was not used to include existing columns. -{{% /note %}} - -{{< flex >}} -{{% flex-content %}} -#### Input tables - -**Group key:** `tag,_field` - -| tag | _field | _time | _value | -| --- |:------ |:----- | ------:| -| tag1 | foo | 0001 | 1.9 | -| tag1 | foo | 0002 | 2.4 | -| tag1 | foo | 0003 | 2.1 | - -| tag | _field | _time | _value | -|:--- |:------ |:----- | ------:| -| tag2 | bar | 0001 | 3.1 | -| tag2 | bar | 0002 | 3.8 | -| tag2 | bar | 0003 | 1.7 | -{{% /flex-content %}} - -{{% flex-content %}} -#### Output tables - -**Group key:** `tag,_field` - -| tag | _field | _value | -| --- |:------ | ------:| -| tag1 | foo | 3.61 | -| tag1 | foo | 5.76 | -| tag1 | foo | 4.41 | - -| tag | _field | _value | -|:--- |:------ | ------:| -| tag2 | bar | 9.61 | -| tag2 | bar | 14.44 | -| tag2 | bar | 2.89 | -{{% /flex-content %}} -{{< /flex >}} - ---- - -### Preserve all columns in the operation -Use the `with` operator in your mapping operation to preserve all columns, -including those not in the group key, without explicitly remapping them. - -```js -import "contrib/jsternberg/rows" - -data - |> rows.map(fn: (r) => ({ r with _value: r._value * r._value })) -``` - -{{% note %}} -#### Important notes -- The mapping operation remaps the `_value` column. -- The `with` operator preserves all other columns not in the group key (`_time`). 
-{{% /note %}} - -{{< flex >}} -{{% flex-content %}} -#### Input tables - -**Group key:** `tag,_field` - -| tag | _field | _time | _value | -|:--- |:------ |:----- | ------:| -| tag1 | foo | 0001 | 1.9 | -| tag1 | foo | 0002 | 2.4 | -| tag1 | foo | 0003 | 2.1 | - -| tag | _field | _time | _value | -| --- |:------ |:----- | ------:| -| tag2 | bar | 0001 | 3.1 | -| tag2 | bar | 0002 | 3.8 | -| tag2 | bar | 0003 | 1.7 | -{{% /flex-content %}} - -{{% flex-content %}} -#### Output tables - -**Group key:** `tag,_field` - -| tag | _field | _time | _value | -|:--- |:------ |:----- | ------:| -| tag1 | foo | 0001 | 3.61 | -| tag1 | foo | 0002 | 5.76 | -| tag1 | foo | 0003 | 4.41 | - -| tag | _field | _time | _value | -| --- |:------ |:----- | ------:| -| tag2 | bar | 0001 | 9.61 | -| tag2 | bar | 0002 | 14.44 | -| tag2 | bar | 0003 | 2.89 | -{{% /flex-content %}} -{{< /flex >}} - ---- - -### Attempt to remap columns in the group key - -```js -import "contrib/jsternberg/rows" - -data - |> rows.map(fn: (r) => ({ r with tag: "tag3" })) -``` - -{{% note %}} -#### Important notes -- Remapping the `tag` column to `"tag3"` is ignored because `tag` is part of the group key. -- The `with` operator preserves columns not in the group key (`_time` and `_value`). -{{% /note %}} - -{{< flex >}} -{{% flex-content %}} -#### Input tables - -**Group key:** `tag,_field` - -| tag | _field | _time | _value | -| --- |:------ |:----- | ------:| -| tag1 | foo | 0001 | 1.9 | -| tag1 | foo | 0002 | 2.4 | -| tag1 | foo | 0003 | 2.1 | - -| tag | _field | _time | _value | -|:--- |:------ |:----- | ------:| -| tag2 | bar | 0001 | 3.1 | -| tag2 | bar | 0002 | 3.8 | -| tag2 | bar | 0003 | 1.7 | -{{% /flex-content %}} - -{{% flex-content %}} -#### Output tables - -**Group key:** `tag,_field` - -| tag | _field | _time | _value | -| --- |:------ |:----- | ------:| -| tag1 | foo | 0001 | 1.9 | -| tag1 | foo | 0002 | 2.4 | -| tag1 | foo | 0003 | 2.1 | - -| tag | _field | _time | _value | -|:--- |:------ |:----- | ------:| -| tag2 | bar | 0001 | 3.1 | -| tag2 | bar | 0002 | 3.8 | -| tag2 | bar | 0003 | 1.7 | -{{% /flex-content %}} -{{< /flex >}} diff --git a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/_index.md b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/_index.md index 8102143b8..20e1ab368 100644 --- a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/_index.md +++ b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/_index.md @@ -53,6 +53,6 @@ Sending alert timestamps to BigPanda is optional, but if you choose to send them convert timestamps to **epoch second timestamps**: ```js -// - |> map(fn: (r) => ({ r with secTime: int(v: r._time) / 1000000000 })) +data + |> map(fn: (r) => ({ r with secTime: int(v: r._time) / 1000000000 })) ``` diff --git a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/endpoint.md b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/endpoint.md index 1d9bceddc..172259413 100644 --- a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/endpoint.md @@ -21,9 +21,9 @@ using data from input rows. 
import "contrib/rhajek/bigpanda" bigpanda.endpoint( - url: "https://api.bigpanda.io/data/v2/alerts", - token: "my5uP3rS3cRe7t0k3n", - appKey: "example-app-key" + url: "https://api.bigpanda.io/data/v2/alerts", + token: "my5uP3rS3cRe7t0k3n", + appKey: "example-app-key", ) ``` @@ -67,22 +67,21 @@ import "influxdata/influxdb/secrets" import "json" token = secrets.get(key: "BIGPANDA_API_KEY") -endpoint = bigpanda.endpoint( - token: token, - appKey: "example-app-key" -) +endpoint = bigpanda.endpoint(token: token, appKey: "example-app-key") -crit_events = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_events = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") crit_events - |> endpoint(mapFn: (r) => { - return { r with - status: "critical", - check: "critical-status-check", - description: "${r._field} is critical: ${string(v: r._value)}" - tags: json.encode(v: [{"name": "host", "value": r.host}]), - } - })() + |> endpoint( + mapFn: (r) => { + return {r with status: "critical", + check: "critical-status-check", + description: "${r._field} is critical: ${string(v: r._value)}", + tags: json.encode(v: [{"name": "host", "value": r.host}]), + } + }, + )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/sendalert.md b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/sendalert.md index 7bc3c1f35..75e3f466a 100644 --- a/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/sendalert.md +++ b/content/flux/v0.x/stdlib/contrib/rhajek/bigpanda/sendalert.md @@ -19,11 +19,11 @@ The `bigpanda.sendAlert()` function sends an alert to [BigPanda](https://www.big import "contrib/rhajek/bigpanda" bigpanda.sendAlert( - url: "https://api.bigpanda.io/data/v2/alerts", - token: "my5uP3rS3cRe7t0k3n", - appKey: "example-app-key", - status: "critical", - rec: {}, + url: "https://api.bigpanda.io/data/v2/alerts", + token: "my5uP3rS3cRe7t0k3n", + appKey: "example-app-key", + status: "critical", + rec: {}, ) ``` @@ -68,23 +68,20 @@ import "json" token = secrets.get(key: "BIGPANDA_API_KEY") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "level" - ) - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "level") + |> last() + |> findRecord(fn: (key) => true, idx: 0) bigpanda.sendAlert( - token: token, - appKey: "example-app-key", - status: bigpanda.statusFromLevel(level: "${lastReported.status}"), - rec: { - tags: json.encode(v: [{"name": "host", "value": "my-host"}]), - check: "my-check", - description: "${lastReported._field} is ${lastReported.status}: ${string(v: lastReported._value)}" - } + token: token, + appKey: "example-app-key", + status: bigpanda.statusFromLevel(level: "${lastReported.status}"), + rec: { + tags: json.encode(v: [{"name": "host", "value": "my-host"}]), + check: "my-check", + description: "${lastReported._field} is ${lastReported.status}: ${string(v: lastReported._value)}", + }, ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/endpoint.md b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/endpoint.md index 9c499fedf..5952e1bf3 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/endpoint.md @@ -20,9 
+20,9 @@ The `opsgenie.endpoint()` function sends an alert message to Opsgenie using data
 import "contrib/sranka/opsgenie"
 
 opsgenie.endpoint(
-  url: "https://api.opsgenie.com/v2/alerts",
-  apiKey: "YoUrSup3R5ecR37AuThK3y",
-  entity: "example-entity"
+    url: "https://api.opsgenie.com/v2/alerts",
+    apiKey: "YoUrSup3R5ecR37AuThK3y",
+    entity: "example-entity",
 )
 ```
 
@@ -72,22 +72,25 @@ import "contrib/sranka/opsgenie"
 
 apiKey = secrets.get(key: "OPSGENIE_APIKEY")
 endpoint = opsgenie.endpoint(apiKey: apiKey)
 
-crit_statuses = from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> filter(fn: (r) => r._measurement == "statuses" and status == "crit")
+crit_statuses =
+    from(bucket: "example-bucket")
+        |> range(start: -1m)
+        |> filter(fn: (r) => r._measurement == "statuses" and status == "crit")
 
 crit_statuses
-  |> endpoint(mapFn: (r) => ({
-      message: "Great Scott!- Disk usage is: ${r.status}.",
-      alias: "disk-usage-${r.status}",
-      description: "",
-      priority: "P3",
-      responders: ["user:john@example.com", "team:itcrowd"],
-      tags: [],
-      entity: "my-lab",
-      actions: [],
-      details: "{}",
-      visibleTo: []
-    })
-  )()
+    |> endpoint(
+        mapFn: (r) =>
+            ({
+                message: "Great Scott!- Disk usage is: ${r.status}.",
+                alias: "disk-usage-${r.status}",
+                description: "",
+                priority: "P3",
+                responders: ["user:john@example.com", "team:itcrowd"],
+                tags: [],
+                entity: "my-lab",
+                actions: [],
+                details: "{}",
+                visibleTo: [],
+            }),
+    )()
 ```
diff --git a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/responderstojson.md b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/responderstojson.md
index 412c17dc2..95d24afa3 100644
--- a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/responderstojson.md
+++ b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/responderstojson.md
@@ -19,13 +19,8 @@ strings to a string-encoded JSON array that can be embedded in an alert message.
 ```js
 import "contrib/sranka/opsgenie"
 
 opsgenie.respondersToJSON(
-  v: [
-    "user:example-user",
-    "team:example-team",
-    "escalation:example-escalation",
-    "schedule:example-schedule"
-  ]
+    v: ["user:example-user", "team:example-team", "escalation:example-escalation", "schedule:example-schedule"],
 )
 
 // Returns "[
diff --git a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/sendalert.md b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/sendalert.md
index dca105119..7e6581476 100644
--- a/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/sendalert.md
+++ b/content/flux/v0.x/stdlib/contrib/sranka/opsgenie/sendalert.md
@@ -19,18 +19,18 @@ The `opsgenie.sendAlert()` function sends an alert message to Opsgenie.
import "contrib/sranka/opsgenie" opsgenie.sendAlert( - url: "https://api.opsgenie.com/v2/alerts", - apiKey: "YoUrSup3R5ecR37AuThK3y", - message: "Example message", - alias: "Example alias", - description: "Example description", - priority: "P3", - responders: ["user:john@example.com", "team:itcrowd"], - tags: ["tag1", "tag2"], - entity: "example-entity", - actions: ["action1", "action2"], - details: "{}", - visibleTo: [] + url: "https://api.opsgenie.com/v2/alerts", + apiKey: "YoUrSup3R5ecR37AuThK3y", + message: "Example message", + alias: "Example alias", + description: "Example description", + priority: "P3", + responders: ["user:john@example.com", "team:itcrowd"], + tags: ["tag1", "tag2"], + entity: "example-entity", + actions: ["action1", "action2"], + details: "{}", + visibleTo: [], ) ``` @@ -99,16 +99,16 @@ import "contrib/sranka/opsgenie" apiKey = secrets.get(key: "OPSGENIE_APIKEY") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) - opsgenie.sendAlert( - apiKey: apiKey, - message: "Disk usage is: ${lastReported.status}.", - alias: "example-disk-usage", - responders: ["user:john@example.com", "team:itcrowd"] - ) +opsgenie.sendAlert( + apiKey: apiKey, + message: "Disk usage is: ${lastReported.status}.", + alias: "example-disk-usage", + responders: ["user:john@example.com", "team:itcrowd"], +) ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/sensu/endpoint.md b/content/flux/v0.x/stdlib/contrib/sranka/sensu/endpoint.md index 90da236e7..a21529a3c 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/sensu/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/sensu/endpoint.md @@ -28,11 +28,11 @@ using data from table rows. 
import "contrib/sranka/sensu" sensu.endpoint( - url: "http://localhost:8080", - apiKey: "mYSuP3rs3cREtApIK3Y", - handlers: [], - namespace: "default", - entityName: "influxdb" + url: "http://localhost:8080", + apiKey: "mYSuP3rs3cREtApIK3Y", + handlers: [], + namespace: "default", + entityName: "influxdb", ) ``` @@ -86,20 +86,13 @@ import "influxdata/influxdb/secrets" import "contrib/sranka/sensu" token = secrets.get(key: "TELEGRAM_TOKEN") -endpoint = sensu.endpoint( - url: "http://localhost:8080", - apiKey: apiKey -) +endpoint = sensu.endpoint(url: "http://localhost:8080", apiKey: apiKey) -crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_statuses = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") crit_statuses - |> endpoint(mapFn: (r) => ({ - checkName: "critStatus", - text: "Status is critical", - status: 2 - }) - )() + |> endpoint(mapFn: (r) => ({checkName: "critStatus", text: "Status is critical", status: 2}))() ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/sensu/event.md b/content/flux/v0.x/stdlib/contrib/sranka/sensu/event.md index 16816e195..6727c9b9b 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/sensu/event.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/sensu/event.md @@ -25,15 +25,15 @@ The `sensu.event()` function sends a single event to the import "contrib/sranka/sensu" sensu.event( - url: "http://localhost:8080", - apiKey: "mYSuP3rs3cREtApIK3Y", - checkName: "checkName", - text: "Event output text", - handlers: [], - status: 0, - state: "passing", - namespace: "default", - entityName: "influxdb" + url: "http://localhost:8080", + apiKey: "mYSuP3rs3cREtApIK3Y", + checkName: "checkName", + text: "Event output text", + handlers: [], + status: 0, + state: "passing", + namespace: "default", + entityName: "influxdb", ) ``` @@ -104,16 +104,16 @@ import "contrib/sranka/sensu" apiKey = secrets.get(key: "SENSU_API_KEY") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) - sensu.event( - url: "http://localhost:8080", - apiKey: apiKey, - checkName: "diskUsage", - text: "Disk usage is **${lastReported.status}**.", - ) +sensu.event( + url: "http://localhost:8080", + apiKey: apiKey, + checkName: "diskUsage", + text: "Disk usage is **${lastReported.status}**.", +) ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/teams/endpoint.md b/content/flux/v0.x/stdlib/contrib/sranka/teams/endpoint.md index f83690d1c..52944788e 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/teams/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/teams/endpoint.md @@ -21,9 +21,7 @@ using data from table rows. 
```js import "contrib/sranka/teams" -teams.endpoint( - url: "https://outlook.office.com/webhook/example-webhook" -) +teams.endpoint(url: "https://outlook.office.com/webhook/example-webhook") ``` ## Parameters @@ -57,15 +55,14 @@ import "contrib/sranka/teams" url = "https://outlook.office.com/webhook/example-webhook" endpoint = teams.endpoint(url: url) -crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_statuses = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") crit_statuses - |> endpoint(mapFn: (r) => ({ - title: "Disk Usage" - text: "Disk usage is: **${r.status}**.", - summary: "Disk usage is ${r.status}" - }) - )() + |> endpoint( + mapFn: (r) => + ({title: "Disk Usage", text: "Disk usage is: **${r.status}**.", summary: "Disk usage is ${r.status}"}), + )() ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/teams/message.md b/content/flux/v0.x/stdlib/contrib/sranka/teams/message.md index eb48bc5ab..08a0f65f1 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/teams/message.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/teams/message.md @@ -21,10 +21,10 @@ an [incoming webhook](https://docs.microsoft.com/microsoftteams/platform/webhook import "contrib/sranka/teams" teams.message( - url: "https://outlook.office.com/webhook/example-webhook", - title: "Example message title", - text: "Example message text", - summary: "", + url: "https://outlook.office.com/webhook/example-webhook", + title: "Example message title", + text: "Example message text", + summary: "", ) ``` @@ -51,16 +51,16 @@ If no summary is provided, Flux generates the summary from the message text. import "contrib/sranka/teams" lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) teams.message( - url: "https://outlook.office.com/webhook/example-webhook", - title: "Disk Usage" - text: "Disk usage is: *${lastReported.status}*.", - summary: "Disk usage is ${lastReported.status}" + url: "https://outlook.office.com/webhook/example-webhook", + title: "Disk Usage", + text: "Disk usage is: *${lastReported.status}*.", + summary: "Disk usage is ${lastReported.status}", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/telegram/endpoint.md b/content/flux/v0.x/stdlib/contrib/sranka/telegram/endpoint.md index 58d715d3e..fd99eae82 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/telegram/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/telegram/endpoint.md @@ -22,10 +22,10 @@ using data from table rows. 
import "contrib/sranka/telegram" telegram.endpoint( - url: "https://api.telegram.org/bot", - token: "S3crEtTel3gRamT0k3n", - parseMode: "MarkdownV2", - disableWebPagePreview: false, + url: "https://api.telegram.org/bot", + token: "S3crEtTel3gRamT0k3n", + parseMode: "MarkdownV2", + disableWebPagePreview: false, ) ``` @@ -79,15 +79,11 @@ import "contrib/sranka/telegram" token = secrets.get(key: "TELEGRAM_TOKEN") endpoint = telegram.endpoint(token: token) -crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") +crit_statuses = + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and status == "crit") crit_statuses - |> endpoint(mapFn: (r) => ({ - channel: "-12345", - text: "Disk usage is **${r.status}**.", - silent: true - }) - )() + |> endpoint(mapFn: (r) => ({channel: "-12345", text: "Disk usage is **${r.status}**.", silent: true}))() ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/telegram/message.md b/content/flux/v0.x/stdlib/contrib/sranka/telegram/message.md index 138f3fd3f..431ac7678 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/telegram/message.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/telegram/message.md @@ -21,13 +21,13 @@ the [`sendMessage` method of the Telegram Bot API](https://core.telegram.org/bot import "contrib/sranka/telegram" telegram.message( - url: "https://api.telegram.org/bot", - token: "S3crEtTel3gRamT0k3n", - channel: "-12345", - text: "Example message text", - parseMode: "MarkdownV2", - disableWebPagePreview: false, - silent: true + url: "https://api.telegram.org/bot", + token: "S3crEtTel3gRamT0k3n", + channel: "-12345", + text: "Example message text", + parseMode: "MarkdownV2", + disableWebPagePreview: false, + silent: true, ) ``` @@ -75,15 +75,11 @@ import "contrib/sranka/telegram" token = secrets.get(key: "TELEGRAM_TOKEN") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) - telegram.message( - token: token, - channel: "-12345" - text: "Disk usage is **${lastReported.status}**.", - ) +telegram.message(token: token, channel: "-12345", text: "Disk usage is **${lastReported.status}**.") ``` diff --git a/content/flux/v0.x/stdlib/contrib/sranka/webexteams/endpoint.md b/content/flux/v0.x/stdlib/contrib/sranka/webexteams/endpoint.md index 09fb721c7..2f55c76ec 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/webexteams/endpoint.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/webexteams/endpoint.md @@ -21,8 +21,8 @@ includes data from input rows to a Webex room. 
import "contrib/sranka/webexteams" webexteams.endpoint( - url: "https://webexapis.com", - token: "token" + url: "https://webexapis.com", + token: "token", ) ``` @@ -63,13 +63,16 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "WEBEX_API_KEY") from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> tableFind(fn: (key) => true) - |> webexteams.endpoint(token: token)(mapFn: (r) => ({ - roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", - text: "", - markdown: "Disk usage is **${r.status}**.", - }) + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> tableFind(fn: (key) => true) + |> webexteams.endpoint(token: token)( + mapFn: (r) => + ({ + roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", + text: "", + markdown: "Disk usage is **${r.status}**.", + }), + ) ``` \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/contrib/sranka/webexteams/message.md b/content/flux/v0.x/stdlib/contrib/sranka/webexteams/message.md index b623de3c9..33c0f0545 100644 --- a/content/flux/v0.x/stdlib/contrib/sranka/webexteams/message.md +++ b/content/flux/v0.x/stdlib/contrib/sranka/webexteams/message.md @@ -20,11 +20,11 @@ The `webexteams.message()` function sends a single message to Webex using the import "contrib/sranka/webexteams" webexteams.message(, - url: "https://webexapis.com" - token: "My5uP3rs3cRe7T0k3n", - roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", - text: "Example plain text message", - markdown: "Example [markdown message](https://developer.webex.com/docs/api/basics)." + url: "https://webexapis.com" + token: "My5uP3rs3cRe7T0k3n", + roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", + text: "Example plain text message", + markdown: "Example [markdown message](https://developer.webex.com/docs/api/basics).", ) ``` @@ -60,16 +60,16 @@ import "influxdata/influxdb/secrets" apiToken = secrets.get(key: "WEBEX_API_TOKEN") lastReported = - from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses") - |> last() - |> findRecord(fn: (key) => true, idx: 0) + from(bucket: "example-bucket") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses") + |> last() + |> findRecord(fn: (key) => true, idx: 0) webexteams.message( - token: apiToken, - roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", - text: "Disk usage is ${lastReported.status}.", - markdown: "Disk usage is **${lastReported.status}**." + token: apiToken, + roomId: "Y2lzY29zcGFyazovL3VzL1JPT00vYmJjZWIxYWQtNDNmMS0zYjU4LTkxNDctZjE0YmIwYzRkMTU0", + text: "Disk usage is ${lastReported.status}.", + markdown: "Disk usage is **${lastReported.status}**.", ) ``` diff --git a/content/flux/v0.x/stdlib/contrib/tomhollingworth/events/duration.md b/content/flux/v0.x/stdlib/contrib/tomhollingworth/events/duration.md index 767496e3d..a3e50a904 100644 --- a/content/flux/v0.x/stdlib/contrib/tomhollingworth/events/duration.md +++ b/content/flux/v0.x/stdlib/contrib/tomhollingworth/events/duration.md @@ -29,11 +29,11 @@ specified [`stop`](#stop) time. 
import "contrib/tomhollingworth/events" events.duration( - unit: 1ns, - columnName: "duration", - timeColumn: "_time", - stopColumn: "_stop", - stop: 2020-01-01T00:00:00Z + unit: 1ns, + columnName: "duration", + timeColumn: "_time", + stopColumn: "_stop", + stop: 2020-01-01T00:00:00Z, ) ``` @@ -83,10 +83,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "contrib/tomhollingworth/events" data - |> events.duration( - unit: 1m, - stop: 2020-01-02T00:00:00Z - ) + |> events.duration(unit: 1m, stop: 2020-01-02T00:00:00Z) ``` {{< flex >}} @@ -133,19 +130,11 @@ The example below includes output values of `events.duration()`, `elapsed()`, an {{% flex-content %}} ##### Functions ```js -data |> events.duration( - unit: 1m, - stop: 2020-01-02T00:00:00Z -) +data |> events.duration(unit: 1m, stop: 2020-01-02T00:00:00Z) -data |> elapsed( - unit: 1m -) +data |> elapsed(unit: 1m) -data |> stateDuration( - unit: 1m, - fn: (r) => true -) +data |> stateDuration(unit: 1m, fn: (r) => true) ``` {{% /flex-content %}} {{< /flex >}} diff --git a/content/flux/v0.x/stdlib/csv/from.md b/content/flux/v0.x/stdlib/csv/from.md index a2efd97c7..49799069c 100644 --- a/content/flux/v0.x/stdlib/csv/from.md +++ b/content/flux/v0.x/stdlib/csv/from.md @@ -25,15 +25,15 @@ Each record in the table represents a single point in the series. import "csv" csv.from( - csv: csvData, - mode: "annotations" + csv: csvData, + mode: "annotations", ) // OR csv.from( - file: "/path/to/data-file.csv", - mode: "annotations" + file: "/path/to/data-file.csv", + mode: "annotations", ) ``` @@ -87,17 +87,16 @@ csv.from(file: "/path/to/data-file.csv") ##### Query raw CSV data from a file ```js import "csv" -csv.from( - file: "/path/to/data-file.csv", - mode: "raw" -) + +csv.from(file: "/path/to/data-file.csv", mode: "raw") ``` ##### Query an annotated CSV string ```js import "csv" -csvData = " +csvData = + " #datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double #group,false,false,false,false,false,false,false,false #default,,,,,,,, @@ -109,22 +108,25 @@ csvData = " csv.from(csv: csvData) ``` + ##### Query a raw CSV string ```js import "csv" -csvData = " + +csvData = + " _start,_stop,_time,region,host,_value 2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43 2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25 2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62 " -csv.from( - csv: csvData, - mode: "raw" -) + +csv.from(csv: csvData, mode: "raw") ``` +{{< expand-wrapper >}} {{% expand "Function updates" %}} #### v0.109.0 - Add `mode` parameter to support querying raw CSV data. {{% /expand %}} +{{< /expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/date/addduration.md b/content/flux/v0.x/stdlib/date/addduration.md new file mode 100644 index 000000000..35d99e8d3 --- /dev/null +++ b/content/flux/v0.x/stdlib/date/addduration.md @@ -0,0 +1,54 @@ +--- +title: date.addDuration() function +description: > + `date.addDuration()` adds a duration to a time value and returns the resulting time. +menu: + flux_0_x_ref: + name: date.addDuration + parent: date +weight: 302 +flux/v0.x/tags: [date/time] +related: + - /flux/v0.x/stdlib/date/subduration/ +introduced: 0.162.0 +--- + +`date.addDuration()` adds a duration to a time value and returns the resulting time. + +```js +import "date" + +date.addDuration(d: 12h, to: now()) +``` + +## Parameters + +### d {data-type="duration"} +Duration to add. 
+ +### to {data-type="time, duration"} +Time to add the [duration](#d) to. +Use an absolute time or a relative duration. +Durations are relative to [`now()`](/flux/v0.x/stdlib/universe/now/). + +## Examples + +### Add six hours to a timestamp +```js +import "date" + +date.addDuration(d: 6h, to: 2019-09-16T12:00:00Z) + +// Returns 2019-09-16T18:00:00.000000000Z +``` + +### Add six hours to a relative duration +```js +import "date" + +option now = () => 2022-01-01T12:00:00Z + +date.addDuration(d: 6h, to: 3h) + +// Returns 2022-01-01T21:00:00.000000000Z +``` diff --git a/content/flux/v0.x/stdlib/date/subduration.md b/content/flux/v0.x/stdlib/date/subduration.md new file mode 100644 index 000000000..354314131 --- /dev/null +++ b/content/flux/v0.x/stdlib/date/subduration.md @@ -0,0 +1,56 @@ +--- +title: date.subDuration() function +description: > + `date.subDuration()` subtracts a duration from a time value and returns the + resulting time value. +menu: + flux_0_x_ref: + name: date.subDuration + parent: date +weight: 302 +flux/v0.x/tags: [date/time] +related: + - /flux/v0.x/stdlib/date/addduration/ +introduced: 0.162.0 +--- + +`date.subDuration()` subtracts a duration from a time value and returns the +resulting time value. + +```js +import "date" + +date.subDuration(d: 12h, from: now()) +``` + +## Parameters + +### d {data-type="duration"} +Duration to subtract. + +### from {data-type="time, duration"} +Time to subtract the [duration](#d) from. +Use an absolute time or a relative duration. +Durations are relative to [`now()`](/flux/v0.x/stdlib/universe/now/). + +## Examples + +### Subtract six hours from a timestamp +```js +import "date" + +date.subDuration(d: 6h, from: 2019-09-16T12:00:00Z) + +// Returns 2019-09-16T06:00:00.000000000Z +``` + +### Subtract six hours from a relative duration +```js +import "date" + +option now = () => 2022-01-01T12:00:00Z + +date.subDuration(d: 6h, from: -3h) + +// Returns 2022-01-01T03:00:00.000000000Z +``` diff --git a/content/flux/v0.x/stdlib/dict/fromlist.md b/content/flux/v0.x/stdlib/dict/fromlist.md index 3e2f2797a..21df68bed 100644 --- a/content/flux/v0.x/stdlib/dict/fromlist.md +++ b/content/flux/v0.x/stdlib/dict/fromlist.md @@ -20,12 +20,7 @@ The `dict.fromList()` function creates a dictionary from a list of records with ```js import "dict" -dict.fromList( - pairs: [ - {key: 1, value: "foo"}, - {key: 2, value: "bar"} - ] -) +dict.fromList(pairs: [{key: 1, value: "foo"},{key: 2, value: "bar"}]) ``` ## Parameters @@ -40,12 +35,7 @@ dict.fromList( import "dict" // Define a new dictionary using an array of records -d = dict.fromList( - pairs: [ - {key: 1, value: "foo"}, - {key: 2, value: "bar"} - ] -) +d = dict.fromList(pairs: [{key: 1, value: "foo"}, {key: 2, value: "bar"}]) // Return a property of the dictionary dict.get(dict: d, key: 1, default: "") diff --git a/content/flux/v0.x/stdlib/dict/get.md b/content/flux/v0.x/stdlib/dict/get.md index 48a6fde57..dd89bffca 100644 --- a/content/flux/v0.x/stdlib/dict/get.md +++ b/content/flux/v0.x/stdlib/dict/get.md @@ -21,9 +21,9 @@ or a default value if the key does not exist. 
import "dict" dict.get( - dict: [1: "foo", 2: "bar"], - key: 1, - default: "" + dict: [1: "foo", 2: "bar"], + key: 1, + default: "", ) ``` @@ -51,11 +51,7 @@ import "dict" d = [1: "foo", 2: "bar"] -dict.get( - dict: d, - key: 1, - default: "" -) +dict.get(dict: d, key: 1, default: "") // Returns foo ``` diff --git a/content/flux/v0.x/stdlib/dict/insert.md b/content/flux/v0.x/stdlib/dict/insert.md index 16f8d7949..f9cb3d1e6 100644 --- a/content/flux/v0.x/stdlib/dict/insert.md +++ b/content/flux/v0.x/stdlib/dict/insert.md @@ -23,9 +23,9 @@ If the key already exists in the dictionary, the function overwrites the existin import "dict" dict.insert( - dict: [1: "foo", 2: "bar"], - key: 3, - value: "baz" + dict: [1: "foo", 2: "bar"], + key: 3, + value: "baz", ) ``` @@ -54,11 +54,7 @@ import "dict" d = [1: "foo", 2: "bar"] -dNew = dict.insert( - dict: d, - key: 3, - value: "baz" -) +dNew = dict.insert(dict: d, key: 3, value: "baz") // Verify the new key-value pair was inserted dict.get(dict: dNew, key: 3, default: "") @@ -72,11 +68,7 @@ import "dict" d = [1: "foo", 2: "bar"] -dNew = dict.insert( - dict: d, - key: 2, - value: "baz" -) +dNew = dict.insert(dict: d, key: 2, value: "baz") // Verify the new key-value pair was overwritten dict.get(dict: dNew, key: 2, default: "") diff --git a/content/flux/v0.x/stdlib/dict/remove.md b/content/flux/v0.x/stdlib/dict/remove.md index b8310b4bd..683fe2d00 100644 --- a/content/flux/v0.x/stdlib/dict/remove.md +++ b/content/flux/v0.x/stdlib/dict/remove.md @@ -20,10 +20,7 @@ an updated dictionary. ```js import "dict" -dict.remove( - dict: [1: "foo", 2: "bar"], - key: 1 -) +dict.remove(dict: [1: "foo", 2: "bar"], key: 1) ``` ## Parameters @@ -47,10 +44,7 @@ import "dict" d = [1: "foo", 2: "bar"] -dNew = dict.remove( - dict: d, - key: 1 -) +dNew = dict.remove(dict: d, key: 1) // Verify the key-value pairs was removed diff --git a/content/flux/v0.x/stdlib/experimental/addduration.md b/content/flux/v0.x/stdlib/experimental/addduration.md index 3d038ca9c..694198952 100644 --- a/content/flux/v0.x/stdlib/experimental/addduration.md +++ b/content/flux/v0.x/stdlib/experimental/addduration.md @@ -15,16 +15,17 @@ flux/v0.x/tags: [date/time] related: - /flux/v0.x/stdlib/experimental/subduration/ introduced: 0.39.0 +deprecated: 0.162.0 --- +{{% warn %}} +This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/addduration/) +in **Flux v0.162.0**. This experimental version has been deprecated. +{{% /warn %}} + The `experimental.addDuration()` function adds a duration to a time value and returns the resulting time value. -{{% warn %}} -This function will be removed once duration vectors are implemented. -See [influxdata/flux#413](https://github.com/influxdata/flux/issues/413). -{{% /warn %}} - ```js import "experimental" diff --git a/content/flux/v0.x/stdlib/experimental/aggregate/rate.md b/content/flux/v0.x/stdlib/experimental/aggregate/rate.md index ca4f38519..9243e0048 100644 --- a/content/flux/v0.x/stdlib/experimental/aggregate/rate.md +++ b/content/flux/v0.x/stdlib/experimental/aggregate/rate.md @@ -23,9 +23,9 @@ for each input table. 
import "experimental/aggregate" aggregate.rate( - every: 1m, - groupColumns: ["column1", "column2"], - unit: 1s + every: 1m, + groupColumns: ["column1", "column2"], + unit: 1s, ) ``` @@ -57,15 +57,12 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "experimental/aggregate" import "sampledata" -data = sampledata.int() - |> range(start: sampledata.start, stop: sampledata.stop) +data = + sampledata.int() + |> range(start: sampledata.start, stop: sampledata.stop) -data - |> aggregate.rate( - every: 30s, - unit: 1s, - groupColumns: ["tag"] - ) +data + |> aggregate.rate(every: 30s, unit: 1s, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/experimental/aligntime.md b/content/flux/v0.x/stdlib/experimental/aligntime.md index 55914babb..45927bfb0 100644 --- a/content/flux/v0.x/stdlib/experimental/aligntime.md +++ b/content/flux/v0.x/stdlib/experimental/aligntime.md @@ -20,7 +20,7 @@ The `experimental.alignTime()` function aligns input tables to a common start ti import "experimental" experimental.alignTime( - alignTo: 1970-01-01T00:00:00.000000000Z + alignTo: 1970-01-01T00:00:00.000000000Z ) ``` @@ -41,10 +41,10 @@ Default is piped-forward data (`<-`). import "experimental" from(bucket: "example-bucket") - |> range(start: -12mo) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> window(every: 1mo) - |> experimental.alignTime() + |> range(start: -12mo) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> window(every: 1mo) + |> experimental.alignTime() ``` **Given the following input:** @@ -67,8 +67,8 @@ from(bucket: "example-bucket") ```js //... - |> window(every: 1mo) - |> alignTime(alignTo: 2020-01-01T00:00:00Z) + |> window(every: 1mo) + |> alignTime(alignTo: 2020-01-01T00:00:00Z) ``` **And output:** diff --git a/content/flux/v0.x/stdlib/experimental/array/_index.md b/content/flux/v0.x/stdlib/experimental/array/_index.md new file mode 100644 index 000000000..dc8ae2bb0 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/array/_index.md @@ -0,0 +1,35 @@ +--- +title: Flux experimental array package +list_title: array package +description: > + The Flux experimental `array` package provides functions for operating on Flux arrays. + Import the `experimental/array` package. +aliases: + - /influxdb/v2.0/reference/flux/stdlib/experimental/array/ + - /influxdb/cloud/reference/flux/stdlib/experimental/array/ + - /influxdb/v2.0/reference/flux/stdlib/array/ + - /influxdb/cloud/reference/flux/stdlib/array/ +menu: + flux_0_x_ref: + name: array + identifier: exp-array + parent: experimental +weight: 11 +flux/v0.x/tags: [functions, array, package] +cascade: + related: + - /flux/v0.x/data-types/composite/array/ +introduced: 0.79.0 +--- + +The experimental `array` package provides functions for operating on Flux +[arrays](/flux/v0.x/data-types/composite/array/). +Import the `experimental/array` package: + +```js +import "experimental/array" +``` + +## Functions + +{{< children type="functions" show="pages" >}} diff --git a/content/flux/v0.x/stdlib/experimental/array/concat.md b/content/flux/v0.x/stdlib/experimental/array/concat.md new file mode 100644 index 000000000..728554071 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/array/concat.md @@ -0,0 +1,53 @@ +--- +title: array.concat() function +description: > + `array.concat` appends two arrays and returns a new array. 
+menu: + flux_0_x_ref: + name: array.concat + parent: exp-array +weight: 301 +flux/v0.x/tags: [array] +introduced: 0.155.0 +--- + +`array.concat()` appends two arrays and returns a new array. + +```js +import "experimental/array" + +array.concat( + arr: [1,2], + v: [3,4], +) + +// Returns [1, 2, 3, 4] +``` + +## Parameters + +### arr {data-type="array"} +First array. Default is the piped-forward array (`<-`). + +### v {data-type="array"} +Array to append to the first array. + +{{% note %}} +Neither input array is mutated and a new array is returned. +{{% /note %}} + +## Examples + +### Merge two arrays +```js +import "experimental/array" + +a = [1, 2, 3] +b = [4, 5, 6] + +c = a |> array.concat(v: b) +// Returns [1, 2, 3, 4, 5, 6] + +// Output each value in the array as a row in a table +array.from(rows: c |> array.map(fn: (x) => ({_value: x}))) +``` diff --git a/content/flux/v0.x/stdlib/experimental/array/filter.md b/content/flux/v0.x/stdlib/experimental/array/filter.md new file mode 100644 index 000000000..0f7279f35 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/array/filter.md @@ -0,0 +1,71 @@ +--- +title: array.filter() function +description: > + `array.filter` iterates over an array, evaluates each element with a predicate + function, and then returns a new array with only elements that match the predicate. +menu: + flux_0_x_ref: + name: array.filter + parent: exp-array +weight: 301 +flux/v0.x/tags: [array] +introduced: 0.155.0 +--- + +`array.filter()` iterates over an array, evaluates each element with a predicate +function, and then returns a new array with only elements that match the predicate. + +```js +import "experimental/array" + +array.filter( + arr: [1, 2, 3, 4, 5], + fn: (x) => x >= 3, +) + +// Returns [3, 4, 5] +``` + +## Parameters + +### arr {data-type="array"} +Array to filter. Default is the piped-forward array (`<-`). + +### fn {data-type="function"} +Predicate function to evaluate on each element. +The element is represented by `x` in the predicate function. + +## Examples + +### Filter an array of integers +```js +import "experimental/array" + +a = [1, 2, 3, 4, 5] +b = a |> array.filter(fn: (x) => x >= 3) +// b returns [3, 4, 5] + +// Output the filtered array as a table +array.from(rows: b |> array.map(fn: (x) => ({_value: x}))) +``` + +### Filter an array of records +```js +import "experimental/array" + +a = + [ + {a: 1, b: 2, c: 3}, + {a: 4, b: 5, c: 6}, + {a: 7, b: 8, c: 9} + ] + +b = a |> array.filter(fn: (x) => x.b >= 3) +// b returns [ +// {a: 4, b: 5, c: 6}, +// {a: 7, b: 8, c: 9}, +// ] + +// Output the filtered array as a table +array.from(rows: b) +``` diff --git a/content/flux/v0.x/stdlib/experimental/array/map.md b/content/flux/v0.x/stdlib/experimental/array/map.md new file mode 100644 index 000000000..3ba8363de --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/array/map.md @@ -0,0 +1,70 @@ +--- +title: array.map() function +description: > + `array.map` iterates over an array, applies a function to each element to + produce a new element, and then returns a new array. +menu: + flux_0_x_ref: + name: array.map + parent: exp-array +weight: 301 +flux/v0.x/tags: [array] +introduced: 0.155.0 +--- + +`array.map()` iterates over an array, applies a function to each element to +produce a new element, and then returns a new array. + +```js +import "experimental/array" + +array.map( + arr: [1, 2, 3, 4], + fn: (x) => x * 2, +) + +// Returns [2, 4, 6, 8] +``` + +## Parameters + +### arr {data-type="array"} +Array to operate on. 
Default is the piped-forward array (`<-`).
+
+### fn {data-type="function"}
+Function to apply to elements. The element is represented by `x` in the function.
+
+## Examples
+
+### Convert an array of integers to an array of records
+```js
+import "experimental/array"
+
+a = [1, 2, 3, 4, 5]
+b = a |> array.map(fn: (x) => ({_value: x}))
+// b returns [{_value: 1}, {_value: 2}, {_value: 3}, {_value: 4}, {_value: 5}]
+
+// Output the array of records as a table
+array.from(rows: b)
+```
+
+### Iterate over and modify an array of records
+```js
+a =
+    [
+        {a: 1, b: 2, c: 3},
+        {a: 4, b: 5, c: 6},
+        {a: 7, b: 8, c: 9},
+    ]
+
+b = a |> array.map(fn: (x) => ({x with a: x.a * x.a, d: x.b + x.c}))
+// b returns:
+// [
+//     {a: 1, b: 2, c: 3, d: 5},
+//     {a: 16, b: 5, c: 6, d: 11},
+//     {a: 49, b: 8, c: 9, d: 17}
+// ]
+
+// Output the modified array of records as a table
+array.from(rows: b)
+```
diff --git a/content/flux/v0.x/stdlib/experimental/bigtable/from.md b/content/flux/v0.x/stdlib/experimental/bigtable/from.md
index 89e4c6d04..afc2a217f 100644
--- a/content/flux/v0.x/stdlib/experimental/bigtable/from.md
+++ b/content/flux/v0.x/stdlib/experimental/bigtable/from.md
@@ -21,10 +21,10 @@ data source.
 import "experimental/bigtable"
 
 bigtable.from(
-  token: "mySuPeRseCretTokEn",
-  project: "exampleProjectID",
-  instance: "exampleInstanceID",
-  table: "example-table"
+    token: "mySuPeRseCretTokEn",
+    project: "exampleProjectID",
+    instance: "exampleInstanceID",
+    table: "example-table",
 )
 ```
 
@@ -64,9 +64,9 @@ bigtable_project = secrets.get(key: "BIGTABLE_PROJECT_ID")
 bigtable_instance = secrets.get(key: "BIGTABLE_INSTANCE_ID")
 
 bigtable.from(
-  token: bigtable_token,
-  project: bigtable_project,
-  instance: bigtable_instance,
-  table: "example-table"
+    token: bigtable_token,
+    project: bigtable_project,
+    instance: bigtable_instance,
+    table: "example-table",
 )
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/sand.md b/content/flux/v0.x/stdlib/experimental/bitwise/sand.md
index 81050692d..7e6121878 100644
--- a/content/flux/v0.x/stdlib/experimental/bitwise/sand.md
+++ b/content/flux/v0.x/stdlib/experimental/bitwise/sand.md
@@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise]
 ```js
 import "experimental/bitwise"
 
-bitwise.sand(
-  a: 12,
-  b: 21
-)
+bitwise.sand(a: 12, b: 21)
 
 // Returns 4
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/sclear.md b/content/flux/v0.x/stdlib/experimental/bitwise/sclear.md
index e562f9eda..1042efe35 100644
--- a/content/flux/v0.x/stdlib/experimental/bitwise/sclear.md
+++ b/content/flux/v0.x/stdlib/experimental/bitwise/sclear.md
@@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise]
 ```js
 import "experimental/bitwise"
 
-bitwise.sclear(
-  a: 12,
-  b: 21
-)
+bitwise.sclear(a: 12, b: 21)
 
 // Returns 8
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/slshift.md b/content/flux/v0.x/stdlib/experimental/bitwise/slshift.md
index 95369a9e8..4a525b0a2 100644
--- a/content/flux/v0.x/stdlib/experimental/bitwise/slshift.md
+++ b/content/flux/v0.x/stdlib/experimental/bitwise/slshift.md
@@ -19,10 +19,7 @@ Both `a` and `b` are [integers](/flux/v0.x/data-types/basic/int/).
```js import "experimental/bitwise" -bitwise.slshift( - a: 12, - b: 21 -) +bitwise.slshift(a: 12, b: 21) // Returns 25165824 ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/sor.md b/content/flux/v0.x/stdlib/experimental/bitwise/sor.md index 2c95b3b81..81c9efaf1 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/sor.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/sor.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.sor( - a: 12, - b: 21 -) +bitwise.sor(a: 12, b: 21) // Returns 29 ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/srshift.md b/content/flux/v0.x/stdlib/experimental/bitwise/srshift.md index 3b3eec84b..47d8cdd10 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/srshift.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/srshift.md @@ -19,10 +19,7 @@ Both `a` and `b` are [integers](/flux/v0.x/data-types/basic/int/). ```js import "experimental/bitwise" -bitwise.srshift( - a: 21, - b: 4 -) +bitwise.srshift(a: 21, b: 4) // Returns 1 ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/sxor.md b/content/flux/v0.x/stdlib/experimental/bitwise/sxor.md index 58348918a..9419f65af 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/sxor.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/sxor.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.sxor( - a: 12, - b: 21 -) +bitwise.sxor(a: 12, b: 21) // Returns 25 ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/uand.md b/content/flux/v0.x/stdlib/experimental/bitwise/uand.md index 9490f90e1..21f4f79fa 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/uand.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/uand.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.uand( - a: uint(v: 12), - b: uint(v: 21) -) +bitwise.uand(a: uint(v: 12), b: uint(v: 21)) // Returns 4 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/uclear.md b/content/flux/v0.x/stdlib/experimental/bitwise/uclear.md index 7a84a917f..14d1e7dd3 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/uclear.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/uclear.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.uclear( - a: uint(v: 12), - b: uint(v: 21) -) +bitwise.uclear(a: uint(v: 12), b: uint(v: 21)) // Returns 8 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/ulshift.md b/content/flux/v0.x/stdlib/experimental/bitwise/ulshift.md index 0d447d2c9..8ff36cbcf 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/ulshift.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/ulshift.md @@ -19,10 +19,7 @@ Both `a` and `b` are [unsigned integers](/flux/v0.x/data-types/basic/uint/). 
```js import "experimental/bitwise" -bitwise.ulshift( - a: uint(v: 12), - b: uint(v: 21) -) +bitwise.ulshift(a: uint(v: 12), b: uint(v: 21)) // Returns 25165824 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/uor.md b/content/flux/v0.x/stdlib/experimental/bitwise/uor.md index a2347a303..904de179c 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/uor.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/uor.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.uor( - a: uint(v: 12), - b: uint(v: 21) -) +bitwise.uor(a: uint(v: 12), b: uint(v: 21)) // Returns 29 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/urshift.md b/content/flux/v0.x/stdlib/experimental/bitwise/urshift.md index b751a6cc6..78ff29c0c 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/urshift.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/urshift.md @@ -19,10 +19,7 @@ Both `a` and `b` are [unsigned integers](/flux/v0.x/data-types/basic/uint/). ```js import "experimental/bitwise" -bitwise.urshift( - a: uint(v: 21), - b: uint(v: 4) -) +bitwise.urshift(a: uint(v: 21), b: uint(v: 4)) // Returns 1 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/bitwise/uxor.md b/content/flux/v0.x/stdlib/experimental/bitwise/uxor.md index 7ded84782..e362a4fc0 100644 --- a/content/flux/v0.x/stdlib/experimental/bitwise/uxor.md +++ b/content/flux/v0.x/stdlib/experimental/bitwise/uxor.md @@ -19,10 +19,7 @@ flux/v0.x/tags: [bitwise] ```js import "experimental/bitwise" -bitwise.uxor( - a: uint(v: 12), - b: uint(v: 21) -) +bitwise.uxor(a: uint(v: 12), b: uint(v: 21)) // Returns 25 (uint) ``` diff --git a/content/flux/v0.x/stdlib/experimental/chain.md b/content/flux/v0.x/stdlib/experimental/chain.md index b29c334d7..5c1e4e02d 100644 --- a/content/flux/v0.x/stdlib/experimental/chain.md +++ b/content/flux/v0.x/stdlib/experimental/chain.md @@ -29,8 +29,8 @@ the results of the first query are met. import "experimental" experimental.chain( - first: query1, - second: query2 + first: query1, + second: query2, ) ``` @@ -49,18 +49,15 @@ The second query to execute. import "experimental" downsampled_max = from(bucket: "example-bucket-1") - |> range(start: -1d) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> aggregateWindow(every: 1h, fn: max) - |> to(bucket: "downsample-1h-max", org: "example-org") + |> range(start: -1d) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> aggregateWindow(every: 1h, fn: max) + |> to(bucket: "downsample-1h-max", org: "example-org") average_max = from(bucket: "downsample-1h-max") - |> range(start: -1d) - |> filter(fn: (r) => r.measurement == "example-measurement") - |> mean() + |> range(start: -1d) + |> filter(fn: (r) => r.measurement == "example-measurement") + |> mean() -experimental.chain( - first: downsampled_max, - second: average_max -) +experimental.chain(first: downsampled_max, second: average_max) ``` diff --git a/content/flux/v0.x/stdlib/experimental/count.md b/content/flux/v0.x/stdlib/experimental/count.md index 9b09ffbfa..95563a663 100644 --- a/content/flux/v0.x/stdlib/experimental/count.md +++ b/content/flux/v0.x/stdlib/experimental/count.md @@ -52,6 +52,6 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> experimental.count() + |> range(start: -5m) + |> experimental.count() ``` diff --git a/content/flux/v0.x/stdlib/experimental/csv/from.md b/content/flux/v0.x/stdlib/experimental/csv/from.md index e19d02b3b..65ee82ccf 100644 --- a/content/flux/v0.x/stdlib/experimental/csv/from.md +++ b/content/flux/v0.x/stdlib/experimental/csv/from.md @@ -43,15 +43,5 @@ The URL to retrieve annotated CSV from. import "experimental/csv" csv.from(url: "http://example.com/csv/example.csv") - |> filter(fn: (r) => r._measurement == "example-measurement") -``` - -## Function definition -```js -package csv - -import c "csv" -import "experimental/http" - -from = (url) => c.from(csv: string(v: http.get(url: url).body)) + |> filter(fn: (r) => r._measurement == "example-measurement") ``` diff --git a/content/flux/v0.x/stdlib/experimental/distinct.md b/content/flux/v0.x/stdlib/experimental/distinct.md index 5b9572b82..5d439dfca 100644 --- a/content/flux/v0.x/stdlib/experimental/distinct.md +++ b/content/flux/v0.x/stdlib/experimental/distinct.md @@ -53,7 +53,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.distinct() + |> experimental.distinct() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/fill.md b/content/flux/v0.x/stdlib/experimental/fill.md index c0dc94d2e..b0426f952 100644 --- a/content/flux/v0.x/stdlib/experimental/fill.md +++ b/content/flux/v0.x/stdlib/experimental/fill.md @@ -61,7 +61,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.fill(value: 0.0) + |> experimental.fill(value: 0.0) ``` {{< flex >}} @@ -96,7 +96,7 @@ data import "experimental" data - |> experimental.fill(usePrevious: true) + |> experimental.fill(usePrevious: true) ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/first.md b/content/flux/v0.x/stdlib/experimental/first.md index 6f22eab9d..5a0f420bf 100644 --- a/content/flux/v0.x/stdlib/experimental/first.md +++ b/content/flux/v0.x/stdlib/experimental/first.md @@ -47,7 +47,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.first() + |> experimental.first() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/geo/_index.md b/content/flux/v0.x/stdlib/experimental/geo/_index.md index 56544565d..3bb01fa45 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/_index.md +++ b/content/flux/v0.x/stdlib/experimental/geo/_index.md @@ -88,11 +88,11 @@ to add `s2_cell_id` tags to data that includes fields with latitude and longitud ```js //... 
- |> shapeData( - latField: "latitude", - lonField: "longitude", - level: 10 - ) + |> shapeData( + latField: "latitude", + lonField: "longitude", + level: 10, + ) ``` ## Latitude and longitude values @@ -123,10 +123,10 @@ Define a box-shaped region by specifying a record containing the following prope ##### Example box-shaped region ```js { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 + minLat: 40.51757813, + maxLat: 40.86914063, + minLon: -73.65234375, + maxLon: -72.94921875, } ``` @@ -140,9 +140,9 @@ Define a circular region by specifying a record containing the following propert ##### Example circular region ```js { - lat: 40.69335938, - lon: -73.30078125, - radius: 20.0 + lat: 40.69335938, + lon: -73.30078125, + radius: 20.0, } ``` @@ -155,8 +155,8 @@ Define a point region by specifying a record containing the following properties ##### Example point region ```js { - lat: 40.671659, - lon: -73.936631 + lat: 40.671659, + lon: -73.936631, } ``` @@ -173,11 +173,11 @@ Define a custom polygon region using a record containing the following propertie ##### Example polygonal region ```js { - points: [ - {lat: 40.671659, lon: -73.936631}, - {lat: 40.706543, lon: -73.749177}, - {lat: 40.791333, lon: -73.880327} - ] + points: [ + {lat: 40.671659, lon: -73.936631}, + {lat: 40.706543, lon: -73.749177}, + {lat: 40.791333, lon: -73.880327}, + ], } ``` @@ -195,9 +195,7 @@ Define a geographic linestring path using a record containing the following prop coordinate pairs (`lon lat,`): ```js -{ - linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433" -} +{linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433"} ``` ## Distance units diff --git a/content/flux/v0.x/stdlib/experimental/geo/astracks.md b/content/flux/v0.x/stdlib/experimental/geo/astracks.md index 4e98719b8..d1a8023d6 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/astracks.md +++ b/content/flux/v0.x/stdlib/experimental/geo/astracks.md @@ -22,8 +22,8 @@ The `geo.asTracks()` function groups rows into tracks (sequential, related data import "experimental/geo" geo.asTracks( - groupBy: ["id","tid"], - orderBy: ["_time"] + groupBy: ["id","tid"], + orderBy: ["_time"], ) ``` @@ -46,27 +46,13 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ##### Group tracks in a box-shaped region ```js -import "experimental/geo" -region = { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 -} +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.gridFilter(region: region) - |> geo.toRows(correlationKey: ["_time", "id"]) - |> geo.asTracks() -``` - -## Function definition -```js -asTracks = (tables=<-, groupBy=["id","tid"], orderBy=["_time"]) => - tables - |> group(columns: groupBy) - |> sort(columns: orderBy) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.gridFilter(region: region) + |> geo.toRows(correlationKey: ["_time", "id"]) + |> geo.asTracks() ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/filterrows.md b/content/flux/v0.x/stdlib/experimental/geo/filterrows.md index fcf602ffa..1889b3ccf 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/filterrows.md +++ b/content/flux/v0.x/stdlib/experimental/geo/filterrows.md @@ -28,13 +28,13 @@ and 
[`geo.strictFilter()`](/flux/v0.x/stdlib/experimental/geo/strictfilter/). import "experimental/geo" geo.filterRows( - region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0}, - minSize: 24, - maxSize: -1, - level: -1, - s2cellIDLevel: -1, - correlationKey: ["_time"], - strict: true + region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0}, + minSize: 24, + maxSize: -1, + level: -1, + s2cellIDLevel: -1, + correlationKey: ["_time"], + strict: true, ) ``` @@ -48,7 +48,7 @@ To add `s2_cell_id` to the group key, use [`experimental.group`](/flux/v0.x/stdl import "experimental" // ... - |> experimental.group(columns: ["s2_cell_id"], mode: "extend") + |> experimental.group(columns: ["s2_cell_id"], mode: "extend") ``` {{% /note %}} @@ -123,21 +123,20 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ## Examples +- [Strictly filter data in a box-shaped region](#strictly-filter-data-in-a-box-shaped-region) +- [Approximately filter data in a circular region](#approximately-filter-data-in-a-circular-region) +- [Filter data in a polygonal region](#filter-data-in-a-polygonal-region) + ##### Strictly filter data in a box-shaped region ```js import "experimental/geo" +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.filterRows( - region: { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.filterRows(region: region) ``` ##### Approximately filter data in a circular region @@ -147,78 +146,24 @@ covered by the defined region even though some points my be located outside of t ```js import "experimental/geo" +region = {lat: 40.69335938, lon: -73.30078125, radius: 20.0} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.filterRows( - region: { - lat: 40.69335938, - lon: -73.30078125, - radius: 20.0 - } - strict: false - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.filterRows(region: region, strict: false) ``` ##### Filter data in a polygonal region ```js import "experimental/geo" -from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.filterRows( - region: { - points: [ - {lat: 40.671659, lon: -73.936631}, - {lat: 40.706543, lon: -73.749177}, - {lat: 40.791333, lon: -73.880327} - ] - } - ) -``` - -## Function definition -{{% truncate %}} -```js -filterRows = ( - tables=<-, - region, - minSize=24, - maxSize=-1, - level=-1, - s2cellIDLevel=-1, - strict=true -) => { - _columns = - |> columns(column: "_value") - |> tableFind(fn: (key) => true ) - |> getColumn(column: "_value") - _rows = - if contains(value: "lat", set: _columns) then - tables - |> gridFilter( - region: region, - minSize: minSize, - maxSize: maxSize, - level: level, - s2cellIDLevel: s2cellIDLevel) - else - tables - |> gridFilter( - region: region, - minSize: minSize, - maxSize: maxSize, - level: level, - s2cellIDLevel: s2cellIDLevel) - |> toRows() - _result = - if strict then - _rows - |> strictFilter(region) - else - _rows - return _result +region = { + points: [{lat: 40.671659, lon: -73.936631}, {lat: 40.706543, lon: -73.749177}, {lat: 40.791333, lon: -73.880327}], } + +from(bucket: 
"example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.filterRows(region: region) ``` -{{% /truncate %}} diff --git a/content/flux/v0.x/stdlib/experimental/geo/gridfilter.md b/content/flux/v0.x/stdlib/experimental/geo/gridfilter.md index f683072b7..8722458c0 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/gridfilter.md +++ b/content/flux/v0.x/stdlib/experimental/geo/gridfilter.md @@ -34,11 +34,11 @@ _See [Non-strict and strict filtering](#non-strict-and-strict-filtering) below._ import "experimental/geo" geo.gridFilter( - region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0} - minSize: 24, - maxSize: -1, - level: -1, - s2cellIDLevel: -1 + region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0} + minSize: 24, + maxSize: -1, + level: -1, + s2cellIDLevel: -1, ) ``` @@ -52,7 +52,7 @@ To add `s2_cell_id` to the group key, use [`experimental.group`](/flux/v0.x/stdl import "experimental" // ... - |> experimental.group(columns: ["s2_cell_id"], mode: "extend") + |> experimental.group(columns: ["s2_cell_id"], mode: "extend") ``` {{% /note %}} @@ -116,53 +116,44 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ## Examples +- [Filter data in a box-shaped region](#filter-data-in-a-box-shaped-region) +- [Filter data in a circular region](#filter-data-in-a-circular-region) +- [Filter data in a custom polygon region](#filter-data-in-a-custom-polygon-region) + ##### Filter data in a box-shaped region ```js import "experimental/geo" +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.gridFilter( - region: { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.gridFilter(region: region) ``` ##### Filter data in a circular region ```js import "experimental/geo" +region = {lat: 40.69335938, lon: -73.30078125, radius: 20.0} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.gridFilter( - region: { - lat: 40.69335938, - lon: -73.30078125, - radius: 20.0 - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.gridFilter(region: region) ``` ##### Filter data in a custom polygon region ```js import "experimental/geo" +region = { + points: [{lat: 40.671659, lon: -73.936631}, {lat: 40.706543, lon: -73.749177}, {lat: 40.791333, lon: -73.880327}], +} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.gridFilter( - region: { - points: [ - {lat: 40.671659, lon: -73.936631}, - {lat: 40.706543, lon: -73.749177}, - {lat: 40.791333, lon: -73.880327} - ] - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.gridFilter(region: region) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/groupbyarea.md b/content/flux/v0.x/stdlib/experimental/geo/groupbyarea.md index f1c7ee22a..1d94fc576 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/groupbyarea.md +++ b/content/flux/v0.x/stdlib/experimental/geo/groupbyarea.md @@ -25,9 +25,9 @@ Results are grouped by `newColumn`. 
import "experimental/geo" geo.groupByArea( - newColumn: "geoArea", - level: 3, - s2cellIDLevel: -1 + newColumn: "geoArea", + level: 3, + s2cellIDLevel: -1, ) ``` @@ -56,17 +56,12 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ```js import "experimental/geo" -region = { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 -} +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.gridFilter(region: region) - |> geo.toRows() - |> geo.groupByArea(newColumn: "geoArea", level: 3) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.gridFilter(region: region) + |> geo.toRows() + |> geo.groupByArea(newColumn: "geoArea", level: 3) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/s2cellidtoken.md b/content/flux/v0.x/stdlib/experimental/geo/s2cellidtoken.md index 1f94b5b78..ec0631b2c 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/s2cellidtoken.md +++ b/content/flux/v0.x/stdlib/experimental/geo/s2cellidtoken.md @@ -22,8 +22,8 @@ The `geo.s2CellIDToken()` function returns an S2 cell ID token. import "experimental/geo" geo.s2CellIDToken( - point: {lat: 37.7858229, lon: -122.4058124}, - level: 10 + point: {lat: 37.7858229, lon: -122.4058124}, + level: 10, ) ``` @@ -53,15 +53,9 @@ when generating the S2 cell ID token. import "experimental/geo" from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> map(fn: (r) => ({ - r with - s2_cell_id: geo.s2CellIDToken( - point: {lat: r.lat, lon: r.lon}, - level: 10 - )}) - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> map(fn: (r) => ({r with s2_cell_id: geo.s2CellIDToken(point: {lat: r.lat, lon: r.lon}, level: 10)})) ``` ##### Update S2 cell ID token level @@ -69,13 +63,7 @@ from(bucket: "example-bucket") import "experimental/geo" from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> map(fn: (r) => ({ - r with - s2_cell_id: geo.s2CellIDToken( - token: r.s2_cell_id, - level: 10 - )}) - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> map(fn: (r) => ({r with s2_cell_id: geo.s2CellIDToken(token: r.s2_cell_id, level: 10)})) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/s2celllatlon.md b/content/flux/v0.x/stdlib/experimental/geo/s2celllatlon.md index 3f76c25b2..2a982491b 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/s2celllatlon.md +++ b/content/flux/v0.x/stdlib/experimental/geo/s2celllatlon.md @@ -23,9 +23,7 @@ center of an S2 cell. 
```js import "experimental/geo" -geo.s2CellLatLon( - token: "89c284" -) +geo.s2CellLatLon(token: "89c284") // Returns {lat: 40.812535546624574, lon: -73.55941282728273} ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/shapedata.md b/content/flux/v0.x/stdlib/experimental/geo/shapedata.md index 9a1e23760..c503d633d 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/shapedata.md +++ b/content/flux/v0.x/stdlib/experimental/geo/shapedata.md @@ -32,9 +32,9 @@ Use `geo.shapeData()` to ensure geo-temporal data meets the import "experimental/geo" geo.shapeData( - latField: "latitude", - lonField: "longitude", - level: 10 + latField: "latitude", + lonField: "longitude", + level: 10, ) ``` @@ -63,13 +63,9 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "experimental/geo" from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.shapeData( - latField: "latitude", - lonField: "longitude", - level: 10 - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.shapeData(latField: "latitude", lonField: "longitude", level: 10) ``` ### geo.shapeData input and output @@ -92,14 +88,15 @@ from(bucket: "example-bucket") {{% /flex-content %}} {{% flex-content %}} -**The following function would output:** +**The following would output:** ```js -|> geo.shapeData( - latField: "latitude", - lonField: "longitude", - level: 5 -) +data + |> geo.shapeData( + latField: "latitude", + lonField: "longitude", + level: 5, + ) ``` | _time | lat | lon | s2_cell_id | diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_contains.md b/content/flux/v0.x/stdlib/experimental/geo/st_contains.md index 1602b1934..b7260e593 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/st_contains.md +++ b/content/flux/v0.x/stdlib/experimental/geo/st_contains.md @@ -24,8 +24,8 @@ geographic information system (GIS) geometry and returns `true` or `false`. 
import "experimental/geo" geo.ST_Contains( - region: {lat: 40.7, lon: -73.3, radius: 20.0}, - geometry: {lon: 39.7515, lat: 15.08433} + region: {lat: 40.7, lon: -73.3, radius: 20.0}, + geometry: {lon: 39.7515, lat: 15.08433}, ) // Returns false @@ -47,38 +47,24 @@ _See [GIS geometry definitions](/flux/v0.x/stdlib/experimental/geo/#gis-geometry ##### Test if geographic points are inside of a region ```js -import "experimental/geo" +iimport "experimental/geo" -region = { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 -} +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} data - |> geo.toRows() - |> map(fn: (r) => ({ - r with st_contains: geo.ST_Contains(region: region, geometry: {lat: r.lat, lon: r.lon}) - })) + |> geo.toRows() + |> map(fn: (r) => ({r with st_contains: geo.ST_Contains(region: region, geometry: {lat: r.lat, lon: r.lon})})) ``` ##### Test if tracks are inside of a region ```js import "experimental/geo" -region = { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 -} +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} data - |> geo.toRows() - |> geo.asTracks() - |> geo.ST_LineString() - |> map(fn: (r) => ({ - r with st_contains: geo.ST_Contains(region: region, geometry: {linestring: r.st_linestring}) - })) + |> geo.toRows() + |> geo.asTracks() + |> geo.ST_LineString() + |> map(fn: (r) => ({r with st_contains: geo.ST_Contains(region: region, geometry: {linestring: r.st_linestring})})) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_distance.md b/content/flux/v0.x/stdlib/experimental/geo/st_distance.md index 5aeaaf99d..5cc637306 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/st_distance.md +++ b/content/flux/v0.x/stdlib/experimental/geo/st_distance.md @@ -25,8 +25,8 @@ Define distance units with the [`geo.units` option](/flux/v0.x/stdlib/experiment import "experimental/geo" geo.ST_Distance( - region: {lat: 40.7, lon: -73.3, radius: 20.0}, - geometry: {lon: 39.7515, lat: 15.08433} + region: {lat: 40.7, lon: -73.3, radius: 20.0}, + geometry: {lon: 39.7515, lat: 15.08433}, ) // Returns 10734.184618677662 (km) @@ -50,18 +50,11 @@ _See [GIS geometry definitions](/flux/v0.x/stdlib/experimental/geo/#gis-geometry ```js import "experimental/geo" -region = { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 -} +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} data - |> geo.toRows() - |> map(fn: (r) => ({ - r with st_distance: ST_Distance(region: region, geometry: {lat: r.lat, lon: r.lon}) - })) + |> geo.toRows() + |> map(fn: (r) => ({r with st_distance: ST_Distance(region: region, geometry: {lat: r.lat, lon: r.lon})})) ``` ##### Find the point nearest to a geographic location @@ -71,9 +64,7 @@ import "experimental/geo" fixedLocation = {lat: 40.7, lon: -73.3} data - |> geo.toRows() - |> map(fn: (r) => ({ r with - _value: geo.ST_Distance(region: {lat: r.lat, lon: r.lon}, geometry: fixedLocation) - })) - |> min() + |> geo.toRows() + |> map(fn: (r) => ({r with _value: geo.ST_Distance(region: {lat: r.lat, lon: r.lon}, geometry: fixedLocation)})) + |> min() ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_dwithin.md b/content/flux/v0.x/stdlib/experimental/geo/st_dwithin.md index c7827a7c3..26ae6ff5c 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/st_dwithin.md +++ 
b/content/flux/v0.x/stdlib/experimental/geo/st_dwithin.md
@@ -26,9 +26,9 @@ returns `true` or `false`.
 import "experimental/geo"
 
 geo.ST_DWithin(
-  region: {lat: 40.7, lon: -73.3, radius: 20.0},
-  geometry: {lon: 39.7515, lat: 15.08433},
-  distance: 1000.0
+    region: {lat: 40.7, lon: -73.3, radius: 20.0},
+    geometry: {lon: 39.7515, lat: 15.08433},
+    distance: 1000.0,
 )
 
 // Returns false
 
@@ -56,16 +56,12 @@ _Define distance units with the [`geo.units` option](/flux/v0.x/stdlib/experimen
 ```js
 import "experimental/geo"
 
-region = {
-  minLat: 40.51757813,
-  maxLat: 40.86914063,
-  minLon: -73.65234375,
-  maxLon: -72.94921875
-}
+region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875}
 
 data
-  |> geo.toRows()
-  |> map(fn: (r) => ({
-    r with st_within: geo.ST_DWithin(region: box, geometry: {lat: r.lat, lon: r.lon}, distance: 15.0)
-  }))
+    |> geo.toRows()
+    |> map(
+        fn: (r) =>
+            ({r with st_within: geo.ST_DWithin(region: region, geometry: {lat: r.lat, lon: r.lon}, distance: 15.0)}),
+    )
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_intersects.md b/content/flux/v0.x/stdlib/experimental/geo/st_intersects.md
index 06faa952b..e3478eb54 100644
--- a/content/flux/v0.x/stdlib/experimental/geo/st_intersects.md
+++ b/content/flux/v0.x/stdlib/experimental/geo/st_intersects.md
@@ -24,8 +24,8 @@ system (GIS) geometry intersects with the specified region and returns `true` or
 import "experimental/geo"
 
 geo.ST_Intersects(
-  region: {lat: 40.7, lon: -73.3, radius: 20.0},
-  geometry: {linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433"}
+    region: {lat: 40.7, lon: -73.3, radius: 20.0},
+    geometry: {linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433"},
 )
 
 // Returns false
 
@@ -49,16 +49,9 @@ _See [GIS geometry definitions](/flux/v0.x/stdlib/experimental/geo/#gis-geometry
 ```js
 import "experimental/geo"
 
-region = {
-  minLat: 40.51757813,
-  maxLat: 40.86914063,
-  minLon: -73.65234375,
-  maxLon: -72.94921875
-}
+region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875}
 
 data
-  |> geo.toRows()
-  |> map(fn: (r) => ({
-    r with st_within: geo.ST_Intersects(region: box, geometry: {lat: r.lat, lon: r.lon})
-  }))
+    |> geo.toRows()
+    |> map(fn: (r) => ({r with st_within: geo.ST_Intersects(region: region, geometry: {lat: r.lat, lon: r.lon})}))
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_length.md b/content/flux/v0.x/stdlib/experimental/geo/st_length.md
index f22288091..4af99385d 100644
--- a/content/flux/v0.x/stdlib/experimental/geo/st_length.md
+++ b/content/flux/v0.x/stdlib/experimental/geo/st_length.md
@@ -25,7 +25,7 @@ Define distance units with the [`geo.units` option](/flux/v0.x/stdlib/experiment
 import "experimental/geo"
 
 geo.ST_Length(
-  geometry: {linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433"}
+    geometry: {linestring: "39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433"},
 )
 
 // Returns 346.1023974652474 (km)
 
@@ -45,18 +45,11 @@ _See [GIS geometry definitions](/flux/v0.x/stdlib/experimental/geo/#gis-geometry
 ```js
 import "experimental/geo"
 
-region = {
-  minLat: 40.51757813,
-  maxLat: 40.86914063,
-  minLon: -73.65234375,
-  maxLon: -72.94921875
-}
+region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875}
 
 data
-  |> geo.toRows()
-  |> geo.asTracks()
-  |> geo.ST_LineString()
-  |> map(fn: (r) => ({
-    r with st_length: geo.ST_Length(geometry: {linestring: r.st_linestring})
-  }))
+    |> geo.toRows()
+    |> geo.asTracks()
+    |> geo.ST_LineString()
+    |> 
map(fn: (r) => ({r with st_length: geo.ST_Length(geometry: {linestring: r.st_linestring})})) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/st_linestring.md b/content/flux/v0.x/stdlib/experimental/geo/st_linestring.md index 95ddd20a3..7d12dd928 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/st_linestring.md +++ b/content/flux/v0.x/stdlib/experimental/geo/st_linestring.md @@ -47,7 +47,7 @@ geo.ST_LineString() import "experimental/geo" data - |> geo.ST_LineString() + |> geo.ST_LineString() ``` ##### Output data @@ -55,18 +55,3 @@ data | id | st_linestring | |:-- |:------------- | | a213b | 39.7515 14.01433, 38.3527 13.9228, 36.9978 15.08433 | - -## Function definition -```js -ST_LineString = (tables=<-) => - tables - |> reduce(fn: (r, accumulator) => ({ - __linestring: accumulator.__linestring + (if accumulator.__count > 0 then ", " else "") + string(v: r.lat) + " " + string(v: r.lon), - __count: accumulator.__count + 1 - }), identity: { - __linestring: "", - __count: 0 - } - ) - |> rename(columns: {__linestring: "st_linestring"}) -``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/strictfilter.md b/content/flux/v0.x/stdlib/experimental/geo/strictfilter.md index 945804f06..e39482870 100644 --- a/content/flux/v0.x/stdlib/experimental/geo/strictfilter.md +++ b/content/flux/v0.x/stdlib/experimental/geo/strictfilter.md @@ -28,7 +28,7 @@ _See [Strict and non-strict filtering](#strict-and-non-strict-filtering) below._ import "experimental/geo" geo.strictFilter( - region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0} + region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0}, ) ``` @@ -77,52 +77,39 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ```js import "experimental/geo" +region = {minLat: 40.51757813, maxLat: 40.86914063, minLon: -73.65234375, maxLon: -72.94921875} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.toRows() - |> geo.strictFilter( - region: { - minLat: 40.51757813, - maxLat: 40.86914063, - minLon: -73.65234375, - maxLon: -72.94921875 - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.toRows() + |> geo.strictFilter(region: region) ``` ##### Filter data in a circular region ```js import "experimental/geo" +region = {lat: 40.69335938, lon: -73.30078125, radius: 20.0} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.toRows() - |> geo.strictFilter( - region: { - lat: 40.69335938, - lon: -73.30078125, - radius: 20.0 - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.toRows() + |> geo.strictFilter(region: region) ``` ##### Filter data in a custom polygon region ```js import "experimental/geo" +region = { + points: [{lat: 40.671659, lon: -73.936631}, {lat: 40.706543, lon: -73.749177}, {lat: 40.791333, lon: -73.880327}], +} + from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> geo.toRows() - |> geo.strictFilter( - region: { - points: [ - {lat: 40.671659, lon: -73.936631}, - {lat: 40.706543, lon: -73.749177}, - {lat: 40.791333, lon: -73.880327} - ] - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement") + |> geo.toRows() + |> geo.strictFilter(region: region) ``` diff --git a/content/flux/v0.x/stdlib/experimental/geo/torows.md 
b/content/flux/v0.x/stdlib/experimental/geo/torows.md
index 80071dbda..2cdd5653d 100644
--- a/content/flux/v0.x/stdlib/experimental/geo/torows.md
+++ b/content/flux/v0.x/stdlib/experimental/geo/torows.md
@@ -38,14 +38,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi
 import "experimental/geo"
 
 from(bucket: "example-bucket")
-  |> range(start: -1h)
-  |> filter(fn: (r) => r._measurement == "example-measurement")
-  |> geo.toRows()
-```
-
-## Function definition
-```js
-toRows = (tables=<-) =>
-  tables
-    |> v1.fieldsAsCols()
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> geo.toRows()
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/group.md b/content/flux/v0.x/stdlib/experimental/group.md
index 0a1d51f75..655b772b6 100644
--- a/content/flux/v0.x/stdlib/experimental/group.md
+++ b/content/flux/v0.x/stdlib/experimental/group.md
@@ -58,6 +58,6 @@ Default is piped-forward data (`<-`).
 import "experimental"
 
 from(bucket: "example-bucket")
-  |> range(start: -1m)
-  |> experimental.group(columns: ["_value"], mode: "extend")
+    |> range(start: -1m)
+    |> experimental.group(columns: ["_value"], mode: "extend")
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/histogram.md b/content/flux/v0.x/stdlib/experimental/histogram.md
index fc6e7f7fd..e687de929 100644
--- a/content/flux/v0.x/stdlib/experimental/histogram.md
+++ b/content/flux/v0.x/stdlib/experimental/histogram.md
@@ -28,8 +28,8 @@ Bin counts are cumulative.
 import "experimental"
 
 experimental.histogram(
-  bins: [50.0, 75.0, 90.0],
-  normalize: false
+    bins: [50.0, 75.0, 90.0],
+    normalize: false,
 )
 ```
 
@@ -76,9 +76,7 @@ Default is piped-forward data (`<-`).
 import "experimental"
 
 data
-  |> experimental.histogram(
-    bins: linearBins(start:0.0, width:20.0, count:5)
-  )
+    |> experimental.histogram(bins: linearBins(start: 0.0, width: 20.0, count: 5))
 ```
 
 ##### Input data
diff --git a/content/flux/v0.x/stdlib/experimental/histogramquantile.md b/content/flux/v0.x/stdlib/experimental/histogramquantile.md
index 7b2262b13..a5f1b07ea 100644
--- a/content/flux/v0.x/stdlib/experimental/histogramquantile.md
+++ b/content/flux/v0.x/stdlib/experimental/histogramquantile.md
@@ -41,8 +41,8 @@ The function returns the value of the specified quantile from the histogram in t
 import "experimental"
 
 experimental.histogramQuantile(
-  quantile: 0.5,
-  minValue: 0.0
+    quantile: 0.5,
+    minValue: 0.0,
 )
 ```
 
@@ -76,10 +76,7 @@ Default is piped-forward data (`<-`).
 import "experimental"
 
 from(bucket: "example-bucket")
-  |> range(start: -1d)
-  |> filter(fn: (r) =>
-    r._meausrement == "example-measurement" and
-    r._field == "example-field"
-  )
-  |> experimental.histogramQuantile(quantile: 0.9)
+    |> range(start: -1d)
+    |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field")
+    |> experimental.histogramQuantile(quantile: 0.9)
 ```
diff --git a/content/flux/v0.x/stdlib/experimental/http/get.md b/content/flux/v0.x/stdlib/experimental/http/get.md
index ea4c94cab..aec5d314d 100644
--- a/content/flux/v0.x/stdlib/experimental/http/get.md
+++ b/content/flux/v0.x/stdlib/experimental/http/get.md
@@ -21,9 +21,9 @@ returns the HTTP status code, response body, and response headers.
import "experimental/http" http.get( - url: "http://localhost:8086/", - headers: {x:"a", y:"b", z:"c"}, - timeout: 30s + url: "http://localhost:8086/", + headers: {x:"a", y:"b", z:"c"}, + timeout: 30s ) ``` @@ -66,10 +66,7 @@ import "csv" token = secrets.get(key: "READONLY_TOKEN") -response = http.get( - url: "http://localhost:8086/health", - headers: {Authorization: "Token ${token}"} - ) +response = http.get(url: "http://localhost:8086/health", headers: {Authorization: "Token ${token}"}) httpStatus = response.statusCode responseBody = string(v: response.body) @@ -87,12 +84,15 @@ csvData = "#datatype,string,long,string ,result,table,column ,,0,* " + csv.from(csv: csvData) - |> map(fn: (r) => ({ - httpStatus: httpStatus, - responseBody: responseBody, - date: date, - contentLenth: contentLenth, - contentType: contentType, - })) + |> map( + fn: (r) => ({ + httpStatus: httpStatus, + responseBody: responseBody, + date: date, + contentLenth: contentLenth, + contentType: contentType, + }), + ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/http/requests/_index.md b/content/flux/v0.x/stdlib/experimental/http/requests/_index.md new file mode 100644 index 000000000..cf8965d70 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/http/requests/_index.md @@ -0,0 +1,91 @@ +--- +title: Flux experimental http requests package +list_title: requests package +description: > + The Flux experimental HTTP requests package provides functions for transferring data + using HTTP protocol. + Import the `experimental/http/requests` package. +menu: + flux_0_x_ref: + name: requests + parent: http-exp +weight: 301 +flux/v0.x/tags: [functions, http, package] +introduced: 0.152.0 +--- + +The Flux experimental HTTP requests package provides functions for transferring data +using HTTP protocol. +Import the `experimental/http/requests` package: + +```js +import "experimental/http/requests" +``` + +## Options +The `experimental/http/requests` package includes the following options: + +```js +import "experimental/http/requests" + +option requests.defaultConfig = { + insecureSkipVerify: false, + timeout: 0ns, +} +``` + +### defaultConfig +Global default for all HTTP requests using the `experimental/http/requests` package. +Changing this option affects all other packages using the `experimental/http/requests` package. +To change configuration options for a single request, pass a new configuration +record directly into the corresponding function. + +The `requests.defaultConfig` record contains the following properties: + +- **insecureSkipVerify**: Skip TLS verification _(boolean)_. Default is `false`. +- **timeout**: HTTP request timeout _(duration)_. Default is `0ns` (no timeout). + +_See examples [below](#examples)._ + +## Functions + +{{< children type="functions" show="pages" >}} + +## Examples + +### Change HTTP configuration options globally +Modify the `requests.defaultConfig` option to change all consumers of the +`experimental/http/requests` package. + +```js +import "experimental/http/requests" + +option requests.defaultConfig = { + // Set a default timeout of 5s for all requests + timeout: 0ns, + insecureSkipVerify: true, +} +``` + +### Change configuration for a single request +To change the configuration for a single request, extending the default +configuration with only the configuration values you need to customize. + +```js +import "experimental/http/requests" + +// NOTE: Flux syntax does not yet let you specify anything but an identifier as +// the record to extend. 
As a workaround, this example rebinds the default
+// configuration to a new name, "config".
+// See https://github.com/influxdata/flux/issues/3655
+defaultConfig = requests.defaultConfig
+config = {defaultConfig with
+    // Change the timeout to 60s for this request
+    // NOTE: We don't have to specify any other properties of the config because we're
+    // extending the default.
+    timeout: 60s,
+}
+response = requests.get(url: "http://example.com", config: config)
+
+requests.peek(response: response)
+```
\ No newline at end of file
diff --git a/content/flux/v0.x/stdlib/experimental/http/requests/do.md b/content/flux/v0.x/stdlib/experimental/http/requests/do.md
new file mode 100644
index 000000000..c4ff02280
--- /dev/null
+++ b/content/flux/v0.x/stdlib/experimental/http/requests/do.md
@@ -0,0 +1,163 @@
+---
+title: requests.do() function
+description: >
+  `requests.do()` makes an HTTP request using the specified request method.
+menu:
+  flux_0_x_ref:
+    name: requests.do
+    parent: requests
+weight: 401
+flux/v0.x/tags: [http, inputs, outputs]
+introduced: 0.152.0
+---
+
+`requests.do()` makes an HTTP request using the specified request method.
+
+```js
+import "experimental/http/requests"
+
+requests.do(
+    method: "GET",
+    url: "http://example.com",
+    params: ["example-param": ["example-param-value"]],
+    headers: ["Example-Header": "example-header-value"],
+    body: bytes(v: ""),
+    config: requests.defaultConfig,
+)
+```
+
+`requests.do()` returns a record with the following properties:
+
+- **statusCode**: HTTP status code of the request _(as an [integer](/flux/v0.x/data-types/basic/int/))_.
+- **body**: Response body _(as [bytes](/flux/v0.x/data-types/basic/bytes/))_.
+  A maximum size of 100MB is read from the response body.
+- **headers**: Response headers _(as a [dictionary](/flux/v0.x/data-types/composite/dict/))_.
+- **duration**: Request duration _(as a [duration](/flux/v0.x/data-types/basic/duration/))_.
+
+## Parameters
+
+### method {data-type="string"}
+HTTP request method.
+
+**Supported methods**:
+- DELETE
+- GET
+- HEAD
+- PATCH
+- POST
+- PUT
+
+### url {data-type="string"}
+URL to send the request to.
+
+{{% note %}}
+The URL should not include any query parameters.
+Use [`params`](#params) to specify query parameters.
+{{% /note %}}
+
+### params {data-type="dict"}
+Set of key-value pairs to add to the URL as query parameters.
+Query parameters are URL-encoded.
+All values for a key are appended to the query.
+
+### headers {data-type="dict"}
+Set of key-value pairs to include as request headers.
+
+### body {data-type="bytes"}
+Data to send with the request.
+
+### config {data-type="record"}
+Set of request configuration options.
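+
+For example, the following minimal sketch overrides only the request timeout
+for a single call (the URL and the `30s` timeout are placeholders):
+
+```js
+import "experimental/http/requests"
+
+// Extend the default configuration; only the timeout changes.
+defaultConfig = requests.defaultConfig
+config = {defaultConfig with timeout: 30s}
+
+requests.do(method: "GET", url: "http://example.com", config: config)
+```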
+_See [HTTP configuration option examples](/flux/v0.x/stdlib/experimental/http/requests/#examples)._
+
+## Examples
+
+- [Make a GET request](#make-a-get-request)
+- [Make a GET request with authorization](#make-a-get-request-with-authorization)
+- [Make a GET request with query parameters](#make-a-get-request-with-query-parameters)
+- [Make a GET request and decode the JSON response](#make-a-get-request-and-decode-the-json-response)
+- [Make a POST request with a JSON body](#make-a-post-request-with-a-json-body)
+- [Output HTTP response data in a table](#output-http-response-data-in-a-table)
+
+### Make a GET request
+```js
+import "experimental/http/requests"
+
+requests.do(url: "http://example.com", method: "GET")
+```
+
+### Make a GET request with authorization
+```js
+import "experimental/http/requests"
+import "influxdata/influxdb/secrets"
+
+token = secrets.get(key: "TOKEN")
+
+requests.do(
+    method: "GET",
+    url: "http://example.com",
+    headers: ["Authorization": "Token ${token}"],
+)
+```
+
+### Make a GET request with query parameters
+```js
+import "experimental/http/requests"
+
+requests.do(method: "GET", url: "http://example.com", params: ["start": ["100"]])
+```
+
+### Make a GET request and decode the JSON response
+To decode a JSON response, import the [`experimental/json` package](/flux/v0.x/stdlib/experimental/json/)
+and use [`json.parse()`](/flux/v0.x/stdlib/experimental/json/parse/) to parse
+the response into a [Flux type](/flux/v0.x/data-types/).
+
+```js
+import "experimental/http/requests"
+import "experimental/json"
+import "array"
+
+response = requests.do(method: "GET", url: "https://api.agify.io", params: ["name": ["nathaniel"]])
+
+// api.agify.io returns JSON with the form
+//
+// {
+//     name: string,
+//     age: number,
+//     count: number,
+// }
+//
+// Define a data variable that parses the JSON response body into a Flux record.
+data = json.parse(data: response.body)
+
+// Use array.from() to construct a table with one row containing our response data.
+array.from(rows: [{name: data.name, age: data.age, count: data.count}])
+```
+
+### Make a POST request with a JSON body
+Use [`json.encode()`](/flux/v0.x/stdlib/json/encode/) to encode a Flux record as
+a JSON object.
+
+```js
+import "experimental/http/requests"
+import "json"
+
+requests.do(
+    method: "POST",
+    url: "https://goolnk.com/api/v1/shorten",
+    body: json.encode(v: {url: "http://www.influxdata.com"}),
+    headers: ["Content-Type": "application/json"],
+)
+```
+
+### Output HTTP response data in a table
+To quickly inspect HTTP response data, use [`requests.peek()`](/flux/v0.x/stdlib/experimental/http/requests/peek/)
+to output HTTP response data in a table.
+
+```js
+import "experimental/http/requests"
+
+response = requests.do(method: "GET", url: "http://example.com")
+
+requests.peek(response: response)
+```
\ No newline at end of file
diff --git a/content/flux/v0.x/stdlib/experimental/http/requests/get.md b/content/flux/v0.x/stdlib/experimental/http/requests/get.md
new file mode 100644
index 000000000..db7fef574
--- /dev/null
+++ b/content/flux/v0.x/stdlib/experimental/http/requests/get.md
@@ -0,0 +1,120 @@
+---
+title: requests.get() function
+description: >
+  `requests.get()` makes an HTTP request using the GET request method.
+menu:
+  flux_0_x_ref:
+    name: requests.get
+    parent: requests
+weight: 401
+flux/v0.x/tags: [http, inputs, outputs]
+introduced: 0.152.0
+---
+
+`requests.get()` makes an HTTP request using the GET request method.
+
+```js
+import "experimental/http/requests"
+
+requests.get(
+    url: "http://example.com",
+    params: ["example-param": ["example-param-value"]],
+    headers: ["Example-Header": "example-header-value"],
+    body: bytes(v: ""),
+    config: requests.defaultConfig,
+)
+```
+
+`requests.get()` returns a record with the following properties:
+
+- **statusCode**: HTTP status code of the request.
+- **body**: Response body. A maximum size of 100MB is read from the response body.
+- **headers**: Response headers.
+
+## Parameters
+
+### url {data-type="string"}
+URL to send the request to.
+
+{{% note %}}
+The URL should not include any query parameters.
+Use [`params`](#params) to specify query parameters.
+{{% /note %}}
+
+### params {data-type="dict"}
+Set of key-value pairs to add to the URL as query parameters.
+Query parameters are URL-encoded.
+All values for a key are appended to the query.
+
+### headers {data-type="dict"}
+Set of key-value pairs to include as request headers.
+
+### body {data-type="bytes"}
+Data to send with the request.
+
+### config {data-type="record"}
+Set of request configuration options.
+_See [HTTP configuration option examples](/flux/v0.x/stdlib/experimental/http/requests/#examples)._
+
+## Examples
+
+- [Make a GET request](#make-a-get-request)
+- [Make a GET request with authorization](#make-a-get-request-with-authorization)
+- [Make a GET request and decode the JSON response](#make-a-get-request-and-decode-the-json-response)
+- [Output HTTP response data in a table](#output-http-response-data-in-a-table)
+
+### Make a GET request
+```js
+import "experimental/http/requests"
+
+requests.get(url: "http://example.com")
+```
+
+### Make a GET request with authorization
+```js
+import "experimental/http/requests"
+import "influxdata/influxdb/secrets"
+
+token = secrets.get(key: "TOKEN")
+
+requests.get(url: "http://example.com", headers: ["Authorization": "Bearer ${token}"])
+```
+
+### Make a GET request and decode the JSON response
+To decode a JSON response, import the [`experimental/json` package](/flux/v0.x/stdlib/experimental/json/)
+and use [`json.parse()`](/flux/v0.x/stdlib/experimental/json/parse/) to parse
+the response into a [Flux type](/flux/v0.x/data-types/).
+
+```js
+import "experimental/http/requests"
+import "experimental/json"
+import "array"
+
+response = requests.get(url: "https://api.agify.io", params: ["name": ["john"]])
+
+// api.agify.io returns JSON with the form
+//
+// {
+//     name: string,
+//     age: number,
+//     count: number,
+// }
+//
+// Define a data variable that parses the JSON response body into a Flux record.
+data = json.parse(data: response.body)
+
+// Use array.from() to construct a table with one row containing our response data.
+array.from(rows: [{name: data.name, age: data.age, count: data.count}])
+```
+
+### Output HTTP response data in a table
+To quickly inspect HTTP response data, use [`requests.peek()`](/flux/v0.x/stdlib/experimental/http/requests/peek/)
+to output HTTP response data in a table.
+ +```js +import "experimental/http/requests" + +response = requests.get(url: "http://example.com") + +requests.peek(response: response) +``` diff --git a/content/flux/v0.x/stdlib/experimental/http/requests/peek.md b/content/flux/v0.x/stdlib/experimental/http/requests/peek.md new file mode 100644 index 000000000..950df2b45 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/http/requests/peek.md @@ -0,0 +1,74 @@ +--- +title: requests.peek() function +description: > + `requests.peek()` converts an HTTP response into a table for easy inspection. +menu: + flux_0_x_ref: + name: requests.peek + parent: requests +weight: 401 +flux/v0.x/tags: [http] +introduced: 0.154.0 +--- + +`requests.peek()` converts an HTTP response into a table for easy inspection. + +```js +import "experimental/http/requests" + +requests.peek( + response: requests.get(url: "http://example.com") +) +``` + +The output table includes the following columns: + +- **body**: response body as a string +- **statusCode**: returned status code as an integer +- **headers**: string representation of response headers +- **duration**: request duration in nanoseconds + +{{% note %}} +To customize how the response data is structured in a table, use `array.from()` +with a function like `json.parse()`. Parse the response body into a set of values +and then use `array.from()` to construct a table from those values. +{{% /note %}} + + +## Parameters + +### response {data-type="record"} +Response data from an HTTP request. + +## Examples + +### Inspect the response of an HTTP request +```js +import "experimental/http/requests" + +response = requests.get(url: "https://api.agify.io", params: ["name": ["natalie"]]) + +requests.peek(response: response) +``` + +| statusCode | body | headers | duration | +| :--------- | :--- | :------ | -------: | +| 200 | {"name":"natalie","age":34,"count":20959} | _See [returned headers](#returned-headers) string below_ | 1212263875 | + +##### Returned headers +``` +[ + Access-Control-Allow-Headers: Content-Type, X-Genderize-Source, + Access-Control-Allow-Methods: GET, + Access-Control-Allow-Origin: *, + Connection: keep-alive, + Content-Length: 41, + Content-Type: application/json; charset=utf-8, + Date: Wed, 09 Feb 2022 20:00:00 GMT, + Etag: W/"29-klDahUESBLxHyQ7NiaetCn2CvCI", + Server: nginx/1.16.1, + X-Rate-Limit-Limit: 1000, + X-Rate-Limit-Remaining: 999, + X-Rate-Reset: 12203 +] +``` diff --git a/content/flux/v0.x/stdlib/experimental/http/requests/post.md b/content/flux/v0.x/stdlib/experimental/http/requests/post.md new file mode 100644 index 000000000..cbac6fbf8 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/http/requests/post.md @@ -0,0 +1,114 @@ +--- +title: requests.post() function +description: > + `requests.post()` makes an HTTP request using the POST request method. +menu: + flux_0_x_ref: + name: requests.post + parent: requests +weight: 401 +flux/v0.x/tags: [http, inputs, outputs] +introduced: 0.152.0 +--- + +`requests.post()` makes an HTTP request using the POST request method. + +```js +import "experimental/http/requests" + +requests.post( + url: "http://example.com", + params: ["example-param": ["example-param-value"]], + headers: ["Example-Header": "example-header-value"], + body: bytes(v: ""), + config: requests.defaultConfig, +) +``` + +`requests.post()` returns a record with the following properties: + +- **statusCode**: HTTP status code of the request. +- **body**: Response body. A maximum size of 100MB is read from the response body. +- **headers**: Response headers. 
+
+## Parameters
+
+### url {data-type="string"}
+URL to send the request to.
+
+{{% note %}}
+The URL should not include any query parameters.
+Use [`params`](#params) to specify query parameters.
+{{% /note %}}
+
+### params {data-type="dict"}
+Set of key-value pairs to add to the URL as query parameters.
+Query parameters are URL-encoded.
+All values for a key are appended to the query.
+
+### headers {data-type="dict"}
+Set of key-value pairs to include as request headers.
+
+### body {data-type="bytes"}
+Data to send with the request.
+
+### config {data-type="record"}
+Set of request configuration options.
+_See [HTTP configuration option examples](/flux/v0.x/stdlib/experimental/http/requests/#examples)._
+
+## Examples
+
+- [Make a POST request](#make-a-post-request)
+- [Make a POST request with authorization](#make-a-post-request-with-authorization)
+- [Make a POST request with a JSON body](#make-a-post-request-with-a-json-body)
+- [Output HTTP POST response data in a table](#output-http-post-response-data-in-a-table)
+
+### Make a POST request
+```js
+import "json"
+import "experimental/http/requests"
+
+requests.post(url: "http://example.com", body: json.encode(v: {data: {x: 1, y: 2, z: 3}}))
+```
+
+### Make a POST request with authorization
+```js
+import "json"
+import "experimental/http/requests"
+import "influxdata/influxdb/secrets"
+
+token = secrets.get(key: "TOKEN")
+
+requests.post(
+    url: "http://example.com",
+    body: json.encode(v: {data: {x: 1, y: 2, z: 3}}),
+    headers: ["Authorization": "Bearer ${token}"],
+)
+```
+
+### Make a POST request with a JSON body
+Use [`json.encode()`](/flux/v0.x/stdlib/json/encode/) to encode a Flux record as
+a JSON object.
+
+```js
+import "experimental/http/requests"
+import "json"
+
+requests.post(
+    url: "https://goolnk.com/api/v1/shorten",
+    body: json.encode(v: {url: "http://www.influxdata.com"}),
+    headers: ["Content-Type": "application/json"],
+)
+```
+
+### Output HTTP POST response data in a table
+To quickly inspect HTTP response data, use [`requests.peek()`](/flux/v0.x/stdlib/experimental/http/requests/peek/)
+to output HTTP response data in a table.
+
+```js
+import "experimental/http/requests"
+
+response = requests.post(url: "http://example.com")
+
+requests.peek(response: response)
+```
diff --git a/content/flux/v0.x/stdlib/experimental/influxdb/api.md b/content/flux/v0.x/stdlib/experimental/influxdb/api.md
index ea01a17e2..28be6eebe 100644
--- a/content/flux/v0.x/stdlib/experimental/influxdb/api.md
+++ b/content/flux/v0.x/stdlib/experimental/influxdb/api.md
@@ -26,14 +26,14 @@ Authorization permissions and limits apply to each request.
import "experimental/influxdb" influxdb.api( - method: "get", - path: "/example", - host: "http://localhost:8086", - token: "mySupeR53cre7t0k3n", - headers: ["header1": "header1Value", "header2": "header2Value"], - query: ["ex1": "example1", "ex2": "example2"], - timeout: 30s, - body: bytes(v: "Example body") + method: "get", + path: "/example", + host: "http://localhost:8086", + token: "mySupeR53cre7t0k3n", + headers: ["header1": "header1Value", "header2": "header2Value"], + query: ["ex1": "example1", "ex2": "example2"], + timeout: 30s, + body: bytes(v: "Example body"), ) ``` @@ -41,9 +41,9 @@ influxdb.api( {{% expand "View response record schema" %}} ```js { - statusCode: int, - headers: dict, - body: bytes + statusCode: int, + headers: dict, + body: bytes, } ``` {{% /expand %}} @@ -90,15 +90,9 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "INFLUX_TOKEN") -response = influxdb.api( - method: "get", - path: "/health", - host: "http://localhost:8086", - token: token, -) +response = influxdb.api(method: "get", path: "/health", host: "http://localhost:8086", token: token) string(v: response.body) - // Returns something similar to: // { // "name":"influxdb", @@ -118,15 +112,17 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "INFLUX_TOKEN") influxdb.api( - method: "post", - path: "/api/v2/buckets", - host: "http://localhost:8086", - token: token, - body: bytes(v: "{ - \"name\": \"example-bucket\", - \"description\": \"This is an example bucket.\", - \"orgID\": \"x000X0x0xx0X00x0\", - \"retentionRules\": [] - }") + method: "post", + path: "/api/v2/buckets", + host: "http://localhost:8086", + token: token, + body: bytes( + v: "{ + \"name\": \"example-bucket\", + \"description\": \"This is an example bucket.\", + \"orgID\": \"x000X0x0xx0X00x0\", + \"retentionRules\": [] + }", + ), ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/integral.md b/content/flux/v0.x/stdlib/experimental/integral.md index 6c9426d6e..b2a81821e 100644 --- a/content/flux/v0.x/stdlib/experimental/integral.md +++ b/content/flux/v0.x/stdlib/experimental/integral.md @@ -26,8 +26,8 @@ _`integral()` is an [aggregate function](/flux/v0.x/function-types/#aggregates). ```js integral( - unit: 10s, - interpolate: "" + unit: 10s, + interpolate: "", ) ``` @@ -55,21 +55,15 @@ Default is piped-forward data (`<-`). ##### Calculate the integral ```js from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" - ) - |> integral(unit:10s) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") + |> integral(unit: 10s) ``` ##### Calculate the integral with linear interpolation ```js from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" - ) - |> integral(unit:10s, interpolate: "linear") + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") + |> integral(unit: 10s, interpolate: "linear") ``` diff --git a/content/flux/v0.x/stdlib/experimental/iox/_index.md b/content/flux/v0.x/stdlib/experimental/iox/_index.md new file mode 100644 index 000000000..eec189f86 --- /dev/null +++ b/content/flux/v0.x/stdlib/experimental/iox/_index.md @@ -0,0 +1,25 @@ +--- +title: Flux Experimental IOx package +list_title: iox package +description: > + The Flux experimental `iox` package provides functions for querying data from IOx. + Import the `experimental/iox` package. 
+menu:
+  flux_0_x_ref:
+    name: iox
+    parent: experimental
+weight: 301
+flux/v0.x/tags: [functions, iox, package]
+introduced: 0.152.0
+---
+
+The experimental `iox` package provides functions for querying data from [IOx](https://github.com/influxdata/influxdb_iox).
+Import the `experimental/iox` package:
+
+```js
+import "experimental/iox"
+```
+
+## Functions
+
+{{< children type="functions" show="pages" >}}
diff --git a/content/flux/v0.x/stdlib/experimental/iox/from.md b/content/flux/v0.x/stdlib/experimental/iox/from.md
new file mode 100644
index 000000000..9df1bb1f6
--- /dev/null
+++ b/content/flux/v0.x/stdlib/experimental/iox/from.md
@@ -0,0 +1,42 @@
+---
+title: iox.from() function
+description: >
+  `iox.from()` queries data from the specified bucket and measurement in an IOx
+  storage node.
+menu:
+  flux_0_x_ref:
+    name: iox.from
+    parent: iox
+weight: 401
+flux/v0.x/tags: [iox, inputs]
+introduced: 0.152.0
+---
+
+{{% warn %}}
+`iox.from()` is in active development and has not been fully implemented.
+This function acts as a placeholder as the implementation is completed.
+{{% /warn %}}
+
+`iox.from()` queries data from the specified bucket and measurement in an
+[IOx](https://github.com/influxdata/influxdb_iox) storage node.
+
+```js
+import "experimental/iox"
+
+iox.from(
+    bucket: "example-bucket",
+    measurement: "example-measurement",
+)
+```
+
+Output data is "pivoted" on the time column and includes columns for each
+returned tag and field per time value.
+
+## Parameters
+
+### bucket {data-type="string"}
+IOx bucket to read data from.
+
+### measurement {data-type="string"}
+Measurement to read data from.
+
diff --git a/content/flux/v0.x/stdlib/experimental/join.md b/content/flux/v0.x/stdlib/experimental/join.md
index e00da9c75..2902ca9b5 100644
--- a/content/flux/v0.x/stdlib/experimental/join.md
+++ b/content/flux/v0.x/stdlib/experimental/join.md
@@ -32,9 +32,9 @@
 import "experimental"
 
 // ...
 
 experimental.join(
-  left: left,
-  right: right,
-  fn: (left, right) => ({left with lv: left._value, rv: right._value })
+    left: left,
+    right: right,
+    fn: (left, right) => ({left with lv: left._value, rv: right._value}),
 )
 ```
 
@@ -83,14 +83,13 @@ The return value must be a record.
import "experimental" experimental.join( - left: left, - right: right, - fn: (left, right) => ({ - left with - lv: left._value, - rv: right._value, - diff: left._value - right._value - }) + left: left, + right: right, + fn: (left, right) => ({ left with + lv: left._value, + rv: right._value, + diff: left._value - right._value, + }) ) ``` @@ -108,22 +107,14 @@ experimental.join( import "experimental" s1 = from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "foo") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "foo") s2 = from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "bar") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "bar") -experimental.join( - left: s1, - right: s2, - fn: (left, right) => ({ - left with - s1_value: left._value, - s2_value: right._value - }) -) +experimental.join(left: s1, right: s2, fn: (left, right) => ({left with s1_value: left._value, s2_value: right._value})) ``` ###### Join two streams of tables with different fields and measurements @@ -131,22 +122,14 @@ experimental.join( import "experimental" s1 = from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "foo" and r._field == "bar") - |> group(columns: ["_time", "_measurement", "_field", "_value"], mode: "except") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "foo" and r._field == "bar") + |> group(columns: ["_time", "_measurement", "_field", "_value"], mode: "except") s2 = from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "baz" and r._field == "quz") - |> group(columns: ["_time", "_measurement", "_field", "_value"], mode: "except") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "baz" and r._field == "quz") + |> group(columns: ["_time", "_measurement", "_field", "_value"], mode: "except") -experimental.join( - left: s1, - right: s2, - fn: (left, right) => ({ - left with - bar_value: left._value, - quz_value: right._value - }) -) +experimental.join(left: s1, right: s2, fn: (left, right) => ({left with bar_value: left._value, quz_value: right._value})) ``` diff --git a/content/flux/v0.x/stdlib/experimental/json/parse.md b/content/flux/v0.x/stdlib/experimental/json/parse.md index a167eb6af..a77f322cd 100644 --- a/content/flux/v0.x/stdlib/experimental/json/parse.md +++ b/content/flux/v0.x/stdlib/experimental/json/parse.md @@ -38,15 +38,17 @@ JSON data to parse. import "experimental/json" data - |> map(fn: (r) => { - jsonData = json.parse(data: bytes(v: r._value)) - - return { - _time: r._time, - _field: r._field, - a: jsonData.a, - b: jsonData.b, - c: jsonData.c, - } - }) + |> map( + fn: (r) => { + jsonData = json.parse(data: bytes(v: r._value)) + + return { + _time: r._time, + _field: r._field, + a: jsonData.a, + b: jsonData.b, + c: jsonData.c, + } + }, + ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/kaufmansama.md b/content/flux/v0.x/stdlib/experimental/kaufmansama.md index d2632139f..1e580ca09 100644 --- a/content/flux/v0.x/stdlib/experimental/kaufmansama.md +++ b/content/flux/v0.x/stdlib/experimental/kaufmansama.md @@ -45,6 +45,6 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket"): - |> range(start: -7d) - |> experimental.kaufmansAMA(n: 10) + |> range(start: -7d) + |> experimental.kaufmansAMA(n: 10) ``` \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/experimental/last.md b/content/flux/v0.x/stdlib/experimental/last.md index cf2c18d74..bfb820999 100644 --- a/content/flux/v0.x/stdlib/experimental/last.md +++ b/content/flux/v0.x/stdlib/experimental/last.md @@ -47,7 +47,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.last() + |> experimental.last() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/max.md b/content/flux/v0.x/stdlib/experimental/max.md index 60fca8f48..d91b82273 100644 --- a/content/flux/v0.x/stdlib/experimental/max.md +++ b/content/flux/v0.x/stdlib/experimental/max.md @@ -44,7 +44,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.max() + |> experimental.max() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/mean.md b/content/flux/v0.x/stdlib/experimental/mean.md index b5680b7a3..febc97c2f 100644 --- a/content/flux/v0.x/stdlib/experimental/mean.md +++ b/content/flux/v0.x/stdlib/experimental/mean.md @@ -39,10 +39,8 @@ Default is piped-forward data (`<-`). ```js import "experimental" -from(bucket:"example-bucket") - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field") - |> range(start:-1h) - |> experimental.mean() +from(bucket: "example-bucket") + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> range(start: -1h) + |> experimental.mean() ``` diff --git a/content/flux/v0.x/stdlib/experimental/min.md b/content/flux/v0.x/stdlib/experimental/min.md index 1d968e12f..14cf72cc1 100644 --- a/content/flux/v0.x/stdlib/experimental/min.md +++ b/content/flux/v0.x/stdlib/experimental/min.md @@ -44,7 +44,7 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.min() + |> experimental.min() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/mode.md b/content/flux/v0.x/stdlib/experimental/mode.md index 046d68018..b663d276e 100644 --- a/content/flux/v0.x/stdlib/experimental/mode.md +++ b/content/flux/v0.x/stdlib/experimental/mode.md @@ -59,11 +59,8 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket") - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) - |> range(start:-12h) - |> window(every:10m) - |> experimental.mode() + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> range(start: -12h) + |> window(every: 10m) + |> experimental.mode() ``` diff --git a/content/flux/v0.x/stdlib/experimental/mqtt/publish.md b/content/flux/v0.x/stdlib/experimental/mqtt/publish.md index 31aa1777d..cf3a55fea 100644 --- a/content/flux/v0.x/stdlib/experimental/mqtt/publish.md +++ b/content/flux/v0.x/stdlib/experimental/mqtt/publish.md @@ -16,15 +16,15 @@ The `mqtt.publish()` function outputs data to an MQTT broker using MQTT protocol import "experimental/mqtt" mqtt.publish( - broker: "tcp://localhost:8883", - topic: "example-topic", - message: "Example message", - qos: 0, - retain: false, - clientid: "flux-mqtt", - username: "username", - password: "password", - timeout: 1s + broker: "tcp://localhost:8883", + topic: "example-topic", + message: "Example message", + qos: 0, + retain: false, + clientid: "flux-mqtt", + username: "username", + password: "password", + timeout: 1s, ) ``` @@ -72,11 +72,11 @@ Default is `1s`. import "experimental/mqtt" mqtt.publish( - broker: "tcp://localhost:8883", - topic: "alerts", - message: "wake up", - clientid: "alert-watcher", - retain: true + broker: "tcp://localhost:8883", + topic: "alerts", + message: "wake up", + clientid: "alert-watcher", + retain: true, ) ``` @@ -86,15 +86,16 @@ import "experimental/mqtt" import "influxdata/influxdb/sample" sample.data(set: "airSensor") - |> range(start: -20m) - |> last() - |> map(fn: (r) => ({ - r with - sent: mqtt.publish( - broker: "tcp://localhost:8883", - topic: "air-sensors/last/${r.sensorID}", - message: string(v: r._value), - clientid: "sensor-12a4" - ) - })) + |> range(start: -20m) + |> last() + |> map(fn: (r) => ({ + r with + sent: mqtt.publish( + broker: "tcp://localhost:8883", + topic: "air-sensors/last/${r.sensorID}", + message: string(v: r._value), + clientid: "sensor-12a4", + ) + }) + ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/mqtt/to.md b/content/flux/v0.x/stdlib/experimental/mqtt/to.md index 7db098fa1..a5b1ce540 100644 --- a/content/flux/v0.x/stdlib/experimental/mqtt/to.md +++ b/content/flux/v0.x/stdlib/experimental/mqtt/to.md @@ -20,17 +20,17 @@ The `mqtt.to()` function outputs data to an MQTT broker using MQTT protocol. import "experimental/mqtt" mqtt.to( - broker: "tcp://localhost:8883", - topic: "example-topic", - qos: 0, - clientid: "flux-mqtt", - username: "username", - password: "password", - name: "name-example", - timeout: 1s, - timeColumn: "_time", - tagColumns: ["tag1", "tag2"], - valueColumns: ["_value"] + broker: "tcp://localhost:8883", + topic: "example-topic", + qos: 0, + clientid: "flux-mqtt", + username: "username", + password: "password", + name: "name-example", + timeout: 1s, + timeColumn: "_time", + tagColumns: ["tag1", "tag2"], + valueColumns: ["_value"], ) ``` @@ -90,13 +90,13 @@ Default is `["_value"]`. 
import "experimental/mqtt" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => r._measurement == "airSensor") - |> mqtt.to( - broker: "tcp://localhost:8883", - topic: "air-sensors", - clientid: "sensor-12a4", - tagColumns: ["sensorID"], - valueColumns: ["_value"] - ) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "airSensor") + |> mqtt.to( + broker: "tcp://localhost:8883", + topic: "air-sensors", + clientid: "sensor-12a4", + tagColumns: ["sensorID"], + valueColumns: ["_value"], + ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/objectkeys.md b/content/flux/v0.x/stdlib/experimental/objectkeys.md index a77cf7cd3..878948bb1 100644 --- a/content/flux/v0.x/stdlib/experimental/objectkeys.md +++ b/content/flux/v0.x/stdlib/experimental/objectkeys.md @@ -19,7 +19,7 @@ The `experimental.objectKeys()` function returns an array of keys in a specified import "experimental" experimental.objectKeys( - o: {key1: "value1", key2: "value2"} + o: {key1: "value1", key2: "value2"} ) // Returns [key1, key2] @@ -36,13 +36,8 @@ The record to return keys from. ```js import "experimental" -user = { - firstName: "John", - lastName: "Doe", - age: 42 -} +user = {firstName: "John", lastName: "Doe", age: 42} experimental.objectKeys(o: user) - // Returns [firstName, lastName, age] ``` diff --git a/content/flux/v0.x/stdlib/experimental/oee/apq.md b/content/flux/v0.x/stdlib/experimental/oee/apq.md index 97a725610..06a24acf7 100644 --- a/content/flux/v0.x/stdlib/experimental/oee/apq.md +++ b/content/flux/v0.x/stdlib/experimental/oee/apq.md @@ -23,9 +23,9 @@ _`oee.APQ()` is an [aggregate function](/flux/v0.x/function-types/#aggregates)._ import "experimental/oee" oee.APQ( - runningState: "running", - plannedTime: 8h, - idealCycleTime: 2m + runningState: "running", + plannedTime: 8h, + idealCycleTime: 2m, ) ``` @@ -98,12 +98,8 @@ import "experimental/oee" productionData = // ... productionData - |> oee.APQ( - runningState: "running", - plannedTime: 8h, - idealCycleTime: 21s - ) - |> drop(columns: ["_start","_stop"]) + |> oee.APQ(runningState: "running", plannedTime: 8h, idealCycleTime: 21s) + |> drop(columns: ["_start", "_stop"]) ``` #### Output data diff --git a/content/flux/v0.x/stdlib/experimental/oee/computeapq.md b/content/flux/v0.x/stdlib/experimental/oee/computeapq.md index ff4408a0e..05bc0d93b 100644 --- a/content/flux/v0.x/stdlib/experimental/oee/computeapq.md +++ b/content/flux/v0.x/stdlib/experimental/oee/computeapq.md @@ -23,11 +23,11 @@ _`oee.computeAPQ()` is an [aggregate function](/flux/v0.x/function-types/#aggreg import "experimental/oee" oee.computeAPQ( - productionEvents: exampleProductionScheme, - partEvents: examplePartsStream, - runningState: "running", - plannedTime: 8h, - idealCycleTime: 2m, + productionEvents: exampleProductionScheme, + partEvents: examplePartsStream, + runningState: "running", + plannedTime: 8h, + idealCycleTime: 2m, ) ``` @@ -120,13 +120,13 @@ productionData = // ... partsData = // ... 
oee.computeAPQ( - productionEvents: productionData, - partEvents: partsData, - runningState: "running", - plannedTime: 8h, - idealCycleTime: 21s + productionEvents: productionData, + partEvents: partsData, + runningState: "running", + plannedTime: 8h, + idealCycleTime: 21s, ) -|> drop(columns: ["_start","_stop"]) + |> drop(columns: ["_start", "_stop"]) ``` #### Output data diff --git a/content/flux/v0.x/stdlib/experimental/prometheus/histogramquantile.md b/content/flux/v0.x/stdlib/experimental/prometheus/histogramquantile.md index 7230032a0..0bdf37c3b 100644 --- a/content/flux/v0.x/stdlib/experimental/prometheus/histogramquantile.md +++ b/content/flux/v0.x/stdlib/experimental/prometheus/histogramquantile.md @@ -23,8 +23,8 @@ _`prometheus.histogramQuantile()` is an [aggregate function](/flux/v0.x/function import "experimental/prometheus" prometheus.histogramQuantile( - quantile: 0.99, - metricVersion: 2 + quantile: 0.99, + metricVersion: 2, ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/prometheus/scrape.md b/content/flux/v0.x/stdlib/experimental/prometheus/scrape.md index 5db0dddb4..07b347611 100644 --- a/content/flux/v0.x/stdlib/experimental/prometheus/scrape.md +++ b/content/flux/v0.x/stdlib/experimental/prometheus/scrape.md @@ -26,7 +26,7 @@ The function groups metrics (including histogram and summary values) into indivi import "experimental/prometheus" prometheus.scrape( - url: "http://localhost:8086/metrics" + url: "http://localhost:8086/metrics" ) ``` @@ -42,8 +42,5 @@ The URL to scrape Prometheus-formatted metrics from. import "experimental/prometheus" prometheus.scrape(url: "https://example-url.com/metrics") - |> to( - org: "example-org", - bucket: "example-bucket" - ) + |> to(org: "example-org", bucket: "example-bucket") ``` diff --git a/content/flux/v0.x/stdlib/experimental/quantile.md b/content/flux/v0.x/stdlib/experimental/quantile.md index d61654cab..780d85a5f 100644 --- a/content/flux/v0.x/stdlib/experimental/quantile.md +++ b/content/flux/v0.x/stdlib/experimental/quantile.md @@ -33,9 +33,9 @@ the [`method`](#method) used._ import "experimental" experimental.quantile( - q: 0.99, - method: "estimate_tdigest", - compression: 1000.0 + q: 0.99, + method: "estimate_tdigest", + compression: 1000.0, ) ``` @@ -86,15 +86,9 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field") - |> experimental.quantile( - q: 0.99, - method: "estimate_tdigest", - compression: 1000.0 - ) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> experimental.quantile(q: 0.99, method: "estimate_tdigest", compression: 1000.0) ``` ###### Quantile as a selector @@ -102,12 +96,7 @@ from(bucket: "example-bucket") import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field") - |> experimental.quantile( - q: 0.99, - method: "exact_selector" - ) + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> experimental.quantile(q: 0.99, method: "exact_selector") ``` diff --git a/content/flux/v0.x/stdlib/experimental/query/_index.md b/content/flux/v0.x/stdlib/experimental/query/_index.md index f16f2d0bb..1135df811 100644 --- a/content/flux/v0.x/stdlib/experimental/query/_index.md +++ b/content/flux/v0.x/stdlib/experimental/query/_index.md @@ -34,11 +34,11 @@ which uses all other functions in this package. import "experimental/query" query.inBucket( - bucket: "example-bucket", - start: -1h, - stop: now(), - measurement: "example-measurement", - fields: ["exampleField1", "exampleField2"], - predicate: (r) => r.tagA == "foo" and r.tagB != "bar" + bucket: "example-bucket", + start: -1h, + stop: now(), + measurement: "example-measurement", + fields: ["exampleField1", "exampleField2"], + predicate: (r) => r.tagA == "foo" and r.tagB != "bar", ) ``` diff --git a/content/flux/v0.x/stdlib/experimental/query/filterfields.md b/content/flux/v0.x/stdlib/experimental/query/filterfields.md index c9f72ea1e..053773396 100644 --- a/content/flux/v0.x/stdlib/experimental/query/filterfields.md +++ b/content/flux/v0.x/stdlib/experimental/query/filterfields.md @@ -20,7 +20,7 @@ The `query.filterFields()` function filters input data by field. import "experimental/query" query.filterFields( - fields: ["exampleField1", "exampleField2"] + fields: ["exampleField1", "exampleField2"] ) ``` @@ -40,24 +40,5 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "experimental/query" query.fromRange(bucket: "telegraf", start: -1h) - |> query.filterFields( - fields: ["used_percent", "available_percent"] - ) + |> query.filterFields(fields: ["used_percent", "available_percent"]) ``` - -## Function definition -```js -package query - -filterFields = (tables=<-, fields=[]) => - if length(arr: fields) == 0 then - tables - else - tables - |> filter(fn: (r) => contains(value: r._field, set: fields)) -``` - -_**Used functions:**_ -[contains()](/flux/v0.x/stdlib/universe/contains/) -[filter()](/flux/v0.x/stdlib/universe/filter/) -[length()](/flux/v0.x/stdlib/universe/length/) diff --git a/content/flux/v0.x/stdlib/experimental/query/filtermeasurement.md b/content/flux/v0.x/stdlib/experimental/query/filtermeasurement.md index b6a19ebcf..7d5c57be2 100644 --- a/content/flux/v0.x/stdlib/experimental/query/filtermeasurement.md +++ b/content/flux/v0.x/stdlib/experimental/query/filtermeasurement.md @@ -20,7 +20,7 @@ The `query.filterMeasurement()` function filters input data by measurement. 
import "experimental/query" query.filterMeasurement( - measurement: "example-measurement" + measurement: "example-measurement" ) ``` @@ -40,19 +40,5 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "experimental/query" query.fromRange(bucket: "example-bucket", start: -1h) - |> query.filterMeasurement( - measurement: "example-measurement" - ) + |> query.filterMeasurement(measurement: "example-measurement") ``` - -## Function definition -```js -package query - -filterMeasurement = (tables=<-, measurement) => - tables - |> filter(fn: (r) => r._measurement == measurement) -``` - -_**Used functions:**_ -[filter()](/flux/v0.x/stdlib/universe/filter/) diff --git a/content/flux/v0.x/stdlib/experimental/query/fromrange.md b/content/flux/v0.x/stdlib/experimental/query/fromrange.md index d74dc2c86..9710debf8 100644 --- a/content/flux/v0.x/stdlib/experimental/query/fromrange.md +++ b/content/flux/v0.x/stdlib/experimental/query/fromrange.md @@ -23,9 +23,9 @@ given time bounds. import "experimental/query" query.fromRange( - bucket: "example-bucket", - start: -1h, - stop: now() + bucket: "example-bucket", + start: -1h, + stop: now(), ) ``` @@ -54,17 +54,5 @@ Defaults to `now()`. ```js import "experimental/query" -query.fromRange( - bucket: "example-bucket", - start: 2020-01-01T00:00:00Z -) -``` - -## Function definition -```js -package query - -fromRange = (bucket, start, stop=now()) => - from(bucket: bucket) - |> range(start: start, stop: stop) +query.fromRange(bucket: "example-bucket", start: 2020-01-01T00:00:00Z) ``` diff --git a/content/flux/v0.x/stdlib/experimental/query/inbucket.md b/content/flux/v0.x/stdlib/experimental/query/inbucket.md index 8159f6928..6c184de9d 100644 --- a/content/flux/v0.x/stdlib/experimental/query/inbucket.md +++ b/content/flux/v0.x/stdlib/experimental/query/inbucket.md @@ -23,12 +23,12 @@ time bounds, filters data by measurement, field, and optional predicate expressi import "experimental/query" query.inBucket( - bucket: "example-bucket", - start: -1h, - stop: now(), - measurement: "example-measurement", - fields: ["exampleField1", "exampleField2"], - predicate: (r) => true + bucket: "example-bucket", + start: -1h, + stop: now(), + measurement: "example-measurement", + fields: ["exampleField1", "exampleField2"], + predicate: (r) => true, ) ``` @@ -74,34 +74,10 @@ Default is `(r) => true`. 
import "experimental/query" query.inBucket( - bucket: "telegraf", - start: -1h, - measurement: "mem", - fields: ["used_percent", "available_percent"], - predicate: (r) => r.host == "host1" + bucket: "telegraf", + start: -1h, + measurement: "mem", + fields: ["used_percent", "available_percent"], + predicate: (r) => r.host == "host1", ) ``` - -## Function definition -```js -package query - -inBucket = ( - bucket, - start, - stop=now(), - measurement, - fields=[], - predicate=(r) => true -) => - fromRange(bucket: bucket, start: start, stop: stop) - |> filterMeasurement(measurement) - |> filter(fn: predicate) - |> filterFields(fields) -``` - -_**Used functions:**_ -[filter()](/flux/v0.x/stdlib/universe/filter/) -[query.filterFields()](/flux/v0.x/stdlib/experimental/query/filterfields/) -[query.filterMeasurement()](/flux/v0.x/stdlib/experimental/query/filtermeasurement/) -[query.fromRange()](/flux/v0.x/stdlib/experimental/query/fromrange/) diff --git a/content/flux/v0.x/stdlib/experimental/record/get.md b/content/flux/v0.x/stdlib/experimental/record/get.md index 210fce6ea..e759188c0 100644 --- a/content/flux/v0.x/stdlib/experimental/record/get.md +++ b/content/flux/v0.x/stdlib/experimental/record/get.md @@ -25,9 +25,9 @@ For more information, see [influxdata/flux#4073](https://github.com/influxdata/f import "experimental/record" record.get( - r: {foo, "bar"}, - key: "foo", - default: "quz" + r: {foo, "bar"}, + key: "foo", + default: "quz", ) ``` @@ -54,11 +54,6 @@ import "experimental/record" key = "foo" exampleRecord = {foo: 1.0, bar: "hello"} -record.get( - r: exampleRecord, - key: key, - default: "" -) - +record.get(r: exampleRecord, key: key, default: "") // Returns 1.0 ``` \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/experimental/set.md b/content/flux/v0.x/stdlib/experimental/set.md index 76e1a48ba..300e77b54 100644 --- a/content/flux/v0.x/stdlib/experimental/set.md +++ b/content/flux/v0.x/stdlib/experimental/set.md @@ -27,7 +27,7 @@ _Once sufficiently vetted, `experimental.set()` will replace the existing import "experimental" experimental.set( - o: {column1: "value1", column2: "value2"} + o: {column1: "value1", column2: "value2"} ) ``` @@ -58,13 +58,13 @@ Default is piped-forward data (`<-`). import "experimental" data - |> experimental.set( - o: { - _field: "temperature", - unit: "°F", - location: "San Francisco" - } - ) + |> experimental.set(o: + { + _field: "temperature", + unit: "°F", + location: "San Francisco" + } + ) ``` ##### Example output table diff --git a/content/flux/v0.x/stdlib/experimental/skew.md b/content/flux/v0.x/stdlib/experimental/skew.md index df4f822f3..f0a6836ff 100644 --- a/content/flux/v0.x/stdlib/experimental/skew.md +++ b/content/flux/v0.x/stdlib/experimental/skew.md @@ -38,10 +38,7 @@ Default is piped-forward data (`<-`). import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) - |> experimental.skew() + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> experimental.skew() ``` diff --git a/content/flux/v0.x/stdlib/experimental/spread.md b/content/flux/v0.x/stdlib/experimental/spread.md index 72799fa2f..57188b1f8 100644 --- a/content/flux/v0.x/stdlib/experimental/spread.md +++ b/content/flux/v0.x/stdlib/experimental/spread.md @@ -45,10 +45,7 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) - |> experimental.spread() + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> experimental.spread() ``` diff --git a/content/flux/v0.x/stdlib/experimental/stddev.md b/content/flux/v0.x/stdlib/experimental/stddev.md index 382be9873..a10180b90 100644 --- a/content/flux/v0.x/stdlib/experimental/stddev.md +++ b/content/flux/v0.x/stdlib/experimental/stddev.md @@ -54,10 +54,7 @@ Default is piped-forward data (`<-`). import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" - ) - |> experimental.stddev() + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") + |> experimental.stddev() ``` diff --git a/content/flux/v0.x/stdlib/experimental/subduration.md b/content/flux/v0.x/stdlib/experimental/subduration.md index 6d53bf5ed..74fda166b 100644 --- a/content/flux/v0.x/stdlib/experimental/subduration.md +++ b/content/flux/v0.x/stdlib/experimental/subduration.md @@ -15,16 +15,17 @@ flux/v0.x/tags: [date/time] related: - /flux/v0.x/stdlib/experimental/addduration/ introduced: 0.39.0 +deprecated: 0.162.0 --- +{{% warn %}} +This function was promoted to the [`date` package](/flux/v0.x/stdlib/date/subduration/) +in **Flux v0.162.0**. This experimental version has been deprecated. +{{% /warn %}} + The `experimental.subDuration()` function subtracts a duration from a time value and returns the resulting time value. -{{% warn %}} -This function will be removed once duration vectors are implemented. -See [influxdata/flux#413](https://github.com/influxdata/flux/issues/413). -{{% /warn %}} - ```js import "experimental" @@ -37,10 +38,10 @@ experimental.subDuration( ## Parameters ### d {data-type="duration"} -The duration to subtract. +Duration to subtract. ### from {data-type="time, duration"} -The time to subtract the [duration](#d) from. +Time to subtract the [duration](#d) from. Use an absolute time or a relative duration. Durations are relative to [`now()`](/flux/v0.x/stdlib/universe/now/). diff --git a/content/flux/v0.x/stdlib/experimental/sum.md b/content/flux/v0.x/stdlib/experimental/sum.md index bcc7e630c..84364b397 100644 --- a/content/flux/v0.x/stdlib/experimental/sum.md +++ b/content/flux/v0.x/stdlib/experimental/sum.md @@ -40,10 +40,7 @@ Default is piped-forward data (`<-`). 
import "experimental" from(bucket: "example-bucket") - |> range(start: -5m) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" - ) - |> experimental.sum() + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field") + |> experimental.sum() ``` diff --git a/content/flux/v0.x/stdlib/experimental/table/fill.md b/content/flux/v0.x/stdlib/experimental/table/fill.md index d8677b429..f887c9af9 100644 --- a/content/flux/v0.x/stdlib/experimental/table/fill.md +++ b/content/flux/v0.x/stdlib/experimental/table/fill.md @@ -35,7 +35,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "experimental/table" data - |> table.fill() + |> table.fill() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/to.md b/content/flux/v0.x/stdlib/experimental/to.md index 0e2de00d6..0ef74565a 100644 --- a/content/flux/v0.x/stdlib/experimental/to.md +++ b/content/flux/v0.x/stdlib/experimental/to.md @@ -25,19 +25,19 @@ a [different structure](#expected-data-structure) than the import "experimental" experimental.to( - bucket: "my-bucket", - org: "my-org", - host: "http://localhost:8086", - token: "mY5uPeRs3Cre7tok3N" + bucket: "my-bucket", + org: "my-org", + host: "http://localhost:8086", + token: "mY5uPeRs3Cre7tok3N", ) // OR experimental.to( - bucketID: "1234567890", - orgID: "0987654321", - host: "http://localhost:8086", - token: "mY5uPeRs3Cre7tok3N" + bucketID: "1234567890", + orgID: "0987654321", + host: "http://localhost:8086", + token: "mY5uPeRs3Cre7tok3N", ) ``` @@ -118,13 +118,7 @@ Default is piped-forward data (`<-`). import "experimental" from(bucket: "example-bucket") - |> range(start: -1h) - |> pivot( - rowKey:["_time"], - columnKey: ["_field"], - valueColumn: "_value") - |> experimental.to( - bucket: "bucket-name", - org: "org-name" - ) + |> range(start: -1h) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") + |> experimental.to(bucket: "bucket-name", org: "org-name") ``` diff --git a/content/flux/v0.x/stdlib/experimental/unique.md b/content/flux/v0.x/stdlib/experimental/unique.md index 00bda0f29..10710b44f 100644 --- a/content/flux/v0.x/stdlib/experimental/unique.md +++ b/content/flux/v0.x/stdlib/experimental/unique.md @@ -24,6 +24,7 @@ _`experimental.unique()` is a [selector function](/flux/v0.x/function-types/#sel ```js import "experimental" + experimental.unique() ``` @@ -48,8 +49,9 @@ Default is piped-forward data (`<-`). ## Examples ```js import "experimental" + data - |> experimental.unique() + |> experimental.unique() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/experimental/usage/from.md b/content/flux/v0.x/stdlib/experimental/usage/from.md index fb18cab33..063f1fa0d 100644 --- a/content/flux/v0.x/stdlib/experimental/usage/from.md +++ b/content/flux/v0.x/stdlib/experimental/usage/from.md @@ -11,6 +11,9 @@ aliases: - /influxdb/cloud/reference/flux/stdlib/experimental/usage/from/ weight: 401 flux/v0.x/tags: [inputs] +related: + - /influxdb/cloud/account-management/data-usage/ + - /influxdb/cloud/account-management/limits/ --- `usage.from()` returns usage data from an **InfluxDB Cloud** organization. @@ -21,12 +24,12 @@ anomalies or rate limiting. 
import "experimental/usage" usage.from( - start: -30d, - stop: now(), - host: "", - orgID: "", - token: "", - raw: false + start: -30d, + stop: now(), + host: "", + orgID: "", + token: "", + raw: false, ) ``` @@ -123,7 +126,7 @@ usage.from( stop: now(), host: "https://cloud2.influxdata.com", orgID: "x000X0x0xx0X00x0", - token: token + token: token, ) ``` @@ -158,8 +161,8 @@ usage.from(start: -30d, stop: now()) import "experimental/usage" checkLimit = (tables=<-, limit) => tables - |> map(fn: (r) => ({ r with _value: r._value / 1000, limit: int(v: limit) * 60 * 5 })) - |> map(fn: (r) => ({ r with limitReached: r._value > r.limit})) + |> map(fn: (r) => ({r with _value: r._value / 1000, limit: int(v: limit) * 60 * 5})) + |> map(fn: (r) => ({r with limitReached: r._value > r.limit})) read = usage.from(start: -30d, stop: now()) |> filter(fn: (r) => r._measurement == "http_request") diff --git a/content/flux/v0.x/stdlib/experimental/usage/limits.md b/content/flux/v0.x/stdlib/experimental/usage/limits.md index 37f46baff..266270e87 100644 --- a/content/flux/v0.x/stdlib/experimental/usage/limits.md +++ b/content/flux/v0.x/stdlib/experimental/usage/limits.md @@ -13,6 +13,8 @@ aliases: weight: 401 related: - /flux/v0.x/stdlib/influxdata/influxdb/cardinality/ + - /influxdb/cloud/account-management/data-usage/ + - /influxdb/cloud/account-management/limits/ --- The `usage.limits()` function returns a record containing usage limits for an @@ -23,9 +25,9 @@ The `usage.limits()` function returns a record containing usage limits for an import "experimental/usage" usage.limits( - host: "", - orgID: "", - token: "" + host: "", + orgID: "", + token: "", ) ``` @@ -33,34 +35,34 @@ usage.limits( {{% expand "View example usage limits record" %}} ```js { - orgID: "123", - rate: { - readKBs: 1000, - concurrentReadRequests: 0, - writeKBs: 17, - concurrentWriteRequests: 0, - cardinality: 10000 - }, - bucket: { - maxBuckets: 2, - maxRetentionDuration: 2592000000000000 - }, - task: { - maxTasks: 5 - }, - dashboard: { - maxDashboards: 5 - }, - check: { - maxChecks: 2 - }, - notificationRule: { - maxNotifications: 2, - blockedNotificationRules: "comma, delimited, list" - }, - notificationEndpoint: { - blockedNotificationEndpoints: "comma, delimited, list" - } + orgID: "123", + rate: { + readKBs: 1000, + concurrentReadRequests: 0, + writeKBs: 17, + concurrentWriteRequests: 0, + cardinality: 10000 + }, + bucket: { + maxBuckets: 2, + maxRetentionDuration: 2592000000000000 + }, + task: { + maxTasks: 5 + }, + dashboard: { + maxDashboards: 5 + }, + check: { + maxChecks: 2 + }, + notificationRule: { + maxNotifications: 2, + blockedNotificationRules: "comma, delimited, list" + }, + notificationEndpoint: { + blockedNotificationEndpoints: "comma, delimited, list" + } } ``` {{% /expand %}} @@ -137,12 +139,8 @@ import "experimental/usage" import "influxdata/influxdb" limits = usage.limits() -bucketCardinality = (bucket) => - (influxdb.cardinality( - bucket: bucket, - start: time(v: 0), - ) - |> findColumn(fn: (key) => true, column: "_value"))[0] +bucketCardinality = (bucket) => (influxdb.cardinality(bucket: bucket, start: time(v: 0)) + |> findColumn(fn: (key) => true, column: "_value"))[0] buckets() |> filter(fn: (r) => not r.name =~ /^_/) diff --git a/content/flux/v0.x/stdlib/experimental/window.md b/content/flux/v0.x/stdlib/experimental/window.md index 5e27d9e2d..243fa2e59 100644 --- a/content/flux/v0.x/stdlib/experimental/window.md +++ b/content/flux/v0.x/stdlib/experimental/window.md @@ -17,7 +17,7 @@ flux/v0.x/tags: 
[transformations] introduced: 0.106.0 --- -The `window()` function groups records based on a time value. +The `experimental.window()` function groups records based on a time value. New columns are added to uniquely identify each window. Those columns are added to the group key of the output tables. **Input tables must have `_start`, `_stop`, and `_time` columns.** @@ -28,12 +28,14 @@ By default the start boundary of a window will align with the Unix epoch (zero t modified by the offset of the `location` option. ```js -window( - every: 5m, - period: 5m, - offset: 12h, - location: "UTC", - createEmpty: false +import "experimental" + +experimental.window( + every: 5m, + period: 5m, + offset: 12h, + location: "UTC", + createEmpty: false, ) ``` @@ -80,16 +82,20 @@ Default is piped-forward data (`<-`). #### Window data into 10 minute intervals ```js +import "experimental" + from(bucket:"example-bucket") |> range(start: -12h) - |> window(every: 10m) + |> experimental.window(every: 10m) // ... ``` #### Window by calendar month ```js +import "experimental" + from(bucket:"example-bucket") |> range(start: -1y) - |> window(every: 1mo) + |> experimental.window(every: 1mo) // ... ``` diff --git a/content/flux/v0.x/stdlib/generate/from.md b/content/flux/v0.x/stdlib/generate/from.md index 7d6cc1ff6..bd99ccfba 100644 --- a/content/flux/v0.x/stdlib/generate/from.md +++ b/content/flux/v0.x/stdlib/generate/from.md @@ -16,10 +16,10 @@ weight: 202 import "generate" generate.from( - count: 5, - fn: (n) => n, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-02T00:00:00Z + count: 5, + fn: (n) => n, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-02T00:00:00Z, ) ``` @@ -51,10 +51,10 @@ End of the time range to generate values in. import "generate" generate.from( - count: 6, - fn: (n) => (n + 1) * (n + 2), - start: 2021-01-01T00:00:00Z, - stop: 2021-01-02T00:00:00Z, + count: 6, + fn: (n) => (n + 1) * (n + 2), + start: 2021-01-01T00:00:00Z, + stop: 2021-01-02T00:00:00Z, ) ``` diff --git a/content/flux/v0.x/stdlib/http/basicauth.md b/content/flux/v0.x/stdlib/http/basicauth.md index 7a03351f5..e74004e39 100644 --- a/content/flux/v0.x/stdlib/http/basicauth.md +++ b/content/flux/v0.x/stdlib/http/basicauth.md @@ -21,10 +21,7 @@ header using a specified username and password combination. 
```js import "http" -http.basicAuth( - u: "username", - p: "passw0rd" -) +http.basicAuth(u: "username", p: "passw0rd") // Returns "Basic dXNlcm5hbWU6cGFzc3cwcmQ=" ``` @@ -48,8 +45,8 @@ username = "myawesomeuser" password = "mySupErSecRetPasSW0rD" http.post( - url: "http://myawesomesite.com/api/", - headers: {Authorization: http.basicAuth(u:username, p:password)}, - data: bytes(v: "something I want to send.") + url: "http://myawesomesite.com/api/", + headers: {Authorization: http.basicAuth(u:username, p:password)}, + data: bytes(v: "something I want to send."), ) ``` diff --git a/content/flux/v0.x/stdlib/http/endpoint.md b/content/flux/v0.x/stdlib/http/endpoint.md index 2c477bea3..961bf267e 100644 --- a/content/flux/v0.x/stdlib/http/endpoint.md +++ b/content/flux/v0.x/stdlib/http/endpoint.md @@ -21,7 +21,7 @@ The `http.endpoint()` function sends output data to an HTTP URL using the POST r import "http" http.endpoint( - url: "http://localhost:1234/" + url: "http://localhost:1234/" ) ``` diff --git a/content/flux/v0.x/stdlib/http/pathescape.md b/content/flux/v0.x/stdlib/http/pathescape.md index c7b2868e8..f5bbff57f 100644 --- a/content/flux/v0.x/stdlib/http/pathescape.md +++ b/content/flux/v0.x/stdlib/http/pathescape.md @@ -21,7 +21,7 @@ and replaces non-ASCII characters with hexadecimal representations (`%XX`). import "http" http.pathEscape( - inputString: "/this/is/an/example-path.html" + inputString: "/this/is/an/example-path.html" ) // Returns %2Fthis%2Fis%2Fan%2Fexample-path.html @@ -39,7 +39,5 @@ The string to escape. import "http" data - |> map(fn: (r) => ({ r with - path: http.pathEscape(inputString: r.path) - })) + |> map(fn: (r) => ({r with path: http.pathEscape(inputString: r.path)})) ``` diff --git a/content/flux/v0.x/stdlib/http/post.md b/content/flux/v0.x/stdlib/http/post.md index 613db7b7a..49df50bb9 100644 --- a/content/flux/v0.x/stdlib/http/post.md +++ b/content/flux/v0.x/stdlib/http/post.md @@ -22,9 +22,9 @@ headers and data and returns the HTTP status code. import "http" http.post( - url: "http://localhost:8086/", - headers: {x:"a", y:"b", z:"c"}, - data: bytes(v: "body") + url: "http://localhost:8086/", + headers: {x:"a", y:"b", z:"c"}, + data: bytes(v: "body"), ) ``` @@ -55,19 +55,18 @@ The data body to include with the POST request. import "json" import "http" -lastReported = - from(bucket: "example-bucket") +lastReported = from(bucket: "example-bucket") |> range(start: -1m) |> filter(fn: (r) => r._measurement == "statuses") |> last() |> findColumn(fn: (key) => true, column: "_level") http.post( - url: "http://myawsomeurl.com/api/notify", - headers: { - Authorization: "Bearer mySuPerSecRetTokEn", - "Content-type": "application/json" - }, - data: json.encode(v: lastReported[0]) + url: "http://myawsomeurl.com/api/notify", + headers: { + Authorization: "Bearer mySuPerSecRetTokEn", + "Content-type": "application/json" + }, + data: json.encode(v: lastReported[0]), ) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/cardinality.md b/content/flux/v0.x/stdlib/influxdata/influxdb/cardinality.md index e5de731de..0745a70a3 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/cardinality.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/cardinality.md @@ -1,6 +1,6 @@ --- title: influxdb.cardinality() function -description: The `influxdb.cardinality()` function returns the series cardinality of data stored in InfluxDB Cloud. +description: The `influxdb.cardinality()` function returns the series cardinality of data stored in InfluxDB. 
menu: flux_0_x_ref: name: influxdb.cardinality @@ -17,31 +17,31 @@ related: introduced: 0.92.0 --- -The `influxdb.cardinality()` function returns the [series cardinality](/{{< latest "influxdb" "v2" >}}/reference/glossary#series-cardinality) of a specified dataset. +The `influxdb.cardinality()` function returns the [series cardinality](/{{< latest "influxdb" "v2" >}}/reference/glossary#series-cardinality) of a specified dataset in InfluxDB. ```js import "influxdata/influxdb" influxdb.cardinality( - bucket: "example-bucket", - org: "example-org", - host: "https://cloud2.influxdata.com", - token: "MySuP3rSecr3Tt0k3n", - start: -30d, - stop: now(), - predicate: (r) => true + bucket: "example-bucket", + org: "example-org", + host: "https://cloud2.influxdata.com", + token: "MySuP3rSecr3Tt0k3n", + start: -30d, + stop: now(), + predicate: (r) => true, ) // OR influxdb.cardinality( - bucketID: "00xXx0x00xXX0000", - orgID: "00xXx0x00xXX0000", - host: "https://cloud2.influxdata.com", - token: "MySuP3rSecr3Tt0k3n", - start: -30d, - stop: now(), - predicate: (r) => true + bucketID: "00xXx0x00xXX0000", + orgID: "00xXx0x00xXX0000", + host: "https://cloud2.influxdata.com", + token: "MySuP3rSecr3Tt0k3n", + start: -30d, + stop: now(), + predicate: (r) => true, ) ``` @@ -97,10 +97,7 @@ _Default is `(r) => true`_. ```js import "influxdata/influxdb" -influxdb.cardinality( - bucket: "example-bucket", - start: -1y -) +influxdb.cardinality(bucket: "example-bucket", start: -1y) ``` ##### Query series cardinality in a measurement @@ -108,9 +105,9 @@ influxdb.cardinality( import "influxdata/influxdb" influxdb.cardinality( - bucket: "example-bucket", - start: -1y, - predicate: (r) => r._measurement == "example-measurement" + bucket: "example-bucket", + start: -1y, + predicate: (r) => r._measurement == "example-measurement", ) ``` @@ -119,9 +116,9 @@ influxdb.cardinality( import "influxdata/influxdb" influxdb.cardinality( - bucket: "example-bucket", - start: -1y, - predicate: (r) => r.exampleTag == "foo" + bucket: "example-bucket", + start: -1y, + predicate: (r) => r.exampleTag == "foo", ) ``` @@ -129,12 +126,8 @@ influxdb.cardinality( ```js import "influxdata/influxdb" -bucketCardinality = (bucket) => - (influxdb.cardinality( - bucket: bucket, - start: time(v: 0), - ) - |> findColumn(fn: (key) => true, column: "_value"))[0] +bucketCardinality = (bucket) => (influxdb.cardinality(bucket: bucket, start: time(v: 0)) + |> findColumn(fn: (key) => true, column: "_value"))[0] buckets() |> filter(fn: (r) => not r.name =~ /^_/) diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/from.md b/content/flux/v0.x/stdlib/influxdata/influxdb/from.md index 8bb780a30..911b8aecb 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/from.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/from.md @@ -25,19 +25,19 @@ Each record in the table represents a single point in the series. ```js from( - bucket: "example-bucket", - host: "https://example.com", - org: "example-org", - token: "MySuP3rSecr3Tt0k3n" + bucket: "example-bucket", + host: "https://example.com", + org: "example-org", + token: "MySuP3rSecr3Tt0k3n", ) // OR from( - bucketID: "0261d8287f4d6000", - host: "https://example.com", - orgID: "867f3fcf1846f11f", - token: "MySuP3rSecr3Tt0k3n" + bucketID: "0261d8287f4d6000", + host: "https://example.com", + orgID: "867f3fcf1846f11f", + token: "MySuP3rSecr3Tt0k3n", ) ``` @@ -90,11 +90,86 @@ If authentication is _disabled_, provide an empty string (`""`). 
 If authentication is _enabled_, provide your InfluxDB username and password
 using the `<username>:<password>` syntax.
 
+## Pushdown optimizations
+
+Some transformations called after `from()` trigger performance optimizations called pushdowns.
+These optimizations are "pushed down" from Flux into the InfluxDB storage layer, where specialized storage code applies the transformation.
+Pushdowns happen automatically, but it is helpful to understand how these optimizations work so you can write more efficient Flux queries.
+
+Pushdowns require an unbroken, exclusive chain of transformations.
+If a `from()` call is stored in a variable that then feeds multiple pushdown
+chains, none of the pushdowns are applied. For example:
+
+```js
+// Pushdowns are NOT applied
+data = from(bucket: "example-bucket")
+    |> range(start: -1h)
+
+data |> filter(fn: (r) => r._measurement == "m0") |> yield(name: "m0")
+data |> filter(fn: (r) => r._measurement == "m1") |> yield(name: "m1")
+```
+
+To reuse code and still apply pushdowns, invoke `from()` in a function and pipe-forward the output of the function into subsequent pushdowns:
+
+```js
+// Pushdowns ARE applied
+data = () => from(bucket: "example-bucket")
+    |> range(start: -1h)
+
+data() |> filter(fn: (r) => r._measurement == "m0") |> yield(name: "m0")
+data() |> filter(fn: (r) => r._measurement == "m1") |> yield(name: "m1")
+```
+
+### Filter
+
+`filter()` transformations that compare `r._measurement`, `r._field`, `r._value`,
+or any tag value are pushed down to the storage layer.
+Comparisons that call functions are not pushed down.
+If the function produces a static value, evaluate the function outside of `filter()`.
+For example:
+
+```js
+import "strings"
+
+// filter() is NOT pushed down
+data
+    |> filter(fn: (r) => r.example == strings.joinStr(arr: ["foo", "bar"], v: ""))
+
+// filter() is pushed down
+exVar = strings.joinStr(arr: ["foo", "bar"], v: "")
+
+data
+    |> filter(fn: (r) => r.example == exVar)
+```
+
+Multiple consecutive `filter()` transformations that can be pushed down are
+merged into a single filter that gets pushed down.
+
+### Aggregates
+
+The following aggregate transformations are pushed down:
+
+- `min()`
+- `max()`
+- `sum()`
+- `count()`
+- `mean()` (except when used with `group()`)
+
+Aggregates are also pushed down if they are preceded by `group()`.
+The only exception is `mean()`, which cannot be pushed down to the storage layer when combined with `group()`.
+
+### Aggregate window
+
+Aggregates used with `aggregateWindow()` are pushed down.
+Aggregates pushed down with `aggregateWindow()` are not compatible with `group()`.
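+
+For example, the following sketch contrasts a windowed aggregate that is pushed
+down with one that is not. It assumes a bucket named `example-bucket` with a
+hypothetical measurement `m0` and tag `t0`, mirroring the examples below:
+
+```js
+// Pushed down: an unbroken chain from from() through aggregateWindow()
+// using a pushdown-capable aggregate
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> aggregateWindow(every: 5m, fn: max)
+
+// NOT pushed down: group() is not compatible with the aggregateWindow()
+// pushdown, so the windowed aggregate is computed in memory
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> group(columns: ["t0"])
+    |> aggregateWindow(every: 5m, fn: max)
+```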
+
 ## Examples
 
 - [Query InfluxDB using the bucket name](#query-using-the-bucket-name)
 - [Query InfluxDB using the bucket ID](#query-using-the-bucket-id)
 - [Query a remote InfluxDB Cloud instance](#query-a-remote-influxdb-cloud-instance)
+- [Utilize pushdowns in multiple queries](#utilize-pushdowns-in-multiple-queries)
+- [Query from the same bucket to multiple pushdowns](#query-from-the-same-bucket-to-multiple-pushdowns)
+- [Query from the same bucket to multiple transformations](#query-from-the-same-bucket-to-multiple-transformations)
 
 #### Query using the bucket name
 ```js
@@ -113,9 +188,66 @@ import "influxdata/influxdb/secrets"
 
 token = secrets.get(key: "INFLUXDB_CLOUD_TOKEN")
 
 from(
-  bucket: "example-bucket",
-  host: "https://cloud2.influxdata.com",
-  org: "example-org",
-  token: token
+    bucket: "example-bucket",
+    host: "https://cloud2.influxdata.com",
+    org: "example-org",
+    token: token,
 )
 ```
+
+### Utilize pushdowns in multiple queries
+
+```js
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> filter(fn: (r) => r._field == "f0")
+    |> yield(name: "filter-only")
+
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> filter(fn: (r) => r._field == "f0")
+    |> max()
+    |> yield(name: "max")
+
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> filter(fn: (r) => r._field == "f0")
+    |> group(columns: ["t0"])
+    |> max()
+    |> yield(name: "grouped-max")
+
+from(bucket: "example-bucket")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "m0")
+    |> filter(fn: (r) => r._field == "f0")
+    |> aggregateWindow(every: 5m, fn: max)
+    |> yield(name: "windowed-max")
+```
+
+### Query from the same bucket to multiple pushdowns
+
+```js
+// Use a function. Storing from() in a variable instead would stop
+// Flux from pushing down the operations.
+data = () => from(bucket: "example-bucket")
+    |> range(start: -1h)
+
+data() |> filter(fn: (r) => r._measurement == "m0")
+data() |> filter(fn: (r) => r._measurement == "m1")
+```
+
+### Query from the same bucket to multiple transformations
+
+```js
+// The pushdown chain ends after filter(), so storing the result in a
+// variable does not prevent any pushdowns. In this case, a variable is
+// more efficient than calling a function multiple times.
+data = from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "m0") + +data |> derivative() |> yield(name: "derivative") +data |> movingAverage(n: 5) |> yield(name: "movingAverage") +``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/check.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/check.md index 7b30efea7..3513ed00c 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/check.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/check.md @@ -23,12 +23,12 @@ The `monitor.check()` function checks input data and assigns a level import "influxdata/influxdb/monitor" monitor.check( - crit: (r) => r._value > 90.0, - warn: (r) => r._value > 80.0, - info: (r) => r._value > 60.0, - ok: (r) => r._value <= 20.0, - messageFn: (r) => "The current level is ${r._level}", - data: {} + crit: (r) => r._value > 90.0, + warn: (r) => r._value > 80.0, + info: (r) => r._value > 60.0, + ok: (r) => r._value <= 20.0, + messageFn: (r) => "The current level is ${r._level}", + data: {}, ) ``` @@ -71,27 +71,27 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "influxdata/influxdb/monitor" from(bucket: "telegraf") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "disk" and - r._field == "used_percent" - ) - |> group(columns: ["_measurement"]) - |> monitor.check( - crit: (r) => r._value > 90.0, - warn: (r) => r._value > 80.0, - info: (r) => r._value > 70.0, - ok: (r) => r._value <= 60.0, - messageFn: (r) => - if r._level == "crit" then "Critical alert!! Disk usage is at ${r._value}%!" - else if r._level == "warn" then "Warning! Disk usage is at ${r._value}%." - else if r._level == "info" then "Disk usage is at ${r._value}%." - else "Things are looking good.", - data: { - _check_name: "Disk Utilization (Used Percentage)", - _check_id: "disk_used_percent", - _type: "threshold", - tags: {} - } - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") + |> group(columns: ["_measurement"]) + |> monitor.check( + crit: (r) => r._value > 90.0, + warn: (r) => r._value > 80.0, + info: (r) => r._value > 70.0, + ok: (r) => r._value <= 60.0, + messageFn: (r) => if r._level == "crit" then + "Critical alert!! Disk usage is at ${r._value}%!" + else if r._level == "warn" then + "Warning! Disk usage is at ${r._value}%." + else if r._level == "info" then + "Disk usage is at ${r._value}%." 
+ else + "Things are looking good.", + data: { + _check_name: "Disk Utilization (Used Percentage)", + _check_id: "disk_used_percent", + _type: "threshold", + tags: {} + }, + ) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/deadman.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/deadman.md index 9db8c0c2d..5442fbf36 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/deadman.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/deadman.md @@ -45,9 +45,9 @@ import "influxdata/influxdb/monitor" import "experimental" from(bucket: "example-bucket") - |> range(start: -10m) - |> group(columns: ["host"]) - |> monitor.deadman(t: experimental.subDuration(d: 30s, from: now() )) + |> range(start: -10m) + |> group(columns: ["host"]) + |> monitor.deadman(t: experimental.subDuration(d: 5m, from: now())) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/from.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/from.md index c230cbc36..59c2d7dd9 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/from.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/from.md @@ -23,13 +23,12 @@ measurement in the `_monitoring` bucket. import "influxdata/influxdb/monitor" monitor.from( - start: -1h, - stop: now(), - fn: (r) => true + start: -1h, + stop: now(), + fn: (r) => true, ) ``` - ## Parameters ### start {data-type="duration, time, int"} @@ -60,8 +59,5 @@ Records that evaluate to _null_ or `false` are not included in output tables. ```js import "influxdata/influxdb/monitor" -monitor.from( - start: -1h, - fn: (r) => r._level == "crit" -) +monitor.from(start: -1h, fn: (r) => r._level == "crit") ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/logs.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/logs.md index e2b0b35c5..d99d61b97 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/logs.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/logs.md @@ -23,9 +23,9 @@ measurement in the `_monitoring` bucket. 
import "influxdata/influxdb/monitor" monitor.logs( - start: -1h, - stop: now(), - fn: (r) => true + start: -1h, + stop: now(), + fn: (r) => true, ) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/notify.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/notify.md index 8f75fd129..648562676 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/notify.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/notify.md @@ -22,8 +22,8 @@ in the `notifications` measurement in the [`_monitoring` bucket](/influxdb/cloud import "influxdata/influxdb/monitor" monitor.notify( - endpoint: endpoint, - data: {} + endpoint: endpoint, + data: {}, ) ``` @@ -66,25 +66,18 @@ import "slack" token = secrets.get(key: "SLACK_TOKEN") -endpoint = slack.endpoint(token: token)(mapFn: (r) => ({ - channel: "Alerts", - text: r._message, - color: "danger" - })) +endpoint = slack.endpoint(token: token)(mapFn: (r) => ({channel: "Alerts", text: r._message, color: "danger"})) notification_data = { - _notification_rule_id: "0000000000000001", - _notification_rule_name: "example-rule-name", - _notification_endpoint_id: "0000000000000002", - _notification_endpoint_name: "example-endpoint-name", + _notification_rule_id: "0000000000000001", + _notification_rule_name: "example-rule-name", + _notification_endpoint_id: "0000000000000002", + _notification_endpoint_name: "example-endpoint-name" } from(bucket: "system") - |> range(start: -5m) - |> monitor.notify( - endpoint: endpoint, - data: notification_data - ) + |> range(start: -5m) + |> monitor.notify(endpoint: endpoint, data: notification_data) ``` ### Send a notification to PagerDuty @@ -95,30 +88,29 @@ import "pagerduty" routingKey = secrets.get(key: "PAGERDUTY_ROUTING_KEY") -endpoint = pagerduty.endpoint()(mapFn: (r) => ({ - routingKey: routingKey, - client: "ExampleClient", - clientURL: "http://examplepagerdutyclient.com", - dedupkey: "ExampleDedupKey", - class: "cpu usage", - group: "app-stack", - severity: "ok", - eventAction: "trigger", - source: "monitoringtool:vendor:region", - timestamp: r._source_timestamp - })) +endpoint = pagerduty.endpoint()( + mapFn: (r) => ({ + routingKey: routingKey, + client: "ExampleClient", + clientURL: "http://examplepagerdutyclient.com", + dedupkey: "ExampleDedupKey", + class: "cpu usage", + group: "app-stack", + severity: "ok", + eventAction: "trigger", + source: "monitoringtool:vendor:region", + timestamp: r._source_timestamp, + }), +) notification_data = { - _notification_rule_id: "0000000000000001", - _notification_rule_name: "example-rule-name", - _notification_endpoint_id: "0000000000000002", - _notification_endpoint_name: "example-endpoint-name", + _notification_rule_id: "0000000000000001", + _notification_rule_name: "example-rule-name", + _notification_endpoint_id: "0000000000000002", + _notification_endpoint_name: "example-endpoint-name" } from(bucket: "system") - |> range(start: -5m) - |> monitor.notify( - endpoint: endpoint, - data: notification_data - ) + |> range(start: -5m) + |> monitor.notify(endpoint: endpoint, data: notification_data) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechanges.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechanges.md index aecc04168..11eb257a0 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechanges.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechanges.md @@ -23,8 +23,8 @@ a `_level` column and outputs records that change from `fromLevel` to `toLevel`. 
import "influxdata/influxdb/monitor" monitor.stateChanges( - fromLevel: "any", - toLevel: "any" + fromLevel: "any", + toLevel: "any", ) ``` @@ -50,7 +50,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "influxdata/influxdb/monitor" monitor.from(start: -1h) - |> monitor.stateChanges(toLevel: "crit") + |> monitor.stateChanges(toLevel: "crit") ``` {{< flex >}} @@ -72,12 +72,3 @@ monitor.from(start: -1h) | 2021-01-01T00:30:00Z | crit | {{% /flex-content %}} {{< /flex >}} - -## Function definition -```js -stateChanges = (fromLevel="any", toLevel="any", tables=<-) => { - return - if fromLevel == "any" and toLevel == "any" then tables |> stateChangesOnly() - else tables |> _stateChanges(fromLevel: fromLevel, toLevel: toLevel) -} -``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechangesonly.md b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechangesonly.md index 3965f626e..45f1cd20b 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechangesonly.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/monitor/statechangesonly.md @@ -38,7 +38,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "influxdata/influxdb/monitor" monitor.from(start: -1h) - |> monitor.stateChangesOnly() + |> monitor.stateChangesOnly() ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/sample/aligntonow.md b/content/flux/v0.x/stdlib/influxdata/influxdb/sample/aligntonow.md index 8ed7727bc..d56f21b34 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/sample/aligntonow.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/sample/aligntonow.md @@ -30,15 +30,11 @@ import "influxdata/influxdb/sample" option now = () => 2021-01-01T00:00:00Z data = sample.data(set: "birdMigration") - |> filter(fn: (r) => - r._field == "lon" and - r.s2_cell_id == "471ed2c" and - r.id == "91916A" - ) - |> tail(n: 3) + |> filter(fn: (r) => r._field == "lon" and r.s2_cell_id == "471ed2c" and r.id == "91916A") + |> tail(n: 3) data - |> sample.alignToNow() + |> sample.alignToNow() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/sample/data.md b/content/flux/v0.x/stdlib/influxdata/influxdb/sample/data.md index edd3d4263..1869be938 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/sample/data.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/sample/data.md @@ -21,9 +21,7 @@ The `sample.data()` function downloads and outputs an InfluxDB sample dataset. ```js import "influxdata/influxdb/sample" -sample.data( - set: "airSensor" -) +sample.data(set: "airSensor") ``` {{% note %}} diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/_index.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/_index.md index d803220ed..e01f71b3b 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/_index.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/_index.md @@ -17,11 +17,11 @@ flux/v0.x/tags: [functions, schema, package] cascade: introduced: 0.88.0 append: - block: cloud + block: warn content: | - #### Supported in the InfluxDB Cloud UI - The `schema` package can retrieve schema information from the InfluxDB - Cloud user interface (UI), but **not** from the [Flux REPL](/influxdb/cloud/tools/repl/). + #### Not supported in the Flux REPL + `schema` functions can retrieve schema information when executed within + the context of InfluxDB, but not from the [Flux REPL](/influxdb/cloud/tools/repl/). 
--- The Flux InfluxDB `schema` package provides functions for exploring your InfluxDB data schema. diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys.md index cf227af53..e6221ecdb 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys.md @@ -24,43 +24,64 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/schema" schema.fieldKeys( - bucket: "example-bucket", - predicate: (r) => true, - start: -30d + bucket: "example-bucket", + predicate: (r) => true, + start: -30d, ) ``` +{{% note %}} +#### Deleted fields +Fields [deleted from InfluxDB Cloud using the `/api/v2/delete` endpoint or the `influx delete` command](/influxdb/cloud/write-data/delete-data/) +**do not** appear in results. + +#### Expired fields +- **InfluxDB Cloud**: field keys associated with points outside of the bucket's + retention policy **may** appear in results up to an hour after expiring. +- **InfluxDB OSS**: field keys associated with points outside of the bucket's + retention policy **may** appear in results. + For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/). +{{% /note %}} + ## Parameters ### bucket {data-type="string"} -The bucket to list field keys from. +Bucket to list field keys from. ### predicate {data-type="function"} -The predicate function that filters field keys. +Predicate function that filters field keys. _Default is `(r) => true`._ ### start {data-type="duration, time"} -The oldest time to include in results. +Earliest time to include in results. _Default is `-30d`._ Relative start times are defined using negative durations. Negative durations are relative to now. Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types). +### stop {data-type="duration, time"} +Latest time to include in results. +_Default is `now()`._ + +The `stop` time is exclusive, meaning values with a time equal to stop time are +excluded from results. +Relative start times are defined using negative durations. +Negative durations are relative to `now()`. +Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types). 
+ ## Examples + +### Return all field keys in a bucket ```js import "influxdata/influxdb/schema" -schema.fieldKeys(bucket: "my-bucket") +schema.fieldKeys(bucket: "example-bucket") ``` -## Function definition +### Return all field keys in a bucket from a non-default time range ```js -package schema +import "influxdata/influxdb/schema" -fieldKeys = (bucket, predicate=(r) => true, start=-30d) => - tagValues(bucket: bucket, tag: "_field", predicate: predicate, start: start) +schema.fieldKeys(bucket: "example-bucket", start: -90d, stop: -60d) ``` - -_**Used functions:** -[schema.tagValues](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues/)_ diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldsascols.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldsascols.md index 203243658..f1c5b0b79 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldsascols.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldsascols.md @@ -39,9 +39,9 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "influxdata/influxdb/schema" from(bucket:"example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "cpu") - |> schema.fieldsAsCols() + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu") + |> schema.fieldsAsCols() ``` {{< expand-wrapper >}} @@ -74,16 +74,3 @@ _`_start` and `_stop` columns have been omitted._ {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -package schema - -fieldsAsCols = (tables=<-) => - tables - |> pivot( - rowKey:["_time"], - columnKey: ["_field"], - valueColumn: "_value" - ) -``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementfieldkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementfieldkeys.md index 6aea3e611..5191ea780 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementfieldkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementfieldkeys.md @@ -24,12 +24,25 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/schema" schema.measurementFieldKeys( - bucket: "example-bucket", - measurement: "example-measurement", - start: -30d + bucket: "example-bucket", + measurement: "example-measurement", + start: -30d, ) ``` +{{% note %}} +#### Deleted fields +Fields [explicitly deleted from InfluxDB Cloud](/influxdb/cloud/write-data/delete-data/) +**do not** appear in results. + +#### Expired fields +- **InfluxDB Cloud**: field keys associated with points outside of the bucket's + retention policy **may** appear in results up to an hour after expiring. +- **InfluxDB OSS**: field keys associated with points outside of the bucket's + retention policy **may** appear in results. + For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/). +{{% /note %}} + ## Parameters ### bucket {data-type="string"} @@ -39,30 +52,35 @@ Bucket to retrieve field keys from. Measurement to list field keys from. ### start {data-type="duration, time"} -Oldest time to include in results. +Earliest time to include in results. _Defaults to `-30d`._ Relative start times are defined using negative durations. Negative durations are relative to now. Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types). +### stop {data-type="duration, time"} +Latest time to include in results. 
+_Default is `now()`._
+
+The `stop` time is exclusive, meaning values with a time equal to the stop time are
+excluded from results.
+Relative stop times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute stop times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
 ## Examples
+
+### Return all field keys in a measurement
 ```js
 import "influxdata/influxdb/schema"
 
-schema.measurementFieldKeys(
-  bucket: "telegraf",
-  measurement: "cpu",
-)
+schema.measurementFieldKeys(bucket: "example-bucket", measurement: "example-measurement")
 ```
 
-## Function definition
+### Return all field keys in a measurement from a non-default time range
 ```js
-package schema
+import "influxdata/influxdb/schema"
 
-measurementFieldKeys = (bucket, measurement, start=-30d) =>
-  fieldKeys(bucket: bucket, predicate: (r) => r._measurement == measurement, start: start)
+schema.measurementFieldKeys(bucket: "example-bucket", measurement: "example-measurement", start: -90d, stop: -60d)
 ```
-
-_**Used functions:**
-[schema.fieldKeys](/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys/)_
diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurements.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurements.md
index de4142cf6..001988e50 100644
--- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurements.md
+++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurements.md
@@ -31,13 +31,36 @@ schema.measurements(bucket: "example-bucket")
 ### bucket {data-type="string"}
 Bucket to retrieve measurements from.
 
-## Function definition
-```js
-package schema
+### start {data-type="duration, time"}
+Earliest time to include in results.
+_Default is `-30d`._
 
-measurements = (bucket) =>
-  tagValues(bucket: bucket, tag: "_measurement")
+Relative start times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
+### stop {data-type="duration, time"}
+Latest time to include in results.
+_Default is `now()`._
+
+The `stop` time is exclusive, meaning values with a time equal to the stop time are
+excluded from results.
+Relative stop times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute stop times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
+## Examples
+
+### Return all measurements in a bucket
+```js
+import "influxdata/influxdb/schema"
+
+schema.measurements(bucket: "example-bucket")
 ```
 
-_**Used functions:**
-[schema.tagValues()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues)_
+### Return all measurements in a bucket from a non-default time range
+```js
+import "influxdata/influxdb/schema"
+
+schema.measurements(bucket: "example-bucket", start: -90d, stop: -60d)
+```
diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagkeys.md
index 977c5dd22..8010e2408 100644
--- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagkeys.md
+++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagkeys.md
@@ -24,11 +24,24 @@ The return value is always a single table with a single column, `_value`.
import "influxdata/influxdb/schema" schema.measurementTagKeys( - bucket: "example-bucket", - measurement: "cpu" + bucket: "example-bucket", + measurement: "cpu", ) ``` +{{% note %}} +#### Deleted tags +Tags [explicitly deleted from InfluxDB](/{{< latest "influxdb" >}}/write-data/delete-data/) +**do not** appear in results. + +#### Expired tags +- **InfluxDB Cloud**: tags associated with points outside of the bucket's + retention policy **may** appear in results up to an hour after expiring. +- **InfluxDB OSS**: tags associated with points outside of the bucket's + retention policy **may** appear in results. + For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/). +{{% /note %}} + ## Parameters ### bucket {data-type="string"} @@ -37,16 +50,36 @@ Bucket to return tag keys from for a specific measurement. ### measurement {data-type="string"} Measurement to return tag keys from. -## Function definition -```js -package schema +### start {data-type="duration, time"} +Earliest time to include in results. +_Default is `-30d`._ -measurementTagKeys = (bucket, measurement) => - tagKeys( - bucket: bucket, - predicate: (r) => r._measurement == measurement - ) +Relative start times are defined using negative durations. +Negative durations are relative to now. +Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types). + +### stop {data-type="duration, time"} +Latest time to include in results. +_Default is `now()`._ + +The `stop` time is exclusive, meaning values with a time equal to stop time are +excluded from results. +Relative start times are defined using negative durations. +Negative durations are relative to `now()`. +Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types). + +## Examples + +### Return all tag keys in a measurement +```js +import "influxdata/influxdb/schema" + +schema.measurementTagKeys(bucket: "example-bucket", measurement: "example-measurement") ``` -_**Used functions:** -[schema.tagKeys()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys)_ +### Return all tag keys in a measurement during a non-default time range +```js +import "influxdata/influxdb/schema" + +schema.measurementTagKeys(bucket: "example-bucket", measurement: "example-measurement", start: -90d, stop: -60d) +``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagvalues.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagvalues.md index b3f70992d..8482fd6b1 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagvalues.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/measurementtagvalues.md @@ -26,12 +26,25 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/schema" schema.measurementTagValues( - bucket: "example-bucket", - measurement: "cpu", - tag: "host" + bucket: "example-bucket", + measurement: "cpu", + tag: "host", ) ``` +{{% note %}} +#### Deleted tags +Tags [explicitly deleted from InfluxDB](/{{< latest "influxdb" >}}/write-data/delete-data/) +**do not** appear in results. + +#### Expired tags +- **InfluxDB Cloud**: tags associated with points outside of the bucket's + retention policy **may** appear in results up to an hour after expiring. +- **InfluxDB OSS**: tags associated with points outside of the bucket's + retention policy **may** appear in results. 
+  For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/).
+{{% /note %}}
+
 ## Parameters

 ### bucket {data-type="string"}
@@ -43,17 +56,42 @@ Measurement to return tag values from.
 ### tag {data-type="string"}
 Tag to return all unique values from.

-## Function definition
-```js
-package schema
+### start {data-type="duration, time"}
+Earliest time to include in results.
+_Default is `-30d`._

-measurementTagValues = (bucket, measurement, tag) =>
-  tagValues(
-    bucket: bucket,
-    tag: tag,
-    predicate: (r) => r._measurement == measurement
-  )
+Relative start times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
+### stop {data-type="duration, time"}
+Latest time to include in results.
+_Default is `now()`._
+
+The `stop` time is exclusive, meaning values with a time equal to the stop time
+are excluded from results.
+Relative stop times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute stop times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
+## Examples
+
+### Return all values for a tag in a measurement
+```js
+import "influxdata/influxdb/schema"
+
+schema.measurementTagValues(bucket: "example-bucket", measurement: "example-measurement", tag: "host")
 ```
-_**Used functions:**
-[schema.tagValues()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues)_
+### Return all tag values in a measurement during a non-default time range
+```js
+import "influxdata/influxdb/schema"
+
+schema.measurementTagValues(
+    bucket: "example-bucket",
+    measurement: "example-measurement",
+    tag: "host",
+    start: -90d,
+    stop: -60d,
+)
+```
diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys.md
index c141830d3..5535837b5 100644
--- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys.md
+++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys.md
@@ -24,12 +24,25 @@ The return value is always a single table with a single column, `_value`.
 import "influxdata/influxdb/schema"

 schema.tagKeys(
-  bucket: "example-bucket",
-  predicate: (r) => true,
-  start: -30d
+    bucket: "example-bucket",
+    predicate: (r) => true,
+    start: -30d,
 )
 ```

+{{% note %}}
+#### Deleted tags
+Tags [explicitly deleted from InfluxDB](/{{< latest "influxdb" >}}/write-data/delete-data/)
+**do not** appear in results.
+
+#### Expired tags
+- **InfluxDB Cloud**: tags associated with points outside of the bucket's
+  retention policy **may** appear in results up to an hour after expiring.
+- **InfluxDB OSS**: tags associated with points outside of the bucket's
+  retention policy **may** appear in results.
+  For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/).
+{{% /note %}}
+
 ## Parameters

 ### bucket {data-type="string"}
@@ -40,30 +53,35 @@ Predicate function that filters tag keys.
 _Default is `(r) => true`._

 ### start {data-type="duration, time"}
-Oldest time to include in results.
+Earliest time to include in results.
 _Default is `-30d`._

 Relative start times are defined using negative durations.
 Negative durations are relative to now.
 Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types).

+### stop {data-type="duration, time"}
+Latest time to include in results.
+_Default is `now()`._
+
+The `stop` time is exclusive, meaning values with a time equal to the stop time
+are excluded from results.
+Relative stop times are defined using negative durations.
+Negative durations are relative to `now()`.
+Absolute stop times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
 ## Examples
+
+### Return all tag keys in a bucket
 ```js
 import "influxdata/influxdb/schema"

-schema.tagKeys(bucket: "my-bucket")
+schema.tagKeys(bucket: "example-bucket")
 ```
-
-## Function definition
+### Return all tag keys in a bucket during a non-default time range
 ```js
-package schema
+import "influxdata/influxdb/schema"

-tagKeys = (bucket, predicate=(r) => true, start=-30d) =>
-  from(bucket: bucket)
-    |> range(start: start)
-    |> filter(fn: predicate)
-    |> keys()
-    |> keep(columns: ["_value"])
-    |> distinct()
+schema.tagKeys(bucket: "example-bucket", start: -90d, stop: -60d)
 ```
diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues.md b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues.md
index b3f119736..f062d30cb 100644
--- a/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues.md
+++ b/content/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues.md
@@ -24,13 +24,26 @@ The return value is always a single table with a single column, `_value`.
 import "influxdata/influxdb/schema"

 schema.tagValues(
-  bucket: "example-bucket",
-  tag: "host",
-  predicate: (r) => true,
-  start: -30d
+    bucket: "example-bucket",
+    tag: "host",
+    predicate: (r) => true,
+    start: -30d,
 )
 ```

+{{% note %}}
+#### Deleted tags
+Tags [explicitly deleted from InfluxDB](/{{< latest "influxdb" >}}/write-data/delete-data/)
+**do not** appear in results.
+
+#### Expired tags
+- **InfluxDB Cloud**: tags associated with points outside of the bucket's
+  retention policy **may** appear in results up to an hour after expiring.
+- **InfluxDB OSS**: tags associated with points outside of the bucket's
+  retention policy **may** appear in results.
+  For more information, see [Data retention in InfluxDB OSS](/{{< latest "influxdb" >}}/reference/internals/data-retention/).
+{{% /note %}}
+
 ## Parameters

 ### bucket {data-type="string"}
@@ -44,32 +57,35 @@ Predicate function that filters tag values.
 _Default is `(r) => true`._

 ### start {data-type="duration, time"}
-Oldest time to include in results.
+Earliest time to include in results.
 _Default is `-30d`._

 Relative start times are defined using negative durations.
-Negative durations are relative to now.
+Negative durations are relative to `now()`.
+Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+
+### stop {data-type="duration, time"}
+Latest time to include in results.
+_Default is `now()`._
+
+The `stop` time is exclusive, meaning values with a time equal to the stop time
+are excluded from results.
+Relative stop times are defined using negative durations.
+Negative durations are relative to `now()`.
-Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time-types).
+Absolute stop times are defined using [time values](/flux/v0.x/spec/types/#time-types).
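+
+For example, the following sketch (bucket and tag names are placeholders) uses
+absolute time values to query a fixed window:
+
+```js
+import "influxdata/influxdb/schema"
+
+// Absolute start and stop times are RFC3339 time literals.
+schema.tagValues(
+    bucket: "example-bucket",
+    tag: "host",
+    start: 2021-01-01T00:00:00Z,
+    stop: 2021-01-02T00:00:00Z,
+)
+```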
## Examples + +### Return all values for a tag in a bucket ```js import "influxdata/influxdb/schema" -schema.tagValues( - bucket: "my-bucket", - tag: "host", -) +schema.tagValues(bucket: "example-bucket", tag: "host") ``` -## Function definition +### Return all tag values in a bucket during a non-default time range ```js -package schema +import "influxdata/influxdb/schema" -tagValues = (bucket, tag, predicate=(r) => true, start=-30d) => - from(bucket: bucket) - |> range(start: start) - |> filter(fn: predicate) - |> keep(columns: [tag]) - |> group() - |> distinct(column: tag) +schema.tagValues(bucket: "example-bucket", tag: "host", start: -90d, stop: -60d) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/secrets/get.md b/content/flux/v0.x/stdlib/influxdata/influxdb/secrets/get.md index a7e16a37a..792dc5116 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/secrets/get.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/secrets/get.md @@ -40,8 +40,8 @@ username = secrets.get(key: "POSTGRES_USERNAME") password = secrets.get(key: "POSTGRES_PASSWORD") sql.from( - driverName: "postgres", - dataSourceName: "postgresql://${username}:${password}@localhost", - query:"SELECT * FROM example-table" + driverName: "postgres", + dataSourceName: "postgresql://${username}:${password}@localhost", + query:"SELECT * FROM example-table", ) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/tasks/lastsuccess.md b/content/flux/v0.x/stdlib/influxdata/influxdb/tasks/lastsuccess.md index 90b74ab24..b051208d4 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/tasks/lastsuccess.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/tasks/lastsuccess.md @@ -35,12 +35,9 @@ The default time value returned if the task has never successfully run. ```js import "influxdata/influxdb/tasks" -options task = { - name: "Example task", - every: 30m -} +option task = {name: "Example task", every: 30m} from(bucket: "example-bucket") - |> range(start: tasks.lastSuccess(orTime: -task.every)) - // ... + |> range(start: tasks.lastSuccess(orTime: -task.every)) + // ... ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/to.md b/content/flux/v0.x/stdlib/influxdata/influxdb/to.md index f17389492..54884f55d 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/to.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/to.md @@ -1,6 +1,8 @@ --- title: to() function -description: The `to()` function writes data to an InfluxDB v2.0 bucket. +description: > + `to()` writes data to an **InfluxDB Cloud or v2.x** bucket and outputs the + written data. aliases: - /flux/v0.x/stdlib/universe/to - /influxdb/v2.0/reference/flux/functions/outputs/to @@ -19,29 +21,30 @@ related: introduced: 0.7.0 --- -The `to()` function writes data to an **InfluxDB v2.0** bucket. +`to()` writes data to an **InfluxDB Cloud or v2.x** bucket and outputs the +written data. 
```js to( - bucket: "my-bucket", - org: "my-org", - host: "localhost:8086", - token: "mY5uP3rS3cRe7t0k3n", - timeColumn: "_time", - tagColumns: ["tag1", "tag2", "tag3"], - fieldFn: (r) => ({ r._field: r._value }) + bucket: "my-bucket", + org: "my-org", + host: "http://localhost:8086", + token: "mY5uP3rS3cRe7t0k3n", + timeColumn: "_time", + tagColumns: ["tag1", "tag2", "tag3"], + fieldFn: (r) => ({ r._field: r._value }), ) // OR to( - bucketID: "1234567890", - orgID: "0987654321", - host: "localhost:8086", - token: "mY5uP3rS3cRe7t0k3n", - timeColumn: "_time", - tagColumns: ["tag1", "tag2", "tag3"], - fieldFn: (r) => ({ r._field: r._value }) + bucketID: "1234567890", + orgID: "0987654321", + host: "http://localhost:8086", + token: "mY5uP3rS3cRe7t0k3n", + timeColumn: "_time", + tagColumns: ["tag1", "tag2", "tag3"], + fieldFn: (r) => ({ r._field: r._value }), ) ``` @@ -65,8 +68,7 @@ that includes, at a minimum, the following columns: _All other columns are written to InfluxDB as [tags](/{{< latest "influxdb" >}}/reference/key-concepts/data-elements/#tags)._ {{% note %}} -The `to()` function ignores rows with a null `_time` value and does not write -them to InfluxDB. +`to()` drops rows with a null `_time` value and does not write them to InfluxDB. {{% /note %}} ## Parameters @@ -134,6 +136,10 @@ To learn why, see [Match parameter names](/flux/v0.x/spec/data-model/#match-para ## Examples +- [Default to() operation](#default-to-operation) +- [Custom to() operation](#custom-to-operation) +- [Write to multiple buckets](#write-to-multiple-buckets) + ### Default to() operation Given the following table: @@ -144,11 +150,11 @@ Given the following table: | 0006 | 0000 | 0009 | "a" | "temp" | 99.3 | | 0007 | 0000 | 0009 | "a" | "temp" | 99.9 | -The default `to` operation: +The default `to()` operation: ```js -// ... -|> to(bucket:"my-bucket", org:"my-org") +data + |> to(bucket:"my-bucket", org:"my-org") ``` is equivalent to writing the above data using the following line protocol: @@ -161,7 +167,8 @@ _measurement=a temp=99.9 0007 ### Custom to() operation -The `to()` functions default operation can be overridden. For example, given the following table: +The default `to()` operation can be overridden. +For example, given the following table: | _time | _start | _stop | tag1 | tag2 | hum | temp | | ----- | ------ | ----- | ---- | ---- | ---- | ----- | @@ -172,13 +179,13 @@ The `to()` functions default operation can be overridden. For example, given the The operation: ```js -// ... -|> to( - bucket:"my-bucket", - org:"my-org", - tagColumns:["tag1"], - fieldFn: (r) => ({"hum": r.hum, "temp": r.temp}) -) +data + |> to( + bucket:"my-bucket", + org:"my-org", + tagColumns:["tag1"], + fieldFn: (r) => ({"hum": r.hum, "temp": r.temp}), + ) ``` is equivalent to writing the above data using the following line protocol: @@ -188,3 +195,21 @@ _tag1=a hum=55.3,temp=100.1 0005 _tag1=a hum=55.4,temp=99.3 0006 _tag1=a hum=55.5,temp=99.9 0007 ``` + +### Write to multiple buckets +The example below does the following: + +1. Writes data to `bucket1` and returns the data as it is written. +2. Ungroups the returned data. +3. Counts the number of rows. +4. Maps columns required to write to InfluxDB. +5. Writes the modified data to `bucket2`. 
+ +```js +data + |> to(bucket: "bucket1") + |> group() + |> count() + |> map(fn: (r) => ({r with _time: now(), _measurement: "writeStats", _field: "numPointsWritten"})) + |> to(bucket: "bucket2") +``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldkeys.md index 3abcebc33..f8b3257b0 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldkeys.md @@ -30,9 +30,9 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/v1" v1.fieldKeys( - bucket: "example-bucket", - predicate: (r) => true, - start: -30d + bucket: "example-bucket", + predicate: (r) => true, + start: -30d, ) ``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldsascols.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldsascols.md index ef113cca6..000c8d371 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldsascols.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/fieldsascols.md @@ -45,9 +45,9 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "influxdata/influxdb/v1" from(bucket:"example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "cpu") - |> v1.fieldsAsCols() + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu") + |> v1.fieldsAsCols() ``` {{< expand-wrapper >}} @@ -80,16 +80,3 @@ _`_start` and `_stop` columns have been omitted._ {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -package v1 - -fieldsAsCols = (tables=<-) => - tables - |> pivot( - rowKey:["_time"], - columnKey: ["_field"], - valueColumn: "_value" - ) -``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementfieldkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementfieldkeys.md index 6e5ed8df1..8a91a7a82 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementfieldkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementfieldkeys.md @@ -30,9 +30,9 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/v1" v1.measurementFieldKeys( - bucket: "example-bucket", - measurement: "example-measurement", - start: -30d + bucket: "example-bucket", + measurement: "example-measurement", + start: -30d, ) ``` @@ -56,19 +56,5 @@ Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time ```js import "influxdata/influxdb/v1" -v1.measurementFieldKeys( - bucket: "telegraf", - measurement: "cpu", -) +v1.measurementFieldKeys(bucket: "telegraf", measurement: "cpu") ``` - -## Function definition -```js -package v1 - -measurementFieldKeys = (bucket, measurement, start=-30d) => - fieldKeys(bucket: bucket, predicate: (r) => r._measurement == measurement, start: start) -``` - -_**Used functions:** -[v1.fieldKeys](/flux/v0.x/stdlib/influxdata/influxdb/schema/fieldkeys/)_ diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurements.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurements.md index 07f2bfcd1..318cffce0 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurements.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurements.md @@ -36,14 +36,3 @@ v1.measurements(bucket: "example-bucket") ### bucket {data-type="string"} Bucket to retrieve measurements from. 
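+
+## Examples
+
+### Return all measurements in a bucket
+
+A minimal usage sketch, repeating the placeholder bucket from the signature above:
+
+```js
+import "influxdata/influxdb/v1"
+
+v1.measurements(bucket: "example-bucket")
+```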
- -## Function definition -```js -package v1 - -measurements = (bucket) => - tagValues(bucket: bucket, tag: "_measurement") -``` - -_**Used functions:** -[v1.tagValues()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues)_ diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagkeys.md index 3235737a9..1de146d7c 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagkeys.md @@ -30,8 +30,8 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/v1" v1.measurementTagKeys( - bucket: "example-bucket", - measurement: "cpu" + bucket: "example-bucket", + measurement: "cpu", ) ``` @@ -42,17 +42,3 @@ Bucket to return tag keys from for a specific measurement. ### measurement {data-type="string"} Measurement to return tag keys from. - -## Function definition -```js -package v1 - -measurementTagKeys = (bucket, measurement) => - tagKeys( - bucket: bucket, - predicate: (r) => r._measurement == measurement - ) -``` - -_**Used functions:** -[v1.tagKeys()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagkeys)_ diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagvalues.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagvalues.md index d4d2c491b..f5dc090ce 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagvalues.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/measurementtagvalues.md @@ -30,9 +30,9 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/v1" v1.measurementTagValues( - bucket: "example-bucket", - measurement: "cpu", - tag: "host" + bucket: "example-bucket", + measurement: "cpu", + tag: "host", ) ``` @@ -46,18 +46,3 @@ Measurement to return tag values from. ### tag {data-type="string"} Tag to return all unique values from. - -## Function definition -```js -package v1 - -measurementTagValues = (bucket, measurement, tag) => - tagValues( - bucket: bucket, - tag: tag, - predicate: (r) => r._measurement == measurement - ) -``` - -_**Used functions:** -[v1.tagValues()](/flux/v0.x/stdlib/influxdata/influxdb/schema/tagvalues)_ diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagkeys.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagkeys.md index f7da29d5a..a1f3c5a1c 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagkeys.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagkeys.md @@ -30,9 +30,9 @@ The return value is always a single table with a single column, `_value`. 
import "influxdata/influxdb/v1" v1.tagKeys( - bucket: "example-bucket", - predicate: (r) => true, - start: -30d + bucket: "example-bucket", + predicate: (r) => true, + start: -30d, ) ``` @@ -59,17 +59,3 @@ import "influxdata/influxdb/v1" v1.tagKeys(bucket: "my-bucket") ``` - - -## Function definition -```js -package v1 - -tagKeys = (bucket, predicate=(r) => true, start=-30d) => - from(bucket: bucket) - |> range(start: start) - |> filter(fn: predicate) - |> keys() - |> keep(columns: ["_value"]) - |> distinct() -``` diff --git a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagvalues.md b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagvalues.md index d1aa317c8..2acad48ac 100644 --- a/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagvalues.md +++ b/content/flux/v0.x/stdlib/influxdata/influxdb/v1/tagvalues.md @@ -30,10 +30,10 @@ The return value is always a single table with a single column, `_value`. import "influxdata/influxdb/v1" v1.tagValues( - bucket: "example-bucket", - tag: "host", - predicate: (r) => true, - start: -30d + bucket: "example-bucket", + tag: "host", + predicate: (r) => true, + start: -30d, ) ``` @@ -61,21 +61,5 @@ Absolute start times are defined using [time values](/flux/v0.x/spec/types/#time ```js import "influxdata/influxdb/v1" -v1.tagValues( - bucket: "my-bucket", - tag: "host", -) -``` - -## Function definition -```js -package v1 - -tagValues = (bucket, tag, predicate=(r) => true, start=-30d) => - from(bucket: bucket) - |> range(start: start) - |> filter(fn: predicate) - |> group(columns: [tag]) - |> distinct(column: tag) - |> keep(columns: ["_value"]) +v1.tagValues(bucket: "my-bucket", tag: "host") ``` diff --git a/content/flux/v0.x/stdlib/interpolate/linear.md b/content/flux/v0.x/stdlib/interpolate/linear.md index 00caa8a72..3e36fa0d5 100644 --- a/content/flux/v0.x/stdlib/interpolate/linear.md +++ b/content/flux/v0.x/stdlib/interpolate/linear.md @@ -46,7 +46,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "interpolate" data - |> interpolate.linear(every: 1d) + |> interpolate.linear(every: 1d) ``` {{< flex >}} diff --git a/content/flux/v0.x/stdlib/pagerduty/actionfromseverity.md b/content/flux/v0.x/stdlib/pagerduty/actionfromseverity.md index d0f28d662..a999b8bdc 100644 --- a/content/flux/v0.x/stdlib/pagerduty/actionfromseverity.md +++ b/content/flux/v0.x/stdlib/pagerduty/actionfromseverity.md @@ -21,9 +21,7 @@ All other severities convert to `trigger`. ```js import "pagerduty" -pagerduty.actionFromSeverity( - severity: "ok" -) +pagerduty.actionFromSeverity(severity: "ok") // Returns "resolve" ``` @@ -32,12 +30,3 @@ pagerduty.actionFromSeverity( ### severity {data-type="float"} The severity to convert to a PagerDuty action. - -## Function definition -```js -import "strings" - -actionFromSeverity = (severity) => - if strings.toLower(v: severity) == "ok" then "resolve" - else "trigger" -``` diff --git a/content/flux/v0.x/stdlib/pagerduty/dedupkey.md b/content/flux/v0.x/stdlib/pagerduty/dedupkey.md index f4f1b5bb0..ee80c2151 100644 --- a/content/flux/v0.x/stdlib/pagerduty/dedupkey.md +++ b/content/flux/v0.x/stdlib/pagerduty/dedupkey.md @@ -24,7 +24,7 @@ the group key to create a unique deduplication key for each input table. import "pagerduty" pagerduty.dedupKey( - exclude: ["_start", "_stop", "_level"] + exclude: ["_start", "_stop", "_level"], ) ``` @@ -41,9 +41,9 @@ Default is `["_start", "_stop", "_level"]`. 
import "pagerduty" from(bucket: "default") - |> range(start: -5m) - |> filter(fn: (r) => r._measurement == "mem") - |> pagerduty.dedupKey() + |> range(start: -5m) + |> filter(fn: (r) => r._measurement == "mem") + |> pagerduty.dedupKey() ``` {{% expand "View function updates" %}} diff --git a/content/flux/v0.x/stdlib/pagerduty/endpoint.md b/content/flux/v0.x/stdlib/pagerduty/endpoint.md index 398c2e2c6..55431b79b 100644 --- a/content/flux/v0.x/stdlib/pagerduty/endpoint.md +++ b/content/flux/v0.x/stdlib/pagerduty/endpoint.md @@ -22,7 +22,7 @@ a message to PagerDuty that includes output data. import "pagerduty" pagerduty.endpoint( - url: "https://events.pagerduty.com/v2/enqueue" + url: "https://events.pagerduty.com/v2/enqueue" ) ``` @@ -71,27 +71,25 @@ routingKey = secrets.get(key: "PAGERDUTY_ROUTING_KEY") toPagerDuty = pagerduty.endpoint() crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_statuses - |> toPagerDuty(mapFn: (r) => ({ r with - routingKey: routingKey, - client: r.client, - clientURL: r.clientURL, - class: r.class, - eventAction: r.eventAction, - group: r.group, - severity: r.severity, - component: r.component, - source: r.source, - summary: r.summary, - component: r.component, - timestamp: r._time, - customDetails: { - "ping time": lastReported.ping, - load: lastReported.load - } - }) - )() + |> toPagerDuty( + mapFn: (r) => ({r with + routingKey: routingKey, + client: r.client, + clientURL: r.clientURL, + class: r.class, + eventAction: r.eventAction, + group: r.group, + severity: r.severity, + component: r.component, + source: r.source, + summary: r.summary, + component: r.component, + timestamp: r._time, + customDetails: {"ping time": lastReported.ping, load: lastReported.load}, + }), + )() ``` diff --git a/content/flux/v0.x/stdlib/pagerduty/sendevent.md b/content/flux/v0.x/stdlib/pagerduty/sendevent.md index c4f1b8b71..b1edd1ea4 100644 --- a/content/flux/v0.x/stdlib/pagerduty/sendevent.md +++ b/content/flux/v0.x/stdlib/pagerduty/sendevent.md @@ -20,20 +20,20 @@ The `pagerduty.sendEvent()` function sends an event to PagerDuty. import "pagerduty" pagerduty.sendEvent( - pagerdutyURL: "https://events.pagerduty.com/v2/enqueue", - routingKey: "ExampleRoutingKey", - client: "ExampleClient", - clientURL: "http://examplepagerdutyclient.com", - dedupkey: "ExampleDedupKey", - class: "cpu usage", - group: "app-stack", - severity: "ok", - eventAction: "trigger", - source: "monitoringtool:vendor:region", - component: "example-component", - summary: "This is an example summary.", - timestamp: "2016-07-17T08:42:58.315+0000", - customDetails: {exampleDetail: "Details"} + pagerdutyURL: "https://events.pagerduty.com/v2/enqueue", + routingKey: "ExampleRoutingKey", + client: "ExampleClient", + clientURL: "http://examplepagerdutyclient.com", + dedupKey: "ExampleDedupKey", + class: "cpu usage", + group: "app-stack", + severity: "ok", + eventAction: "trigger", + source: "monitoringtool:vendor:region", + component: "example-component", + summary: "This is an example summary.", + timestamp: "2016-07-17T08:42:58.315+0000", + customDetails: {exampleDetail: "Details"}, ) ``` @@ -52,14 +52,14 @@ The name of the client sending the alert. ### clientURL {data-type="string"} The URL of the client sending the alert. 
-### dedupkey {data-type="string"}
+### dedupKey {data-type="string"}
 A per-alert ID that acts as deduplication key and allows you to acknowledge or
 change the severity of previous messages.
 Supports a maximum of 255 characters.

 {{% note %}}
 When using [`pagerduty.endpoint()`](/flux/v0.x/stdlib/pagerduty/endpoint/)
-to send data to PagerDuty, the function uses the [`pagerduty.dedupKey()` function](/flux/v0.x/stdlib/pagerduty/dedupkey/) to populate the `dedupkey` parameter.
+to send data to PagerDuty, the function uses the [`pagerduty.dedupKey()` function](/flux/v0.x/stdlib/pagerduty/dedupkey/) to populate the `dedupKey` parameter.
 {{% /note %}}

 ### class {data-type="string"}
@@ -115,29 +115,25 @@ Additional event details.
 import "pagerduty"
 import "influxdata/influxdb/secrets"

-lastReported =
-  from(bucket: "example-bucket")
+lastReported = from(bucket: "example-bucket")
     |> range(start: -1m)
     |> filter(fn: (r) => r._measurement == "statuses")
     |> last()
     |> findRecord(fn: (key) => true, idx: 0)

 pagerduty.sendEvent(
-  routingKey: "example-routing-key",
-  client: lastReported.client,
-  clientURL: lastReported.clientURL,
-  class: lastReported.class,
-  eventAction: lastReported.eventAction,
-  group: lastReported.group,
-  severity: lastReported.severity,
-  component: lastReported.component,
-  source: lastReported.source,
-  component: lastReported.component,
-  summary: lastReported.summary,
-  timestamp: lastReported._time,
-  customDetails: {
-    "ping time": lastReported.ping,
-    load: lastReported.load
-  }
+    routingKey: "example-routing-key",
+    client: lastReported.client,
+    clientURL: lastReported.clientURL,
+    class: lastReported.class,
+    eventAction: lastReported.eventAction,
+    group: lastReported.group,
+    severity: lastReported.severity,
+    component: lastReported.component,
+    source: lastReported.source,
+    summary: lastReported.summary,
+    timestamp: lastReported._time,
+    customDetails: {"ping time": lastReported.ping, load: lastReported.load},
 )
 ```
\ No newline at end of file
diff --git a/content/flux/v0.x/stdlib/pagerduty/severityfromlevel.md b/content/flux/v0.x/stdlib/pagerduty/severityfromlevel.md
index ab6fe9e48..f2d14e932 100644
--- a/content/flux/v0.x/stdlib/pagerduty/severityfromlevel.md
+++ b/content/flux/v0.x/stdlib/pagerduty/severityfromlevel.md
@@ -21,9 +21,7 @@ a PagerDuty severity.
 ```js
 import "pagerduty"

-pagerduty.severityFromLevel(
-  level: "crit"
-)
+pagerduty.severityFromLevel(level: "crit")
 // Returns "critical"
 ```

@@ -39,18 +37,3 @@ pagerduty.severityFromLevel(

 ### level {data-type="string"}
 The InfluxDB status level to convert to a PagerDuty severity.
-
-## Function definition
-```js
-import "strings"
-
-severityFromLevel = (level) => {
-  lvl = strings.toLower(v:level)
-  sev = if lvl == "warn" then "warning"
-    else if lvl == "crit" then "critical"
-    else if lvl == "info" then "info"
-    else if lvl == "ok" then "info"
-    else "error"
-  return sev
-}
-```
diff --git a/content/flux/v0.x/stdlib/pushbullet/endpoint.md b/content/flux/v0.x/stdlib/pushbullet/endpoint.md
index 5d4cf2671..dcb4fd55c 100644
--- a/content/flux/v0.x/stdlib/pushbullet/endpoint.md
+++ b/content/flux/v0.x/stdlib/pushbullet/endpoint.md
@@ -22,8 +22,8 @@ and sends a notification of type `note`.
import "pushbullet" pushbullet.endpoint( - url: "https://api.pushbullet.com/v2/pushes", - token: "" + url: "https://api.pushbullet.com/v2/pushes", + token: "", ) ``` @@ -62,17 +62,11 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "PUSHBULLET_TOKEN") e = pushbullet.endpoint(token: token) -lastReported = - from(bucket: "example-bucket") +lastReported = from(bucket: "example-bucket") |> range(start: -10m) |> filter(fn: (r) => r._measurement == "statuses") |> last() lastReported - |> e(mapFn: (r) => ({ - r with - title: r.title, - text: "${string(v: r._time)}: ${r.status}." - }) - )() + |> e(mapFn: (r) => ({r with title: r.title, text: "${string(v: r._time)}: ${r.status}."}))() ``` diff --git a/content/flux/v0.x/stdlib/pushbullet/pushdata.md b/content/flux/v0.x/stdlib/pushbullet/pushdata.md index 7acfb110e..c22ac7c32 100644 --- a/content/flux/v0.x/stdlib/pushbullet/pushdata.md +++ b/content/flux/v0.x/stdlib/pushbullet/pushdata.md @@ -20,15 +20,15 @@ The `pushbullet.pushData()` function sends a push notification to the import "pushbullet" pushbullet.pushData( - url: "https://api.pushbullet.com/v2/pushes", - token: "", - data: { - "type": "link", - "title": "This is a notification!", - "body": "This notification came from Flux.", - "url": "http://example.com" - } -) + url: "https://api.pushbullet.com/v2/pushes", + token: "", + data: { + "type": "link", + "title": "This is a notification!", + "body": "This notification came from Flux.", + "url": "http://example.com" + }, + ) ``` ## Parameters @@ -56,8 +56,7 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "PUSHBULLET_TOKEN") -lastReported = - from(bucket: "example-bucket") +lastReported = from(bucket: "example-bucket") |> range(start: -1m) |> filter(fn: (r) => r._measurement == "statuses") |> last() @@ -65,12 +64,12 @@ lastReported = |> getRecord(idx: 0) pushbullet.pushData( - token: token, - data: { - "type": "link", - "title": "Last reported status", - "body": "${lastReported._time}: ${lastReported.status}." - "url": "${lastReported.statusURL}" - } + token: token, + data: { + "type": "link", + "title": "Last reported status", + "body": "${lastReported._time}: ${lastReported.status}.", + "url": "${lastReported.statusURL}", + } ) ``` diff --git a/content/flux/v0.x/stdlib/pushbullet/pushnote.md b/content/flux/v0.x/stdlib/pushbullet/pushnote.md index 4ae7ce18f..4aa1bd695 100644 --- a/content/flux/v0.x/stdlib/pushbullet/pushnote.md +++ b/content/flux/v0.x/stdlib/pushbullet/pushnote.md @@ -21,10 +21,10 @@ to the Pushbullet API. import "pushbullet" pushbullet.pushNote( - url: "https://api.pushbullet.com/v2/pushes", - token: "", - title: "This is a push notification!", - text: "This push notification came from Flux." + url: "https://api.pushbullet.com/v2/pushes", + token: "", + title: "This is a push notification!", + text: "This push notification came from Flux.", ) ``` @@ -56,8 +56,7 @@ import "influxdata/influxdb/secrets" token = secrets.get(key: "PUSHBULLET_TOKEN") -lastReported = - from(bucket: "example-bucket") +lastReported = from(bucket: "example-bucket") |> range(start: -1m) |> filter(fn: (r) => r._measurement == "statuses") |> last() @@ -65,8 +64,8 @@ lastReported = |> getRecord(idx: 0) pushbullet.pushNote( - token: token, - title: "Last reported status", - text: "${lastReported._time}: ${lastReported.status}." 
+    token: token,
+    title: "Last reported status",
+    text: "${lastReported._time}: ${lastReported.status}.",
 )
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/compile.md b/content/flux/v0.x/stdlib/regexp/compile.md
index 70ba88852..d18ff8581 100644
--- a/content/flux/v0.x/stdlib/regexp/compile.md
+++ b/content/flux/v0.x/stdlib/regexp/compile.md
@@ -40,14 +40,9 @@ The string value to parse into a regular expression.
 import "regexp"

 data
-  |> map(fn: (r) => ({
-      r with
-      regexStr: r.regexStr,
-      _value: r._value,
-      firstRegexMatch: findString(
-        r: regexp.compile(v: regexStr),
-        v: r._value
-      )
-    })
-  )
+    |> map(fn: (r) => ({r with
+        regexStr: r.regexStr,
+        _value: r._value,
+        firstRegexMatch: regexp.findString(r: regexp.compile(v: r.regexStr), v: r._value)
+    }))
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/findstring.md b/content/flux/v0.x/stdlib/regexp/findstring.md
index 8a81d371f..048f75ee3 100644
--- a/content/flux/v0.x/stdlib/regexp/findstring.md
+++ b/content/flux/v0.x/stdlib/regexp/findstring.md
@@ -43,11 +43,10 @@ The string value to search.
 import "regexp"

 data
-  |> map(fn: (r) => ({
-      r with
-      message: r.message,
-      regexp: r.regexp,
-      match: regexp.findString(r: r.regexp, v: r.message)
-    })
-  )
+    |> map(fn: (r) => ({r with
+        message: r.message,
+        regexp: r.regexp,
+        match: regexp.findString(r: r.regexp, v: r.message)
+        })
+    )
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/findstringindex.md b/content/flux/v0.x/stdlib/regexp/findstringindex.md
index 69deb8e75..f1f6d2444 100644
--- a/content/flux/v0.x/stdlib/regexp/findstringindex.md
+++ b/content/flux/v0.x/stdlib/regexp/findstringindex.md
@@ -46,14 +46,13 @@ The string value to search.
 import "regexp"

 data
-  |> map(fn: (r) => ({
-      r with
-      regexStr: r.regexStr,
-      _value: r._value,
-      matchIndex: regexp.findStringIndex(
-        r: regexp.compile(r.regexStr),
-        v: r._value
-      )
-    })
-  )
+    |> map(fn: (r) => ({r with
+        regexStr: r.regexStr,
+        _value: r._value,
+        matchIndex: regexp.findStringIndex(
+            r: regexp.compile(v: r.regexStr),
+            v: r._value
+        )
+        })
+    )
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/getstring.md b/content/flux/v0.x/stdlib/regexp/getstring.md
index 5fddac744..a4ca95fab 100644
--- a/content/flux/v0.x/stdlib/regexp/getstring.md
+++ b/content/flux/v0.x/stdlib/regexp/getstring.md
@@ -37,13 +37,6 @@ The regular expression object to convert to a string.
 ###### Convert regular expressions into strings in each row
 ```js
 import "regexp"

 data
-  |> map(fn: (r) => ({
-      r with
-      regex: r.regex,
-      regexStr: regexp.getString(r: r.regex)
-    })
-  )
+    |> map(fn: (r) => ({r with regex: r.regex, regexStr: regexp.getString(r: r.regex)}))
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/matchregexpstring.md b/content/flux/v0.x/stdlib/regexp/matchregexpstring.md
index f8295e538..cb359a6df 100644
--- a/content/flux/v0.x/stdlib/regexp/matchregexpstring.md
+++ b/content/flux/v0.x/stdlib/regexp/matchregexpstring.md
@@ -43,10 +43,5 @@ The string value to search.
 import "regexp"

 data
-  |> filter(fn: (r) =>
-    regexp.matchRegexpString(
-      r: /Alert\:/,
-      v: r.message
-    )
-  )
+    |> filter(fn: (r) => regexp.matchRegexpString(r: /Alert:/, v: r.message))
 ```
diff --git a/content/flux/v0.x/stdlib/regexp/quotemeta.md b/content/flux/v0.x/stdlib/regexp/quotemeta.md
index b8c894076..bde846d1a 100644
--- a/content/flux/v0.x/stdlib/regexp/quotemeta.md
+++ b/content/flux/v0.x/stdlib/regexp/quotemeta.md
@@ -38,10 +38,5 @@ The string that contains regular expression metacharacters to escape.
import "regexp" data - |> map(fn: (r) => ({ - r with - notes: r.notes, - notes_escaped: regexp.quoteMeta(v: r.notes) - }) - ) + |> map(fn: (r) => ({r with notes: r.notes, notes_escaped: regexp.quoteMeta(v: r.notes)})) ``` diff --git a/content/flux/v0.x/stdlib/regexp/replaceallstring.md b/content/flux/v0.x/stdlib/regexp/replaceallstring.md index e61c57377..8cea33325 100644 --- a/content/flux/v0.x/stdlib/regexp/replaceallstring.md +++ b/content/flux/v0.x/stdlib/regexp/replaceallstring.md @@ -46,13 +46,12 @@ The replacement for matches to `r`. import "regexp" data - |> map(fn: (r) => ({ - r with - message: r.message, - updated_message: regexp.replaceAllString( - r: /cat|bird|ferret/, - v: r.message, - t: "dog" - ) - })) + |> map(fn: (r) => ({r with + message: r.message, + updated_message: regexp.replaceAllString( + r: /cat|bird|ferret/, + v: r.message, + t: "dog", + ) + })) ``` diff --git a/content/flux/v0.x/stdlib/sampledata/bool.md b/content/flux/v0.x/stdlib/sampledata/bool.md index e0eb23c6a..c759e4a6b 100644 --- a/content/flux/v0.x/stdlib/sampledata/bool.md +++ b/content/flux/v0.x/stdlib/sampledata/bool.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.bool( - includeNull: false -) +sampledata.bool(includeNull: false) ``` ## Parameters @@ -39,6 +37,7 @@ import "sampledata" sampledata.bool() ``` + ##### Output tables {{% flux/sample "bool" %}} @@ -50,5 +49,6 @@ import "sampledata" sampledata.bool(includeNull: true) ``` + ##### Output tables {{% flux/sample "bool" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/sampledata/float.md b/content/flux/v0.x/stdlib/sampledata/float.md index 2d3a81360..cfaa2a4ad 100644 --- a/content/flux/v0.x/stdlib/sampledata/float.md +++ b/content/flux/v0.x/stdlib/sampledata/float.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.float( - includeNull: false -) +sampledata.float(includeNull: false) ``` ## Parameters @@ -39,6 +37,7 @@ import "sampledata" sampledata.float() ``` + ##### Output tables {{% flux/sample "float" %}} @@ -50,5 +49,6 @@ import "sampledata" sampledata.float(includeNull: true) ``` + ##### Output tables {{% flux/sample "float" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/sampledata/int.md b/content/flux/v0.x/stdlib/sampledata/int.md index a91e1e1d6..bf27bd578 100644 --- a/content/flux/v0.x/stdlib/sampledata/int.md +++ b/content/flux/v0.x/stdlib/sampledata/int.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.int( - includeNull: false -) +sampledata.int(includeNull: false) ``` ## Parameters @@ -39,6 +37,7 @@ import "sampledata" sampledata.int() ``` + ##### Output tables {{% flux/sample "int" %}} @@ -50,5 +49,6 @@ import "sampledata" sampledata.int(includeNull: true) ``` + ##### Output tables {{% flux/sample "int" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/sampledata/numericbool.md b/content/flux/v0.x/stdlib/sampledata/numericbool.md index 5cc0a5ae6..96fcbf76a 100644 --- a/content/flux/v0.x/stdlib/sampledata/numericbool.md +++ b/content/flux/v0.x/stdlib/sampledata/numericbool.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.numericBool( - includeNull: false -) +sampledata.numericBool(includeNull: false) ``` ## Parameters @@ -39,6 +37,7 @@ import "sampledata" sampledata.numericBool() ``` + ##### Output tables {{% flux/sample "numericBool" %}} @@ -50,5 +49,6 @@ import 
"sampledata" sampledata.numericBool(includeNull: true) ``` + ##### Output tables {{% flux/sample "numericBool" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/sampledata/string.md b/content/flux/v0.x/stdlib/sampledata/string.md index 434db3fcd..3f96ad87b 100644 --- a/content/flux/v0.x/stdlib/sampledata/string.md +++ b/content/flux/v0.x/stdlib/sampledata/string.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.string( - includeNull: false -) +sampledata.string(includeNull: false) ``` ## Parameters @@ -39,6 +37,7 @@ import "sampledata" sampledata.string() ``` + ##### Output tables {{% flux/sample "string" %}} @@ -50,5 +49,6 @@ import "sampledata" sampledata.string(includeNull: true) ``` + ##### Output tables {{% flux/sample "string" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/sampledata/uint.md b/content/flux/v0.x/stdlib/sampledata/uint.md index 8fb780244..ff0c92128 100644 --- a/content/flux/v0.x/stdlib/sampledata/uint.md +++ b/content/flux/v0.x/stdlib/sampledata/uint.md @@ -15,9 +15,7 @@ flux/v0.x/tags: [inputs, sample data] ```js import "sampledata" -sampledata.uint( - includeNull: false -) +sampledata.uint(includeNull: false) ``` ## Parameters @@ -50,5 +48,6 @@ import "sampledata" sampledata.uint(includeNull: true) ``` + ##### Output tables {{% flux/sample "uint" true %}} \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/slack/endpoint.md b/content/flux/v0.x/stdlib/slack/endpoint.md index 6f3a39717..0063d8836 100644 --- a/content/flux/v0.x/stdlib/slack/endpoint.md +++ b/content/flux/v0.x/stdlib/slack/endpoint.md @@ -21,8 +21,8 @@ The `slack.endpoint()` function sends a message to Slack that includes output da import "slack" slack.endpoint( - url: "https://slack.com/api/chat.postMessage", - token: "mySuPerSecRetTokEn" + url: "https://slack.com/api/chat.postMessage", + token: "mySuPerSecRetTokEn", ) ``` @@ -73,14 +73,9 @@ token = secrets.get(key: "SLACK_TOKEN") toSlack = slack.endpoint(token: token) crit_statuses = from(bucket: "example-bucket") - |> range(start: -1m) - |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") + |> range(start: -1m) + |> filter(fn: (r) => r._measurement == "statuses" and r.status == "crit") crit_statuses - |> toSlack(mapFn: (r) => ({ - channel: "Alerts", - text: r._message, - color: "danger", - }) - )() + |> toSlack(mapFn: (r) => ({channel: "Alerts", text: r._message, color: "danger"}))() ``` diff --git a/content/flux/v0.x/stdlib/slack/message.md b/content/flux/v0.x/stdlib/slack/message.md index 689bf18a2..0cc101e3d 100644 --- a/content/flux/v0.x/stdlib/slack/message.md +++ b/content/flux/v0.x/stdlib/slack/message.md @@ -23,11 +23,11 @@ or with a [Slack webhook](https://api.slack.com/incoming-webhooks). import "slack" slack.message( - url: "https://slack.com/api/chat.postMessage", - token: "mySuPerSecRetTokEn", - channel: "#flux",, - text: "This is a message from the Flux slack.message() function.", - color: "good" + url: "https://slack.com/api/chat.postMessage", + token: "mySuPerSecRetTokEn", + channel: "#flux", + text: "This is a message from the Flux slack.message() function.", + color: "good", ) ``` @@ -73,8 +73,7 @@ A token is only required if using the Slack chat.postMessage API. 
```js import "slack" -lastReported = - from(bucket: "example-bucket") +lastReported = from(bucket: "example-bucket") |> range(start: -1m) |> filter(fn: (r) => r._measurement == "statuses") |> last() @@ -82,10 +81,10 @@ lastReported = |> getRecord(idx: 0) slack.message( - url: "https://slack.com/api/chat.postMessage", - token: "mySuPerSecRetTokEn", - channel: "#system-status", - text: "The last reported status was \"${lastReported.status}\"." - color: "warning" + url: "https://slack.com/api/chat.postMessage", + token: "mySuPerSecRetTokEn", + channel: "#system-status", + text: "The last reported status was \"${lastReported.status}\".", + color: "warning", ) ``` diff --git a/content/flux/v0.x/stdlib/sql/from.md b/content/flux/v0.x/stdlib/sql/from.md index 4b80c0694..627db869f 100644 --- a/content/flux/v0.x/stdlib/sql/from.md +++ b/content/flux/v0.x/stdlib/sql/from.md @@ -22,9 +22,9 @@ The `sql.from()` function retrieves data from a SQL data source. import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://user:password@localhost", - query:"SELECT * FROM TestTable" + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query:"SELECT * FROM TestTable", ) ``` diff --git a/content/flux/v0.x/stdlib/sql/to.md b/content/flux/v0.x/stdlib/sql/to.md index 05df65cae..a6e0a09ed 100644 --- a/content/flux/v0.x/stdlib/sql/to.md +++ b/content/flux/v0.x/stdlib/sql/to.md @@ -20,10 +20,10 @@ The `sql.to()` function writes data to a SQL database. import "sql" sql.to( - driverName: "mysql", - dataSourceName: "username:password@tcp(localhost:3306)/dbname?param=value", - table: "example_table", - batchSize: 10000 + driverName: "mysql", + dataSourceName: "username:password@tcp(localhost:3306)/dbname?param=value", + table: "example_table", + batchSize: 10000, ) ``` diff --git a/content/flux/v0.x/stdlib/strings/compare.md b/content/flux/v0.x/stdlib/strings/compare.md index dbea9a993..ae6732cc6 100644 --- a/content/flux/v0.x/stdlib/strings/compare.md +++ b/content/flux/v0.x/stdlib/strings/compare.md @@ -47,9 +47,5 @@ The string value to compare against. import "strings" data - |> map(fn: (r) => ({ - r with - _value: strings.compare(v: r.tag1, t: r.tag2) - }) - ) + |> map(fn: (r) => ({r with _value: strings.compare(v: r.tag1, t: r.tag2)})) ``` diff --git a/content/flux/v0.x/stdlib/strings/containsany.md b/content/flux/v0.x/stdlib/strings/containsany.md index c1bddf18e..4ec418fe7 100644 --- a/content/flux/v0.x/stdlib/strings/containsany.md +++ b/content/flux/v0.x/stdlib/strings/containsany.md @@ -45,9 +45,5 @@ Characters to search for. import "strings" data - |> map(fn: (r) => ({ - r with - _value: strings.containsAny(v: r.price, chars: "£$¢") - }) - ) + |> map(fn: (r) => ({r with _value: strings.containsAny(v: r.price, chars: "£$¢")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/containsstr.md b/content/flux/v0.x/stdlib/strings/containsstr.md index f706d8b74..172eb29dc 100644 --- a/content/flux/v0.x/stdlib/strings/containsstr.md +++ b/content/flux/v0.x/stdlib/strings/containsstr.md @@ -42,9 +42,5 @@ The substring value to search for. 
import "strings" data - |> map(fn: (r) => ({ - r with - _value: strings.containsStr(v: r.author, substr: "John") - }) - ) + |> map(fn: (r) => ({r with _value: strings.containsStr(v: r.author, substr: "John")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/countstr.md b/content/flux/v0.x/stdlib/strings/countstr.md index 2eec1074f..2d811b119 100644 --- a/content/flux/v0.x/stdlib/strings/countstr.md +++ b/content/flux/v0.x/stdlib/strings/countstr.md @@ -54,9 +54,5 @@ strings.coutnStr(v: "ooooo", substr: "oo") import "strings" data - |> map(fn: (r) => ({ - r with - _value: strings.countStr(v: r.message, substr: "uh") - }) - ) + |> map(fn: (r) => ({r with _value: strings.countStr(v: r.message, substr: "uh")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/equalfold.md b/content/flux/v0.x/stdlib/strings/equalfold.md index d1fd1a964..fb2d1a172 100644 --- a/content/flux/v0.x/stdlib/strings/equalfold.md +++ b/content/flux/v0.x/stdlib/strings/equalfold.md @@ -43,11 +43,10 @@ The string value to compare against. import "strings" data - |> map(fn: (r) => ({ - r with - string1: r.string1, - string2: r.string2, - same: strings.equalFold(v: r.string1, t: r.string2) - }) - ) + |> map(fn: (r) => ({r with + string1: r.string1, + string2: r.string2, + same: strings.equalFold(v: r.string1, t: r.string2) + }) + ) ``` diff --git a/content/flux/v0.x/stdlib/strings/hasprefix.md b/content/flux/v0.x/stdlib/strings/hasprefix.md index 114364ff4..ffb6ce18e 100644 --- a/content/flux/v0.x/stdlib/strings/hasprefix.md +++ b/content/flux/v0.x/stdlib/strings/hasprefix.md @@ -40,5 +40,5 @@ The prefix to search for. import "strings" data - |> filter(fn:(r) => strings.hasPrefix(v: r.metric, prefix: "int_" )) + |> filter(fn:(r) => strings.hasPrefix(v: r.metric, prefix: "int_" )) ``` diff --git a/content/flux/v0.x/stdlib/strings/hassuffix.md b/content/flux/v0.x/stdlib/strings/hassuffix.md index 053c99705..413df06f0 100644 --- a/content/flux/v0.x/stdlib/strings/hassuffix.md +++ b/content/flux/v0.x/stdlib/strings/hassuffix.md @@ -41,5 +41,5 @@ The suffix to search for. import "strings" data - |> filter(fn:(r) => strings.hasSuffix(v: r.metric, suffix: "_count" )) + |> filter(fn:(r) => strings.hasSuffix(v: r.metric, suffix: "_count" )) ``` diff --git a/content/flux/v0.x/stdlib/strings/index-func.md b/content/flux/v0.x/stdlib/strings/index-func.md index 5cdcf3d6e..6d9ab1271 100644 --- a/content/flux/v0.x/stdlib/strings/index-func.md +++ b/content/flux/v0.x/stdlib/strings/index-func.md @@ -47,9 +47,5 @@ The substring to search for. import "strings" data - |> map(fn: (r) => ({ - r with - the_index: strings.index(v: r.pageTitle, substr: "the") - }) - ) + |> map(fn: (r) => ({r with the_index: strings.index(v: r.pageTitle, substr: "the")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/indexany.md b/content/flux/v0.x/stdlib/strings/indexany.md index d2c16cdce..6ef4ee5d7 100644 --- a/content/flux/v0.x/stdlib/strings/indexany.md +++ b/content/flux/v0.x/stdlib/strings/indexany.md @@ -46,9 +46,5 @@ Characters to search for. import "strings" data - |> map(fn: (r) => ({ - r with - charIndex: strings.indexAny(v: r._field, chars: "_-") - }) - ) + |> map(fn: (r) => ({r with charIndex: strings.indexAny(v: r._field, chars: "_-")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/isdigit.md b/content/flux/v0.x/stdlib/strings/isdigit.md index 6c5a76c26..771cdfe9b 100644 --- a/content/flux/v0.x/stdlib/strings/isdigit.md +++ b/content/flux/v0.x/stdlib/strings/isdigit.md @@ -40,5 +40,5 @@ The single-character string to test. 
import "strings" data - |> filter(fn: (r) => strings.isDigit(v: r.serverRef)) + |> filter(fn: (r) => strings.isDigit(v: r.serverRef)) ``` diff --git a/content/flux/v0.x/stdlib/strings/isletter.md b/content/flux/v0.x/stdlib/strings/isletter.md index e3b3c535e..54b217ef5 100644 --- a/content/flux/v0.x/stdlib/strings/isletter.md +++ b/content/flux/v0.x/stdlib/strings/isletter.md @@ -40,5 +40,5 @@ The single character string to test. import "strings" data - |> filter(fn: (r) => strings.isLetter(v: r.serverRef)) + |> filter(fn: (r) => strings.isLetter(v: r.serverRef)) ``` diff --git a/content/flux/v0.x/stdlib/strings/islower.md b/content/flux/v0.x/stdlib/strings/islower.md index 6c5fc9673..138b72638 100644 --- a/content/flux/v0.x/stdlib/strings/islower.md +++ b/content/flux/v0.x/stdlib/strings/islower.md @@ -40,5 +40,5 @@ The single-character string value to test. import "strings" data - |> filter(fn: (r) => strings.isLower(v: r.host)) + |> filter(fn: (r) => strings.isLower(v: r.host)) ``` diff --git a/content/flux/v0.x/stdlib/strings/isupper.md b/content/flux/v0.x/stdlib/strings/isupper.md index d34b7de67..315e3f632 100644 --- a/content/flux/v0.x/stdlib/strings/isupper.md +++ b/content/flux/v0.x/stdlib/strings/isupper.md @@ -40,5 +40,5 @@ The single-character string value to test. import "strings" data - |> filter(fn: (r) => strings.isUpper(v: r.host)) + |> filter(fn: (r) => strings.isUpper(v: r.host)) ``` diff --git a/content/flux/v0.x/stdlib/strings/lastindex.md b/content/flux/v0.x/stdlib/strings/lastindex.md index a25fc7429..afcc33cf4 100644 --- a/content/flux/v0.x/stdlib/strings/lastindex.md +++ b/content/flux/v0.x/stdlib/strings/lastindex.md @@ -47,9 +47,5 @@ The substring to search for. import "strings" data - |> map(fn: (r) => ({ - r with - the_index: strings.lastIndex(v: r.pageTitle, substr: "the") - }) - ) + |> map(fn: (r) => ({r with the_index: strings.lastIndex(v: r.pageTitle, substr: "the")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/lastindexany.md b/content/flux/v0.x/stdlib/strings/lastindexany.md index 4ba7d002f..6f14fc381 100644 --- a/content/flux/v0.x/stdlib/strings/lastindexany.md +++ b/content/flux/v0.x/stdlib/strings/lastindexany.md @@ -45,9 +45,5 @@ Characters to search for. import "strings" data - |> map(fn: (r) => ({ - r with - charLastIndex: strings.lastIndexAny(v: r._field, chars: "_-") - }) - ) + |> map(fn: (r) => ({r with charLastIndex: strings.lastIndexAny(v: r._field, chars: "_-")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/repeat.md b/content/flux/v0.x/stdlib/strings/repeat.md index 1661acb7a..5b2a88498 100644 --- a/content/flux/v0.x/stdlib/strings/repeat.md +++ b/content/flux/v0.x/stdlib/strings/repeat.md @@ -40,10 +40,10 @@ The number of times to repeat `v`. import "strings" data - |> map(fn: (r) => ({ - laugh: r.laugh - intensity: r.intensity - laughter: strings.repeat(v: r.laugh, i: r.intensity) - }) - ) + |> map(fn: (r) => ({ + laugh: r.laugh, + intensity: r.intensity, + laughter: strings.repeat(v: r.laugh, i: r.intensity), + }) + ) ``` diff --git a/content/flux/v0.x/stdlib/strings/replace.md b/content/flux/v0.x/stdlib/strings/replace.md index dfaf75811..ab39c25ca 100644 --- a/content/flux/v0.x/stdlib/strings/replace.md +++ b/content/flux/v0.x/stdlib/strings/replace.md @@ -51,9 +51,5 @@ The number of non-overlapping `t` matches to replace. 
import "strings" data - |> map(fn: (r) => ({ - r with - content: strings.replace(v: r.content, t: "he", u: "her", i: 3) - }) - ) + |> map(fn: (r) => ({r with content: strings.replace(v: r.content, t: "he", u: "her", i: 3)})) ``` diff --git a/content/flux/v0.x/stdlib/strings/replaceall.md b/content/flux/v0.x/stdlib/strings/replaceall.md index 5157d8232..e7f3a7d49 100644 --- a/content/flux/v0.x/stdlib/strings/replaceall.md +++ b/content/flux/v0.x/stdlib/strings/replaceall.md @@ -48,9 +48,5 @@ The replacement for all instances of `t`. import "strings" data - |> map(fn: (r) => ({ - r with - content: strings.replaceAll(v: r.content, t: "he", u: "her") - }) - ) + |> map(fn: (r) => ({r with content: strings.replaceAll(v: r.content, t: "he", u: "her")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/split.md b/content/flux/v0.x/stdlib/strings/split.md index e8aaf289d..1e5e5c9e6 100644 --- a/content/flux/v0.x/stdlib/strings/split.md +++ b/content/flux/v0.x/stdlib/strings/split.md @@ -47,5 +47,5 @@ The string value that acts as the separator. import "strings" data - |> map (fn:(r) => strings.split(v: r.searchTags, t: ",")) + |> map (fn:(r) => strings.split(v: r.searchTags, t: ",")) ``` diff --git a/content/flux/v0.x/stdlib/strings/splitafter.md b/content/flux/v0.x/stdlib/strings/splitafter.md index 2ce2c0735..0a44a65a1 100644 --- a/content/flux/v0.x/stdlib/strings/splitafter.md +++ b/content/flux/v0.x/stdlib/strings/splitafter.md @@ -48,5 +48,5 @@ The string value that acts as the separator. import "strings" data - |> map (fn:(r) => strings.splitAfter(v: r.searchTags, t: ",")) + |> map (fn:(r) => strings.splitAfter(v: r.searchTags, t: ",")) ``` diff --git a/content/flux/v0.x/stdlib/strings/splitaftern.md b/content/flux/v0.x/stdlib/strings/splitaftern.md index 3ac5c22c7..2894ec4c5 100644 --- a/content/flux/v0.x/stdlib/strings/splitaftern.md +++ b/content/flux/v0.x/stdlib/strings/splitaftern.md @@ -53,5 +53,5 @@ The last substring is the unsplit remainder. import "strings" data - |> map (fn:(r) => strings.splitAfterN(v: r.searchTags, t: ",")) + |> map (fn:(r) => strings.splitAfterN(v: r.searchTags, t: ",")) ``` diff --git a/content/flux/v0.x/stdlib/strings/splitn.md b/content/flux/v0.x/stdlib/strings/splitn.md index 81037529c..f9984c2b9 100644 --- a/content/flux/v0.x/stdlib/strings/splitn.md +++ b/content/flux/v0.x/stdlib/strings/splitn.md @@ -52,5 +52,5 @@ The last substring is the unsplit remainder. import "strings" data - |> map (fn:(r) => strings.splitN(v: r.searchTags, t: ",")) + |> map (fn:(r) => strings.splitN(v: r.searchTags, t: ",")) ``` diff --git a/content/flux/v0.x/stdlib/strings/strlen.md b/content/flux/v0.x/stdlib/strings/strlen.md index 192b8bef9..1cbbe7684 100644 --- a/content/flux/v0.x/stdlib/strings/strlen.md +++ b/content/flux/v0.x/stdlib/strings/strlen.md @@ -48,9 +48,5 @@ data import "strings" data - |> map(fn: (r) => ({ - r with - length: strings.strlen(v: r._value) - }) - ) + |> map(fn: (r) => ({r with length: strings.strlen(v: r._value)})) ``` diff --git a/content/flux/v0.x/stdlib/strings/substring.md b/content/flux/v0.x/stdlib/strings/substring.md index 15f5e2f0d..5aeefa5cc 100644 --- a/content/flux/v0.x/stdlib/strings/substring.md +++ b/content/flux/v0.x/stdlib/strings/substring.md @@ -43,12 +43,6 @@ The ending exclusive index of the substring. 
 ###### Store the first four characters of a string
 ```js
 import "strings"

 data
-  |> map(fn: (r) => ({
-      r with
-      abbr: strings.substring(v: r.name, start: 0, end: 4)
-    })
-  )
+    |> map(fn: (r) => ({r with abbr: strings.substring(v: r.name, start: 0, end: 4)}))
 ```
diff --git a/content/flux/v0.x/stdlib/strings/title.md b/content/flux/v0.x/stdlib/strings/title.md
index d17926410..22c13b3bf 100644
--- a/content/flux/v0.x/stdlib/strings/title.md
+++ b/content/flux/v0.x/stdlib/strings/title.md
@@ -41,5 +41,5 @@ The string value to convert.
 import "strings"

 data
-  |> map(fn: (r) => ({ r with pageTitle: strings.title(v: r.pageTitle) }))
+    |> map(fn: (r) => ({ r with pageTitle: strings.title(v: r.pageTitle) }))
 ```
diff --git a/content/flux/v0.x/stdlib/strings/tolower.md b/content/flux/v0.x/stdlib/strings/tolower.md
index 84f0fd904..a2f7df272 100644
--- a/content/flux/v0.x/stdlib/strings/tolower.md
+++ b/content/flux/v0.x/stdlib/strings/tolower.md
@@ -41,8 +41,5 @@ The string value to convert.
 import "strings"

 data
-  |> map(fn: (r) => ({
-      r with exclamation: strings.toLower(v: r.exclamation)
-    })
-  )
+    |> map(fn: (r) => ({r with exclamation: strings.toLower(v: r.exclamation)}))
 ```
diff --git a/content/flux/v0.x/stdlib/strings/totitle.md b/content/flux/v0.x/stdlib/strings/totitle.md
index 0f967dd2f..56b487056 100644
--- a/content/flux/v0.x/stdlib/strings/totitle.md
+++ b/content/flux/v0.x/stdlib/strings/totitle.md
@@ -41,7 +41,7 @@ The string value to convert.
 import "strings"

 data
-  |> map(fn: (r) => ({ r with pageTitle: strings.toTitle(v: r.pageTitle) }))
+    |> map(fn: (r) => ({ r with pageTitle: strings.toTitle(v: r.pageTitle) }))
 ```

 {{% note %}}
diff --git a/content/flux/v0.x/stdlib/strings/toupper.md b/content/flux/v0.x/stdlib/strings/toupper.md
index cb125ee46..9870aea3c 100644
--- a/content/flux/v0.x/stdlib/strings/toupper.md
+++ b/content/flux/v0.x/stdlib/strings/toupper.md
@@ -41,7 +41,7 @@ The string value to convert.
 import "strings"

 data
-  |> map(fn: (r) => ({ r with envVars: strings.toUpper(v: r.envVars) }))
+    |> map(fn: (r) => ({ r with envVars: strings.toUpper(v: r.envVars) }))
 ```

 {{% note %}}
diff --git a/content/flux/v0.x/stdlib/strings/trim.md b/content/flux/v0.x/stdlib/strings/trim.md
index 8fc589b04..20a3c0f56 100644
--- a/content/flux/v0.x/stdlib/strings/trim.md
+++ b/content/flux/v0.x/stdlib/strings/trim.md
@@ -50,9 +50,5 @@ Only characters that match the `cutset` string exactly are trimmed.
 import "strings"

 data
-  |> map(fn: (r) => ({
-      r with
-      variables: strings.trim(v: r.variables, cutset: ".")
-    })
-  )
+    |> map(fn: (r) => ({r with variables: strings.trim(v: r.variables, cutset: ".")}))
 ```
diff --git a/content/flux/v0.x/stdlib/strings/trimleft.md b/content/flux/v0.x/stdlib/strings/trimleft.md
index 7e5f6fb47..a020f217d 100644
--- a/content/flux/v0.x/stdlib/strings/trimleft.md
+++ b/content/flux/v0.x/stdlib/strings/trimleft.md
@@ -48,9 +48,5 @@ Only characters that match the `cutset` string exactly are removed.
 import "strings"

 data
-  |> map(fn: (r) => ({
-      r with
-      variables: strings.trimLeft(v: r.variables, cutset: ".")
-    })
-  )
+    |> map(fn: (r) => ({r with variables: strings.trimLeft(v: r.variables, cutset: ".")}))
 ```
diff --git a/content/flux/v0.x/stdlib/strings/trimprefix.md b/content/flux/v0.x/stdlib/strings/trimprefix.md
index 7815f86af..5048ebcc9 100644
--- a/content/flux/v0.x/stdlib/strings/trimprefix.md
+++ b/content/flux/v0.x/stdlib/strings/trimprefix.md
@@ -49,9 +49,5 @@ The prefix to remove.
import "strings" data - |> map(fn: (r) => ({ - r with - sensorID: strings.trimPrefix(v: r.sensorId, prefix: "s12_") - }) - ) + |> map(fn: (r) => ({r with sensorID: strings.trimPrefix(v: r.sensorId, prefix: "s12_")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/trimright.md b/content/flux/v0.x/stdlib/strings/trimright.md index 5c592b959..4d1779a39 100644 --- a/content/flux/v0.x/stdlib/strings/trimright.md +++ b/content/flux/v0.x/stdlib/strings/trimright.md @@ -49,9 +49,5 @@ Only characters that match the `cutset` string exactly are trimmed. import "strings" data - |> map(fn: (r) => ({ - r with - variables: strings.trimRight(v: r.variables, cutset: ".") - }) - ) + |> map(fn: (r) => ({r with variables: strings.trimRight(v: r.variables, cutset: ".")})) ``` diff --git a/content/flux/v0.x/stdlib/strings/trimspace.md b/content/flux/v0.x/stdlib/strings/trimspace.md index 7c5d2a6a3..c56a7462e 100644 --- a/content/flux/v0.x/stdlib/strings/trimspace.md +++ b/content/flux/v0.x/stdlib/strings/trimspace.md @@ -43,5 +43,5 @@ String to remove spaces from. import "strings" data - |> map(fn: (r) => ({ r with userInput: strings.trimSpace(v: r.userInput) })) + |> map(fn: (r) => ({ r with userInput: strings.trimSpace(v: r.userInput) })) ``` diff --git a/content/flux/v0.x/stdlib/strings/trimsuffix.md b/content/flux/v0.x/stdlib/strings/trimsuffix.md index ba5eeb12e..287b3bc04 100644 --- a/content/flux/v0.x/stdlib/strings/trimsuffix.md +++ b/content/flux/v0.x/stdlib/strings/trimsuffix.md @@ -49,9 +49,5 @@ The suffix to remove. import "strings" data - |> map(fn: (r) => ({ - r with - sensorID: strings.trimSuffix(v: r.sensorId, suffix: "_s12") - }) - ) + |> map(fn: (r) => ({r with sensorID: strings.trimSuffix(v: r.sensorId, suffix: "_s12")})) ``` diff --git a/content/flux/v0.x/stdlib/system/time.md b/content/flux/v0.x/stdlib/system/time.md index b1940a952..bccfa99bd 100644 --- a/content/flux/v0.x/stdlib/system/time.md +++ b/content/flux/v0.x/stdlib/system/time.md @@ -32,7 +32,7 @@ system.time() import "system" data - |> set(key: "processed_at", value: string(v: system.time() )) + |> set(key: "processed_at", value: string(v: system.time() )) ``` {{% note %}} diff --git a/content/flux/v0.x/stdlib/testing/assertempty.md b/content/flux/v0.x/stdlib/testing/assertempty.md index 5505c687d..300de2543 100644 --- a/content/flux/v0.x/stdlib/testing/assertempty.md +++ b/content/flux/v0.x/stdlib/testing/assertempty.md @@ -42,10 +42,11 @@ The `.testing.assertEmpty()` function checks to see if the diff is empty. import "testing" got = from(bucket: "example-bucket") - |> range(start: -15m) + |> range(start: -15m) want = from(bucket: "backup_example-bucket") - |> range(start: -15m) + |> range(start: -15m) + got - |> testing.diff(want: want) - |> testing.assertEmpty() + |> testing.diff(want: want) + |> testing.assertEmpty() ``` diff --git a/content/flux/v0.x/stdlib/testing/assertequals.md b/content/flux/v0.x/stdlib/testing/assertequals.md index 45c04ce4b..2693c5689 100644 --- a/content/flux/v0.x/stdlib/testing/assertequals.md +++ b/content/flux/v0.x/stdlib/testing/assertequals.md @@ -23,9 +23,9 @@ If unequal, the function returns an error. import "testing" testing.assertEquals( - name: "streamEquality", - got: got, - want: want + name: "streamEquality", + got: got, + want: want, ) ``` @@ -50,10 +50,10 @@ The stream that contains the expected data to test against. 
import "testing" want = from(bucket: "backup-example-bucket") - |> range(start: -5m) + |> range(start: -5m) got = from(bucket: "example-bucket") - |> range(start: -5m) + |> range(start: -5m) testing.assertEquals(got: got, want: want) ``` @@ -63,9 +63,9 @@ testing.assertEquals(got: got, want: want) import "testing" want = from(bucket: "backup-example-bucket") - |> range(start: -5m) + |> range(start: -5m) from(bucket: "example-bucket") - |> range(start: -5m) - |> testing.assertEquals(want: want) + |> range(start: -5m) + |> testing.assertEquals(want: want) ``` diff --git a/content/flux/v0.x/stdlib/testing/benchmark.md b/content/flux/v0.x/stdlib/testing/benchmark.md index 3236371c7..32a64259a 100644 --- a/content/flux/v0.x/stdlib/testing/benchmark.md +++ b/content/flux/v0.x/stdlib/testing/benchmark.md @@ -23,9 +23,7 @@ test output that occurs in [`testing.run()`](/flux/v0.x/stdlib/testing/run/). ```js import "testing" -testing.benchmark( - case: exampleTestCase -) +testing.benchmark(case: exampleTestCase) ``` ## Parameters @@ -63,13 +61,11 @@ outData = " ,,0,2021-01-01T00:00:00Z,2021-01-03T01:00:00Z,m,t,4.8 " -t_sum = (table=<-) => - (table - |> range(start:2021-01-01T00:00:00Z, stop:2021-01-03T01:00:00Z) - |> sum()) +t_sum = (table=<-) => table + |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-03T01:00:00Z) + |> sum() -test _sum = () => - ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) +test _sum = () => ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) testing.benchmark(case: _sum) ``` diff --git a/content/flux/v0.x/stdlib/testing/diff.md b/content/flux/v0.x/stdlib/testing/diff.md index f41a3ece2..001a2f821 100644 --- a/content/flux/v0.x/stdlib/testing/diff.md +++ b/content/flux/v0.x/stdlib/testing/diff.md @@ -20,11 +20,11 @@ The `testing.diff()` function produces a diff between two streams. import "testing" testing.diff( - got: stream2, - want: stream1, - epsilon: 0.000001, - nansEqual: false, - verbose: false + got: stream2, + want: stream1, + epsilon: 0.000001, + nansEqual: false, + verbose: false, ) ``` @@ -67,18 +67,20 @@ Default is `false`. import "testing" want = from(bucket: "backup-example-bucket") - |> range(start: -5m) + |> range(start: -5m) got = from(bucket: "example-bucket") - |> range(start: -5m) + |> range(start: -5m) + testing.diff(got: got, want: want) ``` ##### Inline diff ```js -import "testing" +iimport "testing" want = from(bucket: "backup-example-bucket") |> range(start: -5m) + from(bucket: "example-bucket") - |> range(start: -5m) - |> testing.diff(want: want) + |> range(start: -5m) + |> testing.diff(want: want) ``` diff --git a/content/flux/v0.x/stdlib/testing/inspect.md b/content/flux/v0.x/stdlib/testing/inspect.md index 57d014ccc..a609cfd29 100644 --- a/content/flux/v0.x/stdlib/testing/inspect.md +++ b/content/flux/v0.x/stdlib/testing/inspect.md @@ -19,9 +19,7 @@ The `testing.inspect()` function returns information about a test case. 
```js import "testing" -testing.inspect( - case: exampleTestCase -) +testing.inspect(case: exampleTestCase) ``` ## Parameters @@ -53,13 +51,11 @@ outData = " ,,0,2021-01-01T00:00:00Z,2021-01-03T01:00:00Z,m,t,4.8 " -t_sum = (table=<-) => - (table - |> range(start:2021-01-01T00:00:00Z, stop:2021-01-03T01:00:00Z) - |> sum()) +t_sum = (table=<-) => table + |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-03T01:00:00Z) + |> sum() -test _sum = () => - ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) +test _sum = () => ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) testing.inpsect(case: _sum) diff --git a/content/flux/v0.x/stdlib/testing/load.md b/content/flux/v0.x/stdlib/testing/load.md index b4d173c2c..8c04e32c1 100644 --- a/content/flux/v0.x/stdlib/testing/load.md +++ b/content/flux/v0.x/stdlib/testing/load.md @@ -38,20 +38,21 @@ to create two streams of tables to compare in the test. import "testing" import "array" -got = array.from(rows: [ - {_time: 2021-01-01T00:00:00Z, _measurement: "m", _field: "t", _value: 1.2}, - {_time: 2021-01-01T01:00:00Z, _measurement: "m", _field: "t", _value: 0.8}, - {_time: 2021-01-01T02:00:00Z, _measurement: "m", _field: "t", _value: 3.2} -]) - -want = array.from(rows: [ - {_time: 2021-01-01T00:00:00Z, _measurement: "m", _field: "t", _value: 1.2}, - {_time: 2021-01-01T01:00:00Z, _measurement: "m", _field: "t", _value: 0.8}, - {_time: 2021-01-01T02:00:00Z, _measurement: "m", _field: "t", _value: 3.1} -]) - -testing.diff( - got: testing.load(tables: got), - want: testing.load(tables: want) +got = array.from( + rows: [ + {_time: 2021-01-01T00:00:00Z, _measurement: "m", _field: "t", _value: 1.2}, + {_time: 2021-01-01T01:00:00Z, _measurement: "m", _field: "t", _value: 0.8}, + {_time: 2021-01-01T02:00:00Z, _measurement: "m", _field: "t", _value: 3.2}, + ] ) + +want = array.from( + rows: [ + {_time: 2021-01-01T00:00:00Z, _measurement: "m", _field: "t", _value: 1.2}, + {_time: 2021-01-01T01:00:00Z, _measurement: "m", _field: "t", _value: 0.8}, + {_time: 2021-01-01T02:00:00Z, _measurement: "m", _field: "t", _value: 3.1}, + ] +) + +testing.diff(got: testing.load(tables: got), want: testing.load(tables: want)) ``` diff --git a/content/flux/v0.x/stdlib/testing/loadmem.md b/content/flux/v0.x/stdlib/testing/loadmem.md index 33731740e..a41b7e5e7 100644 --- a/content/flux/v0.x/stdlib/testing/loadmem.md +++ b/content/flux/v0.x/stdlib/testing/loadmem.md @@ -21,9 +21,7 @@ test data from memory to emulate query results returned by Flux. ```js import "testing" -testing.loadMem( - csv: csvData -) +testing.loadMem(csv: csvData) ``` ## Parameters diff --git a/content/flux/v0.x/stdlib/testing/loadstorage.md b/content/flux/v0.x/stdlib/testing/loadstorage.md index 6f530ac47..918694beb 100644 --- a/content/flux/v0.x/stdlib/testing/loadstorage.md +++ b/content/flux/v0.x/stdlib/testing/loadstorage.md @@ -27,9 +27,7 @@ Test data requires the following columns: ```js import "testing" -testing.loadStorage( - csv: csvData -) +testing.loadStorage(csv: csvData) ``` ## Parameters diff --git a/content/flux/v0.x/stdlib/testing/run.md b/content/flux/v0.x/stdlib/testing/run.md index fc8f12e5b..c15e963ae 100644 --- a/content/flux/v0.x/stdlib/testing/run.md +++ b/content/flux/v0.x/stdlib/testing/run.md @@ -18,9 +18,7 @@ The `testing.run()` function executes a specified test case. 
```js import "testing" -testing.run( - case: exampleTestCase -) +testing.run(case: exampleTestCase) ``` ## Parameters @@ -52,13 +50,11 @@ outData = " ,,0,2021-01-01T00:00:00Z,2021-01-03T01:00:00Z,m,t,4.8 " -t_sum = (table=<-) => - (table - |> range(start:2021-01-01T00:00:00Z, stop:2021-01-03T01:00:00Z) - |> sum()) +t_sum = (table=<-) => table + |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-03T01:00:00Z) + |> sum() -test _sum = () => - ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) +test _sum = () => ({input: testing.loadStorage(csv: inData), want: testing.loadMem(csv: outData), fn: t_sum}) testing.run(case: _sum) ``` diff --git a/content/flux/v0.x/stdlib/timezone/fixed.md b/content/flux/v0.x/stdlib/timezone/fixed.md index 5e6a68aa2..3a9d65b88 100644 --- a/content/flux/v0.x/stdlib/timezone/fixed.md +++ b/content/flux/v0.x/stdlib/timezone/fixed.md @@ -38,11 +38,13 @@ import "timezone" option location = timezone.fixed(offset: -8h) -data = array.from(rows: [ +data = array.from( + rows: [ {_time: 2021-01-01T00:06:00Z, _value: 1}, {_time: 2021-01-02T00:06:00Z, _value: 2}, - {_time: 2021-01-03T00:06:00Z, _value: 3} - ]) + {_time: 2021-01-03T00:06:00Z, _value: 3}, + ], +) |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-04T00:00:00Z) data diff --git a/content/flux/v0.x/stdlib/timezone/location.md b/content/flux/v0.x/stdlib/timezone/location.md index da7086192..3db41a721 100644 --- a/content/flux/v0.x/stdlib/timezone/location.md +++ b/content/flux/v0.x/stdlib/timezone/location.md @@ -14,6 +14,10 @@ flux/v0.x/tags: [timezone, location, data/time] Setting the timezone by location accounts for location-based time shifts in the clock such as daylight savings time or summertime. +Flux uses the timezone database provided by the underlying operating system (OS). +Timezones and timezone names depend on your OS. +For a general list of timezone names, see [tz database time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). 
+
 ```js
 import "timezone"
diff --git a/content/flux/v0.x/stdlib/types/_index.md b/content/flux/v0.x/stdlib/types/_index.md
index cef9f3902..7132a1c37 100644
--- a/content/flux/v0.x/stdlib/types/_index.md
+++ b/content/flux/v0.x/stdlib/types/_index.md
@@ -12,7 +12,7 @@ menu:
 weight: 11
 flux/v0.x/tags: [types, functions, package]
 cascade:
-  introduced: 0.140.0
+  introduced: 0.141.0
 ---
 
 The Flux `types` package provides functions for working with
diff --git a/content/flux/v0.x/stdlib/types/istype.md b/content/flux/v0.x/stdlib/types/istype.md
index adbf7ccde..a5d8345fe 100644
--- a/content/flux/v0.x/stdlib/types/istype.md
+++ b/content/flux/v0.x/stdlib/types/istype.md
@@ -90,3 +90,87 @@ data
 {{< /flex >}}
 {{% /expand %}}
 {{< /expand-wrapper >}}
+
+
+### Aggregate or select data based on type
+```js
+import "types"
+
+data = () => from(bucket: "example-bucket")
+    |> range(start: -1m)
+
+nonNumericData = data()
+    |> filter(fn: (r) => types.isType(v: r._value, type: "string") or types.isType(v: r._value, type: "bool"))
+    |> aggregateWindow(every: 30s, fn: last)
+
+numericData = data()
+    |> filter(fn: (r) => types.isType(v: r._value, type: "int") or types.isType(v: r._value, type: "float"))
+    |> aggregateWindow(every: 30s, fn: mean)
+
+union(tables: [nonNumericData, numericData])
+```
+
+{{< expand-wrapper >}}
+{{% expand "View example input and output" %}}
+
+#### Input data
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :---- | -----: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:00Z | float | -2.18 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:10Z | float | 10.92 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:20Z | float | 7.35 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | float | 17.53 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:40Z | float | 15.23 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:50Z | float | 4.43 |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :--- | -----: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:00Z | bool | true |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:10Z | bool | true |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:20Z | bool | false |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | bool | true |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:40Z | bool | false |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:50Z | bool | false |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :----- | ----------: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:00Z | string | smpl_g9qczs |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:10Z | string | smpl_0mgv9n |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:20Z | string | smpl_phw664 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | string | smpl_guvzy4 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:40Z | string | smpl_5v3cce |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:50Z | string | smpl_s9fmgy |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :--- | -----: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:00Z | int | -2 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:10Z | int | 10 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:20Z | int | 7 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | int | 17 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:40Z | int | 15 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:50Z | int | 4 |
+
+#### Output data
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :--- | -----: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | bool | false |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | bool | false |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :---- | -----------------: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | float | 5.363333333333333 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | float | 12.396666666666668 |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :--- | -----: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | int | 5 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | int | 12 |
+
+| _start | _stop | _time | type | _value |
+| :------------------- | :------------------- | :------------------- | :----- | ----------: |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:00:30Z | string | smpl_phw664 |
+| 2021-01-01T00:00:00Z | 2021-01-01T00:01:00Z | 2021-01-01T00:01:00Z | string | smpl_s9fmgy |
+
+{{% /expand %}}
+{{< /expand-wrapper >}}
diff --git a/content/flux/v0.x/stdlib/universe/aggregatewindow.md b/content/flux/v0.x/stdlib/universe/aggregatewindow.md
index c45ebb582..a7701b46f 100644
--- a/content/flux/v0.x/stdlib/universe/aggregatewindow.md
+++ b/content/flux/v0.x/stdlib/universe/aggregatewindow.md
@@ -24,14 +24,14 @@ introduced: 0.7.0
 ```js
 aggregateWindow(
-  every: 1m,
-  period: 1m,
-  fn: mean,
-  column: "_value",
-  timeSrc: "_stop",
-  timeDst: "_time",
-  location: "UTC",
-  createEmpty: true
+    every: 1m,
+    period: 1m,
+    fn: mean,
+    column: "_value",
+    timeSrc: "_stop",
+    timeDst: "_time",
+    location: "UTC",
+    createEmpty: true,
 )
 ```
 
@@ -138,13 +138,10 @@ to aggregate time-based windows:
 import "sampledata"
 
 data = sampledata.float()
-  |> range(start: sampledata.start, stop: sampledata.stop)
+    |> range(start: sampledata.start, stop: sampledata.stop)
 
 data
-  |> aggregateWindow(
-    every: 20s,
-    fn: mean
-  )
+    |> aggregateWindow(every: 20s, fn: mean)
 ```
 
 {{< expand-wrapper >}}
@@ -177,14 +174,14 @@ tables into the aggregate or selector function with all required parameters defi
 import "sampledata"
 
 data = sampledata.float()
-  |> range(start: sampledata.start, stop: sampledata.stop)
+    |> range(start: sampledata.start, stop: sampledata.stop)
 
 data
-  |> aggregateWindow(
-    column: "_value",
-    every: 20s,
-    fn: (column, tables=<-) => tables |> quantile(q: 0.99, column:column)
-  )
+    |> aggregateWindow(
+        column: "_value",
+        every: 20s,
+        fn: (column, tables=<-) => tables |> quantile(q: 0.99, column: column),
+    )
 ```
 
 {{< expand-wrapper >}}
@@ -212,10 +209,10 @@ data
 import "sampledata"
 
 data = sampledata.float()
-  |> range(start: sampledata.start, stop: sampledata.stop)
+    |> range(start: sampledata.start, stop: sampledata.stop)
 
 data
-  |> aggregateWindow(every: 1mo, fn: mean)
+    |> aggregateWindow(every: 1mo, fn: mean)
 ```
 
 {{% expand "View input and output" %}}
diff --git a/content/flux/v0.x/stdlib/universe/bool.md b/content/flux/v0.x/stdlib/universe/bool.md
index 7d4729b74..0d527807f 100644
--- a/content/flux/v0.x/stdlib/universe/bool.md
+++ b/content/flux/v0.x/stdlib/universe/bool.md
@@ -33,7 +33,7 @@ The value to convert.
 ## Examples
 ```js
 from(bucket: "sensor-data")
-  |> range(start: -1m)
-  |> filter(fn:(r) => r._measurement == "system" )
-  |> map(fn:(r) => ({ r with responsive: bool(v: r.responsive) }))
+    |> range(start: -1m)
+    |> filter(fn: (r) => r._measurement == "system")
+    |> map(fn: (r) => ({r with responsive: bool(v: r.responsive)}))
 ```
diff --git a/content/flux/v0.x/stdlib/universe/bottom.md b/content/flux/v0.x/stdlib/universe/bottom.md
index cb9b46fae..26acf8c31 100644
--- a/content/flux/v0.x/stdlib/universe/bottom.md
+++ b/content/flux/v0.x/stdlib/universe/bottom.md
@@ -51,7 +51,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi
 import "sampledata"
 
 sampledata.int()
-  |> bottom(n:2)
+    |> bottom(n: 2)
 ```
 
 {{< expand-wrapper >}}
@@ -80,15 +80,3 @@ sampledata.int()
 {{< /flex >}}
 {{% /expand %}}
 {{< /expand-wrapper >}}
-
-## Function definition
-```js
-// _sortLimit is a helper function, which sorts and limits a table.
-_sortLimit = (n, desc, columns=["_value"], tables=<-) =>
-  tables
-    |> sort(columns:columns, desc:desc)
-    |> limit(n:n)
-
-bottom = (n, columns=["_value"], tables=<-) =>
-  _sortLimit(n:n, columns:columns, desc:false)
-```
diff --git a/content/flux/v0.x/stdlib/universe/bytes.md b/content/flux/v0.x/stdlib/universe/bytes.md
index a68983664..dfac974f1 100644
--- a/content/flux/v0.x/stdlib/universe/bytes.md
+++ b/content/flux/v0.x/stdlib/universe/bytes.md
@@ -32,6 +32,6 @@ The value to convert.
 ## Examples
 ```js
 from(bucket: "sensor-data")
-  |> range(start: -1m)
-  |> map(fn:(r) => ({ r with _value: bytes(v: r._value) }))
+    |> range(start: -1m)
+    |> map(fn: (r) => ({r with _value: bytes(v: r._value)}))
 ```
diff --git a/content/flux/v0.x/stdlib/universe/chandemomentumoscillator.md b/content/flux/v0.x/stdlib/universe/chandemomentumoscillator.md
index 30863411e..2eeb31d5f 100644
--- a/content/flux/v0.x/stdlib/universe/chandemomentumoscillator.md
+++ b/content/flux/v0.x/stdlib/universe/chandemomentumoscillator.md
@@ -24,8 +24,8 @@ developed by Tushar Chande.
 ```js
 chandeMomentumOscillator(
-  n: 10,
-  columns: ["_value"]
+    n: 10,
+    columns: ["_value"],
 )
 ```
 
@@ -61,7 +61,7 @@ with `x - n` rows.
 import "sampledata"
 
 sampledata.int()
-  |> chandeMomentumOscillator(n: 2)
+    |> chandeMomentumOscillator(n: 2)
 ```
 
 {{% expand "View input and output" %}}
diff --git a/content/flux/v0.x/stdlib/universe/columns.md b/content/flux/v0.x/stdlib/universe/columns.md
index 840206c77..d3ad962dc 100644
--- a/content/flux/v0.x/stdlib/universe/columns.md
+++ b/content/flux/v0.x/stdlib/universe/columns.md
@@ -51,7 +51,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi
 import "sampledata"
 
 sampledata.string()
-  |> columns(column: "labels")
+    |> columns(column: "labels")
 ```
 
 {{% expand "View input and output" %}}
diff --git a/content/flux/v0.x/stdlib/universe/contains.md b/content/flux/v0.x/stdlib/universe/contains.md
index e08bab6b4..6c6494736 100644
--- a/content/flux/v0.x/stdlib/universe/contains.md
+++ b/content/flux/v0.x/stdlib/universe/contains.md
@@ -22,8 +22,8 @@ If the value is not a member of the set, the function returns `false`.
```js contains( - value: 1, - set: [1,2,3] + value: 1, + set: [1,2,3], ) ``` @@ -44,8 +44,8 @@ import "influxdata/influxdb/sample" fields = ["temperature", "humidity"] sample.data(set: "airSensor") - |> range(start: -30m) - |> filter(fn: (r) => contains(value: r._field, set: fields)) + |> range(start: -30m) + |> filter(fn: (r) => contains(value: r._field, set: fields)) ``` {{% expand "View example input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/count.md b/content/flux/v0.x/stdlib/universe/count.md index bc9ab0106..27fa60761 100644 --- a/content/flux/v0.x/stdlib/universe/count.md +++ b/content/flux/v0.x/stdlib/universe/count.md @@ -33,10 +33,10 @@ count(column: "_value") `count()` returns `0` for empty tables. To keep empty tables in your data, set the following parameters for the following functions: -| Function | Parameter | -|:-------- |:--------- | -| [filter()](/flux/v0.x/stdlib/universe/filter/) | `onEmpty: "keep"` | -| [window()](/flux/v0.x/stdlib/universe/window/) | `createEmpty: true` | +| Function | Parameter | +| :--------------------------------------------------------------- | :------------------ | +| [filter()](/flux/v0.x/stdlib/universe/filter/) | `onEmpty: "keep"` | +| [window()](/flux/v0.x/stdlib/universe/window/) | `createEmpty: true` | | [aggregateWindow()](/flux/v0.x/stdlib/universe/aggregatewindow/) | `createEmpty: true` | {{% /note %}} @@ -58,7 +58,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.string() - |> count() + |> count() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/cov.md b/content/flux/v0.x/stdlib/universe/cov.md index e2b78e471..a483d3f33 100644 --- a/content/flux/v0.x/stdlib/universe/cov.md +++ b/content/flux/v0.x/stdlib/universe/cov.md @@ -52,18 +52,20 @@ to generate sample data and show how `cov()` transforms data. import "generate" stream1 = generate.from( - count: 5, - fn: (n) => n * n, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-01T00:01:00Z -) |> toFloat() + count: 5, + fn: (n) => n * n, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-01T00:01:00Z, +) + |> toFloat() stream2 = generate.from( - count: 5, - fn: (n) => n * n * n / 2, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-01T00:01:00Z -) |> toFloat() + count: 5, + fn: (n) => n * n * n / 2, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-01T00:01:00Z, +) + |> toFloat() cov(x: stream1, y: stream2, on: ["_time"]) ``` @@ -102,10 +104,3 @@ cov(x: stream1, y: stream2, on: ["_time"]) {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -cov = (x,y,on,pearsonr=false) => - join( tables:{x:x, y:y}, on:on ) - |> covariance(pearsonr:pearsonr, columns:["_value_x","_value_y"]) -``` diff --git a/content/flux/v0.x/stdlib/universe/covariance.md b/content/flux/v0.x/stdlib/universe/covariance.md index a4c079dea..5c17f6b06 100644 --- a/content/flux/v0.x/stdlib/universe/covariance.md +++ b/content/flux/v0.x/stdlib/universe/covariance.md @@ -50,12 +50,12 @@ to generate sample data and show how `covariance()` transforms data. 
```js import "generate" -data = generate.from(count: 5, fn: (n) => n * n, start: 2021-01-01T00:00:00Z,stop: 2021-01-01T00:01:00Z ) - |> toFloat() - |> map(fn: (r) => ({_time: r._time, x: r._value, y: r._value * r._value / 2.0})) - +data = generate.from(count: 5, fn: (n) => n * n, start: 2021-01-01T00:00:00Z, stop: 2021-01-01T00:01:00Z) + |> toFloat() + |> map(fn: (r) => ({_time: r._time, x: r._value, y: r._value * r._value / 2.0})) + data - |> covariance(columns: ["x", "y"]) + |> covariance(columns: ["x", "y"]) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/cumulativesum.md b/content/flux/v0.x/stdlib/universe/cumulativesum.md index 7c255f09e..2fb92cc8c 100644 --- a/content/flux/v0.x/stdlib/universe/cumulativesum.md +++ b/content/flux/v0.x/stdlib/universe/cumulativesum.md @@ -44,7 +44,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.string() - |> cumulativeSum() + |> cumulativeSum() ``` {{% expand "View input and output" %}} {{< flex >}} diff --git a/content/flux/v0.x/stdlib/universe/derivative.md b/content/flux/v0.x/stdlib/universe/derivative.md index 090e7ca29..22b248fce 100644 --- a/content/flux/v0.x/stdlib/universe/derivative.md +++ b/content/flux/v0.x/stdlib/universe/derivative.md @@ -26,10 +26,10 @@ _**Output data type:** Float_ ```js derivative( - unit: 1s, - nonNegative: true, - columns: ["_value"], - timeColumn: "_time" + unit: 1s, + nonNegative: false, + columns: ["_value"], + timeColumn: "_time", ) ``` @@ -40,7 +40,7 @@ The time duration used when creating the derivative. Default is `1s`. ### nonNegative {data-type="bool"} -Indicates if the derivative is allowed to be negative. Default is `true`. +Indicates if the derivative is allowed to be negative. Default is `false`. When `true`, if a value is less than the previous value, it is assumed the previous value should have been a zero. @@ -71,7 +71,7 @@ For each input table with `n` rows, `derivative()` outputs a table with `n - 1` import "sampledata" sampledata.int() - |> derivative() + |> derivative() ``` {{< expand-wrapper >}} @@ -112,7 +112,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> derivative(nonNegative: true) + |> derivative(nonNegative: true) ``` {{< expand-wrapper >}} @@ -153,7 +153,7 @@ sampledata.int() import "sampledata" sampledata.int(includeNull: true) - |> derivative() + |> derivative() ``` {{% expand "View input and output" %}} {{< flex >}} diff --git a/content/flux/v0.x/stdlib/universe/display.md b/content/flux/v0.x/stdlib/universe/display.md new file mode 100644 index 000000000..efd71e565 --- /dev/null +++ b/content/flux/v0.x/stdlib/universe/display.md @@ -0,0 +1,128 @@ +--- +title: display() function +description: > + `display()` returns the Flux literal representation of any value as a string. +menu: + flux_0_x_ref: + name: display + parent: universe +weight: 102 +introduced: 0.154.0 +--- + +`display()` returns the Flux literal representation of any value as a string. + +```js +display(v: "example value") +``` + +[Basic types](/flux/v0.x/data-types/basic/) are converted directly to a string. +[Bytes types](/flux/v0.x/data-types/basic/bytes/) are represented as a string of +lowercase hexadecimal characters prefixed with `0x`. +[Composite types](/flux/v0.x/data-types/composite/) (arrays, dictionaries, and records) +are represented in a syntax similar to their equivalent Flux literal representation. 
+ +Note the following about the resulting string representation: + +- It cannot always be parsed back into the original value. +- It may span multiple lines. +- It may change between Flux versions. + +{{% note %}} +`display()` differs from [`string()`](/flux/v0.x/stdlib/universe/string/) in +that `display()` recursively converts values inside composite types to strings. +`string()` does not operate on composite types. +{{% /note %}} + +## Parameters + +### v +Value to convert for display. + +## Examples + +- [Display composite values as part of a table](#display-composite-values-as-part-of-a-table) +- [Display a record](#display-a-record) +- [Display an array](#display-an-array) +- [Display a dictionary](#display-a-dictionary) +- [Display bytes](#display-bytes) +- [Display a composite value](#display-a-composite-value) + +### Display composite values as part of a table +Use [`array.from()`](/flux/v0.x/stdlib/array/from/) and `display()` to quickly +observe any value. + +```js +import "array" + +array.from( + rows: [ + { + dict: display(v: ["a":1, "b": 2]), + record: display(v:{x: 1, y: 2}), + array: display(v: [5,6,7]) + } + ] +) +``` + +#### Output data +| dict | record | array | +| :----------- | :----------- | :-------- | +| [a: 1, b: 2] | {x: 1, y: 2} | [5, 6, 7] | + +### Display a record +```js +x = {a: 1, b: 2, c: 3} + +display(v: x) + +// Returns {a: 1, b: 2, c: 3} +``` + +### Display an array +```js +x = [1, 2, 3] + +display(v: x) + +// Returns [1, 2, 3] +``` + +### Display a dictionary +```js +x = ["a": 1, "b": 2, "c": 3] + +display(v: x) + +// Returns [a: 1, b: 2, c: 3] +``` + +### Display bytes +```js +x = bytes(v:"abc") + +display(v: x) + +// Returns 0x616263 +``` + +### Display a composite value +```js +x = { + bytes: bytes(v: "abc"), + string: "str", + array: [1,2,3], + dict: ["a": 1, "b": 2, "c": 3], +} + +display(v: x) + +// Returns +// { +// array: [1, 2, 3], +// bytes: 0x616263, +// dict: [a: 1, b: 2, c: 3], +// string: str +// } +``` \ No newline at end of file diff --git a/content/flux/v0.x/stdlib/universe/distinct.md b/content/flux/v0.x/stdlib/universe/distinct.md index 9e618190f..e424d0386 100644 --- a/content/flux/v0.x/stdlib/universe/distinct.md +++ b/content/flux/v0.x/stdlib/universe/distinct.md @@ -51,7 +51,7 @@ import "sampledata" data = sampledata.int() data - |> distinct() + |> distinct() ``` {{< expand-wrapper >}} @@ -93,7 +93,7 @@ data import "sampledata" sampledata.int() - |> distinct(column: "tag") + |> distinct(column: "tag") ``` {{< expand-wrapper >}} @@ -126,7 +126,7 @@ sampledata.int() import "sampledata" sampledata.int(includeNull: true) - |> distinct() + |> distinct() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/doubleema.md b/content/flux/v0.x/stdlib/universe/doubleema.md index 7eb4829e7..a86db8caa 100644 --- a/content/flux/v0.x/stdlib/universe/doubleema.md +++ b/content/flux/v0.x/stdlib/universe/doubleema.md @@ -57,7 +57,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> doubleEMA(n: 3) + |> doubleEMA(n: 3) ``` {{< expand-wrapper >}} @@ -86,14 +86,3 @@ sampledata.int() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -doubleEMA = (n, tables=<-) => - tables - |> exponentialMovingAverage(n:n) - |> duplicate(column:"_value", as:"ema") - |> exponentialMovingAverage(n:n) - |> map(fn: (r) => ({r with _value: 2.0 * r.ema - r._value})) - |> drop(columns: ["ema"]) -``` diff --git 
a/content/flux/v0.x/stdlib/universe/drop.md b/content/flux/v0.x/stdlib/universe/drop.md index 411f1a0a3..500bb2e69 100644 --- a/content/flux/v0.x/stdlib/universe/drop.md +++ b/content/flux/v0.x/stdlib/universe/drop.md @@ -62,7 +62,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> drop(columns: ["_time", "tid"]) + |> drop(columns: ["_time", "tid"]) ``` {{< expand-wrapper >}} @@ -103,7 +103,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> drop(fn: (column) => column =~ /^t/) + |> drop(fn: (column) => column =~ /^t/) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/duplicate.md b/content/flux/v0.x/stdlib/universe/duplicate.md index bf3a4a58f..98b80b0de 100644 --- a/content/flux/v0.x/stdlib/universe/duplicate.md +++ b/content/flux/v0.x/stdlib/universe/duplicate.md @@ -48,7 +48,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> duplicate(column: "tag", as: "tag_dup") + |> duplicate(column: "tag", as: "tag_dup") ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/duration.md b/content/flux/v0.x/stdlib/universe/duration.md index 06badc300..c5928753e 100644 --- a/content/flux/v0.x/stdlib/universe/duration.md +++ b/content/flux/v0.x/stdlib/universe/duration.md @@ -67,14 +67,14 @@ This example converts an integer to a duration and stores the value as a string. import "generate" data = generate.from( - count: 5, - fn: (n) => (n + 1) * 3600000000000, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-01T05:00:00Z, + count: 5, + fn: (n) => (n + 1) * 3600000000000, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-01T05:00:00Z, ) data - |> map(fn:(r) => ({ r with _value: string(v: duration(v: r._value)) })) + |> map(fn:(r) => ({ r with _value: string(v: duration(v: r._value)) })) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/elapsed.md b/content/flux/v0.x/stdlib/universe/elapsed.md index 71273fde0..fbac65fca 100644 --- a/content/flux/v0.x/stdlib/universe/elapsed.md +++ b/content/flux/v0.x/stdlib/universe/elapsed.md @@ -22,9 +22,9 @@ Given an input table, `elapsed()` returns the same table without the first recor ```js elapsed( - unit: 1s, - timeColumn: "_time", - columnName: "elapsed" + unit: 1s, + timeColumn: "_time", + columnName: "elapsed", ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> elapsed(unit: 1s) + |> elapsed(unit: 1s) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/exponentialmovingaverage.md b/content/flux/v0.x/stdlib/universe/exponentialmovingaverage.md index aab2afc7f..9a0bd8403 100644 --- a/content/flux/v0.x/stdlib/universe/exponentialmovingaverage.md +++ b/content/flux/v0.x/stdlib/universe/exponentialmovingaverage.md @@ -60,7 +60,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> exponentialMovingAverage(n: 3) + |> exponentialMovingAverage(n: 3) ``` {{< expand-wrapper >}} @@ -100,7 +100,7 @@ sampledata.int() import "sampledata" sampledata.int(includeNull: true) - |> exponentialMovingAverage(n: 3) + |> exponentialMovingAverage(n: 3) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/fill.md b/content/flux/v0.x/stdlib/universe/fill.md index 6d1241edf..ae3a2374e 100644 
--- a/content/flux/v0.x/stdlib/universe/fill.md +++ b/content/flux/v0.x/stdlib/universe/fill.md @@ -60,7 +60,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float(includeNull: true) - |> fill(value: 0.0) + |> fill(value: 0.0) ``` {{< expand-wrapper >}} @@ -103,7 +103,7 @@ sampledata.float(includeNull: true) import "sampledata" sampledata.float(includeNull: true) - |> fill(usePrevious: true) + |> fill(usePrevious: true) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/filter.md b/content/flux/v0.x/stdlib/universe/filter.md index 83cde4a88..634d97670 100644 --- a/content/flux/v0.x/stdlib/universe/filter.md +++ b/content/flux/v0.x/stdlib/universe/filter.md @@ -25,8 +25,8 @@ The output tables have the same schema as the corresponding input tables. ```js filter( - fn: (r) => r._measurement == "cpu", - onEmpty: "drop" + fn: (r) => r._measurement == "cpu", + onEmpty: "drop", ) ``` @@ -79,13 +79,9 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi #### Filter based on InfluxDB measurement, field, and tag ```js -from(bucket:"example-bucket") - |> range(start:-1h) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" and - r.cpu == "cpu-total" - ) +from(bucket: "example-bucket") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total") ``` #### Keep empty tables when filtering @@ -96,7 +92,7 @@ import "sampledata" import "experimental/table" sampledata.int() - |> filter(fn: (r) => r._value > 18, onEmpty: "keep") + |> filter(fn: (r) => r._value > 18, onEmpty: "keep") ``` {{% note %}} @@ -135,7 +131,7 @@ The following example uses data provided by the [`sampledata` package](/flux/v0. import "sampledata" sampledata.int(includeNull: true) - |> filter(fn: (r) => exists r._value ) + |> filter(fn: (r) => exists r._value ) ``` {{< expand-wrapper >}} @@ -175,7 +171,7 @@ The following example uses data provided by the [`sampledata` package](/flux/v0. import "sampledata" sampledata.int() - |> filter(fn: (r) => r._value > 0 and r._value < 10 ) + |> filter(fn: (r) => r._value > 0 and r._value < 10 ) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/findcolumn.md b/content/flux/v0.x/stdlib/universe/findcolumn.md index 2579c7fa6..a47bd6fbf 100644 --- a/content/flux/v0.x/stdlib/universe/findcolumn.md +++ b/content/flux/v0.x/stdlib/universe/findcolumn.md @@ -24,8 +24,8 @@ is not present in the set of columns. 
```js findColumn( - fn: (key) => key._field == "fieldName", - column: "_value" + fn: (key) => key._field == "fieldName", + column: "_value", ) ``` @@ -49,10 +49,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> findColumn( - fn: (key) => key.tag == "t1", - column: "_value" - ) - + |> findColumn(fn: (key) => key.tag == "t1", column: "_value") + // Returns [-2, 10, 7, 17, 15, 4] ``` diff --git a/content/flux/v0.x/stdlib/universe/findrecord.md b/content/flux/v0.x/stdlib/universe/findrecord.md index c2f41729f..a4b17dc0c 100644 --- a/content/flux/v0.x/stdlib/universe/findrecord.md +++ b/content/flux/v0.x/stdlib/universe/findrecord.md @@ -23,8 +23,8 @@ The function returns an empty record if no table is found or if the index is out ```js findRecord( - fn: (key) => key._field == "fieldName"), - idx: 0 + fn: (key) => key._field == "fieldName", + idx: 0, ) ``` @@ -48,10 +48,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> findRecord( - fn: (key) => key.tag == "t1", - idx: 0 - ) - + |> findRecord(fn: (key) => key.tag == "t1", idx: 0) + // Returns {_time: 2021-01-01T00:00:00.000000000Z, _value: -2, tag: t1} ``` diff --git a/content/flux/v0.x/stdlib/universe/first.md b/content/flux/v0.x/stdlib/universe/first.md index c977d7be9..470aae521 100644 --- a/content/flux/v0.x/stdlib/universe/first.md +++ b/content/flux/v0.x/stdlib/universe/first.md @@ -43,7 +43,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> first() + |> first() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/float.md b/content/flux/v0.x/stdlib/universe/float.md index 33756a462..413035dee 100644 --- a/content/flux/v0.x/stdlib/universe/float.md +++ b/content/flux/v0.x/stdlib/universe/float.md @@ -71,10 +71,10 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 import "sampledata" data = sampledata.int() - |> rename(columns: {_value: "foo"}) + |> rename(columns: {_value: "foo"}) data - |> map(fn:(r) => ({ r with foo: float(v: r.foo) })) + |> map(fn: (r) => ({r with foo: float(v: r.foo)})) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/getcolumn.md b/content/flux/v0.x/stdlib/universe/getcolumn.md index 3eb2094b8..d6effcb30 100644 --- a/content/flux/v0.x/stdlib/universe/getcolumn.md +++ b/content/flux/v0.x/stdlib/universe/getcolumn.md @@ -48,8 +48,8 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> tableFind(fn: (key) => key.tag == "t1") - |> getColumn(column: "_value") - + |> tableFind(fn: (key) => key.tag == "t1") + |> getColumn(column: "_value") + // Returns [-2, 10, 7, 17, 15, 4] ``` diff --git a/content/flux/v0.x/stdlib/universe/getrecord.md b/content/flux/v0.x/stdlib/universe/getrecord.md index 58e682a9a..e4d212c29 100644 --- a/content/flux/v0.x/stdlib/universe/getrecord.md +++ b/content/flux/v0.x/stdlib/universe/getrecord.md @@ -48,8 +48,8 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> tableFind(fn: (key) => key.tag == "t1") - |> getRecord(idx: 0) - + |> tableFind(fn: (key) => key.tag == "t1") + |> getRecord(idx: 0) + // Returns {_time: 2021-01-01T00:00:00.000000000Z, _value: -2, tag: t1} ``` diff --git 
a/content/flux/v0.x/stdlib/universe/group.md b/content/flux/v0.x/stdlib/universe/group.md index ff8c4f6d0..ed97ccd94 100644 --- a/content/flux/v0.x/stdlib/universe/group.md +++ b/content/flux/v0.x/stdlib/universe/group.md @@ -46,8 +46,8 @@ after `group()`. ```js data - |> group() - |> sort(columns: ["_time"]) + |> group() + |> sort(columns: ["_time"]) ``` {{% /warn %}} @@ -80,7 +80,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> group(columns: ["_time", "tag"]) + |> group(columns: ["_time", "tag"]) ``` {{< expand-wrapper >}} @@ -156,7 +156,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> group(columns: ["_time"], mode: "except") + |> group(columns: ["_time"], mode: "except") ``` {{< expand-wrapper >}} @@ -231,7 +231,7 @@ import "sampledata" // Merge all tables into a single table sampledata.int() - |> group() + |> group() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/highestaverage.md b/content/flux/v0.x/stdlib/universe/highestaverage.md index caf2038a5..7f82b7047 100644 --- a/content/flux/v0.x/stdlib/universe/highestaverage.md +++ b/content/flux/v0.x/stdlib/universe/highestaverage.md @@ -21,9 +21,9 @@ _`highestAverage()` is a [selector function](/flux/v0.x/function-types/#selector ```js highestAverage( - n:10, - column: "_value", - groupColumns: [] + n:10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> highestAverage(n: 2, groupColumns: ["tag"]) + |> highestAverage(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/highestcurrent.md b/content/flux/v0.x/stdlib/universe/highestcurrent.md index c53810c7c..e66878b1c 100644 --- a/content/flux/v0.x/stdlib/universe/highestcurrent.md +++ b/content/flux/v0.x/stdlib/universe/highestcurrent.md @@ -21,9 +21,9 @@ _`highestCurrent()` is a [selector function](/flux/v0.x/function-types/#selector ```js highestCurrent( - n:10, - column: "_value", - groupColumns: [] + n:10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> highestCurrent(n: 2, groupColumns: ["tag"]) + |> highestCurrent(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/highestmax.md b/content/flux/v0.x/stdlib/universe/highestmax.md index 81f5214a5..e39278310 100644 --- a/content/flux/v0.x/stdlib/universe/highestmax.md +++ b/content/flux/v0.x/stdlib/universe/highestmax.md @@ -21,9 +21,9 @@ _`highestMax()` is a [selector function](/flux/v0.x/function-types/#selectors)._ ```js highestMax( - n:10, - column: "_value", - groupColumns: [] + n:10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> highestMax(n: 2, groupColumns: ["tag"]) + |> highestMax(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/histogram.md b/content/flux/v0.x/stdlib/universe/histogram.md index 4525dbd0a..7e6ba3800 100644 --- a/content/flux/v0.x/stdlib/universe/histogram.md +++ b/content/flux/v0.x/stdlib/universe/histogram.md @@ -27,12 +27,12 @@ Columns not part of the group key are removed and an upper bound column and a co 
```js histogram( - column: "_value", - upperBoundColumn: "le", - countColumn: "_value", - bins: [50.0, 75.0, 90.0], - normalize: false - ) + column: "_value", + upperBoundColumn: "le", + countColumn: "_value", + bins: [50.0, 75.0, 90.0], + normalize: false, +) ``` ## Parameters @@ -84,7 +84,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> histogram(bins: [0.0, 5.0, 10.0, 20.0]) + |> histogram(bins: [0.0, 5.0, 10.0, 20.0]) ``` {{< expand-wrapper >}} @@ -123,9 +123,7 @@ sampledata.float() import "sampledata" sampledata.float() - |> histogram( - bins: linearBins(start:0.0, width:4.0, count:3) - ) + |> histogram(bins: linearBins(start: 0.0, width: 4.0, count: 3)) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/histogramquantile.md b/content/flux/v0.x/stdlib/universe/histogramquantile.md index 529e2819b..2e8dc151e 100644 --- a/content/flux/v0.x/stdlib/universe/histogramquantile.md +++ b/content/flux/v0.x/stdlib/universe/histogramquantile.md @@ -42,11 +42,11 @@ _**Output data type:** Float_ ```js histogramQuantile( - quantile: 0.5, - countColumn: "_value", - upperBoundColumn: "le", - valueColumn: "_value", - minValue: 0.0 + quantile: 0.5, + countColumn: "_value", + upperBoundColumn: "le", + valueColumn: "_value", + minValue: 0.0, ) ``` @@ -95,10 +95,10 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" data = sampledata.float() - |> histogram(bins: [0.0, 5.0, 10.0, 20.0]) + |> histogram(bins: [0.0, 5.0, 10.0, 20.0]) data - |> histogramQuantile(quantile: 0.9) + |> histogramQuantile(quantile: 0.9) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/holtwinters.md b/content/flux/v0.x/stdlib/universe/holtwinters.md index 1a2b7e7b0..47e5a9864 100644 --- a/content/flux/v0.x/stdlib/universe/holtwinters.md +++ b/content/flux/v0.x/stdlib/universe/holtwinters.md @@ -25,12 +25,12 @@ _**Output data type:** Float_ ```js holtWinters( - n: 10, - seasonality: 4, - interval: 30d, - withFit: false, - timeColumn: "_time", - column: "_value", + n: 10, + seasonality: 4, + interval: 30d, + withFit: false, + timeColumn: "_time", + column: "_value", ) ``` @@ -119,7 +119,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> holtWinters(n: 6, interval: 10s) + |> holtWinters(n: 6, interval: 10s) ``` {{< expand-wrapper >}} @@ -162,11 +162,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> holtWinters( - n: 4, - interval: 10s, - seasonality: 4 - ) + |> holtWinters(n: 4, interval: 10s, seasonality: 4) ``` {{< expand-wrapper >}} @@ -205,11 +201,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> holtWinters( - n: 3, - interval: 10s, - withFit: true - ) + |> holtWinters(n: 3, interval: 10s, withFit: true) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/hourselection.md b/content/flux/v0.x/stdlib/universe/hourselection.md index 6c1f6d3d8..1cfb65276 100644 --- a/content/flux/v0.x/stdlib/universe/hourselection.md +++ b/content/flux/v0.x/stdlib/universe/hourselection.md @@ -21,9 +21,9 @@ The `hourSelection()` function retains all rows with time values in a specified ```js hourSelection( - start: 9, - stop: 17, - timeColumn: "_time" + start: 9, + stop: 17, + timeColumn: "_time", ) ``` @@ -56,14 +56,14 @@ to generate sample data and show how `covariance()` transforms data. 
import "generate" data = generate.from( - count: 8, - fn: (n) => n * n, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-02T00:00:00Z + count: 8, + fn: (n) => n * n, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-02T00:00:00Z, ) data - |> hourSelection(start: 9, stop: 17) + |> hourSelection(start: 9, stop: 17) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/increase.md b/content/flux/v0.x/stdlib/universe/increase.md index 27c0ed2c1..9ca615339 100644 --- a/content/flux/v0.x/stdlib/universe/increase.md +++ b/content/flux/v0.x/stdlib/universe/increase.md @@ -27,8 +27,6 @@ when they hit a threshold or are reset. In the case of a wrap/reset, we can assume that the absolute delta between two points will be at least their non-negative difference. -_**Output data type:** Float_ - ```js increase(columns: ["_value"]) ``` @@ -54,7 +52,7 @@ For each input table with `n` rows, `increase()` outputs a table with `n - 1` ro import "sampledata" sampledata.int() - |> increase() + |> increase() ``` {{< expand-wrapper >}} @@ -92,10 +90,3 @@ sampledata.int() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -increase = (tables=<-, columns=["_value"]) => tables - |> difference(nonNegative: true, columns: column, keepFirst: true, initialZero: true) - |> cumulativeSum() -``` diff --git a/content/flux/v0.x/stdlib/universe/int.md b/content/flux/v0.x/stdlib/universe/int.md index 09d52eed6..eacf7f2b4 100644 --- a/content/flux/v0.x/stdlib/universe/int.md +++ b/content/flux/v0.x/stdlib/universe/int.md @@ -19,8 +19,6 @@ introduced: 0.7.0 The `int()` function converts a single value to an integer. -_**Output data type:** Integer_ - ```js int(v: "4") ``` @@ -98,10 +96,10 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 import "sampledata" data = sampledata.float() - |> rename(columns: {_value: "foo"}) + |> rename(columns: {_value: "foo"}) data - |> map(fn:(r) => ({ r with foo: int(v: r.foo) })) + |> map(fn: (r) => ({r with foo: int(v: r.foo)})) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/integral.md b/content/flux/v0.x/stdlib/universe/integral.md index 77a3cdf0b..9202248c5 100644 --- a/content/flux/v0.x/stdlib/universe/integral.md +++ b/content/flux/v0.x/stdlib/universe/integral.md @@ -27,10 +27,10 @@ _**Output data type:** Float_ ```js integral( - unit: 10s, - column: "_value", - timeColumn: "_time", - interpolate: "" + unit: 10s, + column: "_value", + timeColumn: "_time", + interpolate: "", ) ``` @@ -69,8 +69,8 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> range(start: sampledata.start, stop: sampledata.stop) - |> integral(unit:10s) + |> range(start: sampledata.start, stop: sampledata.stop) + |> integral(unit:10s) ``` {{< expand-wrapper >}} @@ -97,8 +97,8 @@ sampledata.int() import "sampledata" sampledata.int(includeNull: true) - |> range(start: sampledata.start, stop: sampledata.stop) - |> integral(unit:10s, interpolate: "linear") + |> range(start: sampledata.start, stop: sampledata.stop) + |> integral(unit:10s, interpolate: "linear") ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/intervals.md b/content/flux/v0.x/stdlib/universe/intervals.md index 4d955bf93..df3ee4e5e 100644 --- a/content/flux/v0.x/stdlib/universe/intervals.md +++ b/content/flux/v0.x/stdlib/universe/intervals.md @@ -85,23 +85,23 @@ intervals(every:1mo, period:-1d) ```js // 1 day intervals 
excluding weekends intervals( - every:1d, - filter: (interval) => !(weekday(time: interval.start) in [Sunday, Saturday]), + every: 1d, + filter: (interval) => !(weekday(time: interval.start) in [Sunday, Saturday]), ) // Work hours from 9AM - 5PM on work days. intervals( - every:1d, - period:8h, - offset:9h, - filter:(interval) => !(weekday(time: interval.start) in [Sunday, Saturday]), + every: 1d, + period: 8h, + offset: 9h, + filter: (interval) => !(weekday(time: interval.start) in [Sunday, Saturday]), ) ``` ##### Using known start and stop dates ```js // Every hour for six hours on Sep 5th. -intervals(every:1h)(start:2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00) +intervals(every: 1h)(start: 2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00) // Generates // [2018-09-05T00:00:00-07:00, 2018-09-05T01:00:00-07:00) @@ -112,7 +112,7 @@ intervals(every:1h)(start:2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-0 // [2018-09-05T05:00:00-07:00, 2018-09-05T06:00:00-07:00) // Every hour for six hours with 1h30m periods on Sep 5th -intervals(every:1h, period:1h30m)(start:2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00) +intervals(every: 1h, period: 1h30m)(start: 2018-09-05T00:00:00-07:00, stop: 2018-09-05T06:00:00-07:00) // Generates // [2018-09-05T00:00:00-07:00, 2018-09-05T01:30:00-07:00) @@ -123,7 +123,7 @@ intervals(every:1h, period:1h30m)(start:2018-09-05T00:00:00-07:00, stop: 2018-09 // [2018-09-05T05:00:00-07:00, 2018-09-05T06:30:00-07:00) // Every hour for six hours using the previous hour on Sep 5th -intervals(every:1h, period:-1h)(start:2018-09-05T12:00:00-07:00, stop: 2018-09-05T18:00:00-07:00) +intervals(every: 1h, period: -1h)(start: 2018-09-05T12:00:00-07:00, stop: 2018-09-05T18:00:00-07:00) // Generates // [2018-09-05T11:00:00-07:00, 2018-09-05T12:00:00-07:00) @@ -135,7 +135,7 @@ intervals(every:1h, period:-1h)(start:2018-09-05T12:00:00-07:00, stop: 2018-09-0 // [2018-09-05T17:00:00-07:00, 2018-09-05T18:00:00-07:00) // Every month for 4 months starting on Jan 1st -intervals(every:1mo)(start:2018-01-01, stop: 2018-05-01) +intervals(every: 1mo)(start: 2018-01-01T00:00:00Z, stop: 2018-05-01T00:00:00Z) // Generates // [2018-01-01, 2018-02-01) @@ -144,7 +144,7 @@ intervals(every:1mo)(start:2018-01-01, stop: 2018-05-01) // [2018-04-01, 2018-05-01) // Every month for 4 months starting on Jan 15th -intervals(every:1mo)(start:2018-01-15, stop: 2018-05-15) +intervals(every: 1mo)(start: 2018-01-15T00:00:00Z, stop: 2018-05-15T00:00:00Z) // Generates // [2018-01-15, 2018-02-15) diff --git a/content/flux/v0.x/stdlib/universe/join.md b/content/flux/v0.x/stdlib/universe/join.md index 77a49ed6d..960bdd14b 100644 --- a/content/flux/v0.x/stdlib/universe/join.md +++ b/content/flux/v0.x/stdlib/universe/join.md @@ -28,9 +28,9 @@ The resulting group key is the union of the input group keys. 
```js join( - tables: {key1: table1, key2: table2}, - on: ["_time", "_field"], - method: "inner" + tables: {key1: table1, key2: table2}, + on: ["_time", "_field"], + method: "inner", ) ``` @@ -95,13 +95,10 @@ import "generate" t1 = generate.from(count: 4, fn: (n) => n + 1, start: 2021-01-01T00:00:00Z, stop: 2021-01-05T00:00:00Z) |> set(key: "tag", value: "foo") -t2 = generate.from(count: 4, fn: (n) => n * -1, start: 2021-01-01T00:00:00Z, stop: 2021-01-05T00:00:00Z) +t2 = generate.from(count: 4, fn: (n) => n * (-1), start: 2021-01-01T00:00:00Z, stop: 2021-01-05T00:00:00Z) |> set(key: "tag", value: "foo") -join( - tables: {t1: t1, t2: t2}, - on: ["_time", "tag"], -) +join(tables: {t1: t1, t2: t2}, on: ["_time", "tag"]) ``` #### Input data streams @@ -145,24 +142,15 @@ The following example shows how data in different InfluxDB measurements can be joined with Flux. ```js -data_1 = from(bucket:"example-bucket") - |> range(start:-15m) - |> filter(fn: (r) => - r._measurement == "cpu" and - r._field == "usage_system" - ) +data_1 = from(bucket: "example-bucket") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") -data_2 = from(bucket:"example-bucket") - |> range(start:-15m) - |> filter(fn: (r) => - r._measurement == "mem" and - r._field == "used_percent" - ) +data_2 = from(bucket: "example-bucket") + |> range(start: -15m) + |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent") -join( - tables: {d1: data_1, d2: data_2}, - on: ["_time", "host"], -) +join(tables: {d1: data_1, d2: data_2}, on: ["_time", "host"]) ``` ## join() versus union() @@ -204,10 +192,7 @@ are illustrated below: #### join() output ```js -join( - tables: {t1: t1, t2: t2}, - on: ["_time", "tag"], -) +join(tables: {t1: t1, t2: t2}, on: ["_time", "tag"]) ``` | _time | tag | _value_t1 | _value_t2 | @@ -233,3 +218,4 @@ union(tables: [t1, t2]) | 2021-01-03T00:00:00Z | foo | 3 | | 2021-01-04T00:00:00Z | foo | 4 | {{% /expand %}} + diff --git a/content/flux/v0.x/stdlib/universe/kaufmansama.md b/content/flux/v0.x/stdlib/universe/kaufmansama.md index 93cc37236..0b87fa895 100644 --- a/content/flux/v0.x/stdlib/universe/kaufmansama.md +++ b/content/flux/v0.x/stdlib/universe/kaufmansama.md @@ -26,8 +26,8 @@ using values in an input table. 
```js kaufmansAMA( - n: 10, - column: "_value" + n: 10, + column: "_value", ) ``` @@ -55,7 +55,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> kaufmansAMA(n: 3) + |> kaufmansAMA(n: 3) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/kaufmanser.md b/content/flux/v0.x/stdlib/universe/kaufmanser.md index 6fc30f9bc..80dcbe629 100644 --- a/content/flux/v0.x/stdlib/universe/kaufmanser.md +++ b/content/flux/v0.x/stdlib/universe/kaufmanser.md @@ -49,7 +49,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> kaufmansER(n: 3) + |> kaufmansER(n: 3) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/keep.md b/content/flux/v0.x/stdlib/universe/keep.md index 76ad4ff8d..89274683a 100644 --- a/content/flux/v0.x/stdlib/universe/keep.md +++ b/content/flux/v0.x/stdlib/universe/keep.md @@ -60,7 +60,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> keep(columns: ["_time", "_value"]) + |> keep(columns: ["_time", "_value"]) ``` {{< expand-wrapper >}} @@ -101,7 +101,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> keep(fn: (column) => column =~ /^_?t/) + |> keep(fn: (column) => column =~ /^_?t/) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/keys.md b/content/flux/v0.x/stdlib/universe/keys.md index 3939ee9d5..e18a86223 100644 --- a/content/flux/v0.x/stdlib/universe/keys.md +++ b/content/flux/v0.x/stdlib/universe/keys.md @@ -55,12 +55,12 @@ to simulate data queried from InfluxDB and illustrate how `keys()` transforms da ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> range(start: -30m) - |> filter(fn: (r) => r.sensor_id == "TLM0100") +data = sample.data(set: "airSensor") + |> range(start: -30m) + |> filter(fn: (r) => r.sensor_id == "TLM0100") data - |> keys() + |> keys() ``` {{< expand-wrapper >}} @@ -122,14 +122,14 @@ data ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> range(start: -30m) - |> filter(fn: (r) => r.sensor_id == "TLM0100") +data = sample.data(set: "airSensor") + |> range(start: -30m) + |> filter(fn: (r) => r.sensor_id == "TLM0100") data - |> keys() - |> keep(columns: ["_value"]) - |> distinct() + |> keys() + |> keep(columns: ["_value"]) + |> distinct() ``` {{< expand-wrapper >}} @@ -182,13 +182,13 @@ To return group key columns as an array: ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> range(start: -30m) - |> filter(fn: (r) => r.sensor_id == "TLM0100") +data = sample.data(set: "airSensor") + |> range(start: -30m) + |> filter(fn: (r) => r.sensor_id == "TLM0100") data - |> keys() - |> findColumn(fn: (key) => true, column: "_value") + |> keys() + |> findColumn(fn: (key) => true, column: "_value") // Returns [_start, _stop, _field, _measurement, sensor_id] ``` diff --git a/content/flux/v0.x/stdlib/universe/keyvalues.md b/content/flux/v0.x/stdlib/universe/keyvalues.md index a71689d85..c24698c2d 100644 --- a/content/flux/v0.x/stdlib/universe/keyvalues.md +++ b/content/flux/v0.x/stdlib/universe/keyvalues.md @@ -83,11 +83,11 @@ to simulate data queried from InfluxDB and illustrate how `keys()` transforms da ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> filter(fn: (r) => r.sensor_id == "TLM0100") +data = 
sample.data(set: "airSensor") + |> filter(fn: (r) => r.sensor_id == "TLM0100") data - |> keyValues(keyColumns: ["sensor_id", "_field"]) + |> keyValues(keyColumns: ["sensor_id", "_field"]) ``` {{< expand-wrapper >}} @@ -145,16 +145,16 @@ data ```js import "influxdata/influxdb/sample" -data = sample.data(set: "airSensor") - |> filter(fn: (r) => r.sensor_id == "TLM0100") +data = sample.data(set: "airSensor") + |> filter(fn: (r) => r.sensor_id == "TLM0100") keyColumns = data - |> keys() - |> findColumn(fn: (key) => true, column: "_value") - // Returns [_field, _measurement, sensor_id] + |> keys() + |> findColumn(fn: (key) => true, column: "_value") +// Returns [_field, _measurement, sensor_id] data - |> keyValues(keyColumns: keyColumns) + |> keyValues(keyColumns: keyColumns) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/last.md b/content/flux/v0.x/stdlib/universe/last.md index 3304f79b7..3d3881a31 100644 --- a/content/flux/v0.x/stdlib/universe/last.md +++ b/content/flux/v0.x/stdlib/universe/last.md @@ -49,7 +49,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> last() + |> last() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/limit.md b/content/flux/v0.x/stdlib/universe/limit.md index 27b563932..5579705ea 100644 --- a/content/flux/v0.x/stdlib/universe/limit.md +++ b/content/flux/v0.x/stdlib/universe/limit.md @@ -26,10 +26,7 @@ If the input table has less than `offset + n` records, `limit()` outputs all rec _`limit()` is a [selector function](/flux/v0.x/function-types/#selectors)._ ```js -limit( - n:10, - offset: 0 -) +limit(n: 10, offset: 0) ``` ## Parameters @@ -57,7 +54,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> limit(n: 3) + |> limit(n: 3) ``` {{< expand-wrapper >}} @@ -94,7 +91,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> limit(n: 3, offset: 2) + |> limit(n: 3, offset: 2) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/linearbins.md b/content/flux/v0.x/stdlib/universe/linearbins.md index 21f4bf4dd..18f31984e 100644 --- a/content/flux/v0.x/stdlib/universe/linearbins.md +++ b/content/flux/v0.x/stdlib/universe/linearbins.md @@ -21,10 +21,10 @@ _**Output data type:** Array of floats_ ```js linearBins( - start: 0.0, - width: 5.0, - count: 20, - infinity: true + start: 0.0, + width: 5.0, + count: 20, + infinity: true, ) ``` diff --git a/content/flux/v0.x/stdlib/universe/logarithmicbins.md b/content/flux/v0.x/stdlib/universe/logarithmicbins.md index ae33a1157..b22722ff4 100644 --- a/content/flux/v0.x/stdlib/universe/logarithmicbins.md +++ b/content/flux/v0.x/stdlib/universe/logarithmicbins.md @@ -21,10 +21,10 @@ _**Output data type:** Array of floats_ ```js logarithmicBins( - start:1.0, - factor: 2.0, - count: 10, - infinity: true + start: 1.0, + factor: 2.0, + count: 10, + infinity: true, ) ``` diff --git a/content/flux/v0.x/stdlib/universe/lowestaverage.md b/content/flux/v0.x/stdlib/universe/lowestaverage.md index 5f1f4f193..cfbd9d0ae 100644 --- a/content/flux/v0.x/stdlib/universe/lowestaverage.md +++ b/content/flux/v0.x/stdlib/universe/lowestaverage.md @@ -21,9 +21,9 @@ _`lowestAverage()` is a [selector function](/flux/v0.x/function-types/#selectors ```js lowestAverage( - n:10, - column: "_value", - groupColumns: [] + n: 10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data
([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> lowestAverage(n: 2, groupColumns: ["tag"]) + |> lowestAverage(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/lowestcurrent.md b/content/flux/v0.x/stdlib/universe/lowestcurrent.md index e487bf744..4c34d5114 100644 --- a/content/flux/v0.x/stdlib/universe/lowestcurrent.md +++ b/content/flux/v0.x/stdlib/universe/lowestcurrent.md @@ -21,9 +21,9 @@ _`lowestCurrent()` is a [selector function](/flux/v0.x/function-types/#selectors ```js lowestCurrent( - n:10, - column: "_value", - groupColumns: [] + n: 10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> lowestCurrent(n: 2, groupColumns: ["tag"]) + |> lowestCurrent(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/lowestmin.md b/content/flux/v0.x/stdlib/universe/lowestmin.md index 7a6d39cee..fdfbaa540 100644 --- a/content/flux/v0.x/stdlib/universe/lowestmin.md +++ b/content/flux/v0.x/stdlib/universe/lowestmin.md @@ -21,9 +21,9 @@ _`lowestMin()` is a [selector function](/flux/v0.x/function-types/#selectors)._ ```js lowestMin( - n:10, - column: "_value", - groupColumns: [] + n: 10, + column: "_value", + groupColumns: [], ) ``` @@ -56,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> lowestMin(n: 2, groupColumns: ["tag"]) + |> lowestMin(n: 2, groupColumns: ["tag"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/map.md b/content/flux/v0.x/stdlib/universe/map.md index afa7d2fed..cdc390665 100644 --- a/content/flux/v0.x/stdlib/universe/map.md +++ b/content/flux/v0.x/stdlib/universe/map.md @@ -126,11 +126,13 @@ sampledata.int() import "sampledata" sampledata.int() - |> map(fn: (r) => ({ - time: r._time, - source: r.tag, - alert: if r._value > 10 then true else false - })) + |> map( + fn: (r) => ({ + time: r._time, + source: r.tag, + alert: if r._value > 10 then true else false + }) + ) ``` {{< expand-wrapper >}} @@ -173,11 +175,13 @@ operated on by the map operation.
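As a quick aside on the record-extension syntax the next example relies on, here is a minimal standalone sketch using an invented record outside of any table stream (the field names are illustrative only):

```js
// `with` extends a record: all existing keys are kept, and the
// listed keys are added (or overwritten if they already exist).
r = {tag: "t1", _value: 3}
extended = {r with server: "server-${r.tag}", valueFloat: float(v: r._value)}
// extended: {tag: "t1", _value: 3, server: "server-t1", valueFloat: 3.0}
```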
import "sampledata" sampledata.int() - |> map(fn: (r) => ({ - r with - server: "server-${r.tag}", - valueFloat: float(v: r._value) - })) + |> map( + fn: (r) => ({ + r with + server: "server-${r.tag}", + valueFloat: float(v: r._value) + }) + ) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/max.md b/content/flux/v0.x/stdlib/universe/max.md index 4b9576500..d8c994fe5 100644 --- a/content/flux/v0.x/stdlib/universe/max.md +++ b/content/flux/v0.x/stdlib/universe/max.md @@ -46,7 +46,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> max() + |> max() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/mean.md b/content/flux/v0.x/stdlib/universe/mean.md index abf188bda..9bd016886 100644 --- a/content/flux/v0.x/stdlib/universe/mean.md +++ b/content/flux/v0.x/stdlib/universe/mean.md @@ -44,7 +44,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> mean() + |> mean() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/median.md b/content/flux/v0.x/stdlib/universe/median.md index f14355225..6387e5d3f 100644 --- a/content/flux/v0.x/stdlib/universe/median.md +++ b/content/flux/v0.x/stdlib/universe/median.md @@ -30,9 +30,9 @@ the [`method`](#method) used._ ```js median( - column: "_value", - method: "estimate_tdigest", - compression: 0.0 + column: "_value", + method: "estimate_tdigest", + compression: 0.0, ) ``` @@ -91,7 +91,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> median() + |> median() ``` {{< expand-wrapper >}} @@ -124,7 +124,7 @@ sampledata.float() import "sampledata" sampledata.float() - |> median(method: "exact_selector") + |> median(method: "exact_selector") ``` {{< expand-wrapper >}} @@ -151,13 +151,3 @@ sampledata.float() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -median = (method="estimate_tdigest", compression=0.0, tables=<-) => - quantile( - q:0.5, - method:method, - compression:compression - ) -``` diff --git a/content/flux/v0.x/stdlib/universe/min.md b/content/flux/v0.x/stdlib/universe/min.md index 8e0f22877..93ac6495e 100644 --- a/content/flux/v0.x/stdlib/universe/min.md +++ b/content/flux/v0.x/stdlib/universe/min.md @@ -46,7 +46,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> min() + |> min() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/mode.md b/content/flux/v0.x/stdlib/universe/mode.md index 9f0cb29c0..c63db4e41 100644 --- a/content/flux/v0.x/stdlib/universe/mode.md +++ b/content/flux/v0.x/stdlib/universe/mode.md @@ -62,7 +62,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> mode() + |> mode() ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/movingaverage.md b/content/flux/v0.x/stdlib/universe/movingaverage.md index 697f16147..f4b3dbfd6 100644 --- a/content/flux/v0.x/stdlib/universe/movingaverage.md +++ b/content/flux/v0.x/stdlib/universe/movingaverage.md @@ -58,7 +58,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> movingAverage(n: 3) + |> movingAverage(n: 3) ``` {{% expand-wrapper %}} @@ 
-97,7 +97,7 @@ sampledata.int() import "sampledata" sampledata.int(includeNull: true) - |> movingAverage(n: 3) + |> movingAverage(n: 3) ``` {{% expand-wrapper %}} diff --git a/content/flux/v0.x/stdlib/universe/now.md b/content/flux/v0.x/stdlib/universe/now.md index 09babc72f..cf05b8d3c 100644 --- a/content/flux/v0.x/stdlib/universe/now.md +++ b/content/flux/v0.x/stdlib/universe/now.md @@ -28,7 +28,7 @@ now() ##### Use the current UTC time as a query boundary ```js data - |> range(start: -10h, stop: now()) + |> range(start: -10h, stop: now()) ``` ##### Return the now option time diff --git a/content/flux/v0.x/stdlib/universe/pearsonr.md b/content/flux/v0.x/stdlib/universe/pearsonr.md index 1e2aab23f..542699fa6 100644 --- a/content/flux/v0.x/stdlib/universe/pearsonr.md +++ b/content/flux/v0.x/stdlib/universe/pearsonr.md @@ -44,18 +44,20 @@ to generate sample data and show how `pearsonr()` transforms data. import "generate" stream1 = generate.from( - count: 5, - fn: (n) => n * n, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-01T00:01:00Z -) |> toFloat() + count: 5, + fn: (n) => n * n, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-01T00:01:00Z, +) + |> toFloat() stream2 = generate.from( - count: 5, - fn: (n) => n * n * n / 2, - start: 2021-01-01T00:00:00Z, - stop: 2021-01-01T00:01:00Z -) |> toFloat() + count: 5, + fn: (n) => n * n * n / 2, + start: 2021-01-01T00:00:00Z, + stop: 2021-01-01T00:01:00Z, +) + |> toFloat() pearsonr(x: stream1, y: stream2, on: ["_time"]) ``` @@ -94,9 +96,3 @@ pearsonr(x: stream1, y: stream2, on: ["_time"]) {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -pearsonr = (x,y,on) => - cov(x:x, y:y, on:on, pearsonr:true) -``` diff --git a/content/flux/v0.x/stdlib/universe/pivot.md b/content/flux/v0.x/stdlib/universe/pivot.md index 523ef6444..68ee84331 100644 --- a/content/flux/v0.x/stdlib/universe/pivot.md +++ b/content/flux/v0.x/stdlib/universe/pivot.md @@ -70,11 +70,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ```js data - |> pivot( - rowKey:["_time"], - columnKey: ["_field"], - valueColumn: "_value" - ) + |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value") ``` {{< expand-wrapper >}} @@ -116,11 +112,7 @@ data ```js data - |> pivot( - rowKey:["_time"], - columnKey: ["_measurement", "_field"], - valueColumn: "_value" - ) + |> pivot(rowKey: ["_time"], columnKey: ["_measurement", "_field"], valueColumn: "_value") ``` {{< expand-wrapper >}} @@ -161,11 +153,7 @@ data import "sampledata" sampledata.int() - |> pivot( - rowKey: ["_time"], - columnKey: ["tag"], - valueColumn: "_value" - ) + |> pivot(rowKey: ["_time"], columnKey: ["tag"], valueColumn: "_value") ``` {{< expand-wrapper >}} @@ -193,3 +181,4 @@ sampledata.int() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} + diff --git a/content/flux/v0.x/stdlib/universe/quantile.md b/content/flux/v0.x/stdlib/universe/quantile.md index 29b7259b0..1b7ddf75f 100644 --- a/content/flux/v0.x/stdlib/universe/quantile.md +++ b/content/flux/v0.x/stdlib/universe/quantile.md @@ -31,10 +31,10 @@ the [`method`](#method) used._ ```js quantile( - column: "_value", - q: 0.99, - method: "estimate_tdigest", - compression: 1000.0 + column: "_value", + q: 0.99, + method: "estimate_tdigest", + compression: 1000.0, ) ``` @@ -90,11 +90,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> quantile( - q: 0.99, - method: "estimate_tdigest", - compression: 1000.0 - ) + |> quantile(q: 0.99, 
method: "estimate_tdigest", compression: 1000.0) ``` {{< expand-wrapper >}} @@ -127,10 +123,7 @@ sampledata.float() import "sampledata" sampledata.float() - |> quantile( - q: 0.5, - method: "exact_selector" - ) + |> quantile(q: 0.5, method: "exact_selector") ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/range.md b/content/flux/v0.x/stdlib/universe/range.md index 539590ede..c42c74545 100644 --- a/content/flux/v0.x/stdlib/universe/range.md +++ b/content/flux/v0.x/stdlib/universe/range.md @@ -24,10 +24,7 @@ Each input table's group key value is modified to fit within the time bounds. Tables where all records exist outside the time bounds are filtered entirely. ```js -range( - start: -15m, - stop: now() -) +range(start: -15m, stop: now()) ``` #### Behavior of start and stop times @@ -72,28 +69,28 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi #### Time range relative to now ```js -from(bucket:"example-bucket") - |> range(start: -12h) - // ... +from(bucket: "example-bucket") + |> range(start: -12h) + // ... ``` #### Relative time range ```js -from(bucket:"example-bucket") - |> range(start: -12h, stop: -15m) - // ... +from(bucket: "example-bucket") + |> range(start: -12h, stop: -15m) + // ... ``` #### Absolute time range ```js -from(bucket:"example-bucket") - |> range(start: 2018-05-22T23:30:00Z, stop: 2018-05-23T00:00:00Z) - // ... +from(bucket: "example-bucket") + |> range(start: 2018-05-22T23:30:00Z, stop: 2018-05-23T00:00:00Z) + // ... ``` #### Absolute time range with Unix timestamps ```js -from(bucket:"example-bucket") - |> range(start: 1527031800, stop: 1527033600) - // ... +from(bucket: "example-bucket") + |> range(start: 1527031800, stop: 1527033600) + // ... ``` diff --git a/content/flux/v0.x/stdlib/universe/reduce.md b/content/flux/v0.x/stdlib/universe/reduce.md index 9c3ad1bea..26a5d1d2b 100644 --- a/content/flux/v0.x/stdlib/universe/reduce.md +++ b/content/flux/v0.x/stdlib/universe/reduce.md @@ -28,7 +28,7 @@ _`reduce()` is an [aggregate function](/flux/v0.x/function-types/#aggregates)._ ```js reduce( fn: (r, accumulator) => ({ sum: r._value + accumulator.sum }), - identity: {sum: 0.0} + identity: {sum: 0.0}, ) ``` @@ -109,7 +109,7 @@ import "sampledata" sampledata.int() |> reduce( fn: (r, accumulator) => ({sum: r._value + accumulator.sum}), - identity: {sum: 0} + identity: {sum: 0}, ) ``` @@ -182,10 +182,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> reduce( - fn: (r, accumulator) => ({prod: r._value * accumulator.prod}), - identity: {prod: 1} - ) + |> reduce(fn: (r, accumulator) => ({prod: r._value * accumulator.prod}), identity: {prod: 1}) ``` {{< expand-wrapper >}} @@ -219,12 +216,12 @@ import "sampledata" sampledata.int() |> reduce( - fn: (r, accumulator) => ({ - count: accumulator.count + 1, - total: accumulator.total + r._value, - avg: float(v: (accumulator.total + r._value)) / float(v: accumulator.count + 1) - }), - identity: {count: 0, total: 0, avg: 0.0} + fn: (r, accumulator) => ({ + count: accumulator.count + 1, + total: accumulator.total + r._value, + avg: float(v: (accumulator.total + r._value)) / float(v: accumulator.count + 1) + }), + identity: {count: 0, total: 0, avg: 0.0}, ) ``` diff --git a/content/flux/v0.x/stdlib/universe/relativestrengthindex.md b/content/flux/v0.x/stdlib/universe/relativestrengthindex.md index 5c582e391..33fbcfc1d 100644 --- a/content/flux/v0.x/stdlib/universe/relativestrengthindex.md +++ b/content/flux/v0.x/stdlib/universe/relativestrengthindex.md @@ -27,8
+27,8 @@ values in an input table. ```js relativeStrengthIndex( - n: 5, - columns: ["_value"] + n: 5, + columns: ["_value"], ) ``` @@ -67,7 +67,7 @@ with `x - n` rows. import "sampledata" sampledata.int() - |> relativeStrengthIndex(n: 3) + |> relativeStrengthIndex(n: 3) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/rename.md b/content/flux/v0.x/stdlib/universe/rename.md index 9daa938f4..a25a56ae5 100644 --- a/content/flux/v0.x/stdlib/universe/rename.md +++ b/content/flux/v0.x/stdlib/universe/rename.md @@ -63,7 +63,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> rename(columns: {tag: "uid", _value: "val"}) + |> rename(columns: {tag: "uid", _value: "val"}) ``` {{< expand-wrapper >}} @@ -107,7 +107,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> rename(fn: (column) => "${column}_new") + |> rename(fn: (column) => "${column}_new") ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/sample.md b/content/flux/v0.x/stdlib/universe/sample.md index d3ac0ef36..2e9b4bd51 100644 --- a/content/flux/v0.x/stdlib/universe/sample.md +++ b/content/flux/v0.x/stdlib/universe/sample.md @@ -52,7 +52,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> sample(n: 2, pos: 1) + |> sample(n: 2, pos: 1) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/set.md b/content/flux/v0.x/stdlib/universe/set.md index 03320a50a..2cfe24b27 100644 --- a/content/flux/v0.x/stdlib/universe/set.md +++ b/content/flux/v0.x/stdlib/universe/set.md @@ -44,7 +44,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> set(key: "host", value: "prod1") + |> set(key: "host", value: "prod1") ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/skew.md b/content/flux/v0.x/stdlib/universe/skew.md index 312d78a31..c6247f9a3 100644 --- a/content/flux/v0.x/stdlib/universe/skew.md +++ b/content/flux/v0.x/stdlib/universe/skew.md @@ -43,7 +43,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> skew() + |> skew() ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/sleep.md b/content/flux/v0.x/stdlib/universe/sleep.md index 02a908c75..0f94c3c42 100644 --- a/content/flux/v0.x/stdlib/universe/sleep.md +++ b/content/flux/v0.x/stdlib/universe/sleep.md @@ -17,10 +17,7 @@ The `sleep()` function was removed in **Flux 0.123.0**. The `sleep()` function delays execution by a specified duration. ```js -sleep( - v: x, - duration: 10s -) +sleep(v: x, duration: 10s) ``` ## Parameters @@ -40,8 +37,8 @@ Length of time to delay execution. 
### Delay execution in a chained query ```js from(bucket: "example-bucket") - |> range(start: -1h) - |> sleep(duration: 10s) + |> range(start: -1h) + |> sleep(duration: 10s) ``` ### Delay execution using a stream variable diff --git a/content/flux/v0.x/stdlib/universe/sort.md b/content/flux/v0.x/stdlib/universe/sort.md index 47d38d537..c02402d2c 100644 --- a/content/flux/v0.x/stdlib/universe/sort.md +++ b/content/flux/v0.x/stdlib/universe/sort.md @@ -57,7 +57,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> sort() + |> sort() ``` {{< expand-wrapper >}} @@ -99,7 +99,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> sort(desc: true) + |> sort(desc: true) ``` {{< expand-wrapper >}} @@ -141,7 +141,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> sort(columns: ["tag", "_value"]) + |> sort(columns: ["tag", "_value"]) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/spread.md b/content/flux/v0.x/stdlib/universe/spread.md index 5ad364e15..777f3baad 100644 --- a/content/flux/v0.x/stdlib/universe/spread.md +++ b/content/flux/v0.x/stdlib/universe/spread.md @@ -48,7 +48,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> spread() + |> spread() ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/statecount.md b/content/flux/v0.x/stdlib/universe/statecount.md index 3b2d561bb..65f418cc1 100644 --- a/content/flux/v0.x/stdlib/universe/statecount.md +++ b/content/flux/v0.x/stdlib/universe/statecount.md @@ -59,7 +59,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> stateCount(fn: (r) => r._value > 10) + |> stateCount(fn: (r) => r._value > 10) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/stateduration.md b/content/flux/v0.x/stdlib/universe/stateduration.md index 6e76ff8c4..fa544e4e8 100644 --- a/content/flux/v0.x/stdlib/universe/stateduration.md +++ b/content/flux/v0.x/stdlib/universe/stateduration.md @@ -18,12 +18,17 @@ related: introduced: 0.7.0 --- -The `stateDuration()` function computes the duration of a given state. -The state is defined via the function `fn`. -For each consecutive point for that evaluates as `true`, the state duration will be -incremented by the duration between points. -When a point evaluates as `false`, the state duration is reset. +`stateDuration()` returns the cumulative duration of a given state. + +The state is defined by the `fn` predicate function. For each consecutive +record that evaluates to `true`, the state duration is incremented by the +duration of time between records using the specified `unit`. When a record +evaluates to `false`, the value is set to `-1` and the state duration is reset. +If the record generates an error during evaluation, the point is discarded, +and does not affect the state duration. + The state duration is added as an additional column to each record. +The duration is represented as an integer in the units specified. 
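To make the reset behavior above concrete, here is a minimal sketch; it assumes the same `sampledata` package these reference pages already use, and passes the default `unit: 1s` explicitly:

```js
import "sampledata"

// While r._value > 10 holds, stateDuration accumulates seconds.
// The first record in a state reports 0, and any record that fails
// the predicate reports -1 and resets the count.
sampledata.int()
    |> stateDuration(fn: (r) => r._value > 10, unit: 1s)
```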
{{% note %}} As the first point in the given state has no previous point, its @@ -71,7 +76,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> stateDuration(fn: (r) => r._value > 10) + |> stateDuration(fn: (r) => r._value > 10) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/stddev.md b/content/flux/v0.x/stdlib/universe/stddev.md index 8f06f42ab..1c8df9fb9 100644 --- a/content/flux/v0.x/stdlib/universe/stddev.md +++ b/content/flux/v0.x/stdlib/universe/stddev.md @@ -24,10 +24,7 @@ _`stddev()` is an [aggregate function](/flux/v0.x/function-types/#aggregates)._ _**Output data type:** Float_ ```js -stddev( - column: "_value", - mode: "sample" -) +stddev(column: "_value", mode: "sample") ``` ## Parameters @@ -59,7 +56,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> stddev() + |> stddev() ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/string.md b/content/flux/v0.x/stdlib/universe/string.md index 7bbda8558..7fa3466ec 100644 --- a/content/flux/v0.x/stdlib/universe/string.md +++ b/content/flux/v0.x/stdlib/universe/string.md @@ -18,8 +18,6 @@ introduced: 0.7.0 The `string()` function converts a single value to a string. -_**Output data type:** String_ - ```js string(v: 123456789) ``` @@ -78,10 +76,10 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 import "sampledata" data = sampledata.int() - |> rename(columns: {_value: "foo"}) + |> rename(columns: {_value: "foo"}) data - |> map(fn:(r) => ({ r with foo: string(v: r.foo) })) + |> map(fn: (r) => ({r with foo: string(v: r.foo)})) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/sum.md b/content/flux/v0.x/stdlib/universe/sum.md index b628e0d68..2b8023c4a 100644 --- a/content/flux/v0.x/stdlib/universe/sum.md +++ b/content/flux/v0.x/stdlib/universe/sum.md @@ -42,7 +42,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> sum() + |> sum() ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/tail.md b/content/flux/v0.x/stdlib/universe/tail.md index 7021e008b..a27622aa2 100644 --- a/content/flux/v0.x/stdlib/universe/tail.md +++ b/content/flux/v0.x/stdlib/universe/tail.md @@ -22,10 +22,7 @@ Each output table contains the last `n` records before the [`offset`](#offset). If the input table has less than `offset + n` records, `tail()` outputs all records before the `offset`. 
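The `offset + n` edge case is easy to miss, so here is a short sketch, under the assumption that each `sampledata` table holds six records:

```js
import "sampledata"

// offset: 5 skips the five newest records. Because offset + n (8)
// exceeds the six records per table, tail() returns only the one
// record remaining before the offset.
sampledata.int()
    |> tail(n: 3, offset: 5)
```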
```js -tail( - n:10, - offset: 0 -) +tail(n: 10, offset: 0) ``` ## Parameters @@ -53,7 +50,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> tail(n: 3) + |> tail(n: 3) ``` {{< expand-wrapper >}} @@ -89,7 +86,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> tail(n: 3, offset: 1) + |> tail(n: 3, offset: 1) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/time.md b/content/flux/v0.x/stdlib/universe/time.md index e69b8eb27..65a6a94d8 100644 --- a/content/flux/v0.x/stdlib/universe/time.md +++ b/content/flux/v0.x/stdlib/universe/time.md @@ -66,11 +66,11 @@ To update values in columns other than `_value`: import "sampledata" data = sampledata.int() - |> map(fn: (r) => ({ r with _value: r._value * 1000000000 })) - |> rename(columns: {_value: "foo"}) + |> map(fn: (r) => ({r with _value: r._value * 1000000000})) + |> rename(columns: {_value: "foo"}) data - |> map(fn:(r) => ({ r with foo: time(v: r.foo) })) + |> map(fn: (r) => ({r with foo: time(v: r.foo)})) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/timedmovingaverage.md b/content/flux/v0.x/stdlib/universe/timedmovingaverage.md index c78f4073b..17330bec9 100644 --- a/content/flux/v0.x/stdlib/universe/timedmovingaverage.md +++ b/content/flux/v0.x/stdlib/universe/timedmovingaverage.md @@ -29,9 +29,9 @@ range at a specified frequency. ```js timedMovingAverage( - every: 1d, - period: 5d, - column: "_value" + every: 1d, + period: 5d, + column: "_value", ) ``` @@ -81,20 +81,15 @@ to generate sample data and illustrate how `timedMovingAverage()` transforms dat #### Calculate a five year moving average every year ```js import "generate" timeRange = {start: 2015-01-01T00:00:00Z, stop: 2021-01-01T00:00:00Z} -data = generate.from( - count: 6, - fn: (n) => n * n, - start: timeRange.start, - stop: timeRange.stop - ) - |> range(start: timeRange.start, stop: timeRange.stop) +data = generate.from(count: 6, fn: (n) => n * n, start: timeRange.start, stop: timeRange.stop) + |> range(start: timeRange.start, stop: timeRange.stop) data - |> timedMovingAverage(every: 1y, period: 5y) + |> timedMovingAverage(every: 1y, period: 5y) ``` {{< expand-wrapper >}} @@ -132,16 +127,11 @@ import "generate" timeRange = {start: 2021-01-01T00:00:00Z, stop: 2021-01-08T00:00:00Z} -data = generate.from( - count: 7, - fn: (n) => n + n, - start: timeRange.start, - stop: timeRange.stop - ) - |> range(start: timeRange.start, stop: timeRange.stop) +data = generate.from(count: 7, fn: (n) => n + n, start: timeRange.start, stop: timeRange.stop) + |> range(start: timeRange.start, stop: timeRange.stop) data - |> timedMovingAverage(every: 1d, period: 7d) + |> timedMovingAverage(every: 1d, period: 7d) ``` {{< expand-wrapper >}} @@ -176,13 +166,3 @@ data {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -timedMovingAverage = (every, period, column="_value", tables=<-) => - tables - |> window(every: every, period: period) - |> mean(column:column) - |> duplicate(column: "_stop", as: "_time") - |> window(every: inf) -``` diff --git a/content/flux/v0.x/stdlib/universe/timeshift.md b/content/flux/v0.x/stdlib/universe/timeshift.md index 82780a20c..0d2d39a65 100644 --- a/content/flux/v0.x/stdlib/universe/timeshift.md +++ b/content/flux/v0.x/stdlib/universe/timeshift.md @@ -50,7 +50,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |>
timeShift(duration: 12h) + |> timeShift(duration: 12h) ``` {{< expand-wrapper >}} @@ -92,7 +92,7 @@ sampledata.int() import "sampledata" sampledata.int() - |> timeShift(duration: -12h) + |> timeShift(duration: -12h) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/timeweightedavg.md b/content/flux/v0.x/stdlib/universe/timeweightedavg.md index 8c4194f3a..052a08120 100644 --- a/content/flux/v0.x/stdlib/universe/timeweightedavg.md +++ b/content/flux/v0.x/stdlib/universe/timeweightedavg.md @@ -42,12 +42,12 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" data = sampledata.int(includeNull: true) - |> range(start: sampledata.start, stop: sampledata.stop) - |> fill(usePrevious: true) - |> unique() + |> range(start: sampledata.start, stop: sampledata.stop) + |> fill(usePrevious: true) + |> unique() data - |> timeWeightedAvg(unit: 1s) + |> timeWeightedAvg(unit: 1s) ``` {{< expand-wrapper >}} @@ -78,16 +78,3 @@ data {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -timeWeightedAvg = (tables=<-, unit) => tables - |> integral( - unit: unit, - interpolate: "linear" - ) - |> map(fn: (r) => ({ - r with - _value: (r._value * float(v: uint(v: unit))) / float(v: int(v: r._stop) - int(v: r._start)) - })) -``` diff --git a/content/flux/v0.x/stdlib/universe/tobool.md b/content/flux/v0.x/stdlib/universe/tobool.md index ead850931..9591c4d7f 100644 --- a/content/flux/v0.x/stdlib/universe/tobool.md +++ b/content/flux/v0.x/stdlib/universe/tobool.md @@ -51,7 +51,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.numericBool() - |> toBool() + |> toBool() ``` {{< expand-wrapper >}} @@ -87,10 +87,3 @@ sampledata.numericBool() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toBool = (tables=<-) => - tables - |> map(fn:(r) => ({ r with _value: bool(v: r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/today.md b/content/flux/v0.x/stdlib/universe/today.md index ce36c7d44..e17fc1a94 100644 --- a/content/flux/v0.x/stdlib/universe/today.md +++ b/content/flux/v0.x/stdlib/universe/today.md @@ -37,5 +37,5 @@ today() ##### Query data from today ```js from(bucket: "example-bucket") - |> range(start: today()) + |> range(start: today()) ``` diff --git a/content/flux/v0.x/stdlib/universe/tofloat.md b/content/flux/v0.x/stdlib/universe/tofloat.md index d2effe351..f254a0e8a 100644 --- a/content/flux/v0.x/stdlib/universe/tofloat.md +++ b/content/flux/v0.x/stdlib/universe/tofloat.md @@ -55,7 +55,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> toFloat() + |> toFloat() ``` {{< expand-wrapper >}} @@ -97,7 +97,7 @@ sampledata.int() import "sampledata" sampledata.bool() - |> toFloat() + |> toFloat() ``` {{< expand-wrapper >}} @@ -133,10 +133,3 @@ sampledata.bool() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toFloat = (tables=<-) => - tables - |> map(fn:(r) => ({ r with _value: float(v: r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/toint.md b/content/flux/v0.x/stdlib/universe/toint.md index 2b3ab6132..884abb273 100644 --- a/content/flux/v0.x/stdlib/universe/toint.md +++ b/content/flux/v0.x/stdlib/universe/toint.md @@ -68,7 +68,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> toInt() + |> toInt() ``` 
{{< expand-wrapper >}} @@ -110,7 +110,7 @@ sampledata.float() import "sampledata" sampledata.bool() - |> toInt() + |> toInt() ``` {{< expand-wrapper >}} @@ -137,7 +137,7 @@ sampledata.bool() import "sampledata" sampledata.uint() - |> toInt() + |> toInt() ``` {{< expand-wrapper >}} @@ -173,10 +173,3 @@ sampledata.uint() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toInt = (tables=<-) => - tables - |> map(fn:(r) => ({ r with _value: int(v: r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/top.md b/content/flux/v0.x/stdlib/universe/top.md index 2b6e4b9dd..c80c336c5 100644 --- a/content/flux/v0.x/stdlib/universe/top.md +++ b/content/flux/v0.x/stdlib/universe/top.md @@ -50,7 +50,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> top(n: 3) + |> top(n: 3) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/tostring.md b/content/flux/v0.x/stdlib/universe/tostring.md index 534b17a8e..7e66cadc8 100644 --- a/content/flux/v0.x/stdlib/universe/tostring.md +++ b/content/flux/v0.x/stdlib/universe/tostring.md @@ -51,7 +51,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> toString() + |> toString() ``` {{< expand-wrapper >}} @@ -87,10 +87,3 @@ sampledata.float() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toString = (tables=<-) => - tables - |> map(fn:(r) => ({ r with _value: string(v: r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/totime.md b/content/flux/v0.x/stdlib/universe/totime.md index 20419f704..84da81cd9 100644 --- a/content/flux/v0.x/stdlib/universe/totime.md +++ b/content/flux/v0.x/stdlib/universe/totime.md @@ -54,10 +54,10 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" data = sampledata.int() - |> map(fn: (r) => ({ r with _value: r._value * 1000000000 })) + |> map(fn: (r) => ({r with _value: r._value * 1000000000})) data - |> toTime() + |> toTime() ``` {{< expand-wrapper >}} @@ -107,10 +107,3 @@ data {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toTime = (tables=<-) => - tables - |> map(fn:(r) => ({ r with _value: time(v:r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/touint.md b/content/flux/v0.x/stdlib/universe/touint.md index e3d63c6bb..501924652 100644 --- a/content/flux/v0.x/stdlib/universe/touint.md +++ b/content/flux/v0.x/stdlib/universe/touint.md @@ -68,7 +68,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> toUInt() + |> toUInt() ``` {{< expand-wrapper >}} @@ -95,7 +95,7 @@ sampledata.float() import "sampledata" sampledata.bool() - |> toUInt() + |> toUInt() ``` {{< expand-wrapper >}} @@ -122,7 +122,7 @@ sampledata.bool() import "sampledata" sampledata.uint() - |> toUInt() + |> toUInt() ``` {{< expand-wrapper >}} @@ -143,9 +143,3 @@ sampledata.uint() {{< /flex >}} {{% /expand %}} {{< /expand-wrapper >}} - -## Function definition -```js -toUInt = (tables=<-) => tables - |> map(fn:(r) => ({ r with _value: uint(v:r._value) })) -``` diff --git a/content/flux/v0.x/stdlib/universe/tripleema.md b/content/flux/v0.x/stdlib/universe/tripleema.md index af2cc71ad..d2a16daac 100644 --- a/content/flux/v0.x/stdlib/universe/tripleema.md +++ b/content/flux/v0.x/stdlib/universe/tripleema.md @@ -61,7 +61,7 @@ Default 
is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> tripleEMA(n: 3) + |> tripleEMA(n: 3) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/tripleexponentialderivative.md b/content/flux/v0.x/stdlib/universe/tripleexponentialderivative.md index f58799738..0de7f3194 100644 --- a/content/flux/v0.x/stdlib/universe/tripleexponentialderivative.md +++ b/content/flux/v0.x/stdlib/universe/tripleexponentialderivative.md @@ -72,7 +72,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.float() - |> tripleExponentialDerivative(n: 2) + |> tripleExponentialDerivative(n: 2) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/truncatetimecolumn.md b/content/flux/v0.x/stdlib/universe/truncatetimecolumn.md index 478101881..8ab9781bc 100644 --- a/content/flux/v0.x/stdlib/universe/truncatetimecolumn.md +++ b/content/flux/v0.x/stdlib/universe/truncatetimecolumn.md @@ -52,10 +52,10 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" data = sampledata.int() - |> range(start: sampledata.start, stop: sampledata.stop) + |> range(start: sampledata.start, stop: sampledata.stop) data - |> truncateTimeColumn(unit: 1m) + |> truncateTimeColumn(unit: 1m) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/uint.md b/content/flux/v0.x/stdlib/universe/uint.md index a2c5b773f..54ab3b0ce 100644 --- a/content/flux/v0.x/stdlib/universe/uint.md +++ b/content/flux/v0.x/stdlib/universe/uint.md @@ -19,8 +19,6 @@ introduced: 0.7.0 The `uint()` function converts a single value to a UInteger. -_**Output data type:** UInteger_ - ```js uint(v: "4") ``` @@ -98,10 +96,10 @@ _The following example uses data provided by the [`sampledata` package](/flux/v0 import "sampledata" data = sampledata.float() - |> rename(columns: {_value: "foo"}) + |> rename(columns: {_value: "foo"}) data - |> map(fn:(r) => ({ r with foo: uint(v: r.foo) })) + |> map(fn: (r) => ({r with foo: uint(v: r.foo)})) ``` {{% expand "View input and output" %}} diff --git a/content/flux/v0.x/stdlib/universe/union.md b/content/flux/v0.x/stdlib/universe/union.md index c37a4ad71..ad7797a92 100644 --- a/content/flux/v0.x/stdlib/universe/union.md +++ b/content/flux/v0.x/stdlib/universe/union.md @@ -247,10 +247,7 @@ union(tables: [t1, t2]) #### join() output ```js -join( - tables: {t1: t1, t2: t2}, - on: ["_time", "tag"], -) +join(tables: {t1: t1, t2: t2}, on: ["_time", "tag"]) ``` | _time | tag | _value_t1 | _value_t2 | diff --git a/content/flux/v0.x/stdlib/universe/unique.md b/content/flux/v0.x/stdlib/universe/unique.md index d97dde979..b365e0a2f 100644 --- a/content/flux/v0.x/stdlib/universe/unique.md +++ b/content/flux/v0.x/stdlib/universe/unique.md @@ -45,7 +45,7 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" sampledata.int() - |> unique() + |> unique() ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/window.md b/content/flux/v0.x/stdlib/universe/window.md index 2dbcbc09e..c35e179e5 100644 --- a/content/flux/v0.x/stdlib/universe/window.md +++ b/content/flux/v0.x/stdlib/universe/window.md @@ -29,14 +29,14 @@ the parameters passed into the `window()` function. 
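Since neither example below exercises the `offset` parameter, here is a minimal sketch of how it shifts window boundaries, again using the `sampledata` package:

```js
import "sampledata"

// Without an offset, 1m windows align to the epoch (:00).
// offset: 30s shifts every boundary, so windows run :30 to :30.
data = sampledata.int()
    |> range(start: sampledata.start, stop: sampledata.stop)

data
    |> window(every: 1m, offset: 30s)
```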
```js window( - every: 5m, - period: 5m, - offset: 12h, - timeColumn: "_time", - startColumn: "_start", - stopColumn: "_stop", - location: "UTC", - createEmpty: false + every: 5m, + period: 5m, + offset: 12h, + timeColumn: "_time", + startColumn: "_start", + stopColumn: "_stop", + location: "UTC", + createEmpty: false, ) ``` @@ -107,10 +107,10 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi import "sampledata" data = sampledata.int() - |> range(start: sampledata.start, stop: sampledata.stop) - -data - |> window(every: 30s) + |> range(start: sampledata.start, stop: sampledata.stop) + +data + |> window(every: 30s) ``` {{< expand-wrapper >}} @@ -152,10 +152,10 @@ data import "sampledata" data = sampledata.int() - |> range(start: sampledata.start, stop: sampledata.stop) - -data - |> window(every: 20s, period: 40s) + |> range(start: sampledata.start, stop: sampledata.stop) + +data + |> window(every: 20s, period: 40s) ``` {{< expand-wrapper >}} @@ -225,16 +225,11 @@ import "generate" timeRange = {start: 2021-01-01T00:00:00Z, stop: 2021-04-01T00:00:00Z} -data = generate.from( - count: 6, - fn: (n) => n + n, - start: timeRange.start, - stop: timeRange.stop - ) - |> range(start: timeRange.start, stop: timeRange.stop) +data = generate.from(count: 6, fn: (n) => n + n, start: timeRange.start, stop: timeRange.stop) + |> range(start: timeRange.start, stop: timeRange.stop) data - |> window(every: 1mo) + |> window(every: 1mo) ``` {{< expand-wrapper >}} diff --git a/content/flux/v0.x/stdlib/universe/yield.md b/content/flux/v0.x/stdlib/universe/yield.md index e7bff9241..3a49daf26 100644 --- a/content/flux/v0.x/stdlib/universe/yield.md +++ b/content/flux/v0.x/stdlib/universe/yield.md @@ -43,6 +43,6 @@ Default is piped-forward data ([`<-`](/flux/v0.x/spec/expressions/#pipe-expressi ## Examples ```js from(bucket: "example-bucket") - |> range(start: -5m) - |> yield(name: "1") + |> range(start: -5m) + |> yield(name: "result-name") ``` diff --git a/content/flux/v0.x/write-data/influxdb.md b/content/flux/v0.x/write-data/influxdb.md index 94fc19e57..d74c9a4af 100644 --- a/content/flux/v0.x/write-data/influxdb.md +++ b/content/flux/v0.x/write-data/influxdb.md @@ -16,7 +16,7 @@ related: list_code_example: | ```js data - |> to(bucket: "example-bucket") + |> to(bucket: "example-bucket") ``` --- @@ -82,30 +82,24 @@ m,id=001,loc=SF temp=71.2,hum=52.8 1609466400000000000 {{% code-tab-content %}} ```js data - |> to( - bucket: "example-bucket" - ) + |> to(bucket: "example-bucket") ``` {{% /code-tab-content %}} {{% code-tab-content %}} ```js data - |> to( - bucket: "example-bucket", - org: "example-org", - token: "mY5uPeRs3Cre7tok3N" - ) + |> to(bucket: "example-bucket", org: "example-org", token: "mY5uPeRs3Cre7tok3N") ``` {{% /code-tab-content %}} {{% code-tab-content %}} ```js data - |> to( - bucket: "example-bucket", - org: "example-org", - token: "mY5uPeRs3Cre7tok3N", - host: "https://myinfluxdbdomain.com/8086" - ) + |> to( + bucket: "example-bucket", + org: "example-org", + token: "mY5uPeRs3Cre7tok3N", + host: "https://myinfluxdbdomain.com/8086", + ) ``` {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} @@ -137,7 +131,7 @@ Columns **not in the group key** are written to InfluxDB as [fields](/{{< latest import "experimental" data - |> experimental.to(bucket: "example-bucket") + |> experimental.to(bucket: "example-bucket") ``` Given the following input [stream of tables](/flux/v0.x/get-started/data-model/#stream-of-tables): @@ -181,9 +175,7 @@ m,id=001,loc=BK min=5i,max=3i,mean=6.5 
1609466400000000000 import "experimental" data - |> experimental.to( - bucket: "example-bucket" - ) + |> experimental.to(bucket: "example-bucket") ``` {{% /code-tab-content %}} {{% code-tab-content %}} ```js import "experimental" data - |> experimental.to( - bucket: "example-bucket", - org: "example-org", - token: "mY5uPeRs3Cre7tok3N" - ) + |> experimental.to(bucket: "example-bucket", org: "example-org", token: "mY5uPeRs3Cre7tok3N") ``` {{% /code-tab-content %}} {{% code-tab-content %}} ```js import "experimental" data - |> experimental.to( - bucket: "example-bucket", - org: "example-org", - token: "mY5uPeRs3Cre7tok3N", - host: "https://myinfluxdbdomain.com/8086" - ) + |> experimental.to( + bucket: "example-bucket", + org: "example-org", + token: "mY5uPeRs3Cre7tok3N", + host: "https://myinfluxdbdomain.com/8086", + ) ``` {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} diff --git a/content/flux/v0.x/write-data/sql/_index.md b/content/flux/v0.x/write-data/sql/_index.md index 2c9dcf2c9..506e0f7a6 100644 --- a/content/flux/v0.x/write-data/sql/_index.md +++ b/content/flux/v0.x/write-data/sql/_index.md @@ -14,12 +14,12 @@ related: list_code_example: | ```js import "sql" - + sql.to( - driverName: "postgres", - dataSourceName: "postgresql://user:password@localhost", - table: "ExampleTable", - batchSize: 10000 + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + table: "ExampleTable", + batchSize: 10000, ) ``` --- @@ -71,9 +71,9 @@ username = secrets.get(key: "POSTGRES_USER") password = secrets.get(key: "POSTGRES_PASS") sql.to( - driverName: "postgres", - dataSourceName: "postgresql://${username}:${password}@localhost:5432", - table: "example_table" + driverName: "postgres", + dataSourceName: "postgresql://${username}:${password}@localhost:5432", + table: "example_table", ) ``` @@ -106,21 +106,16 @@ Given the following [stream of tables](/flux/v0.x/get-started/data-mod | 2021-01-01T00:00:10Z | t2 | 4 | | 2021-01-01T00:00:20Z | t2 | -3 | -##### Flux query +##### Flux script ```js import "sql" data - |> sql.from( - driver: "mysql", - dataSourceName: "username:passwOrd@tcp(localhost:3306)/db", - table: "exampleTable" - ) -``` - -##### SQL Query -```sql -SELECT * FROM exampleTable + |> sql.to( + driverName: "mysql", + dataSourceName: "username:passwOrd@tcp(localhost:3306)/db", + table: "exampleTable", + ) ``` ##### SQL output diff --git a/content/flux/v0.x/write-data/sql/amazon-rds.md b/content/flux/v0.x/write-data/sql/amazon-rds.md index a3fc10f40..80934627d 100644 --- a/content/flux/v0.x/write-data/sql/amazon-rds.md +++ b/content/flux/v0.x/write-data/sql/amazon-rds.md @@ -15,13 +15,13 @@ related: list_code_example: | ```js import "sql" - + data - |> sql.to( - driverName: "snowflake", - dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432", - query: "SELECT * FROM example_table" - ) + |> sql.to( + driverName: "postgres", + dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432", + table: "example_table", + ) ``` --- @@ -43,11 +43,11 @@ with Flux: import "sql" data - |> sql.to( - driverName: "postgres", - dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432", - table: "example_table" - ) + |> sql.to( + driverName: "postgres", + dataSourceName: "postgresql://my-instance.123456789012.us-east-1.rds.amazonaws.com:5432", + table: "example_table", + ) ``` ## Supported database engines diff --git
a/content/flux/v0.x/write-data/sql/bigquery.md b/content/flux/v0.x/write-data/sql/bigquery.md index fd5589e87..99cc6caec 100644 --- a/content/flux/v0.x/write-data/sql/bigquery.md +++ b/content/flux/v0.x/write-data/sql/bigquery.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.from( - driverName: "bigquery", - dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", - query: "SELECT * FROM exampleTable" - ) + |> sql.to( + driverName: "bigquery", + dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", + table: "exampleTable", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [Google BigQuery](https://cloud.google.com/bigquery) with Flux: import "sql" data - |> sql.to( - driverName: "bigquery", - dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", - table: "exampleTable" - ) + |> sql.to( + driverName: "bigquery", + dataSourceName: "bigquery://projectid/?apiKey=mySuP3r5ecR3tAP1K3y", + table: "exampleTable", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/cockroachdb.md b/content/flux/v0.x/write-data/sql/cockroachdb.md index b0b50a9be..7a3cefeef 100644 --- a/content/flux/v0.x/write-data/sql/cockroachdb.md +++ b/content/flux/v0.x/write-data/sql/cockroachdb.md @@ -15,13 +15,14 @@ related: list_code_example: | ```js import "sql" - + data - |> sql.to( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", - table: "example_table" - ) + |> sql.to( + driverName: "postgres", + dataSourceName: + "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", + table: "example_table", + ) ``` --- @@ -41,11 +42,12 @@ To write data to [CockroachDB](https://www.cockroachlabs.com/) with Flux: import "sql" data - |> sql.to( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", - table: "example_table" - ) + |> sql.to( + driverName: "postgres", + dataSourceName: + "postgresql://username:password@localhost:26257/cluster_name.defaultdb?sslmode=verify-full&sslrootcert=certs_dir/cc-ca.crt", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/mariadb.md b/content/flux/v0.x/write-data/sql/mariadb.md index 47180d600..5597e80cb 100644 --- a/content/flux/v0.x/write-data/sql/mariadb.md +++ b/content/flux/v0.x/write-data/sql/mariadb.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - table: "example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [MariaDB](https://mariadb.org/) with Flux: import "sql" data - |> sql.to( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - query: "SELECT * FROM example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/mysql.md b/content/flux/v0.x/write-data/sql/mysql.md index 3da933471..fa21f6c9c 100644 --- a/content/flux/v0.x/write-data/sql/mysql.md +++ b/content/flux/v0.x/write-data/sql/mysql.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "mysql",
- dataSourceName: "user:password@tcp(localhost:3306)/db", - table: "example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [MySQL](https://www.mysql.com/) with Flux: import "sql" data - |> sql.to( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - table: "example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/percona.md b/content/flux/v0.x/write-data/sql/percona.md index 8d23c516d..db041182e 100644 --- a/content/flux/v0.x/write-data/sql/percona.md +++ b/content/flux/v0.x/write-data/sql/percona.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - table: "example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [Percona](https://www.percona.com/) with Flux: import "sql" data - |> sql.to( - driverName: "mysql", - dataSourceName: "user:password@tcp(localhost:3306)/db", - table: "example_table" - ) + |> sql.to( + driverName: "mysql", + dataSourceName: "user:password@tcp(localhost:3306)/db", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/postgresql.md b/content/flux/v0.x/write-data/sql/postgresql.md index b1d255a0e..58f65f826 100644 --- a/content/flux/v0.x/write-data/sql/postgresql.md +++ b/content/flux/v0.x/write-data/sql/postgresql.md @@ -41,11 +41,11 @@ To write data to [PostgreSQL](https://www.postgresql.org/) with Flux: import "sql" data - |> sql.to( - driverName: "postgres", - dataSourceName: "postgresql://username:password@localhost:5432", - table: "example_table" - ) + |> sql.to( + driverName: "postgres", + dataSourceName: "postgresql://username:password@localhost:5432", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/sap-hana.md b/content/flux/v0.x/write-data/sql/sap-hana.md index a43bc5d66..6d567f096 100644 --- a/content/flux/v0.x/write-data/sql/sap-hana.md +++ b/content/flux/v0.x/write-data/sql/sap-hana.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "hdb", - dataSourceName: "hdb://username:password@myserver:30015", - table: "SCHEMA.TABLE" - ) + |> sql.to( + driverName: "hdb", + dataSourceName: "hdb://username:password@myserver:30015", + table: "SCHEMA.TABLE", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [SAP HANA](https://www.sap.com/products/hana.html) with Flux: import "sql" data - |> sql.to( - driverName: "hdb", - dataSourceName: "hdb://username:password@myserver:30015", - table: "SCHEMA.TABLE" - ) + |> sql.to( + driverName: "hdb", + dataSourceName: "hdb://username:password@myserver:30015", + table: "SCHEMA.TABLE", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/snowflake.md b/content/flux/v0.x/write-data/sql/snowflake.md index 35c4e494b..0cb92aa52 100644 --- a/content/flux/v0.x/write-data/sql/snowflake.md +++ b/content/flux/v0.x/write-data/sql/snowflake.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "snowflake", - dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", - table: "example_table" - ) + |> sql.to( + driverName: "snowflake", + dataSourceName: 
"user:password@account/db/exampleschema?warehouse=wh", + table: "example_table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [Snowflake](https://www.snowflake.com/) with Flux: import "sql" data - |> sql.to( - driverName: "snowflake", - dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", - table: "example_table" - ) + |> sql.to( + driverName: "snowflake", + dataSourceName: "user:password@account/db/exampleschema?warehouse=wh", + table: "example_table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/sql-server.md b/content/flux/v0.x/write-data/sql/sql-server.md index c713235e4..5da8f57bf 100644 --- a/content/flux/v0.x/write-data/sql/sql-server.md +++ b/content/flux/v0.x/write-data/sql/sql-server.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "sqlserver", - dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", - table: "Example.Table" - ) + |> sql.to( + driverName: "sqlserver", + dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", + table: "Example.Table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [Microsoft SQL Server](https://www.microsoft.com/sql-server/) w import "sql" data - |> sql.to( - driverName: "sqlserver", - dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", - table: "Example.Table" - ) + |> sql.to( + driverName: "sqlserver", + dataSourceName: "sqlserver://user:password@localhost:1433?database=examplebdb", + table: "Example.Table", + ) ``` --- diff --git a/content/flux/v0.x/write-data/sql/sqlite.md b/content/flux/v0.x/write-data/sql/sqlite.md index 1da730a58..4b723a36f 100644 --- a/content/flux/v0.x/write-data/sql/sqlite.md +++ b/content/flux/v0.x/write-data/sql/sqlite.md @@ -17,11 +17,11 @@ list_code_example: | import "sql" data - |> sql.to( - driverName: "sqlite3", - dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", - table: "example_table" - ) + |> sql.to( + driverName: "sqlite3", + dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", + table: "example_table", + ) ``` --- @@ -41,11 +41,11 @@ To write data to [SQLite](https://www.sqlite.org/index.html) with Flux: import "sql" data - |> sql.to( - driverName: "sqlite3", - dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", - table: "example_table" - ) + |> sql.to( + driverName: "sqlite3", + dataSourceName: "file:/path/to/example.db?cache=shared&mode=ro", + table: "example_table", + ) ``` {{% note %}} diff --git a/content/flux/v0.x/write-data/sql/vertica.md b/content/flux/v0.x/write-data/sql/vertica.md index eadd06cf1..7bc6a8a5b 100644 --- a/content/flux/v0.x/write-data/sql/vertica.md +++ b/content/flux/v0.x/write-data/sql/vertica.md @@ -16,13 +16,13 @@ related: list_code_example: | ```js import "sql" - + data - |> sql.to( - driverName: "vertica", - dataSourceName: "vertica://username:password@localhost:5433/dbname", - table: "public.example_table" - ) + |> sql.to( + driverName: "vertica", + dataSourceName: "vertica://username:password@localhost:5433/dbname", + table: "public.example_table", + ) ``` --- @@ -40,13 +40,13 @@ To write data to [Vertica](https://www.vertica.com/) with Flux: ```js import "sql" - + data - |> sql.to( - driverName: "vertica", - dataSourceName: "vertica://username:password@localhost:5433/dbname", - table: "public.example_table" - ) + |> sql.to( + driverName: "vertica", + dataSourceName: "vertica://username:password@localhost:5433/dbname", + table: "public.example_table", + ) ``` --- diff --git 
a/content/influxdb/cloud/account-management/_index.md b/content/influxdb/cloud/account-management/_index.md index ce4e5efc7..cb48c56f0 100644 --- a/content/influxdb/cloud/account-management/_index.md +++ b/content/influxdb/cloud/account-management/_index.md @@ -4,14 +4,12 @@ description: > View and manage information related to your InfluxDB Cloud account such as pricing plans, data usage, account cancelation, etc. weight: 10 -products: [cloud] aliases: - /influxdb/v2.0/cloud/account-management/ - /influxdb/v2.0/account-management menu: influxdb_cloud: name: Account management -products: [cloud] --- {{< children >}} diff --git a/content/influxdb/cloud/account-management/billing.md b/content/influxdb/cloud/account-management/billing.md index 4785ce212..27e54fe68 100644 --- a/content/influxdb/cloud/account-management/billing.md +++ b/content/influxdb/cloud/account-management/billing.md @@ -13,7 +13,6 @@ menu: influxdb_cloud: parent: Account management name: Manage billing -products: [cloud] --- Learn how to upgrade your plan, access billing details, and review and resolve plan limit overages: @@ -31,33 +30,34 @@ Learn how to upgrade your plan, access billing details, and review and resolve p ## Upgrade to Usage-Based Plan -1. Click **Upgrade Now** in the lower left corner of the {{< cloud-name "short" >}} user interface (UI). +1. Click **Upgrade Now** in the upper right corner of the {{< cloud-name "short" >}} user interface (UI). 2. Set your limits (opt to receive an email when your usage exceeds the amount you enter in the **Limit ($1 minimum)** field). All service updates, security notifications, and other important information are sent to the email address you provide. 3. Enter your payment information and billing address, and then click **Upgrade**. A Ready To Rock confirmation appears; click **Start building your team**. Your plan will be upgraded and {{< cloud-name >}} opens with a default organization and bucket (both created from your email address). ## Access billing details 1. In the {{< cloud-name "short" >}} UI, select the **user avatar** in the left - navigation menu, and select **Billing**. + navigation menu, and select **Account** > + **Billing**. {{< nav-icon "account" >}} 2. Do one of the following: - If you subscribed to an InfluxDB Cloud plan through [**AWS Marketplace**](https://aws.amazon.com/marketplace/pp/B08234JZPS), [**Azure Marketplace**](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/influxdata.influxdb-cloud), or [**GCP Marketplace**](https://console.cloud.google.com/marketplace/details/influxdata-public/cloud2-gcp-marketplace-prod?pli=1), click the **AWS**, **Microsoft**, or **GCP** link to access your billing and subscription information. - - If you subscribed to an InfluxDB Cloud plan through **InfluxData**, complete the following procedures as needed: + - If you subscribed to an InfluxDB Cloud plan through **InfluxData**, complete the following procedures as needed: - - [Add or update your payment method](#add-or-update-your-payment-method) - - [Add or update your contact information](#add-or-update-your-contact-information) - - [Send notifications when usage exceeds an amount](#send-notifications-when-usage-exceeds-an-amount) + - [Add or update your payment method](#add-or-update-your-payment-method) + - [Add or update your contact information](#add-or-update-your-contact-information) + - [Send notifications when usage exceeds an amount](#send-notifications-when-usage-exceeds-an-amount) - View information about: +3. 
View information about: - - [Usage-Based Plan](#view-usage-based-plan-information) - - [Free Plan](#view-free-plan-information) - - [Exceeded rate limits](#exceeded-rate-limits) - - [Billing cycle](#billing-cycle) - - [Declined or late payments](#declined-or-late-payments) + - [Usage-Based Plan](#view-usage-based-plan-information) + - [Free Plan](#view-free-plan-information) + - [Exceeded rate limits](#review-and-resolve-plan-limit-overages) + - [Billing cycle](#billing-cycle) + - [Declined or late payments](#declined-or-late-payments) ### Add or update your payment method @@ -98,9 +98,9 @@ On the **Billing page**, view the total limits available for the Free Plan. ## Review and resolve plan limit overages -If you exceed your plan's [limits](/influxdb/cloud/account-management/pricing-plans), you'll receive a notification in the {{< cloud-name "short" >}} user interface (UI) **Usage** page. +If you exceed your plan's [adjustable quotas or limits](/influxdb/cloud/account-management/limits/), you'll receive a notification in the {{< cloud-name "short" >}} user interface (UI) **Usage** page. -If exceed the series cardinality limit, InfluxDB adds a rate limit event to your **Usage** page for review, and begins to reject write requests. To start processing write requests again, do the following as needed: +If you exceed the series cardinality limit, InfluxDB adds a rate limit event warning on the **Usage** page, and begins to reject write requests with new series. To start processing write requests again, do the following as needed: - **Usage-Based plan**: To request higher rate limits, contact [InfluxData Support](mailto:support@influxdata.com). - **Series cardinality limits**: If you exceed the series cardinality limit, see how to [resolve high series cardinality](https://docs.influxdata.com/influxdb/v2.0/write-data/best-practices/resolve-high-cardinality/). @@ -127,4 +127,3 @@ Billing occurs on the first day of the month for the previous month. For example | **One week later** | Account disabled except data writes. Update your payment method to successfully process your payment and enable your account. | | **10-14 days later** | Account completely disabled. During this period, you must contact us at support@influxdata.com to process your payment and enable your account. | | **21 days later** | Account suspended. Contact support@influxdata.com to settle your final bill and retrieve a copy of your data or access to InfluxDB Cloud dashboards, tasks, Telegraf configurations, and so on.| - diff --git a/content/influxdb/cloud/account-management/data-usage.md b/content/influxdb/cloud/account-management/data-usage.md index 92053a7b4..315183d0a 100644 --- a/content/influxdb/cloud/account-management/data-usage.md +++ b/content/influxdb/cloud/account-management/data-usage.md @@ -11,10 +11,12 @@ menu: influxdb_cloud: parent: Account management name: View data usage -products: [cloud] +related: + - /flux/v0.x/stdlib/experimental/usage/from/ + - /flux/v0.x/stdlib/experimental/usage/limits/ --- -View the statistics of your data usage and rate limits (reads, writes, and delete limits) on the Usage page. Some usage data affects monthly costs (pricing vectors) and other usage data, including delete limits, does not affect pricing. For more information about costs and limits, see the [pricing plans](/influxdb/cloud/account-management/pricing-plans/). +View the statistics of your data usage and rate limits (reads, writes, and delete limits) on the Usage page. Some usage data affects monthly costs ([pricing vectors](/influxdb/cloud/account-management/pricing-plans/#pricing-vectors)) and other usage data (for example, delete limits) does not affect pricing. For more information, see the [InfluxDB Cloud limits and adjustable quotas](/influxdb/cloud/account-management/limits/).
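+ +To retrieve the same usage data with Flux, you can use the [`experimental/usage` package](/flux/v0.x/stdlib/experimental/usage/from/) listed under related links. The following is a minimal sketch: + +```js +import "experimental/usage" + +// Return raw usage data for your organization for the last 24 hours. +// Each usage vector (for example, writes, queries, and storage) is +// returned as its own measurement. +usage.from(start: -1d, stop: now()) +```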
To view your {{< cloud-name >}} data usage, do the following: @@ -28,34 +30,3 @@ To view your {{< cloud-name >}} data usage, do the following: - **Data Out:** Total data in MB sent as responses to queries from your {{< cloud-name "short" >}} instance. A line graph displays usage for the selected vector for the specified time period. - -## Exceeded rate limits - -If you exceed your [plan's data limits](/influxdb/cloud/account-management/pricing-plans/), {{< cloud-name >}} UI displays a notification message, and the following occurs: - -- When **write or read requests or series cardinality exceed** the specified limit within a five-minute window, the request is rejected and the following events appears under **Limit Events** on the Usage page as applicable: `event_type_limited_query` or `event_type_limited_write` or `event_type_limited_cardinality` - - _To raise these rate limits, [upgrade to a Usage-based Plan](/influxdb/cloud/account-management/billing/#upgrade-to-usage-based-plan)._ - -- When **delete requests exceed** the specified limit within a five-minute window, the request is rejected and `event_type_limited_delete_rate` appears under **Limit Events** on the Usage page. - {{% note %}} -**Tip:** -Combine predicate expressions (if possible) into a single request. InfluxDB rate limits per number of requests (not points in request). -{{% /note %}} - -### InfluxDB API: HTTP rate limit responses - -The InfluxDB API returns the following responses: - -- When a **read or write or delete request exceeds** limits: - - ``` - HTTP 429 “Too Many Requests” - Retry-After: xxx (seconds to wait before retrying the request) - ``` - -- When **series cardinality exceeds** your plan's limit: - - ``` - HTTP 503 “Series cardinality exceeds your plan's limit” - ``` diff --git a/content/influxdb/cloud/account-management/limits.md b/content/influxdb/cloud/account-management/limits.md new file mode 100644 index 000000000..9da3e2e4f --- /dev/null +++ b/content/influxdb/cloud/account-management/limits.md @@ -0,0 +1,105 @@ +--- +title: InfluxDB Cloud limits and adjustable quotas +description: > + InfluxDB Cloud has adjustable service quotas and global (non-adjustable) system limits. +weight: 110 +menu: + influxdb_cloud: + parent: Account management + name: Adjustable quotas and limits +related: + - /flux/v0.x/stdlib/experimental/usage/from/ + - /flux/v0.x/stdlib/experimental/usage/limits/ + - /influxdb/cloud/write-data/best-practices/resolve-high-cardinality/ +--- + +InfluxDB Cloud applies (non-adjustable) global system limits and adjustable service quotas on a per-organization basis. Currently, InfluxDB Cloud supports one organization per account. + +{{% warn %}} +All __rates__ (data-in (writes), queries (reads), and deletes) are accrued within a fixed five-minute window. Once a rate is exceeded, an error response is returned until the current five-minute window resets.
+{{% /warn %}} + +Review adjustable service quotas and global limits to plan for your bandwidth needs: + +- [Adjustable service quotas](#adjustable-service-quotas) +- [Global limits](#global-limits) +- [UI error messages](#ui-error-messages) +- [API error responses](#api-error-responses) + +## Adjustable service quotas + +To reduce the chance of unexpected charges and protect the service for all users, InfluxDB Cloud has adjustable service quotas applied per account. + +_To request higher service quotas, reach out to [InfluxData Support](https://support.influxdata.com/)._ + +### Free Plan + +- **Data-in**: Rate of 5 MB per 5 minutes (average of 17 kB/s) + - Uncompressed bytes of normalized [line protocol](/influxdb/cloud/reference/syntax/line-protocol/) +- **Read**: Rate of 300 MB per 5 minutes (average of 1000 kB/s) + - Bytes in the HTTP response payload +- **Cardinality**: 10k series (see [how to measure and resolve high cardinality](/influxdb/cloud/write-data/best-practices/resolve-high-cardinality/)) +- **Available resources**: + - 2 buckets (excluding `_monitoring` and `_tasks` buckets) + - 5 dashboards + - 5 tasks +- **Alerts**: + - 2 checks + - 2 notification rules + - Unlimited Slack notification endpoints +- **Storage**: 30 days of data retention (see [retention period](/influxdb/cloud/reference/glossary/#retention-period)) + +{{% note %}} +To write historical data older than 30 days, retain data for more than 30 days, or increase rate limits, upgrade to the Cloud [Usage-Based Plan](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan). +{{% /note %}} + +### Usage-Based Plan + +- **Data-in**: Rate of 3 GB per 5 minutes + - Uncompressed bytes of normalized [line protocol](/influxdb/cloud/reference/syntax/line-protocol/) +- **Read**: Rate of 3 GB data per 5 minutes + - Bytes in the HTTP response payload +- **Cardinality**: 1M series (see [how to measure and resolve high cardinality](/influxdb/cloud/write-data/best-practices/resolve-high-cardinality/)) +- **Unlimited resources** + - dashboards + - tasks + - buckets + - users +- **Alerts** + - Unlimited checks + - Unlimited notification rules + - Unlimited notification endpoints for [all endpoints](/flux/v0.x/tags/notification-endpoints/) +- **Storage**: Set your retention period to unlimited or up to 1 year by [updating a bucket’s retention period in the InfluxDB UI](/influxdb/cloud/organizations/buckets/update-bucket/#update-a-buckets-retention-period-in-the-influxdb-ui), or [set a custom retention period](/influxdb/cloud/organizations/buckets/update-bucket/#update-a-buckets-retention-period) using the [`influx` CLI](/influxdb/cloud/reference/cli/influx/). + +## Global limits + +InfluxDB Cloud applies global (non-adjustable) system limits to all accounts, which protects the InfluxDB Cloud infrastructure for all users. As the service evolves, we'll continue to review these global limits and adjust them as appropriate. + +Limits include: + +- Write request limits: + - 50 MB maximum HTTP request batch size (compressed or uncompressed--defined in the `Content-Encoding` header) + - 250 MB maximum HTTP request batch size after decompression +- Query processing time: 90 seconds +- Task processing time: 150 seconds +- Delete request limit: Rate of 300 every 5 minutes + {{% note %}} +**Tip:** +Combine predicate expressions (if possible) into a single request. InfluxDB limits delete requests by number of requests (not points in request).
+{{% /note %}} + +## UI error messages + +The {{< cloud-name >}} UI displays a notification message when service quotas or limits are exceeded. The error messages correspond with the relevant [API error responses](#api-error-responses). + +Errors can also be viewed in the [Usage page](/influxdb/cloud/account-management/data-usage/) under **Limit Events**, e.g. `event_type_limited_query`, `event_type_limited_write`, `event_type_limited_cardinality`, or `event_type_limited_delete_rate`. + +## API error responses + +The following API error responses occur when your plan's service quotas are exceeded. + +| HTTP response code | Error message | Description | | :----------------------------- | :----------------------------------------- | :----------- | | `HTTP 413 "Request Too Large"` | cannot read data: points in batch is too large | If a **write** request exceeds the maximum [global limit](/influxdb/cloud/account-management/limits/#global-limits) | | `HTTP 429 "Too Many Requests"` | Retry-After: xxx (seconds to wait before retrying the request) | If a **read** or **write** request exceeds your plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) or if a **delete** request exceeds the maximum [global limit](/influxdb/cloud/account-management/limits/#global-limits) | | `HTTP 429 "Too Many Requests"` | Series cardinality exceeds your plan's service quota | If **series cardinality** exceeds your plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) |
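+ +To check how close you are to a quota, you can compare these limits against data from the Flux [`experimental/usage` package](/flux/v0.x/stdlib/experimental/usage/from/) in the related links. The following is a minimal sketch; the `http_request` measurement, `req_bytes` field, and `endpoint` tag come from the usage schema documented for that package: + +```js +import "experimental/usage" + +// Sum the bytes written through the /api/v2/write endpoint over the last +// five minutes, then compare the result to your plan's data-in quota. +usage.from(start: -5m, stop: now()) + |> filter(fn: (r) => r._measurement == "http_request" and r._field == "req_bytes") + |> filter(fn: (r) => r.endpoint == "/api/v2/write") + |> group() + |> sum() +```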
diff --git a/content/influxdb/cloud/account-management/offboarding.md b/content/influxdb/cloud/account-management/offboarding.md index c2e3d4261..fb0fc3805 100644 --- a/content/influxdb/cloud/account-management/offboarding.md +++ b/content/influxdb/cloud/account-management/offboarding.md @@ -11,7 +11,6 @@ menu: influxdb_cloud: parent: Account management name: Cancel InfluxDB Cloud -products: [cloud] --- To cancel your {{< cloud-name >}} subscription, complete the following steps: diff --git a/content/influxdb/cloud/account-management/pricing-calculator.md b/content/influxdb/cloud/account-management/pricing-calculator.md index 714b55d9b..f372cdfe3 100644 --- a/content/influxdb/cloud/account-management/pricing-calculator.md +++ b/content/influxdb/cloud/account-management/pricing-calculator.md @@ -7,7 +7,6 @@ weight: 2 menu: influxdb_cloud: name: Pricing calculator -products: [cloud] draft: true --- diff --git a/content/influxdb/cloud/account-management/pricing-plans.md b/content/influxdb/cloud/account-management/pricing-plans.md index 0e881c676..83d2b5e1e 100644 --- a/content/influxdb/cloud/account-management/pricing-plans.md +++ b/content/influxdb/cloud/account-management/pricing-plans.md @@ -1,5 +1,5 @@ --- -title: InfluxDB Cloud pricing plans +title: InfluxDB Cloud plans description: > InfluxDB Cloud provides two pricing plans to fit your needs – the Free Plan and the Usage-based Plan. aliases: @@ -11,62 +11,21 @@ menu: influxdb_cloud: parent: Account management name: Pricing plans -products: [cloud] --- -InfluxDB Cloud offers two plans, which provide different data and resource usage limits: - -- [Free Plan](#free-plan) -- [Usage-Based Plan](#usage-based-plan) +InfluxDB Cloud offers a [Free Plan](#free-plan), a [Usage-Based Plan](#usage-based-plan) to pay as you go, and a discounted [Annual Plan](#annual-plan). ## Free Plan -All new {{< cloud-name >}} accounts start with Free Plan that limits data and resource usage. -Use this plan as much and as long as you want within the Free Plan limits below. - -### Data limits - -- **Data In:** 5.1MB every 5 minutes -- **Query:** 300MB every 5 minutes -- **Series cardinality:** 10,000 -- **Storage:** 30-day data retention -{{% note %}} -To write historical data older than 30 days or retain data for more than 30 days, upgrade to the Cloud [Usage-Based plan](/influxdb/cloud/account-management/pricing-plans/#usage-based-plan). -{{% /note %}} - -### Resource limits - - - 5 dashboards - - 5 tasks - - 2 buckets - - 2 checks - - 2 notification rules - - Unlimited Slack notification endpoints - -_To raise rate limits, [upgrade to a Usage-based Plan](/influxdb/cloud/account-management/billing/#upgrade-to-usage-based-plan)._ +All new {{< cloud-name >}} accounts start with a Free Plan that provides limited resources and data usage. See [plan limits](/influxdb/cloud/account-management/limits/). ## Usage-Based Plan -The Usage-based Plan offers more flexibility and ensures you only pay for what you [use](/influxdb/cloud/account-management/data-usage/). +The Usage-Based Plan offers more flexibility and ensures you only pay for what you [use](/influxdb/cloud/account-management/data-usage/). Usage-Based Plans are based on consumption as measured by the [pricing vectors](#pricing-vectors). -### Data limit - -- **Ingest batch size:** 50MB - -### Soft data limits - -To protect against any intentional or unintentional harm, the Usage-Based Plan includes soft limits. -_To request higher soft data limits, contact [InfluxData Support](mailto:support@influxdata.com)._ - -- **Data In:** 300MB every 5 minutes -- **Query:** 3000MB every 5 minutes -- **Series cardinality:** 1,000,000 initial limit (higher limits available; [contact InfluxData Support](mailto:support@influxdata.com)) -- **Storage:** Unlimited retention -{{% note %}} -Set your retention period to unlimited or up to 1 year by [updating a bucket’s retention period in the InfluxDB UI](/influxdb/cloud/organizations/buckets/update-bucket/#update-a-buckets-retention-period-in-the-influxdb-ui), or [set a custom retention period](/influxdb/cloud/organizations/buckets/update-bucket/#update-a-buckets-retention-period) using the [`influx` CLI](/influxdb/cloud/reference/cli/influx/). -{{% /note %}} +Usage-Based Plans also offer access to all notification endpoints such as PagerDuty, Slack, HTTP, and [endpoints available in Flux](/flux/v0.x/tags/notification-endpoints/). ### Pricing vectors @@ -82,12 +41,11 @@ The Usage-Based Plan uses the following pricing vectors to calculate InfluxDB Cl Discover how to [manage InfluxDB Cloud billing](/influxdb/cloud/account-management/billing/). -### Unlimited resources +## Annual Plan + - - Dashboards - - Tasks - - Buckets - - Users - - Checks - - Notification rules - - PagerDuty, Slack, and HTTP notification endpoints +An Annual Plan offers a discount for a commitment to a specific amount of usage over a set period of time. This plan uses the same pricing vectors and calculation methodology as Usage-Based Plans. + +__Interested in an Annual Plan? 
Reach out to [InfluxData Sales](https://www.influxdata.com/contact-sales/).__ + + diff --git a/content/influxdb/cloud/account-management/switch-account.md b/content/influxdb/cloud/account-management/switch-account.md new file mode 100644 index 000000000..2cab9717b --- /dev/null +++ b/content/influxdb/cloud/account-management/switch-account.md @@ -0,0 +1,23 @@ +--- +title: Switch InfluxDB Cloud accounts +seotitle: Switch between InfluxDB Cloud accounts +description: > + Switch from one InfluxDB Cloud account to another and set a default account. +menu: + influxdb_cloud: + name: Switch InfluxDB accounts + parent: Account management +weight: 105 +--- +If you belong to more than one {{< cloud-name >}} account with the same email address, you can switch from one account to another while staying logged in. An account can contain multiple organizations. + +You can also set a default account. The default account is the account automatically used when the user logs in. + +To switch {{< cloud-name "short" >}} accounts: + +1. In the {{< cloud-name "short" >}} UI, select the **user avatar** in the left + navigation menu, and select **Account** > **Settings**. +2. Click **Switch Account**. If this option doesn't appear, your email address is only associated with one account. +3. Select the account you want to switch to or set as the default in the window that appears. +4. To switch to the account, select **Switch Account**. +5. To set an account as the default, select **Set Default Account**. diff --git a/content/influxdb/cloud/api-guide/client-libraries/arduino.md b/content/influxdb/cloud/api-guide/client-libraries/arduino.md index a411af4f9..05bcc9508 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/arduino.md +++ b/content/influxdb/cloud/api-guide/client-libraries/arduino.md @@ -9,6 +9,7 @@ menu: influxdb_cloud: name: Arduino parent: Client libraries - url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino + params: + url: https://github.com/tobiasschuerg/InfluxDB-Client-for-Arduino weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/csharp.md b/content/influxdb/cloud/api-guide/client-libraries/csharp.md index b60f1b120..4d57259bd 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/csharp.md +++ b/content/influxdb/cloud/api-guide/client-libraries/csharp.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: C# parent: Client libraries - url: https://github.com/influxdata/influxdb-client-csharp + params: + url: https://github.com/influxdata/influxdb-client-csharp weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/dart.md b/content/influxdb/cloud/api-guide/client-libraries/dart.md index 30bc56337..b54431fd8 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/dart.md +++ b/content/influxdb/cloud/api-guide/client-libraries/dart.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Dart parent: Client libraries - url: https://github.com/influxdata/influxdb-client-dart + params: + url: https://github.com/influxdata/influxdb-client-dart weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/java.md b/content/influxdb/cloud/api-guide/client-libraries/java.md index 350ad971e..9d5ae02fc 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/java.md +++ b/content/influxdb/cloud/api-guide/client-libraries/java.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Java parent: Client libraries - url: https://github.com/influxdata/influxdb-client-java + params: + url: 
https://github.com/influxdata/influxdb-client-java weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/kotlin.md b/content/influxdb/cloud/api-guide/client-libraries/kotlin.md index c47846680..bf3c6489c 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/kotlin.md +++ b/content/influxdb/cloud/api-guide/client-libraries/kotlin.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Kotlin parent: Client libraries - url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin + params: + url: https://github.com/influxdata/influxdb-client-java/tree/master/client-kotlin weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/php.md b/content/influxdb/cloud/api-guide/client-libraries/php.md index ca70c3a03..1ce4c2180 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/php.md +++ b/content/influxdb/cloud/api-guide/client-libraries/php.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: PHP parent: Client libraries - url: https://github.com/influxdata/influxdb-client-php + params: + url: https://github.com/influxdata/influxdb-client-php weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/r.md b/content/influxdb/cloud/api-guide/client-libraries/r.md index f7681bedd..34bd2bad2 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/r.md +++ b/content/influxdb/cloud/api-guide/client-libraries/r.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: R parent: Client libraries - url: https://github.com/influxdata/influxdb-client-r + params: + url: https://github.com/influxdata/influxdb-client-r weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/ruby.md b/content/influxdb/cloud/api-guide/client-libraries/ruby.md index 198e74cf7..8f7e25302 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/ruby.md +++ b/content/influxdb/cloud/api-guide/client-libraries/ruby.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Ruby parent: Client libraries - url: https://github.com/influxdata/influxdb-client-ruby + params: + url: https://github.com/influxdata/influxdb-client-ruby weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/scala.md b/content/influxdb/cloud/api-guide/client-libraries/scala.md index d6bc90ea9..96af468fa 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/scala.md +++ b/content/influxdb/cloud/api-guide/client-libraries/scala.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Scala parent: Client libraries - url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala + params: + url: https://github.com/influxdata/influxdb-client-java/tree/master/client-scala weight: 201 --- diff --git a/content/influxdb/cloud/api-guide/client-libraries/swift.md b/content/influxdb/cloud/api-guide/client-libraries/swift.md index f0ccbc96d..273ab9811 100644 --- a/content/influxdb/cloud/api-guide/client-libraries/swift.md +++ b/content/influxdb/cloud/api-guide/client-libraries/swift.md @@ -8,6 +8,7 @@ menu: influxdb_cloud: name: Swift parent: Client libraries - url: https://github.com/influxdata/influxdb-client-swift + params: + url: https://github.com/influxdata/influxdb-client-swift weight: 201 --- diff --git a/content/influxdb/cloud/get-started.md b/content/influxdb/cloud/get-started.md index 4b38b7a7f..a08d9cc23 100644 --- a/content/influxdb/cloud/get-started.md +++ b/content/influxdb/cloud/get-started.md @@ -65,12 +65,11 @@ Add the following as an [InfluxDB task](/influxdb/cloud/process-data/manage-task ```js import 
"influxdata/influxdb/sample" -option task = { - name: "Collect NOAA NDBC data" - every: 15m, -} + +option task = {name: "Collect NOAA NDBC data", every: 15m} + sample.data(set: "noaa") - |> to(bucket: "noaa" ) + |> to(bucket: "noaa") ``` For more information about this and other InfluxDB sample datasets, see [InfluxDB sample data](/influxdb/cloud/reference/sample-data/). diff --git a/content/influxdb/cloud/influxdb-templates/cloud.md b/content/influxdb/cloud/influxdb-templates/cloud.md index 1f50e1126..8ebde2517 100644 --- a/content/influxdb/cloud/influxdb-templates/cloud.md +++ b/content/influxdb/cloud/influxdb-templates/cloud.md @@ -12,7 +12,6 @@ weight: 101 aliases: - /influxdb/cloud/influxdb-templates/get_started_cloud/ influxdb/cloud/tags: [templates] -products: [cloud] --- To use templates in InfluxDB Cloud, you have a couple options: diff --git a/content/influxdb/cloud/influxdb-templates/create.md b/content/influxdb/cloud/influxdb-templates/create.md index 03835cb30..08195eefa 100644 --- a/content/influxdb/cloud/influxdb-templates/create.md +++ b/content/influxdb/cloud/influxdb-templates/create.md @@ -54,7 +54,7 @@ Provide the following: ###### Export all resources to a template ```sh # Syntax -influx export all -o -f -t +influx export all -o -f -t # Example influx export all \ @@ -108,7 +108,7 @@ Provide the following: ###### Export specific resources to a template ```sh # Syntax -influx export all -o -f -t [resource-flags] +influx export all -o -f -t [resource-flags] # Example influx export all \ @@ -136,10 +136,10 @@ Provide the following: ```sh # Syntax influx export stack \ - -o \ - -t \ - -f \ - + -o \ + -t \ + -f \ + # Example influx export stack \ diff --git a/content/influxdb/cloud/influxdb-templates/monitor-enterprise.md b/content/influxdb/cloud/influxdb-templates/monitor-enterprise.md index dc10bfc41..b75a416b1 100644 --- a/content/influxdb/cloud/influxdb-templates/monitor-enterprise.md +++ b/content/influxdb/cloud/influxdb-templates/monitor-enterprise.md @@ -31,7 +31,7 @@ Before you begin, make sure you have access to the following: - InfluxDB Cloud account ([sign up for free here](https://cloud2.influxdata.com/signup)) - Command line access to a machine [running InfluxDB Enterprise 1.x](/enterprise_influxdb/v1.9/introduction/install-and-deploy/) and permissions to install Telegraf on this machine - Internet connectivity from the machine running InfluxDB Enterprise 1.x and Telegraf to InfluxDB Cloud - - Sufficient resource availability to install the template. InfluxDB Cloud Free Plan accounts include [resource limits](/influxdb/cloud/account-management/pricing-plans/#resource-limits/influxdb/cloud/account-management/pricing-plans/#resource-limits) + - Sufficient resource availability to install the template. InfluxDB Cloud Free Plan accounts include [resource limits](/influxdb/cloud/account-management/limits/#free-plan-limits) ## Install the InfluxDB Enterprise Monitoring template diff --git a/content/influxdb/cloud/migrate-data/_index.md b/content/influxdb/cloud/migrate-data/_index.md new file mode 100644 index 000000000..d3cc29a0f --- /dev/null +++ b/content/influxdb/cloud/migrate-data/_index.md @@ -0,0 +1,14 @@ +--- +title: Migrate data to InfluxDB +description: > + Migrate data from InfluxDB OSS (open source) to InfluxDB Cloud or other InfluxDB OSS instances--or from InfluxDB Cloud to InfluxDB OSS. +menu: + influxdb_cloud: + name: Migrate data +weight: 9 +--- + +Migrate data to InfluxDB from other InfluxDB instances including by InfluxDB OSS +and InfluxDB Cloud. 
+ +{{< children >}} diff --git a/content/influxdb/cloud/migrate-data/migrate-cloud-to-cloud.md b/content/influxdb/cloud/migrate-data/migrate-cloud-to-cloud.md new file mode 100644 index 000000000..eb1fdee51 --- /dev/null +++ b/content/influxdb/cloud/migrate-data/migrate-cloud-to-cloud.md @@ -0,0 +1,380 @@ +--- +title: Migrate data between InfluxDB Cloud organizations +description: > + To migrate data from one InfluxDB Cloud organization to another, query the + data from time-based batches and write the queried data to a bucket in another + InfluxDB Cloud organization. +menu: + influxdb_cloud: + name: Migrate from Cloud to Cloud + parent: Migrate data +weight: 102 +--- + +To migrate data from one InfluxDB Cloud organization to another, query the +data from time-based batches and write the queried data to a bucket in another +InfluxDB Cloud organization. +Because full data migrations will likely exceed your organizations' limits and +adjustable quotas, migrate your data in batches. + +The following guide provides instructions for setting up an InfluxDB task +that queries data from an InfluxDB Cloud bucket in time-based batches and writes +each batch to another InfluxDB Cloud bucket in another organization. + +{{% cloud %}} +All query and write requests are subject to your InfluxDB Cloud organization's +[rate limits and adjustable quotas](/influxdb/cloud/account-management/limits/). +{{% /cloud %}} + +- [Set up the migration](#set-up-the-migration) +- [Migration task](#migration-task) + - [Configure the migration](#configure-the-migration) + - [Migration Flux script](#migration-flux-script) + - [Configuration help](#configuration-help) +- [Monitor the migration progress](#monitor-the-migration-progress) +- [Troubleshoot migration task failures](#troubleshoot-migration-task-failures) + +## Set up the migration + +{{% note %}} +The migration process requires two buckets in your destination InfluxDB +organization—one bucket to store the migrated data and another bucket to store migration metadata. +If the destination organization uses the [InfluxDB Cloud Free Plan](/influxdb/cloud/account-management/limits/#free-plan), +any buckets in addition to these two will exceed the your plan's bucket limit. +{{% /note %}} + +1. **In the InfluxDB Cloud organization you're migrating data _from_**, + [create an API token](/influxdb/cloud/security/tokens/create-token/) + with **read access** to the bucket you want to migrate. + +2. **In the InfluxDB Cloud organization you're migrating data _to_**: + 1. Add the **InfluxDB Cloud API token from the source organization** as a + secret using the key, `INFLUXDB_CLOUD_TOKEN`. + _See [Add secrets](/influxdb/cloud/security/secrets/add/) for more information._ + 2. [Create a bucket](/influxdb/cloud/organizations/buckets/create-bucket/) + **to migrate data to**. + 3. [Create a bucket](/influxdb/cloud/organizations/buckets/create-bucket/) + **to store temporary migration metadata**. + 4. [Create a new task](/influxdb/cloud/process-data/manage-tasks/create-task/) + using the provided [migration task](#migration-task). + Update the necessary [migration configuration options](#configure-the-migration). + 5. _(Optional)_ Set up [migration monitoring](#monitor-the-migration-progress). + 6. Save the task. + + {{% note %}} +Newly-created tasks are enabled by default, so the data migration begins when you save the task. 
+ {{% /note %}} + +**After the migration is complete**, each subsequent migration task execution +will fail with the following error: + +``` +error exhausting result iterator: error calling function "die" @41:9-41:86: +Batch range is beyond the migration range. Migration is complete. +``` + +## Migration task + +### Configure the migration +1. Specify how often you want the task to run using the `task.every` option. + _See [Determine your task interval](#determine-your-task-interval)._ + +2. Define the following properties in the `migration` + [record](/{{< latest "flux" >}}/data-types/composite/record/): + + ##### migration + - **start**: Earliest time to include in the migration. + _See [Determine your migration start time](#determine-your-migration-start-time)._ + - **stop**: Latest time to include in the migration. + - **batchInterval**: Duration of each time-based batch. + _See [Determine your batch interval](#determine-your-batch-interval)._ + - **batchBucket**: InfluxDB bucket to store migration batch metadata in. + - **sourceHost**: [InfluxDB Cloud region URL](/influxdb/cloud/reference/regions) + to migrate data from. + - **sourceOrg**: InfluxDB Cloud organization to migrate data from. + - **sourceToken**: InfluxDB Cloud API token. To keep the API token secure, store + it as a secret in the destination InfluxDB Cloud organization. + - **sourceBucket**: InfluxDB Cloud bucket to migrate data from. + - **destinationBucket**: InfluxDB Cloud bucket to migrate data to. + +### Migration Flux script + +```js +import "array" +import "experimental" +import "influxdata/influxdb/secrets" + +// Configure the task +option task = {every: 5m, name: "Migrate data from InfluxDB Cloud"} + +// Configure the migration +migration = { + start: 2022-01-01T00:00:00Z, + stop: 2022-02-01T00:00:00Z, + batchInterval: 1h, + batchBucket: "migration", + sourceHost: "https://cloud2.influxdata.com", + sourceOrg: "example-cloud-org", + sourceToken: secrets.get(key: "INFLUXDB_CLOUD_TOKEN"), + sourceBucket: "example-cloud-bucket", + destinationBucket: "example-destination-bucket", +} + +// batchRange dynamically returns a record with start and stop properties for +// the current batch. It queries migration metadata stored in the +// `migration.batchBucket` to determine the stop time of the previous batch. +// It uses the previous stop time as the new start time for the current batch +// and adds the `migration.batchInterval` to determine the current batch stop time. +batchRange = () => { + _lastBatchStop = + (from(bucket: migration.batchBucket) + |> range(start: migration.start) + |> filter(fn: (r) => r._field == "batch_stop") + |> filter(fn: (r) => r.srcOrg == migration.sourceOrg) + |> filter(fn: (r) => r.srcBucket == migration.sourceBucket) + |> last() + |> findRecord(fn: (key) => true, idx: 0))._value + _batchStart = + if exists _lastBatchStop then + time(v: _lastBatchStop) + else + migration.start + + return {start: _batchStart, stop: experimental.addDuration(d: migration.batchInterval, to: _batchStart)} +} + +// Define a static record with batch start and stop time properties +batch = {start: batchRange().start, stop: batchRange().stop} + +// Check to see if the current batch start time is beyond the migration.stop +// time and exit with an error if it is. +finished = + if batch.start >= migration.stop then + die(msg: "Batch range is beyond the migration range. Migration is complete.") + else + "Migration in progress" + +// Query all data from the specified source bucket within the batch-defined time +// range. 
To limit migrated data by measurement, tag, or field, add a `filter()` +// function after `range()` with the appropriate predicate fn. +data = () => + from(host: migration.sourceHost, org: migration.sourceOrg, token: migration.sourceToken, bucket: migration.sourceBucket) + |> range(start: batch.start, stop: batch.stop) + +// rowCount is a stream of tables that contains the number of rows returned in +// the batch and is used to generate batch metadata. +rowCount = + data() + |> group(columns: ["_start", "_stop"]) + |> count() + +// emptyRange is a stream of tables that acts as filler data if the batch is +// empty. This is used to generate batch metadata for empty batches and is +// necessary to correctly increment the time range for the next batch. +emptyRange = array.from(rows: [{_start: batch.start, _stop: batch.stop, _value: 0}]) + +// metadata returns a stream of tables representing batch metadata. +metadata = () => { + _input = + if exists (rowCount |> findRecord(fn: (key) => true, idx: 0))._value then + rowCount + else + emptyRange + + return + _input + |> map( + fn: (r) => + ({ + _time: now(), + _measurement: "batches", + srcOrg: migration.sourceOrg, + srcBucket: migration.sourceBucket, + dstBucket: migration.destinationBucket, + batch_start: string(v: batch.start), + batch_stop: string(v: batch.stop), + rows: r._value, + percent_complete: + float(v: int(v: r._stop) - int(v: migration.start)) / float( + v: int(v: migration.stop) - int(v: migration.start), + ) * 100.0, + }), + ) + |> group(columns: ["_measurement", "srcOrg", "srcBucket", "dstBucket"]) +} + +// Write the queried data to the specified destination bucket. +data() + |> to(bucket: migration.destinationBucket) + +// Generate and store batch metadata in the migration.batchBucket. +metadata() + |> experimental.to(bucket: migration.batchBucket) +```
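+ +For example, to migrate only a single measurement, you might narrow the `data` function above with the `filter()` call its comment describes. The following is a minimal sketch; `example-measurement` is a placeholder for one of your measurement names: + +```js +// Query one measurement from the source bucket within the batch-defined range. +data = () => + from(host: migration.sourceHost, org: migration.sourceOrg, token: migration.sourceToken, bucket: migration.sourceBucket) + |> range(start: batch.start, stop: batch.stop) + |> filter(fn: (r) => r._measurement == "example-measurement") +```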
+The "density" of the data in your InfluxDB Cloud bucket and your InfluxDB Cloud +organization's [rate limits and quotas](/influxdb/cloud/account-management/limits/) +determine what your batch interval should be. + +For example, if you're migrating data collected from hundreds of sensors with +points recorded every second, your batch interval will need to be shorter. +If you're migrating data collected from five sensors with points recorded every +minute, your batch interval can be longer. +It all depends on how much data gets returned in a single batch. + +If points occur at regular intervals, you can get a fairly accurate estimate of +how much data will be returned in a given time range by using the `/api/v2/query` +endpoint to execute a query for the time range duration and then measuring the +size of the response body. + +The following `curl` command queries an InfluxDB Cloud bucket for the last day +and returns the size of the response body in bytes. +You can customize the range duration to match your specific use case and +data density. + +```sh +INFLUXDB_CLOUD_ORG= +INFLUXDB_CLOUD_TOKEN= +INFLUXDB_CLOUD_BUCKET= + +curl -so /dev/null --request POST \ + https://cloud2.influxdata.com/api/v2/query?org=$INFLUXDB_CLOUD_ORG \ + --header "Authorization: Token $INFLUXDB_CLOUD_TOKEN" \ + --header "Accept: application/csv" \ + --header "Content-type: application/vnd.flux" \ + --data "from(bucket:\"$INFLUXDB_CLOUD_BUCKET\") |> range(start: -1d, stop: now())" \ + --write-out '%{size_download}' +``` + +{{% note %}} +You can also use other HTTP API tools like [Postman](https://www.postman.com/) +that provide the size of the response body. +{{% /note %}} + +Divide the output of this command by 1000000 to convert it to megabytes (MB). + +``` +batchInterval = (write-rate-limit-mb / response-body-size-mb) * range-duration +``` + +For example, if the response body of your query that returns data from one day +is 1 MB and you're using the InfluxDB Cloud Free Plan with a write limit of +5 MB per five minutes: + +```js +batchInterval = (5 / 1) * 1d +// batchInterval = 5d +``` + +You _could_ query 5 days of data before hitting your write limit, but this is just an estimate. +We recommend setting the `batchInterval` slightly lower than the calculated interval +to allow for variation between batches. + +So in this example, **it would be best to set your `batchInterval` to `4d`**. + +##### Important things to note +- This assumes no other queries are running in your source InfluxDB Cloud organization. +- This assumes no other writes are happening in your destination InfluxDB Cloud organization. +{{% /expand %}} + +{{< /expand-wrapper >}} + +## Monitor the migration progress +The [InfluxDB Cloud Migration Community template](https://github.com/influxdata/community-templates/tree/master/influxdb-cloud-oss-migration/) +installs the migration task outlined in this guide as well as a dashboard +for monitoring running data migrations. + +{{< img-hd src="/img/influxdb/2-1-migration-dashboard.png" alt="InfluxDB Cloud migration dashboard" />}} + +Install the InfluxDB Cloud Migration template + +## Troubleshoot migration task failures +If the migration task fails, [view your task logs](/influxdb/cloud/process-data/manage-tasks/task-run-history/) +to identify the specific error. Below are common causes of migration task failures. 
+ +## Troubleshoot migration task failures +If the migration task fails, [view your task logs](/influxdb/cloud/process-data/manage-tasks/task-run-history/) +to identify the specific error. Below are common causes of migration task failures. + +- [Exceeded rate limits](#exceeded-rate-limits) +- [Invalid API token](#invalid-api-token) +- [Query timeout](#query-timeout) + +### Exceeded rate limits +If your data migration causes you to exceed your InfluxDB Cloud organization's +limits and quotas, the task will return an error similar to: + +``` +too many requests +``` + +**Possible solutions**: +- Update the `migration.batchInterval` setting in your migration task to use + a smaller interval. Each batch will then query less data. + +### Invalid API token +If the API token you add as the `INFLUXDB_CLOUD_TOKEN` secret doesn't have read access to +your InfluxDB Cloud bucket, the task will return an error similar to: + +``` +unauthorized access +``` + +**Possible solutions**: +- Ensure the API token has read access to your InfluxDB Cloud bucket. +- Generate a new InfluxDB Cloud API token with read access to the bucket you + want to migrate. Then, update the `INFLUXDB_CLOUD_TOKEN` secret in your + destination InfluxDB Cloud organization with the new token. + +### Query timeout +The InfluxDB Cloud query timeout is 90 seconds. If it takes longer than this to +return the data from the batch interval, the query will time out and the +task will fail. + +**Possible solutions**: +- Update the `migration.batchInterval` setting in your migration task to use + a smaller interval. Each batch will then query less data and take less time + to return results. diff --git a/content/influxdb/cloud/migrate-data/migrate-cloud-to-oss.md b/content/influxdb/cloud/migrate-data/migrate-cloud-to-oss.md new file mode 100644 index 000000000..b1336208d --- /dev/null +++ b/content/influxdb/cloud/migrate-data/migrate-cloud-to-oss.md @@ -0,0 +1,13 @@ +--- +title: Migrate data from InfluxDB Cloud to InfluxDB OSS +description: > + To migrate data from InfluxDB Cloud to InfluxDB OSS, query the data from + InfluxDB Cloud in time-based batches and write the data to InfluxDB OSS. +menu: + influxdb_cloud: + name: Migrate from Cloud to OSS + parent: Migrate data +weight: 103 +--- + +{{< duplicate-oss >}} diff --git a/content/influxdb/cloud/migrate-data/migrate-oss.md b/content/influxdb/cloud/migrate-data/migrate-oss.md new file mode 100644 index 000000000..bb3fcc962 --- /dev/null +++ b/content/influxdb/cloud/migrate-data/migrate-oss.md @@ -0,0 +1,13 @@ +--- +title: Migrate data from InfluxDB OSS to InfluxDB Cloud +description: > + To migrate data from an InfluxDB OSS bucket to an InfluxDB Cloud bucket, export + your data as line protocol and then write it to your InfluxDB Cloud bucket. +menu: + influxdb_cloud: + name: Migrate data from OSS + parent: Migrate data +weight: 101 +--- + +{{< duplicate-oss >}} diff --git a/content/influxdb/cloud/organizations/buckets/create-bucket.md b/content/influxdb/cloud/organizations/buckets/create-bucket.md index 3f9e8975a..c7eb1b938 100644 --- a/content/influxdb/cloud/organizations/buckets/create-bucket.md +++ b/content/influxdb/cloud/organizations/buckets/create-bucket.md @@ -59,9 +59,13 @@ to create a new bucket. A bucket requires the following: - days (`d`) - weeks (`w`) + {{% note %}} + The minimum retention period is **one hour**. 
+ {{% /note %}} + + ```sh # Syntax -influx bucket create -n <bucket-name> -o <org-name> -r <retention-period> +influx bucket create -n <bucket-name> -o <org-name> -r <retention-period> # Example influx bucket create -n my-bucket -o my-org -r 72h diff --git a/content/influxdb/cloud/organizations/buckets/delete-bucket.md b/content/influxdb/cloud/organizations/buckets/delete-bucket.md index b6de8c85b..d95d74550 100644 --- a/content/influxdb/cloud/organizations/buckets/delete-bucket.md +++ b/content/influxdb/cloud/organizations/buckets/delete-bucket.md @@ -49,7 +49,7 @@ influx bucket delete -n my-bucket -o my-org ```sh # Syntax -influx bucket delete -i <bucket-id> +influx bucket delete -i <bucket-id> # Example influx bucket delete -i 034ad714fdd6f000 diff --git a/content/influxdb/cloud/organizations/buckets/update-bucket.md b/content/influxdb/cloud/organizations/buckets/update-bucket.md index 791137682..676fa2ec8 100644 --- a/content/influxdb/cloud/organizations/buckets/update-bucket.md +++ b/content/influxdb/cloud/organizations/buckets/update-bucket.md @@ -71,7 +71,20 @@ influx bucket update -i 034ad714fdd6f000 -n my-new-bucket ##### Update a bucket's retention period -Valid retention period duration units are nanoseconds (`ns`), microseconds (`us` or `µs`), milliseconds (`ms`), seconds (`s`), minutes (`m`), hours (`h`), days (`d`), or weeks (`w`). +Valid retention period duration units: + +- nanoseconds (`ns`) +- microseconds (`us` or `µs`) +- milliseconds (`ms`) +- seconds (`s`) +- minutes (`m`) +- hours (`h`) +- days (`d`) +- weeks (`w`) + +{{% note %}} +The minimum retention period is **one hour**. +{{% /note %}} ```sh # Syntax diff --git a/content/influxdb/cloud/organizations/users.md b/content/influxdb/cloud/organizations/users.md index 4969f783e..ee4c53b66 100644 --- a/content/influxdb/cloud/organizations/users.md +++ b/content/influxdb/cloud/organizations/users.md @@ -7,7 +7,7 @@ weight: 106 menu: influxdb_cloud: parent: Manage organizations - name: Manage Users + name: Manage users aliases: - /influxdb/v2.0/account-management/multi-user/ - /influxdb/cloud/account-management/multi-user/ @@ -16,7 +16,7 @@ aliases: - /influxdb/cloud/users/ --- -{{< cloud-name >}} lets you invite and collaborate with multiple users in your organization. +{{< cloud-name >}} lets you invite and collaborate with multiple users in your organization. By default, each user has full permissions on resources in your organization. - [Users management page](#users-management-page) @@ -26,16 +26,16 @@ By default, each user has full permissions on resources in your organization. - [Remove a user from your organization](#remove-a-user-from-your-organization) - [Remove yourself from an organization](#remove-yourself-from-an-organization) -## Users management page -Manage your organization's users from your organization's **Users management page**. +## Members page +Manage your organization's users from your organization's **Members page**. In the {{< cloud-name "short" >}} user interface (UI), click your user avatar in the left -navigation menu, and select **Users**. +navigation menu, and select **Organization** > **Members**. {{< nav-icon "account" >}} ## Invite a user to your organization -1. Navigate to your organization's [Users management page](#users-management-page). +1. Navigate to your organization's [Members page](#members-page). 2. Under **Add a new user to your organization**, enter the email address of the user to invite and select their role in your organization. @@ -57,22 +57,17 @@ Accounts can have up to 50 pending invitations at one time. ### Resend an invitation -1. 
Navigate to your organization's [Users management page](#users-management-page). +1. Navigate to your organization's [Members page](#members-page). 2. Click the **{{< icon "refresh" >}}** icon next to the invitation you want to resend. ### Withdraw an invitation -1. Navigate to your organization's [Users management page](#users-management-page). +1. Navigate to your organization's [Members page](#members-page). 2. Click the **{{< icon "delete" >}}** icon next to the invitation you want to withdraw. 3. Click **{{< caps >}}Withdraw Invitation{{< /caps >}}**. ## Remove a user from your organization -1. Navigate to your organization's [Users management page](#users-management-page). +1. Navigate to your organization's [Members page](#members-page). 2. Click the **{{< icon "delete" >}}** icon next to the user you want to remove. 3. Click **{{< caps >}}Remove user access{{< /caps >}}**. - -### Remove yourself from an organization - -You cannot remove yourself from an organization. -Have another member of your organization remove you. diff --git a/content/influxdb/cloud/process-data/common-tasks/downsample-data.md b/content/influxdb/cloud/process-data/common-tasks/downsample-data.md index 8c3b14441..1fc48d5e8 100644 --- a/content/influxdb/cloud/process-data/common-tasks/downsample-data.md +++ b/content/influxdb/cloud/process-data/common-tasks/downsample-data.md @@ -48,21 +48,19 @@ The example task script below is a very basic form of data downsampling that doe ```js // Task Options -option task = { - name: "cq-mem-data-1w", - every: 1w, -} +option task = {name: "cq-mem-data-1w", every: 1w} // Defines a data source data = from(bucket: "system-data") - |> range(start: -duration(v: int(v: task.every) * 2)) - |> filter(fn: (r) => r._measurement == "mem") + |> range(start: -duration(v: int(v: task.every) * 2)) + |> filter(fn: (r) => r._measurement == "mem") data - // Windows and aggregates the data in to 1h averages - |> aggregateWindow(fn: mean, every: 1h) - // Stores the aggregated data in a new bucket - |> to(bucket: "system-data-downsampled", org: "my-org") + // Windows and aggregates the data into 1h averages + |> aggregateWindow(fn: mean, every: 1h) + // Stores the aggregated data in a new bucket + |> to(bucket: "system-data-downsampled", org: "my-org") ``` Again, this is a very basic example, but it should provide you with a foundation diff --git a/content/influxdb/cloud/query-data/execute-queries/query-demo-data.md b/content/influxdb/cloud/query-data/execute-queries/query-demo-data.md index c86742747..87186a701 100644 --- a/content/influxdb/cloud/query-data/execute-queries/query-demo-data.md +++ b/content/influxdb/cloud/query-data/execute-queries/query-demo-data.md @@ -25,7 +25,7 @@ types of demo data that let you explore and familiarize yourself with InfluxDB C {{% note %}} #### Free to use and read-only - InfluxDB Cloud demo data buckets are **free to use** and are **_not_ subject to - [Free Plan](/influxdb/cloud/account-management/pricing-plans/#free-plan) rate limits**. + [Free Plan rate limits](/influxdb/cloud/account-management/limits/#free-plan-rate-limits)**. - Demo data buckets are **read-only**. You cannot write data into demo data buckets. 
{{% /note %}} diff --git a/content/influxdb/cloud/query-data/flux/custom-functions/_index.md b/content/influxdb/cloud/query-data/flux/custom-functions/_index.md index 689a1f73b..19bc72ffd 100644 --- a/content/influxdb/cloud/query-data/flux/custom-functions/_index.md +++ b/content/influxdb/cloud/query-data/flux/custom-functions/_index.md @@ -10,12 +10,11 @@ menu: weight: 220 list_code_example: | ```js - multByX = (tables=<-, x) => - tables - |> map(fn: (r) => ({ r with _value: r._value * x})) + multByX = (tables=<-, x) => tables + |> map(fn: (r) => ({r with _value: r._value * x})) data - |> multByX(x: 2.0) + |> multByX(x: 2.0) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/exists.md b/content/influxdb/cloud/query-data/flux/exists.md index e3acb0326..3a30f6e8a 100644 --- a/content/influxdb/cloud/query-data/flux/exists.md +++ b/content/influxdb/cloud/query-data/flux/exists.md @@ -18,7 +18,7 @@ list_code_example: | ##### Filter null values ```js data - |> filter(fn: (r) => exists r._value) + |> filter(fn: (r) => exists r._value) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/geo/_index.md b/content/influxdb/cloud/query-data/flux/geo/_index.md index e049882b4..f13d116c4 100644 --- a/content/influxdb/cloud/query-data/flux/geo/_index.md +++ b/content/influxdb/cloud/query-data/flux/geo/_index.md @@ -15,8 +15,8 @@ list_code_example: | import "experimental/geo" sampleGeoData - |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) - |> geo.groupByArea(newColumn: "geoArea", level: 5) + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}) + |> geo.groupByArea(newColumn: "geoArea", level: 5) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/geo/filter-by-region.md b/content/influxdb/cloud/query-data/flux/geo/filter-by-region.md index a076d7f7b..94d515eae 100644 --- a/content/influxdb/cloud/query-data/flux/geo/filter-by-region.md +++ b/content/influxdb/cloud/query-data/flux/geo/filter-by-region.md @@ -15,10 +15,7 @@ list_code_example: | import "experimental/geo" sampleGeoData - |> geo.filterRows( - region: {lat: 30.04, lon: 31.23, radius: 200.0}, - strict: true - ) + |> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0}, strict: true) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/geo/group-geo-data.md b/content/influxdb/cloud/query-data/flux/geo/group-geo-data.md index 598e2272c..b2359d0b4 100644 --- a/content/influxdb/cloud/query-data/flux/geo/group-geo-data.md +++ b/content/influxdb/cloud/query-data/flux/geo/group-geo-data.md @@ -16,8 +16,8 @@ list_code_example: | import "experimental/geo" sampleGeoData - |> geo.groupByArea(newColumn: "geoArea", level: 5) - |> geo.asTracks(groupBy: ["id"],sortBy: ["_time"]) + |> geo.groupByArea(newColumn: "geoArea", level: 5) + |> geo.asTracks(groupBy: ["id"], sortBy: ["_time"]) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/geo/shape-geo-data.md b/content/influxdb/cloud/query-data/flux/geo/shape-geo-data.md index e5c73b561..fc97a3dcc 100644 --- a/content/influxdb/cloud/query-data/flux/geo/shape-geo-data.md +++ b/content/influxdb/cloud/query-data/flux/geo/shape-geo-data.md @@ -17,15 +17,17 @@ list_code_example: | import "experimental/geo" sampleGeoData - |> map(fn: (r) => ({ r with - _field: - if r._field == "latitude" then "lat" - else if r._field == "longitude" then "lon" - else r._field - })) - |> map(fn: (r) => ({ r with - s2_cell_id: geo.s2CellIDToken(point: {lon: r.lon, lat: r.lat}, level: 10) - })) + |> map( + fn: (r) => ({r with + _field: if r._field == 
"latitude" then + "lat" + else if r._field == "longitude" then + "lon" + else + r._field, + }), + ) + |> map(fn: (r) => ({r with s2_cell_id: geo.s2CellIDToken(point: {lon: r.lon, lat: r.lat}, level: 10)})) ``` --- diff --git a/content/influxdb/cloud/query-data/flux/query-fields.md b/content/influxdb/cloud/query-data/flux/query-fields.md index 0a194d259..8d65fb94e 100644 --- a/content/influxdb/cloud/query-data/flux/query-fields.md +++ b/content/influxdb/cloud/query-data/flux/query-fields.md @@ -17,12 +17,8 @@ related: list_code_example: | ```js from(bucket: "example-bucket") - |> range(start: -1h) - |> filter(fn: (r) => - r._measurement == "example-measurement" and - r._field == "example-field" and - r.tag == "example-tag" - ) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field" and r.tag == "example-tag") ``` --- diff --git a/content/influxdb/cloud/query-data/flux/scalar-values.md b/content/influxdb/cloud/query-data/flux/scalar-values.md index 1c549022b..cb89eb9e9 100644 --- a/content/influxdb/cloud/query-data/flux/scalar-values.md +++ b/content/influxdb/cloud/query-data/flux/scalar-values.md @@ -14,11 +14,11 @@ related: - /{{< latest "flux" >}}/function-types/#dynamic-queries, Flux dynamic query functions list_code_example: | ```js - scalarValue = { - _record = - data - |> findRecord(fn: key => true, idx: 0) - return _record._value + scalarValue = (tables=<-) => { + _record = tables + |> findRecord(fn: (key) => true, idx: 0) + + return _record._value } ``` --- diff --git a/content/influxdb/cloud/query-data/flux/sql.md b/content/influxdb/cloud/query-data/flux/sql.md index 9c9083c8d..3ec223700 100644 --- a/content/influxdb/cloud/query-data/flux/sql.md +++ b/content/influxdb/cloud/query-data/flux/sql.md @@ -19,9 +19,9 @@ list_code_example: | import "sql" sql.from( - driverName: "postgres", - dataSourceName: "postgresql://user:password@localhost", - query: "SELECT * FROM example_table" + driverName: "postgres", + dataSourceName: "postgresql://user:password@localhost", + query: "SELECT * FROM example_table", ) ``` --- diff --git a/content/influxdb/cloud/query-data/parameterized-queries.md b/content/influxdb/cloud/query-data/parameterized-queries.md index eb0455b6e..07fb7e48b 100644 --- a/content/influxdb/cloud/query-data/parameterized-queries.md +++ b/content/influxdb/cloud/query-data/parameterized-queries.md @@ -43,16 +43,16 @@ For example, using the example `params` JSON above, the following query ```js from(bucket: params.ex1) - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == params.ex2) + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == params.ex2) ``` would execute as ```js from(bucket: "foo") - |> range(start: -1h) - |> filter(fn: (r) => r._measurement == "bar") + |> range(start: -1h) + |> filter(fn: (r) => r._measurement == "bar") ``` ## Example @@ -67,8 +67,8 @@ To use a parameterized query, do the following: ```js from(bucket: params.mybucket) - |> range(start: -7d) - |> limit(n:2) + |> range(start: -7d) + |> limit(n:2) ``` 2. Use the InfluxDB Cloud `/api/v2/query` API endpoint to execute your query. Provide the following in your request body: @@ -103,8 +103,8 @@ For example, to define the `start` parameter of the `range()` function using a p ```js from(bucket:"example-bucket") - |> range(start: duration(v: params.mystart)) - |> limit(n:2) + |> range(start: duration(v: params.mystart)) + |> limit(n:2) ``` 2. 
In the `param` field of your query request body, format the duration parameter as a string: diff --git a/content/influxdb/cloud/reference/api/influxdb-1x/_index.md b/content/influxdb/cloud/reference/api/influxdb-1x/_index.md index 5b5bb0552..cd7c0d34b 100644 --- a/content/influxdb/cloud/reference/api/influxdb-1x/_index.md +++ b/content/influxdb/cloud/reference/api/influxdb-1x/_index.md @@ -9,9 +9,8 @@ menu: parent: InfluxDB v2 API weight: 104 influxdb/cloud/tags: [influxql, query, write] -products: [cloud] related: - /influxdb/cloud/query-data/influxql --- -{{% duplicate-oss %}} +{{< duplicate-oss >}} diff --git a/content/influxdb/cloud/reference/api/influxdb-1x/query.md b/content/influxdb/cloud/reference/api/influxdb-1x/query.md index d45905cb0..d1f91a5ed 100644 --- a/content/influxdb/cloud/reference/api/influxdb-1x/query.md +++ b/content/influxdb/cloud/reference/api/influxdb-1x/query.md @@ -9,7 +9,6 @@ menu: parent: 1.x compatibility weight: 301 influxdb/cloud/tags: [influxql, query] -products: [cloud] list_code_example: |
       GET https://cloud2.influxdata.com/query
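The `/query` endpoint shown in the example above accepts InfluxQL over HTTP. As a rough request sketch (not part of the diff; the token, database name, and query are placeholders):

```sh
# Query the InfluxDB Cloud 1.x-compatibility endpoint with InfluxQL.
# Replace API_TOKEN and the db/q values with your own.
curl --get "https://cloud2.influxdata.com/query" \
  --header "Authorization: Token API_TOKEN" \
  --data-urlencode "db=mydb" \
  --data-urlencode "q=SELECT * FROM example_measurement LIMIT 5"
```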
    diff --git a/content/influxdb/cloud/reference/api/influxdb-1x/write.md b/content/influxdb/cloud/reference/api/influxdb-1x/write.md
    index db7fcddee..ee3cc2ee5 100644
    --- a/content/influxdb/cloud/reference/api/influxdb-1x/write.md
    +++ b/content/influxdb/cloud/reference/api/influxdb-1x/write.md
    @@ -10,7 +10,6 @@ menu:
         parent: 1.x compatibility
     weight: 301
     influxdb/cloud/tags: [write]
    -products: [cloud]
     list_code_example: |
       
       POST https://cloud2.influxdata.com/write
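Similarly, the 1.x-compatibility `/write` endpoint shown above accepts line protocol in the request body. A minimal sketch, assuming a database named `mydb` and a placeholder API token:

```sh
# Write a single line-protocol point through the 1.x-compatibility endpoint.
curl --request POST "https://cloud2.influxdata.com/write?db=mydb" \
  --header "Authorization: Token API_TOKEN" \
  --data-binary "example_measurement,host=host1 field1=1.0"
```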
    diff --git a/content/influxdb/cloud/reference/cli/influx/remote/_index.md b/content/influxdb/cloud/reference/cli/influx/remote/_index.md
    new file mode 100644
    index 000000000..6875564b8
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/remote/_index.md
    @@ -0,0 +1,15 @@
    +---
    +title: influx remote
    +description: Manage remote InfluxDB connections for replicating data.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx remote
    +    parent: influx
    +weight: 101
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +  - /influxdb/cloud/write-data/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/remote/create.md b/content/influxdb/cloud/reference/cli/influx/remote/create.md
    new file mode 100644
    index 000000000..b4bcd6755
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/remote/create.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx remote create
    +description: Create a new remote InfluxDB connection for replicating data.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx remote create
    +    parent: influx remote
    +weight: 101
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/remote/delete.md b/content/influxdb/cloud/reference/cli/influx/remote/delete.md
    new file mode 100644
    index 000000000..c58ee6401
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/remote/delete.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx remote delete
    +description: Delete remote InfluxDB connections used for replicating data.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx remote delete
    +    parent: influx remote
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/remote/list.md b/content/influxdb/cloud/reference/cli/influx/remote/list.md
    new file mode 100644
    index 000000000..b689a60ee
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/remote/list.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx remote list
+description: List remote InfluxDB connections used for replicating data.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx remote list
    +    parent: influx remote
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/remote/update.md b/content/influxdb/cloud/reference/cli/influx/remote/update.md
    new file mode 100644
    index 000000000..277855c78
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/remote/update.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx remote update
+description: Update remote InfluxDB connections used for replicating data.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx remote update
    +    parent: influx remote
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/replication/_index.md b/content/influxdb/cloud/reference/cli/influx/replication/_index.md
    new file mode 100644
    index 000000000..9351f6da6
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/replication/_index.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx replication
    +description: Use the `influx` CLI to manage InfluxDB replication streams.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx replication
    +    parent: influx
    +weight: 101
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/remote
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/replication/create.md b/content/influxdb/cloud/reference/cli/influx/replication/create.md
    new file mode 100644
    index 000000000..54105343d
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/replication/create.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx replication create
    +description: Create a new InfluxDB replication stream.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx replication create
    +    parent: influx replication
    +weight: 101
+influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/replication/delete.md b/content/influxdb/cloud/reference/cli/influx/replication/delete.md
    new file mode 100644
    index 000000000..18ee2c99c
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/replication/delete.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx replication delete
    +description: Delete an InfluxDB replication stream.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx replication delete
    +    parent: influx replication
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/replication/list.md b/content/influxdb/cloud/reference/cli/influx/replication/list.md
    new file mode 100644
    index 000000000..020fa3d89
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/replication/list.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx replication list
    +description: List InfluxDB replication streams and corresponding metrics.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx replication list
    +    parent: influx replication
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
    diff --git a/content/influxdb/cloud/reference/cli/influx/replication/update.md b/content/influxdb/cloud/reference/cli/influx/replication/update.md
    new file mode 100644
    index 000000000..839e21f06
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/cli/influx/replication/update.md
    @@ -0,0 +1,14 @@
    +---
    +title: influx replication update
    +description: Update InfluxDB replication streams.
    +menu:
    +  influxdb_cloud_ref:
    +    name: influx replication update
    +    parent: influx replication
    +weight: 102
    +influxdb/cloud/tags: [write, replication]
    +related:
    +  - /influxdb/cloud/reference/cli/influx/replication
    +---
    +
    +{{< duplicate-oss >}}
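The `influx remote` and `influx replication` pages added above document the two halves of the OSS-to-Cloud replication workflow. As a rough sketch of how the commands combine, run from the OSS instance (flag names follow recent `influx` CLI help output and may vary by CLI version; all URLs, IDs, and tokens are placeholders):

```sh
# Register the InfluxDB Cloud organization as a remote connection.
influx remote create \
  --name cloud-remote \
  --remote-url https://cloud2.influxdata.com \
  --remote-api-token CLOUD_API_TOKEN \
  --remote-org-id CLOUD_ORG_ID

# Stream writes from a local bucket to a bucket in that remote organization.
influx replication create \
  --name cloud-replication \
  --remote-id REMOTE_CONNECTION_ID \
  --local-bucket-id LOCAL_BUCKET_ID \
  --remote-bucket-id CLOUD_BUCKET_ID
```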
    diff --git a/content/influxdb/cloud/reference/internals/data-retention.md b/content/influxdb/cloud/reference/internals/data-retention.md
    new file mode 100644
    index 000000000..2c11923c7
    --- /dev/null
    +++ b/content/influxdb/cloud/reference/internals/data-retention.md
    @@ -0,0 +1,42 @@
    +---
    +title: Data retention in InfluxDB Cloud
    +description: >
    +  The InfluxDB Cloud retention service checks for and removes data with timestamps
    +  beyond the defined retention period of the bucket the data is stored in.
    +weight: 103
    +menu:
    +  influxdb_cloud_ref:
    +    name: Data retention
    +    parent: InfluxDB Cloud internals
    +influxdb/cloud/tags: [internals]
    +---
    +
    +The **InfluxDB Cloud retention enforcement service** checks for and removes data
    +with timestamps beyond the defined retention period of the
    +[bucket](/influxdb/cloud/reference/glossary/#bucket) the data is stored in.
    +This service is designed to automatically delete "expired" data and optimize disk
    +usage without any user intervention.
    +
    +- [Bucket retention period](#bucket-retention-period)
    +- [When does data actually get deleted?](#when-does-data-actually-get-deleted)
    +
    +## Bucket retention period
    +A **bucket retention period** is the duration of time that a bucket retains data.
    +Retention periods can be as short as an hour or infinite.
    +[Points](/influxdb/cloud/reference/glossary/#point) in a bucket with timestamps
    +beyond the defined retention period (relative to now) are flagged for deletion
    +(also known as "tombstoned").
    +
    +{{% note %}}
    +#### View bucket retention periods
    +Use the [`influx bucket list` command](/influxdb/cloud/reference/cli/influx/bucket/list/)
+to view the retention periods of buckets in your organization.
    +{{% /note %}}
    +
    +## When does data actually get deleted?
    +The InfluxDB Cloud retention enforcement service **runs hourly** and tombstones
    +all points with timestamps beyond the bucket retention period.
    +Tombstoned points persist on disk, but are filtered from all query results until
    +the next [compaction](/influxdb/cloud/reference/glossary/#compaction) cycle,
    +when they are removed from disk.
    +Compaction cycle intervals vary.
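Since the page above points readers to `influx bucket list` for checking retention periods, here is a short companion sketch (the bucket ID and the 30-day value are placeholders; `--retention` follows the standard `influx bucket` flags):

```sh
# List buckets and their retention periods.
influx bucket list

# Change a bucket's retention period to 30 days; points older than this are
# tombstoned by the hourly retention enforcement service described above.
influx bucket update --id BUCKET_ID --retention 30d
```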
    diff --git a/content/influxdb/cloud/reference/internals/durability.md b/content/influxdb/cloud/reference/internals/durability.md
    index 020460fa9..99281da31 100644
    --- a/content/influxdb/cloud/reference/internals/durability.md
    +++ b/content/influxdb/cloud/reference/internals/durability.md
    @@ -18,12 +18,13 @@ data is consistent and readable.
     
     ##### On this page
     
    -- [Data replication](#data-replication)
    +
     - [Backup processes](#backup-processes)
     - [Recovery](#recovery)
     - [Data verification](#data-verification)
     
    -## Data replication
    +
     
     ## Backup processes
     InfluxDB Cloud backs up all data in the following way:
     
     - [Backup on write](#backup-on-write)
     - [Backup after compaction](#backup-after-compaction)
    +- [Periodic TSM snapshots](#periodic-tsm-snapshots)
     
     ### Backup on write
     All inbound write requests to InfluxDB Cloud are added to a durable message queue.
    @@ -63,17 +66,22 @@ When each compaction cycle completes, InfluxDB Cloud stores compressed
     [TSM](/influxdb/cloud/reference/glossary/#tsm-time-structured-merge-tree) files
     in object storage.
     
    +### Periodic TSM snapshots
    +To provide multiple data recovery points, InfluxDB Cloud takes weekly snapshots of TSM files uploaded to object storage. The TSM snapshot includes a copy of all (non-deleted) data when the snapshot is created.
    +These snapshots are preserved for 100 days.
    +
     ## Recovery
     InfluxDB Cloud uses the following out-of-band backups stored in object storage to recover data:
     
     - **Message queue backup:** line protocol from inbound write requests within the last 96 hours
    -- **Historic backup:** compressed TSM files
    +- **Compaction backup:** TSM files
+- **TSM snapshots:** Weekly snapshots of TSM files in object storage
     
     The Recovery Point Objective (RPO) is any accepted write.
     The Recovery Time Objective (RTO) is harder to definitively predict as potential failure modes can vary.
     While most common failure modes can be resolved within minutes or hours,
     critical failure modes may take longer.
    -For example, if we need to rebuild all data from the message queue backup,
    +For example, if we need to rebuild all data from the TSM snapshots and message queue backup,
     it could take 24 hours or longer.
     
     ## Data verification
    diff --git a/content/influxdb/cloud/reference/internals/security.md b/content/influxdb/cloud/reference/internals/security.md
    index c618c02f0..a01c41d8d 100644
    --- a/content/influxdb/cloud/reference/internals/security.md
    +++ b/content/influxdb/cloud/reference/internals/security.md
    @@ -242,4 +242,4 @@ For users needing stricter security around data access and risk mitigation measu
     ## Compliance and auditing
     
     InfluxDB Cloud is **SOC2 Type II** certified.
    -To request a copy of our SOC2 Type II report, please email .
    +To request a copy of our SOC2 Type II report, please email .
    diff --git a/content/influxdb/cloud/reference/prometheus-metrics.md b/content/influxdb/cloud/reference/prometheus-metrics.md
    index 3449e0f9a..f1a3885db 100644
    --- a/content/influxdb/cloud/reference/prometheus-metrics.md
    +++ b/content/influxdb/cloud/reference/prometheus-metrics.md
    @@ -12,7 +12,7 @@ related:
       - https://prometheus.io/docs/concepts/data_model/, Prometheus data model
       - /influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics/
       - /{{< latest "flux" >}}/prometheus/, Work with Prometheus in Flux
    -  - /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
    +  - /{{< latest "telegraf" >}}/input-prometheus, Telegraf Prometheus input plugin
       - /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
     ---
     
    diff --git a/content/influxdb/cloud/reference/regions.md b/content/influxdb/cloud/reference/regions.md
    index e2555b575..5049dc568 100644
    --- a/content/influxdb/cloud/reference/regions.md
    +++ b/content/influxdb/cloud/reference/regions.md
    @@ -19,4 +19,16 @@ Use the URLs below to interact with your InfluxDB Cloud instances with the
     
     Request a cloud region
     
    +{{% note %}}
    +#### Regions with multiple clusters
    +Some InfluxDB Cloud regions have multiple Cloud clusters, each with a unique URL.
    +To find your cluster URL, [log in to your InfluxDB Cloud organization](https://cloud2.influxdata.com)
    +and review your organization URL. The first subdomain identifies your 
    +InfluxDB Cloud cluster. For example:
    +
    +
    +https://us-west-2-1.aws.cloud2.influxdata.com/orgs/03a2bbf46249a000/...
    +
+{{% /note %}}
+
 {{< cloud_regions >}}
diff --git a/content/influxdb/cloud/reference/release-notes/cloud-updates.md b/content/influxdb/cloud/reference/release-notes/cloud-updates.md
index cb147efe5..8e64e8a0a 100644
--- a/content/influxdb/cloud/reference/release-notes/cloud-updates.md
+++ b/content/influxdb/cloud/reference/release-notes/cloud-updates.md
@@ -14,10 +14,35 @@ aliases:
 InfluxDB Cloud updates occur frequently. Find a compilation of recent updates below.
 To find information about the latest Flux updates in InfluxDB Cloud, see [Flux release notes](/influxdb/cloud/reference/release-notes/flux/).
-
+To enhance security, the Tokens UI will only display an InfluxDB Cloud token when it's [first created](/influxdb/cloud/security/tokens/create-token/). If you return to the Tokens page later, you won't be able to view or copy the token. To learn more about token access restrictions, see [Create an API token](/influxdb/cloud/security/tokens/create-token/).
+
+### Multi-account support
+
+You can now invite a user to join an organization using the same email they've used in another InfluxDB Cloud account. Users [can switch between accounts in the UI](/influxdb/cloud/account-management/switch-account/).

 ## December 2021
diff --git a/content/influxdb/cloud/reference/sample-data.md b/content/influxdb/cloud/reference/sample-data.md
index ed38e96e0..10e7b6c98 100644
--- a/content/influxdb/cloud/reference/sample-data.md
+++ b/content/influxdb/cloud/reference/sample-data.md
@@ -142,16 +142,15 @@ To do so, run the following:
 ```js
 import "experimental/csv"

-relativeToNow = (tables=<-) =>
-  tables
+relativeToNow = (tables=<-) => tables
     |> elapsed()
     |> sort(columns: ["_time"], desc: true)
     |> cumulativeSum(columns: ["elapsed"])
-    |> map(fn: (r) => ({ r with _time: time(v: int(v: now()) - (r.elapsed * 1000000000))}))
+    |> map(fn: (r) => ({r with _time: time(v: int(v: now()) - r.elapsed * 1000000000)}))

 csv.from(url: "https://influx-testdata.s3.amazonaws.com/noaa.csv")
-  |> relativeToNow()
-  |> to(bucket: "noaa", org: "example-org")
+    |> relativeToNow()
+    |> to(bucket: "noaa", org: "example-org")
 ```

 {{% /note %}}
@@ -186,10 +185,9 @@ Add the following as an [InfluxDB task](/influxdb/cloud/process-data/manage-task
 ```js
 import "influxdata/influxdb/sample"
-option task = {
-  name: "Collect NOAA NDBC data"
-  every: 15m,
-}
+
+option task = {name: "Collect NOAA NDBC data", every: 15m}
+
 sample.data(set: "noaa")
-  |> to(bucket: "noaa" )
+    |> to(bucket: "noaa")
 ```
diff --git a/content/influxdb/cloud/security/secrets/use.md b/content/influxdb/cloud/security/secrets/use.md
index 119935f31..8adb814bf 100644
--- a/content/influxdb/cloud/security/secrets/use.md
+++ b/content/influxdb/cloud/security/secrets/use.md
@@ -23,8 +23,8 @@ username = secrets.get(key: "POSTGRES_USERNAME")
 password = secrets.get(key: "POSTGRES_PASSWORD")

 sql.from(
-  driverName: "postgres",
-  dataSourceName: "postgresql://${username}:${password}@localhost",
-  query:"SELECT * FROM example-table"
+    driverName: "postgres",
+    dataSourceName: "postgresql://${username}:${password}@localhost",
+    query:"SELECT * FROM example-table",
 )
 ```
diff --git a/content/influxdb/cloud/sign-up.md b/content/influxdb/cloud/sign-up.md
index 815b8cec2..65d3e0811 100644
--- a/content/influxdb/cloud/sign-up.md
+++ b/content/influxdb/cloud/sign-up.md
@@ -30,10 +30,10 @@ The primary differences between InfluxDB OSS 2.0 and InfluxDB Cloud are:
 Start using {{< cloud-name >}} at no cost with the [Free Plan](/influxdb/cloud/account-management/pricing-plans/#free-plan).
 Use it as much and as long as you like within the plan's rate-limits.
-Limits are designed to let you monitor 5-10 sensors, stacks or servers comfortably.
+[Limits](/influxdb/cloud/account-management/limits/) are designed to let you monitor 5-10 sensors, stacks or servers comfortably.

 {{% note %}}
-Users on the Free Plan are limited to one organization.
+Users on the Free Plan are limited to one organization.
 {{% /note %}}

 ## Sign up
diff --git a/content/influxdb/cloud/tools/grafana.md b/content/influxdb/cloud/tools/grafana.md
index f54f633f1..71eea89e5 100644
--- a/content/influxdb/cloud/tools/grafana.md
+++ b/content/influxdb/cloud/tools/grafana.md
@@ -60,10 +60,11 @@ configure your InfluxDB connection:
    - **Default Bucket**: The default [bucket](/influxdb/cloud/organizations/buckets/) to use in Flux queries.
    - **Min time interval**: The [Grafana minimum time interval](https://grafana.com/docs/grafana/latest/features/datasources/influxdb/#min-time-interval).

-   {{< img-hd src="/img/influxdb/cloud-tools-grafana.png" />}}
-
 2. Click **Save & Test**. Grafana attempts to connect to the InfluxDB datasource and returns the results of the test.
+
+{{< img-hd src="/img/influxdb/cloud-tools-grafana.png" />}}
+
 {{% /tab-content %}}
@@ -194,11 +195,11 @@ With **InfluxQL** selected as the query language in your InfluxDB data source se
    - **Password**: Leave empty
    - **HTTP Method**: Select **GET**
-
-   {{< img-hd src="/img/influxdb/cloud-tools-grafana-influxql.png" />}}
-
 3. Click **Save & Test**. Grafana attempts to connect to the InfluxDB Cloud data source and returns the results of the test.
+
+{{< img-hd src="/img/influxdb/cloud-tools-grafana-influxql.png" />}}
+
 {{% /tab-content %}}
 {{< /tabs-wrapper >}}
diff --git a/content/influxdb/cloud/tools/repl.md b/content/influxdb/cloud/tools/repl.md
index 07fc66ab2..08fcf9776 100644
--- a/content/influxdb/cloud/tools/repl.md
+++ b/content/influxdb/cloud/tools/repl.md
@@ -43,10 +43,10 @@ to the [`from()` function](/influxdb/cloud/reference/flux/stdlib/built-in/inputs
 ```js
 from(
-  bucket: "example-bucket",
-  host: "https://cloud2.influxdata.com",
-  org: "example-org",
-  token: "My5uP3rS3cRetT0k3n"
+    bucket: "example-bucket",
+    host: "https://cloud2.influxdata.com",
+    org: "example-org",
+    token: "My5uP3rS3cRetT0k3n"
 )
 ```
diff --git a/content/influxdb/cloud/upgrade/v1-to-cloud/_index.md b/content/influxdb/cloud/upgrade/v1-to-cloud/_index.md
index f3c3ba95b..3c40e1f4c 100644
--- a/content/influxdb/cloud/upgrade/v1-to-cloud/_index.md
+++ b/content/influxdb/cloud/upgrade/v1-to-cloud/_index.md
@@ -238,8 +238,8 @@ See [Required InfluxDB Cloud credentials](#required-influxdb-cloud-credentials)
 {{% note %}}
 #### InfluxDB Cloud write rate limits
-Write requests are subject to rate limits associated with your
-[InfluxDB Cloud pricing plan](/influxdb/cloud/account-management/pricing-plans/).
+Read and write requests are subject to [rate limits](/influxdb/cloud/account-management/limits/#rate-limits) associated with your
+InfluxDB Cloud plan.
 If your exported line protocol size potentially exceeds your rate limits,
 include the `--rate-limit` flag with `influx write` to rate limit written data.
diff --git a/content/influxdb/cloud/upgrade/v1-to-cloud/migrate-cqs.md b/content/influxdb/cloud/upgrade/v1-to-cloud/migrate-cqs.md
index 1e687348f..d012c235e 100644
--- a/content/influxdb/cloud/upgrade/v1-to-cloud/migrate-cqs.md
+++ b/content/influxdb/cloud/upgrade/v1-to-cloud/migrate-cqs.md
@@ -58,21 +58,15 @@ END
 ##### Equivalent Flux task
 ```js
-option task = {
-  name: "downsample-daily",
-  every: 1d
-}
+option task = {name: "downsample-daily", every: 1d}

 from(bucket: "my-db/")
-  |> range(start: -task.every)
-  |> filter(fn: (r) => r._measurement == "example-measurement")
-  |> filter(fn: (r) => r._field == "example-field")
-  |> aggregateWindow(every: 1h, fn: mean)
-  |> set(key: "_measurement", value: "average-example-measurement")
-  |> to(
-    org: "example-org",
-    bucket: "my-db/example-rp"
-  )
+    |> range(start: -task.every)
+    |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> filter(fn: (r) => r._field == "example-field")
+    |> aggregateWindow(every: 1h, fn: mean)
+    |> set(key: "_measurement", value: "average-example-measurement")
+    |> to(org: "example-org", bucket: "my-db/example-rp")
 ```

 ### Convert InfluxQL continuous queries to Flux
@@ -142,15 +136,15 @@ INTO "example-db"."example-rp"."example-measurement"
 {{% code-tab-content %}}
 ```js
 // ...
-  |> set(key: "_measurement", value: "example-measurement")
-  |> to(bucket: "example-db/example-rp")
+    |> set(key: "_measurement", value: "example-measurement")
+    |> to(bucket: "example-db/example-rp")
 ```
 {{% /code-tab-content %}}
 {{% code-tab-content %}}
 ```js
 // ...
-  |> map(fn: (r) => ({ r with _measurement: "example-measurement"}))
-  |> to(bucket: "example-db/example-rp")
+    |> map(fn: (r) => ({ r with _measurement: "example-measurement"}))
+    |> to(bucket: "example-db/example-rp")
 ```
 {{% /code-tab-content %}}
 {{< /code-tabs-wrapper >}}
@@ -181,12 +175,12 @@ END
 // ...

 from(bucket: "my-db/")
-  |> range(start: -task.every)
-  |> filter(fn: (r) => r._measurement == "example-measurement")
-  |> filter(fn: (r) => r._field == "example-field-1" or r._field == "example-field-2")
-  |> aggregateWindow(every: task.every, fn: mean)
-  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
-  |> experimental.to(bucket: "example-db/example-rp")
+    |> range(start: -task.every)
+    |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> filter(fn: (r) => r._field == "example-field-1" or r._field == "example-field-2")
+    |> aggregateWindow(every: task.every, fn: mean)
+    |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
+    |> experimental.to(bucket: "example-db/example-rp")
 ```

 #### FROM clause
@@ -204,7 +198,7 @@ FROM "example-measurement"
 ###### Flux
 ```js
 // ...
-  |> filter(fn: (r) => r._measurement == "example-measurement")
+    |> filter(fn: (r) => r._measurement == "example-measurement")
 ```

 #### AS clause
@@ -229,13 +223,13 @@ AS newfield
 {{% code-tab-content %}}
 ```js
 // ...
-  |> set(key: "_field", value: "newfield")
+    |> set(key: "_field", value: "newfield")
 ```
 {{% /code-tab-content %}}
 {{% code-tab-content %}}
 ```js
 // ...
-  |> map(fn: (r) => ({ r with _field: "newfield"}))
+    |> map(fn: (r) => ({ r with _field: "newfield"}))
 ```
 {{% /code-tab-content %}}
 {{< /code-tabs-wrapper >}}
@@ -256,8 +250,8 @@ WHERE "example-tag" = "foo" AND time > now() - 7d
 ###### Flux
 ```js
 // ...
-  |> range(start: -7d)
-  |> filter(fn: (r) => r["example-tag"] == "foo")
+    |> range(start: -7d)
+    |> filter(fn: (r) => r["example-tag"] == "foo")
 ```

 #### GROUP BY clause
@@ -277,7 +271,7 @@ GROUP BY "location"
 ###### Flux
 ```js
 // ...
-  |> group(columns: ["location"])
+    |> group(columns: ["location"])
 ```

 ##### Group by time
@@ -296,17 +290,11 @@ GROUP BY time(1h)
 ###### Flux
 ```js
-option task = {
-  name: "task-name",
-  every: 1h
-}
+option task = {name: "task-name", every: 1h}

 // ...
-  |> filter(fn: (r) =>
-    r._measurement == "example-measurement" and
-    r._field == "example-field"
-  )
-  |> aggregateWindow(every: task.every, fn: mean)
+    |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field")
+    |> aggregateWindow(every: task.every, fn: mean)
 ```

 #### RESAMPLE clause
@@ -335,22 +323,15 @@ END
 ###### Flux
 ```js
-option task = {
-  name: "resample-example",
-  every: 1m
-}
+option task = {name: "resample-example", every: 1m}

 from(bucket: "my-db/")
-  |> range(start: -30m)
-  |> filter(fn: (r) =>
-    r._measurement == "example-measurement" and
-    r._field == "example-field" and
-    r.region == "example-region"
-  )
-  |> aggregateWindow(every: 1m, fn: mean)
-  |> exponentialMovingAverage(n: 30)
-  |> set(key: "_measurement", value: "resample-average-example-measurement")
-  |> to(bucket: "my-db/")
+    |> range(start: -30m)
+    |> filter(fn: (r) => r._measurement == "example-measurement" and r._field == "example-field" and r.region == "example-region")
+    |> aggregateWindow(every: 1m, fn: mean)
+    |> exponentialMovingAverage(n: 30)
+    |> set(key: "_measurement", value: "resample-average-example-measurement")
+    |> to(bucket: "my-db/")
 ```

 ## Create new InfluxDB tasks
diff --git a/content/influxdb/cloud/upgrade/v2-to-cloud.md b/content/influxdb/cloud/upgrade/v2-to-cloud.md
index db40e713c..fa68c85b2 100644
--- a/content/influxdb/cloud/upgrade/v2-to-cloud.md
+++ b/content/influxdb/cloud/upgrade/v2-to-cloud.md
@@ -112,8 +112,7 @@ your **InfluxDB Cloud** instance.
 {{% note %}}
 #### InfluxDB Cloud Free Plan resource limits
-If upgrading to an [InfluxDB Cloud Free Plan](/influxdb/cloud/account-management/pricing-plans/#free-plan),
-you are only able to create a limited number of resources.
+If upgrading to an InfluxDB Cloud Free Plan, you are only able to create a [limited number of resources](/influxdb/cloud/account-management/limits/#free-plan-limits).
 If your exported template exceeds these limits, the resource migration will fail.
 {{% /note %}}
diff --git a/content/influxdb/cloud/visualize-data/variables/_index.md b/content/influxdb/cloud/visualize-data/variables/_index.md
index 9185ffcd8..1551a1214 100644
--- a/content/influxdb/cloud/visualize-data/variables/_index.md
+++ b/content/influxdb/cloud/visualize-data/variables/_index.md
@@ -23,9 +23,9 @@ Reference each variable using dot-notation (e.g. `v.variableName`).
 ```js
 from(bucket: v.bucket)
-  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
-  |> filter(fn: (r) => r._measurement == v.measurement and r._field == v.field)
-  |> aggregateWindow(every: v.windowPeriod, fn: mean)
+    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+    |> filter(fn: (r) => r._measurement == v.measurement and r._field == v.field)
+    |> aggregateWindow(every: v.windowPeriod, fn: mean)
 ```

 When building Flux queries for dashboard cells, view available dashboard variables
diff --git a/content/influxdb/cloud/visualize-data/variables/common-variables.md b/content/influxdb/cloud/visualize-data/variables/common-variables.md
index 2238491cf..69c36bbb7 100644
--- a/content/influxdb/cloud/visualize-data/variables/common-variables.md
+++ b/content/influxdb/cloud/visualize-data/variables/common-variables.md
@@ -19,8 +19,8 @@ _**Flux functions:**
 ```js
 buckets()
-  |> rename(columns: {"name": "_value"})
-  |> keep(columns: ["_value"])
+    |> rename(columns: {"name": "_value"})
+    |> keep(columns: ["_value"])
 ```

 ## List measurements
@@ -31,6 +31,7 @@ _**Flux package:** [InfluxDB v1](/influxdb/cloud/reference/flux/stdlib/influxdb-
 ```js
 import "influxdata/influxdb/v1"
+
 v1.measurements(bucket: "bucket-name")
 ```

@@ -42,10 +43,11 @@ _**Flux package:** [InfluxDB v1](/influxdb/cloud/reference/flux/stdlib/influxdb-
 ```js
 import "influxdata/influxdb/v1"
+
 v1.measurementTagValues(
-  bucket: "bucket-name",
-  measurement: "measurment-name",
-  tag: "_field"
+    bucket: "bucket-name",
+    measurement: "measurement-name",
+    tag: "_field",
 )
 ```

@@ -58,49 +60,53 @@ _**Flux functions:** [v1.tagValues()](/influxdb/cloud/reference/flux/stdlib/infl
 ```js
 import "influxdata/influxdb/v1"
+
 v1.tagValues(bucket: "bucket-name", tag: "host")
 ```

 ## List Docker containers
 List all Docker containers when using the Docker Telegraf plugin.
-_**Telegraf plugin:** [Docker](/{{< latest "telegraf" >}}/plugins/#docker)_
+_**Telegraf plugin:** [Docker](/{{< latest "telegraf" >}}/plugins/#input-docker)_
 _**Flux package:** [InfluxDB v1](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/)_
 _**Flux functions:** [v1.tagValues()](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/tagvalues/)_

 ```js
 import "influxdata/influxdb/v1"
+
 v1.tagValues(bucket: "bucket-name", tag: "container_name")
 ```

 ## List Kubernetes pods
 List all Kubernetes pods when using the Kubernetes Telegraf plugin.
-_**Telegraf plugin:** [Kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes)_
+_**Telegraf plugin:** [Kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes)_
 _**Flux package:** [InfluxDB v1](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/)_
 _**Flux functions:** [v1.measurementTagValues()](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/measurementtagvalues/)_

 ```js
 import "influxdata/influxdb/v1"
+
 v1.measurementTagValues(
-  bucket: "bucket-name",
-  measurement: "kubernetes_pod_container",
-  tag: "pod_name"
+    bucket: "bucket-name",
+    measurement: "kubernetes_pod_container",
+    tag: "pod_name",
 )
 ```

 ## List Kubernetes nodes
 List all Kubernetes nodes when using the Kubernetes Telegraf plugin.
-_**Telegraf plugin:** [Kubernetes](/{{< latest "telegraf" >}}/plugins/#kubernetes)_
+_**Telegraf plugin:** [Kubernetes](/{{< latest "telegraf" >}}/plugins/#input-kubernetes)_
 _**Flux package:** [InfluxDB v1](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/)_
 _**Flux functions:** [v1.measurementTagValues()](/influxdb/cloud/reference/flux/stdlib/influxdb-v1/measurementtagvalues/)_

 ```js
 import "influxdata/influxdb/v1"
+
 v1.measurementTagValues(
-  bucket: "bucket-name",
-  measurement: "kubernetes_node",
-  tag: "node_name"
+    bucket: "bucket-name",
+    measurement: "kubernetes_node",
+    tag: "node_name",
 )
 ```
diff --git a/content/influxdb/cloud/visualize-data/variables/variable-types.md b/content/influxdb/cloud/visualize-data/variables/variable-types.md
index 68b814b90..7c28b358b 100644
--- a/content/influxdb/cloud/visualize-data/variables/variable-types.md
+++ b/content/influxdb/cloud/visualize-data/variables/variable-types.md
@@ -39,8 +39,8 @@ Query variable values are populated using the `_value` column of a Flux query.
 ```js
 // List all buckets
 buckets()
-  |> rename(columns: {"name": "_value"})
-  |> keep(columns: ["_value"])
+    |> rename(columns: {"name": "_value"})
+    |> keep(columns: ["_value"])
 ```

 _For examples of dashboard variable queries, see [Common variable queries](/influxdb/cloud/visualize-data/variables/common-variables)._
diff --git a/content/influxdb/cloud/visualize-data/visualization-types/map.md b/content/influxdb/cloud/visualize-data/visualization-types/map.md
index 0ae9ba9a0..d7f64e293 100644
--- a/content/influxdb/cloud/visualize-data/visualization-types/map.md
+++ b/content/influxdb/cloud/visualize-data/visualization-types/map.md
@@ -84,11 +84,11 @@ to display the migration path of a specific bird.
 ```js
 from(bucket: "migration")
-  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
-  |> filter(fn: (r) => r._measurement == "migration")
-  |> filter(fn: (r) => r._field == "lat" or r._field == "lon")
-  |> filter(fn: (r) => r.id == "91864A")
-  |> aggregateWindow(every: v.windowPeriod, fn: last)
+    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+    |> filter(fn: (r) => r._measurement == "migration")
+    |> filter(fn: (r) => r._field == "lat" or r._field == "lon")
+    |> filter(fn: (r) => r.id == "91864A")
+    |> aggregateWindow(every: v.windowPeriod, fn: last)
 ```

 ### View earthquakes reported by USGS
@@ -96,8 +96,8 @@ The following query uses the [United States Geological Survey (USGS) earthquake
 ```js
 from(bucket: "usgs")
-  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
-  |> filter(fn: (r) => r._measurement == "earthquakes")
-  |> filter(fn: (r) => r._field == "lat" or r._field == "lon")
+    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+    |> filter(fn: (r) => r._measurement == "earthquakes")
+    |> filter(fn: (r) => r._field == "lat" or r._field == "lon")
 ```
diff --git a/content/influxdb/cloud/write-data/bulk-ingest-cloud.md b/content/influxdb/cloud/write-data/bulk-ingest-cloud.md
index d6287822c..ecc6586bc 100644
--- a/content/influxdb/cloud/write-data/bulk-ingest-cloud.md
+++ b/content/influxdb/cloud/write-data/bulk-ingest-cloud.md
@@ -9,7 +9,6 @@ menu:
   influxdb_cloud:
     name: Bulk ingest
     parent: Write data
-products: [cloud]
 alias:
   - /influxdb/v2.0/write-data/bulk-ingest-cloud
 ---
diff --git a/content/influxdb/cloud/write-data/delete-data.md b/content/influxdb/cloud/write-data/delete-data.md
index 39714aee9..d4ccba6c3 100644
--- a/content/influxdb/cloud/write-data/delete-data.md
+++ b/content/influxdb/cloud/write-data/delete-data.md
@@ -2,7 +2,8 @@
 title: Delete data
 list_title: Delete data
 description: >
-  Delete data in the InfluxDB CLI and API.
+  Use the `influx` CLI or the InfluxDB API `/api/v2/delete` endpoint to delete
+  data from an InfluxDB bucket.
 menu:
   influxdb_cloud:
     name: Delete data
diff --git a/content/influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics.md b/content/influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics.md
index c34ce2527..01f5c0ae9 100644
--- a/content/influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics.md
+++ b/content/influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics.md
@@ -10,7 +10,7 @@ menu:
     name: Scrape Prometheus metrics
     parent: Developer tools
 related:
-  - /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
+  - /{{< latest "telegraf" >}}/plugins/#input-prometheus, Telegraf Prometheus input plugin
   - /{{< latest "flux" >}}/prometheus/scrape-prometheus/
   - /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
   - /{{< latest "flux" >}}/prometheus/metric-types/
diff --git a/content/influxdb/cloud/write-data/no-code/use-telegraf/auto-config.md b/content/influxdb/cloud/write-data/no-code/use-telegraf/auto-config.md
index b96c25985..0b796ab3e 100644
--- a/content/influxdb/cloud/write-data/no-code/use-telegraf/auto-config.md
+++ b/content/influxdb/cloud/write-data/no-code/use-telegraf/auto-config.md
@@ -29,7 +29,7 @@ for using Telegraf with InfluxDB v2.0._
 ## Create a Telegraf configuration

-1. Open the InfluxDB Clou UI.
+1. Open the InfluxDB Cloud UI.
 2. In the navigation menu on the left, select **Data** (**Load Data**) > **Telegraf**.

    {{< nav-icon "load data" >}}
diff --git a/content/influxdb/cloud/write-data/replication.md b/content/influxdb/cloud/write-data/replication.md
new file mode 100644
index 000000000..371f8ff3f
--- /dev/null
+++ b/content/influxdb/cloud/write-data/replication.md
@@ -0,0 +1,17 @@
+---
+title: Replicate data from InfluxDB OSS to InfluxDB Cloud
+weight: 106
+description: >
+  Use InfluxDB replication streams to replicate all data written to an InfluxDB OSS
+  instance to InfluxDB Cloud.
+menu:
+  influxdb_cloud:
+    name: Replicate data
+    parent: Write data
+influxdb/cloud/tags: [write, replication]
+related:
+  - /influxdb/cloud/reference/cli/influx/remote
+  - /influxdb/cloud/reference/cli/influx/replication
+---
+
+{{< duplicate-oss >}}
diff --git a/content/influxdb/cloud/write-data/troubleshoot.md b/content/influxdb/cloud/write-data/troubleshoot.md
index baa60c5a0..2a8642254 100644
--- a/content/influxdb/cloud/write-data/troubleshoot.md
+++ b/content/influxdb/cloud/write-data/troubleshoot.md
@@ -14,6 +14,7 @@ related:
   - /influxdb/cloud/api/#tag/Write, InfluxDB API /write endpoint
   - /influxdb/cloud/reference/internals
   - /influxdb/cloud/reference/cli/influx/write
+  - /influxdb/cloud/account-management/limits
 ---

 Learn how to handle and recover from errors when writing to InfluxDB.
@@ -29,8 +30,8 @@ Write requests made to InfluxDB may fail for a number of reasons.
 Common failure scenarios that return an HTTP `4xx` or `5xx` error status code include the following:

 - API token was invalid. See how to [manage API tokens](/influxdb/cloud/security/tokens/).
-- Exceeded a rate limit.
-- Payload size was too large.
+- Requests exceeded [service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas).
+- Payload size exceeded [global limits](/influxdb/cloud/account-management/limits/#global-limits).
 - Client or server reached a timeout threshold.
 - Data was not formatted correctly. See how to [find parsing errors](#find-parsing-errors).
 - Data did not conform to the [explicit bucket schema](/influxdb/cloud/organizations/buckets/bucket-schema/).
@@ -46,28 +47,25 @@ To resolve partial writes and rejected points, see [troubleshoot failures](#trou
 ## Review HTTP status codes

 InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
+
+{{% warn %}}
+#### Asynchronous writes
+
+`204` indicates InfluxDB validated the request data format. Because data is written to InfluxDB asynchronously, data may not yet be written to a bucket. If some of your data didn't write to the bucket, see how to [check for rejected points](#review-rejected-points).
+{{% /warn %}}
+
 Write requests return the following status codes:

-- `204` **Success**: InfluxDB validated the request data format and accepted the data for writing to the bucket.
-  {{% note %}}
-  `204` doesn't indicate a successful write operation given writes are asynchronous. If some of your data did not write to the bucket, see how to [check for rejected points](#review-rejected-points).
-  {{% /note %}}
-- `400` **Bad request**: InfluxDB rejected some or all of the request data.
-  `code` and `message` in the response body provide details about the problem.
-  For more information, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
-- `401` **Unauthorized**: May indicate one of the following:
-  - [`Authorization: Token` header](/influxdb/cloud/api-guide/api_intro/#authentication) is missing or malformed.
-  - [API token](/influxdb/cloud/api-guide/api_intro/#authentication) value is missing from the header.
-  - API token does not have sufficient permissions to write to the organization and bucket.
-    For more information about token types and permissions, see [Manage API tokens](/influxdb/cloud/security/tokens/)
-- `404` **Not found**: A requested resource (e.g. an organization or bucket) was not found.
-  The response body contains the requested resource type, e.g. "organization", and resource name.
-- `413` **Request entity too large**: Occurs in the following cases:
-  - The write request payload exceeds the size limit **50 MB** (compressed or uncompressed).
-  - A compressed payload attempts to uncompress more than **250 MB** of data.
-- `429` **Too many requests**: API token is temporarily over quota. The `Retry-After` header describes when to try the write request again.
-- `500` **Internal server error**: Default HTTP status for an error.
-- `503` **Service unavailable**: Server is temporarily unavailable to accept writes. The `Retry-After` header describes when to try the write again.
+| HTTP response code | Message | Description |
+| :--- | :--- | :--- |
+| `204 "Success"` | | If InfluxDB validated the request data format and accepted the data for writing to the bucket |
+| `400 "Bad request"` | `message` contains the first malformed line | If data is malformed |
+| `401 "Unauthorized"` | | If the [`Authorization: Token` header](/influxdb/cloud/api-guide/api_intro/#authentication) is missing or malformed or if the [API token](/influxdb/cloud/api-guide/api_intro/#authentication) doesn't have [permission](/influxdb/cloud/security/tokens/) to write to the bucket |
+| `404 "Not found"` | requested **resource type**, e.g. "organization", and **resource name** | If a requested resource (e.g. organization or bucket) wasn't found |
+| `413 "Request too large"` | cannot read data: points in batch is too large | If a **write** request exceeds the maximum [global limit](/influxdb/cloud/account-management/limits/#global-limits) |
+| `429 "Too many requests"` | `Retry-After` header: xxx (seconds to wait before retrying the request) | If a **read** or **write** request exceeds your plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) or if a **delete** request exceeds the maximum [global limit](/influxdb/cloud/account-management/limits/#global-limits) |
+| `500 "Internal server error"` | | Default status for an error |
+| `503 "Service unavailable"` | Series cardinality exceeds your plan's service quota | If **series cardinality** exceeds your plan's [adjustable service quotas](/influxdb/cloud/account-management/limits/#adjustable-service-quotas) |

 The `message` property of the response body may contain additional details about the error.
@@ -125,9 +123,9 @@ To find parsing error details, query `rejected_points` entries that contain the
 ```js
 from(bucket: "_monitoring")
-  |> range(start: -1h)
-  |> filter(fn: (r) => r._measurement == "rejected_points")
-  |> filter(fn: (r) => r._field == "error")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "rejected_points")
+    |> filter(fn: (r) => r._field == "error")
 ```

 #### Find data type conflicts and schema rejections
@@ -137,9 +135,9 @@ query for the `count` field.
 ```js
 from(bucket: "_monitoring")
-  |> range(start: -1h)
-  |> filter(fn: (r) => r._measurement == "rejected_points")
-  |> filter(fn: (r) => r._field == "count")
+    |> range(start: -1h)
+    |> filter(fn: (r) => r._measurement == "rejected_points")
+    |> filter(fn: (r) => r._field == "count")
 ```

 ### Resolve data type conflicts
diff --git a/content/influxdb/v1.3/tools/collectd.md b/content/influxdb/v1.3/tools/collectd.md
index 437f7ff47..a351157d6 100644
--- a/content/influxdb/v1.3/tools/collectd.md
+++ b/content/influxdb/v1.3/tools/collectd.md
@@ -6,7 +6,8 @@ menu:
   influxdb_1_3:
     weight: 80
     parent: Tools
-    url: https://github.com/influxdata/influxdb/blob/master/services/collectd/README.md
+    params:
+      url: https://github.com/influxdata/influxdb/blob/master/services/collectd/README.md
 ---

 See the [README](https://github.com/influxdata/influxdb/blob/master/services/collectd/README.md) on GitHub.
diff --git a/content/influxdb/v1.3/tools/graphite.md b/content/influxdb/v1.3/tools/graphite.md
index 15a8cd2ee..364889f1a 100644
--- a/content/influxdb/v1.3/tools/graphite.md
+++ b/content/influxdb/v1.3/tools/graphite.md
@@ -6,7 +6,8 @@ menu:
   influxdb_1_3:
     weight: 70
     parent: Tools
-    url: https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md
+    params:
+      url: https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md
 ---

 See the [README](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md) on GitHub.
diff --git a/content/influxdb/v1.3/tools/opentsdb.md b/content/influxdb/v1.3/tools/opentsdb.md
index aa716d836..e8932225d 100644
--- a/content/influxdb/v1.3/tools/opentsdb.md
+++ b/content/influxdb/v1.3/tools/opentsdb.md
@@ -6,7 +6,8 @@ menu:
   influxdb_1_3:
     weight: 90
     parent: Tools
-    url: https://github.com/influxdb/influxdb/blob/1.3/services/opentsdb/README.md
+    params:
+      url: https://github.com/influxdb/influxdb/blob/1.3/services/opentsdb/README.md
 ---

 See the [README](https://github.com/influxdata/influxdb/blob/master/services/opentsdb/README.md) on GitHub.
diff --git a/content/influxdb/v1.3/tools/udp.md b/content/influxdb/v1.3/tools/udp.md
index e4ccd4c80..44e4a4d82 100644
--- a/content/influxdb/v1.3/tools/udp.md
+++ b/content/influxdb/v1.3/tools/udp.md
@@ -6,7 +6,8 @@ menu:
   influxdb_1_3:
     weight: 60
     parent: Tools
-    url: https://github.com/influxdata/influxdb/blob/master/services/udp/README.md
+    params:
+      url: https://github.com/influxdata/influxdb/blob/master/services/udp/README.md
 ---

 See the [README](https://github.com/influxdata/influxdb/blob/master/services/udp/README.md) on GitHub.
diff --git a/content/influxdb/v1.6/troubleshooting/frequently-asked-questions.md b/content/influxdb/v1.6/troubleshooting/frequently-asked-questions.md
index 6d91d6bd8..4108575a2 100644
--- a/content/influxdb/v1.6/troubleshooting/frequently-asked-questions.md
+++ b/content/influxdb/v1.6/troubleshooting/frequently-asked-questions.md
@@ -15,66 +15,66 @@ Where applicable, it links to outstanding issues on GitHub.
 **Administration**

-* [How do I include a single quote in a password?](#how-do-i-include-a-single-quote-in-a-password)
-* [How can I identify my version of InfluxDB?](#how-can-i-identify-my-version-of-influxdb)
-* [Where can I find InfluxDB logs?](#where-can-i-find-influxdb-logs)
-* [What is the relationship between shard group durations and retention policies?](#what-is-the-relationship-between-shard-group-durations-and-retention-policies)
-* [Why aren't data dropped after I've altered a retention policy?](#why-aren-t-data-dropped-after-i-ve-altered-a-retention-policy)
-* [Why does InfluxDB fail to parse microsecond units in the configuration file?](#why-does-influxdb-fail-to-parse-microsecond-units-in-the-configuration-file)
+- [How do I include a single quote in a password?](#how-do-i-include-a-single-quote-in-a-password)
+- [How can I identify my version of InfluxDB?](#how-can-i-identify-my-version-of-influxdb)
+- [Where can I find InfluxDB logs?](#where-can-i-find-influxdb-logs)
+- [What is the relationship between shard group durations and retention policies?](#what-is-the-relationship-between-shard-group-durations-and-retention-policies)
+- [Why aren't data dropped after I've altered a retention policy?](#why-aren-t-data-dropped-after-i-ve-altered-a-retention-policy)
+- [Why does InfluxDB fail to parse microsecond units in the configuration file?](#why-does-influxdb-fail-to-parse-microsecond-units-in-the-configuration-file)

 **Command Line Interface (CLI)**

-* [How do I make InfluxDB’s CLI return human readable timestamps?](#how-do-i-make-influxdb-s-cli-return-human-readable-timestamps)
-* [How can a non-admin user `USE` a database in InfluxDB's CLI?](#how-can-a-non-admin-user-use-a-database-in-influxdb-s-cli)
-* [How do I write to a non-`DEFAULT` retention policy with InfluxDB's CLI?](#how-do-i-write-to-a-non-default-retention-policy-with-influxdb-s-cli)
-* [How do I cancel a long-running query?](#how-do-i-cancel-a-long-running-query)
+- [How do I make InfluxDB’s CLI return human readable timestamps?](#how-do-i-make-influxdb-s-cli-return-human-readable-timestamps)
+- [How can a non-admin user `USE` a database in InfluxDB's CLI?](#how-can-a-non-admin-user-use-a-database-in-influxdb-s-cli)
+- [How do I write to a non-`DEFAULT` retention policy with InfluxDB's CLI?](#how-do-i-write-to-a-non-default-retention-policy-with-influxdb-s-cli)
+- [How do I cancel a long-running query?](#how-do-i-cancel-a-long-running-query)

 **Data Types**

-* [Why can't I query Boolean field values?](#why-can-t-i-query-boolean-field-values)
-* [How does InfluxDB handle field type discrepancies across shards?](#how-does-influxdb-handle-field-type-discrepancies-across-shards)
-* [What are the minimum and maximum integers that InfluxDB can store?](#what-are-the-minimum-and-maximum-integers-that-influxdb-can-store)
-* [What are the minimum and maximum timestamps that InfluxDB can store?](#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store)
-* [How can I tell what type of data is stored in a field?](#how-can-i-tell-what-type-of-data-is-stored-in-a-field)
-* [Can I change a field's data type?](#can-i-change-a-field-s-data-type)
+- [Why can't I query Boolean field values?](#why-can-t-i-query-boolean-field-values)
+- [How does InfluxDB handle field type discrepancies across shards?](#how-does-influxdb-handle-field-type-discrepancies-across-shards)
+- [What are the minimum and maximum integers that InfluxDB can store?](#what-are-the-minimum-and-maximum-integers-that-influxdb-can-store)
+- [What are the minimum and maximum timestamps that InfluxDB can store?](#what-are-the-minimum-and-maximum-timestamps-that-influxdb-can-store)
+- [How can I tell what type of data is stored in a field?](#how-can-i-tell-what-type-of-data-is-stored-in-a-field)
+- [Can I change a field's data type?](#can-i-change-a-field-s-data-type)

 **InfluxQL Functions**

-* [How do I perform mathematical operations within a function?](#how-do-i-perform-mathematical-operations-within-a-function)
-* [Why does my query return epoch 0 as the timestamp?](#why-does-my-query-return-epoch-0-as-the-timestamp)
-* [Which InfluxQL functions support nesting?](#which-influxql-functions-support-nesting)
+- [How do I perform mathematical operations within a function?](#how-do-i-perform-mathematical-operations-within-a-function)
+- [Why does my query return epoch 0 as the timestamp?](#why-does-my-query-return-epoch-0-as-the-timestamp)
+- [Which InfluxQL functions support nesting?](#which-influxql-functions-support-nesting)

 **Querying Data**

-* [What determines the time intervals returned by `GROUP BY time()` queries?](#what-determines-the-time-intervals-returned-by-group-by-time-queries)
-* [Why do my queries return no data or partial data?](#why-do-my-queries-return-no-data-or-partial-data)
-* [Why don't my `GROUP BY time()` queries return timestamps that occur after `now()`?](#why-don-t-my-group-by-time-queries-return-timestamps-that-occur-after-now)
-* [Can I perform mathematical operations against timestamps?](#can-i-perform-mathematical-operations-against-timestamps)
-* [Can I identify write precision from returned timestamps?](#can-i-identify-write-precision-from-returned-timestamps)
-* [When should I single quote and when should I double quote in queries?](#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)
-* [Why am I missing data after creating a new `DEFAULT` retention policy?](#why-am-i-missing-data-after-creating-a-new-default-retention-policy)
-* [Why is my query with a `WHERE OR` time clause returning empty results?](#why-is-my-query-with-a-where-or-time-clause-returning-empty-results)
-* [Why does `fill(previous)` return empty results?](#why-does-fill-previous-return-empty-results)
-* [Why are my `INTO` queries missing data?](#why-are-my-into-queries-missing-data)
-* [How do I query data with an identical tag key and field key?](#how-do-i-query-data-with-an-identical-tag-key-and-field-key)
-* [How do I query data across measurements?](#how-do-i-query-data-across-measurements)
-* [Does the order of the timestamps matter?](#does-the-order-of-the-timestamps-matter)
-* [How do I `SELECT` data with a tag that has no value?](#how-do-i-select-data-with-a-tag-that-has-no-value)
+- [What determines the time intervals returned by `GROUP BY time()` queries?](#what-determines-the-time-intervals-returned-by-group-by-time-queries)
+- [Why do my queries return no data or partial data?](#why-do-my-queries-return-no-data-or-partial-data)
+- [Why don't my `GROUP BY time()` queries return timestamps that occur after `now()`?](#why-don-t-my-group-by-time-queries-return-timestamps-that-occur-after-now)
+- [Can I perform mathematical operations against timestamps?](#can-i-perform-mathematical-operations-against-timestamps)
+- [Can I identify write precision from returned timestamps?](#can-i-identify-write-precision-from-returned-timestamps)
+- [When should I single quote and when should I double quote in queries?](#when-should-i-single-quote-and-when-should-i-double-quote-in-queries)
+- [Why am I missing data after creating a new `DEFAULT` retention policy?](#why-am-i-missing-data-after-creating-a-new-default-retention-policy)
+- [Why is my query with a `WHERE OR` time clause returning empty results?](#why-is-my-query-with-a-where-or-time-clause-returning-empty-results)
+- [Why does `fill(previous)` return empty results?](#why-does-fill-previous-return-empty-results)
+- [Why are my `INTO` queries missing data?](#why-are-my-into-queries-missing-data)
+- [How do I query data with an identical tag key and field key?](#how-do-i-query-data-with-an-identical-tag-key-and-field-key)
+- [How do I query data across measurements?](#how-do-i-query-data-across-measurements)
+- [Does the order of the timestamps matter?](#does-the-order-of-the-timestamps-matter)
+- [How do I `SELECT` data with a tag that has no value?](#how-do-i-select-data-with-a-tag-that-has-no-value)

 **Series and Series Cardinality**

-* [Why does series cardinality matter?](#why-does-series-cardinality-matter)
-* [How can I remove series from the index?](#how-can-i-remove-series-from-the-index)
+- [Why does series cardinality matter?](#why-does-series-cardinality-matter)
+- [How can I remove series from the index?](#how-can-i-remove-series-from-the-index)

 **Writing Data**

-* [How do I write integer field values?](#how-do-i-write-integer-field-values)
-* [How does InfluxDB handle duplicate points?](#how-does-influxdb-handle-duplicate-points)
-* [What newline character does the HTTP API require?](#what-newline-character-does-the-http-api-require)
-* [What words and characters should I avoid when writing data to InfluxDB?](#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb)
-* [When should I single quote and when should I double quote when writing data?](#when-should-i-single-quote-and-when-should-i-double-quote-when-writing-data)
-* [Does the precision of the timestamp matter?](#does-the-precision-of-the-timestamp-matter)
-* [What are the configuration recommendations and schema guidelines for writing sparse, historical data?](#what-are-the-configuration-recommendations-and-schema-guidelines-for-writing-sparse-historical-data)
+- [How do I write integer field values?](#how-do-i-write-integer-field-values)
+- [How does InfluxDB handle duplicate points?](#how-does-influxdb-handle-duplicate-points)
+- [What newline character does the HTTP API require?](#what-newline-character-does-the-http-api-require)
+- [What words and characters should I avoid when writing data to InfluxDB?](#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb)
+- [When should I single quote and when should I double quote when writing data?](#when-should-i-single-quote-and-when-should-i-double-quote-when-writing-data)
+- [Does the precision of the timestamp matter?](#does-the-precision-of-the-timestamp-matter)
+- [What are the configuration recommendations and schema guidelines for writing sparse, historical data?](#what-are-the-configuration-recommendations-and-schema-guidelines-for-writing-sparse-historical-data)

 ## How do I include a single quote in a password?
 Escape the single quote with a backslash (`\`) both when creating the password
@@ -606,9 +606,9 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
 In the following example, the first query covers data with timestamps between
 `2015-09-18T21:30:00Z` and `now()`. The second query covers data with timestamps
 between `2015-09-18T21:30:00Z` and 180 weeks from `now()`.

-```
-> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
+```sql
+> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
 > SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
 ```
@@ -617,6 +617,7 @@ Note that the `WHERE` clause must provide an alternative **upper** bound to
 override the default `now()` upper bound. The following query merely resets
 the lower bound to `now()` such that the query's time range is between
 `now()` and `now()`:
+
 ```
 > SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
 >
@@ -651,7 +652,9 @@ time value precision_supplied timestamp_supplied
 1970-01-01T02:00:00Z 6 h 2
 ```

-{{% warn %}} [GitHub Issue #2977](https://github.com/influxdb/influxdb/issues/2977) {{% /warn %}}
+{{% warn %}}
+[GitHub Issue #2977](https://github.com/influxdb/influxdb/issues/2977)
+{{% /warn %}}

 ## When should I single quote and when should I double quote in queries?
diff --git a/content/influxdb/v1.7/flux/get-started/_index.md b/content/influxdb/v1.7/flux/get-started/_index.md
index c5b40bfec..ff86bb7bd 100644
--- a/content/influxdb/v1.7/flux/get-started/_index.md
+++ b/content/influxdb/v1.7/flux/get-started/_index.md
@@ -113,10 +113,11 @@ The Functions pane provides a list of functions available in your Flux queries.
 ### 2. influx CLI

 The `influx` CLI is an interactive shell for querying InfluxDB.
-With InfluxDB v1.7+, use the `-type=flux` option to open a Flux REPL where you write and run Flux queries.
+With InfluxDB v1.7+, use the `-type=flux` and `-path-prefix=/api/v2/query` options
+to open a Flux REPL where you write and run Flux queries.

 ```bash
-influx -type=flux
+influx -type=flux -path-prefix=/api/v2/query
 ```