openapi: 3.0.0 info: title: InfluxDB Clustered API Service version: '' description: | The InfluxDB v2 HTTP API lets you use `/api/v2` endpoints for managing retention policy mappings and writing data stored in an InfluxDB v3 instance. This documentation is generated from the [InfluxDB OpenAPI specification](https://raw.githubusercontent.com/influxdata/openapi/master/contracts/ref/cloud.yml). license: name: MIT url: https://opensource.org/licenses/MIT summary: The InfluxDB v2 HTTP API for InfluxDB Clustered provides a v2-compatible programmatic interface for writing data stored in an InfluxDB Clustered database. servers: - url: https://{baseurl} description: InfluxDB Clustered API URL variables: baseurl: enum: - cluster-host.com default: cluster-host.com description: InfluxDB Clustered URL security: - BearerAuthentication: [] - TokenAuthentication: [] - BasicAuthentication: [] - QuerystringAuthentication: [] tags: - description: | ### Write data InfluxDB Clustered provides the following HTTP API endpoints for writing data: - **Recommended**: [`/api/v2/write` endpoint](/influxdb/clustered/api/#operation/PostWrite) for new write workloads or for bringing existing InfluxDB v2 write workloads to v3. - [`/write` endpoint](/influxdb/clustered/api/#operation/PostLegacyWrite) for bringing existing InfluxDB v1 write workloads to v3. Both endpoints accept the same line protocol format and process data in the same way. ### Query data InfluxDB Clustered provides the following protocols for executing a query: - **Recommended**: _Flight+gRPC_ request that contains an SQL or InfluxQL query. See how to [get started querying InfluxDB using Flight and SQL](/influxdb/clustered/get-started/query/). - HTTP API [`/query` request](/influxdb/clustered/api/#operation/GetLegacyQuery) that contains an InfluxQL query. Use this protocol when bringing existing InfluxDB v1 query workloads to v3. ### InfluxDB v2 compatibility The HTTP API [`/api/v2/write` endpoint](/influxdb/clustered/api/#operation/PostWrite) works with the [`Bearer`](#section/Authentication/BearerAuthentication) and [`Token`](#section/Authentication/TokenAuthentication) authentication schemes and existing InfluxDB 2.x tools and code for [writing data](/influxdb/clustered/write-data/). See how to [use the InfluxDB v2 HTTP API with InfluxDB Clustered ](/influxdb/clustered/guides/api-compatibility/v2/). ### InfluxDB v1 compatibility The HTTP API [`/write` endpoint](/influxdb/clustered/api/#operation/PostLegacyWrite) and [`/query` endpoint](/influxdb/clustered/api/#operation/GetLegacyQuery) work with InfluxDB 1.x username/password [authentication schemes](#section/Authentication/) and existing InfluxDB 1.x tools and code. See how to [use the InfluxDB v1 HTTP API with InfluxDB Clustered ](/influxdb/clustered/guides/api-compatibility/v1/). name: API compatibility x-traitTag: true - description: | Use one of the following schemes to authenticate to the InfluxDB API: - [Bearer authentication](#section/Authentication/BearerAuthentication) - [Token authentication](#section/Authentication/TokenAuthentication) - [Basic authentication](#section/Authentication/BasicAuthentication) - [Querystring authentication](#section/Authentication/QuerystringAuthentication) name: Authentication x-traitTag: true - description: | To specify resources, some InfluxDB API endpoints require parameters or properties in the request--for example, writing to a `database` resource. 
### Common parameters | Query parameter | Value type | Description | |:------------------------ |:--------------------- |:-------------------------------------------| | `database`, `db` | string | The database name | name: Common parameters x-traitTag: true - name: Data I/O endpoints description: | Write and query data stored in InfluxDB. - description: | InfluxDB HTTP API endpoints use standard HTTP request and response headers. The following table shows common headers used by many InfluxDB API endpoints. Some endpoints may use other headers that perform functions more specific to those endpoints--for example, the `POST /api/v2/write` endpoint accepts the `Content-Encoding` header to indicate the compression applied to line protocol in the request body. | Header | Value type | Description | |:------------------------ |:--------------------- |:-------------------------------------------| | `Accept` | string | The content type that the client can understand. | | `Authorization` | string | The authorization scheme and credential. | | `Content-Length` | integer | The size of the entity-body, in bytes, sent to the database. | | `Content-Type` | string | The format of the data in the request body. | name: Headers x-traitTag: true - name: Ping - description: | Query data stored in a database. - HTTP clients can query the v1 [`/query` endpoint](/influxdb/clustered/api/#operation/GetLegacyQuery) using **InfluxQL** and retrieve data in **CSV** or **JSON** format. - The `/api/v2/query` endpoint can't query InfluxDB Clustered. - _Flight + gRPC_ clients can query using **SQL** or **InfluxQL** and retrieve data in **Arrow** format. #### Related guides - [Get started querying InfluxDB](/influxdb/clustered/get-started/query/) - [Execute queries](/influxdb/clustered/query-data/execute-queries/) name: Query - description: | See the [**Get Started**](/influxdb/clustered/get-started/) tutorial to get up and running authenticating with tokens, writing to databases, and querying data. [**InfluxDB API client libraries and Flight clients**](/influxdb/clustered/reference/client-libraries/) are available to integrate InfluxDB APIs with your application. name: Quick start x-traitTag: true - description: | InfluxDB HTTP API endpoints use standard HTTP status codes for success and failure responses. The response body may include additional details. For details about a specific operation's response, see **Responses** and **Response Samples** for that operation. API operations may return the following HTTP status codes: |  Code  | Status | Description | |:-----------:|:------------------------ |:--------------------- | | `200` | Success | | | `204` | Success. No content | InfluxDB doesn't return data for the request. For example, a successful write request returns `204` status code, acknowledging that data is written and queryable. | | `400` | Bad request | InfluxDB can't parse the request due to an incorrect parameter or bad syntax. If line protocol in the request body is malformed. The response body contains the first malformed line and indicates what was expected. For partial writes, the number of points written and the number of points rejected are also included. | | `401` | Unauthorized | May indicate one of the following: | | `404` | Not found | Requested resource was not found. `message` in the response body provides details about the requested resource. 
| | `405` | Method not allowed | The API path doesn't support the HTTP method used in the request--for example, you send a `POST` request to an endpoint that only allows `GET`. | | `413` | Request entity too large | Request payload exceeds the size limit. | | `422` | Unprocessable entity | Request data is invalid. `code` and `message` in the response body provide details about the problem. | | `429` | Too many requests | API token is temporarily over the request quota. The `Retry-After` header describes when to try the request again. | | `500` | Internal server error | | | `503` | Service unavailable | Server is temporarily unavailable to process the request. The `Retry-After` header describes when to try the request again. | name: Response codes x-traitTag: true - name: System information endpoints - name: Usage - description: | Write time series data to [databases](/influxdb/clustered/admin/databases/) using InfluxDB v1 or v2 endpoints. name: Write paths: /ping: get: description: | Retrieves the status and InfluxDB version of the instance. Use this endpoint to monitor uptime for the InfluxDB instance. The response returns a HTTP `204` status code to inform you the instance is available. This endpoint doesn't require authentication. operationId: GetPing responses: '204': description: | Success. Headers contain InfluxDB version information. headers: X-Influxdb-Build: description: | The type of InfluxDB build. schema: type: string X-Influxdb-Version: description: | The version of InfluxDB. schema: type: integer 4xx: description: | #### InfluxDB Cloud - Doesn't return this error. security: - {} servers: [] summary: Get the status of the instance tags: - Ping head: description: | Returns the status and InfluxDB version of the instance. Use this endpoint to monitor uptime for the InfluxDB instance. The response returns a HTTP `204` status code to inform you the instance is available. This endpoint doesn't require authentication. operationId: HeadPing responses: '204': description: | Success. Headers contain InfluxDB version information. headers: X-Influxdb-Build: description: The type of InfluxDB build. schema: type: string X-Influxdb-Version: description: | The version of InfluxDB. schema: type: integer 4xx: description: | #### InfluxDB Cloud - Doesn't return this error. security: - {} servers: [] summary: Get the status of the instance tags: - Ping /api/v2/write: post: description: | Writes data to a database. Use this endpoint to send data in [line protocol](/influxdb/clustered/reference/syntax/line-protocol/) format to InfluxDB. InfluxDB does the following when you send a write request: 1. Validates the request 2. If successful, attempts to [ingest the data](/influxdb/clustered/reference/internals/durability/#data-ingest); _error_ otherwise. 3. If successful, responds with _success_ (HTTP `204` status code), acknowledging that the data is written and queryable; _error_ otherwise. To ensure that InfluxDB Cloud handles writes in the order you request them, wait for a success response (HTTP `2xx` status code) before you send the next request. 
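For example, a minimal write request might look like the following sketch, which assumes a database named `DATABASE_NAME` and a [database token](/influxdb/clustered/admin/tokens/#database-tokens) with write permission to it--replace the placeholder host, database, and token with your own values:

```sh
# Sketch: write one line protocol point with the v2 write endpoint.
# Placeholders (replace with your own values): cluster-host.com,
# DATABASE_NAME, DATABASE_TOKEN.
curl --request POST "https://cluster-host.com/api/v2/write?bucket=DATABASE_NAME&precision=s" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --header "Accept: application/json" \
  --data-binary 'airSensors,sensor_id=TLM0201 temperature=73.97 1630424257'
```

A successful request returns an HTTP `204` status code with no response body.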
#### Related guides - [Get started writing data](/influxdb/clustered/get-started/write/) - [Write data](/influxdb/clustered/write-data/) - [Best practices for writing data](/influxdb/clustered/write-data/best-practices/) - [Troubleshoot issues writing data](/influxdb/clustered/write-data/troubleshoot/) operationId: PostWrite parameters: - $ref: '#/components/parameters/TraceSpan' - description: | The compression applied to the line protocol in the request payload. To send a gzip payload, pass `Content-Encoding: gzip` header. in: header name: Content-Encoding schema: default: identity description: | Content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. enum: - gzip - identity type: string - description: | The format of the data in the request body. To send a line protocol payload, pass `Content-Type: text/plain; charset=utf-8`. in: header name: Content-Type schema: default: text/plain; charset=utf-8 description: | `text/plain` is the content type for line protocol. `UTF-8` is the default character set. enum: - text/plain - text/plain; charset=utf-8 type: string - description: | The size of the entity-body, in bytes, sent to InfluxDB. If the length is greater than the `max body` configuration option, the server responds with status code `413`. in: header name: Content-Length schema: description: The length in decimal number of octets. type: integer - description: | The content type that the client can understand. Writes only return a response body if they fail--for example, due to a formatting problem or quota limit. - Returns only `application/json` for format and limit errors. - Returns only `text/html` for some quota limit errors. #### Related guides - [Troubleshoot issues writing data](/influxdb/clustered/write-data/troubleshoot/) in: header name: Accept schema: default: application/json description: Error content type. enum: - application/json type: string - description: | Ignored. An organization name or ID. InfluxDB ignores this parameter; authorizes the request using the specified database token and writes data to the specified cluster database. in: query name: org required: true schema: description: The organization name or ID. type: string - description: | Ignored. An organization ID. InfluxDB ignores this parameter; authorizes the request using the specified database token and writes data to the specified cluster database. in: query name: orgID schema: type: string - description: | A database name or ID. InfluxDB writes all points in the batch to the specified database. in: query name: bucket required: true schema: description: The database name or ID. type: string - description: The precision for unix timestamps in the line protocol batch. in: query name: precision schema: $ref: '#/components/schemas/WritePrecision' requestBody: content: text/plain: examples: plain-utf8: value: | airSensors,sensor_id=TLM0201 temperature=73.97038159354763,humidity=35.23103248356096,co=0.48445310567793615 1630424257000000000 airSensors,sensor_id=TLM0202 temperature=75.30007505999716,humidity=35.651929918691714,co=0.5141876544505826 1630424257000000000 schema: format: byte type: string description: | In the request body, provide data in [line protocol format](/influxdb/clustered/reference/syntax/line-protocol/). To send compressed data, do the following: 1. Use [gzip](https://www.gzip.org/) to compress the line protocol data. 2. In your request, send the compressed data and the `Content-Encoding: gzip` header. 
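For example, the following sketch compresses a hypothetical `air-sensors.lp` line protocol file and sends it with the `Content-Encoding: gzip` header (host, database, and token are placeholders):

```sh
# Sketch: gzip line protocol, then send the compressed payload.
# Placeholders: cluster-host.com, DATABASE_NAME, DATABASE_TOKEN, air-sensors.lp.
gzip -c air-sensors.lp > air-sensors.lp.gz

curl --request POST "https://cluster-host.com/api/v2/write?bucket=DATABASE_NAME&precision=ns" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --header "Content-Encoding: gzip" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --header "Accept: application/json" \
  --data-binary @air-sensors.lp.gz
```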
#### Related guides - [Best practices for optimizing writes](/influxdb/clustered/write-data/best-practices/optimize-writes/) required: true responses: '204': description: | Success. Data is written and queryable. '400': content: application/json: examples: measurementSchemaFieldTypeConflict: summary: field type conflict thrown by an explicit database schema value: code: invalid message: 'failed to parse line protocol: error writing line 2: Unable to insert iox::column_type::field::integer type into column temp with type iox::column_type::field::string' schema: $ref: '#/components/schemas/LineProtocolError' description: | Bad request. The response body contains detail about the error. InfluxDB returns this error if the line protocol data in the request is malformed or contains a database schema conflict. The response body contains the first malformed line in the data, and indicates what was expected. '401': $ref: '#/components/responses/AuthorizationError' '404': $ref: '#/components/responses/ResourceNotFoundError' '413': content: application/json: examples: dataExceedsSizeLimitOSS: summary: InfluxDB OSS response value: | {"code":"request too large","message":"unable to read data: points batch is too large"} schema: $ref: '#/components/schemas/LineProtocolLengthError' text/html: examples: dataExceedsSizeLimit: summary: InfluxDB Cloud response value: | 413 Request Entity Too Large

nginx
schema: type: string description: | The request payload is too large. InfluxDB rejected the batch and did not write any data. InfluxDB returns this error if the payload exceeds the size limit. '429': description: | Too many requests. #### InfluxDB Cloud - Returns this error if a **read** or **write** request exceeds your plan's [adjustable service quotas](/influxdb/clustered/account-management/limits/#adjustable-service-quotas) or if a **delete** request exceeds the maximum [global limit](/influxdb/clustered/account-management/limits/#global-limits). - For rate limits that reset automatically, returns a `Retry-After` header that describes when to try the write again. - For limits that can't reset (for example, **cardinality limit**), doesn't return a `Retry-After` header. Rates (data-in (writes), queries (reads), and deletes) accrue within a fixed five-minute window. Once a rate limit is exceeded, InfluxDB returns an error response until the current five-minute window resets. headers: Retry-After: description: Non-negative decimal integer indicating seconds to wait before retrying the request. schema: format: int32 type: integer '500': $ref: '#/components/responses/InternalServerError' '503': description: | Service unavailable. - Returns this error if the server is temporarily unavailable to accept writes. - Returns a `Retry-After` header that describes when to try the write again. headers: Retry-After: description: Non-negative decimal integer indicating seconds to wait before retrying the request. schema: format: int32 type: integer default: $ref: '#/components/responses/GeneralServerError' summary: Write data tags: - Data I/O endpoints - Write /query: get: description: Queries InfluxDB using InfluxQL with InfluxDB v1 request and response formats. operationId: GetLegacyQuery parameters: - $ref: '#/components/parameters/TraceSpan' - in: header name: Accept schema: default: application/json description: | Media type that the client can understand. **Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/clustered/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/clustered/reference/glossary/#rfc3339-timestamp). enum: - application/json - application/csv - text/csv - application/x-msgpack type: string - description: The content encoding (usually a compression algorithm) that the client can understand. in: header name: Accept-Encoding schema: default: identity description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. enum: - gzip - identity type: string - in: header name: Content-Type schema: enum: - application/json type: string - description: The InfluxDB 1.x username to authenticate the request. in: query name: u schema: type: string - description: The InfluxDB 1.x password to authenticate the request. in: query name: p schema: type: string - description: | The [database](/influxdb/clustered/admin/databases/) to query data from. in: query name: db required: true schema: type: string - description: | The retention policy to query data from. For more information, see [InfluxQL DBRP naming convention](/influxdb/clustered/admin/databases/create/#influxql-dbrp-naming-convention). in: query name: rp schema: type: string - description: The InfluxQL query to execute. To execute multiple queries, delimit queries with a semicolon (`;`). in: query name: q required: true schema: type: string - description: | A unix timestamp precision. 
Formats timestamps as [unix (epoch) timestamps](/influxdb/clustered/reference/glossary/#unix-timestamp) the specified precision instead of [RFC3339 timestamps](/influxdb/clustered/reference/glossary/#rfc3339-timestamp) with nanosecond precision. in: query name: epoch schema: enum: - ns - u - ยต - ms - s - m - h type: string responses: '200': content: application/csv: schema: $ref: '#/components/schemas/InfluxqlCsvResponse' application/json: schema: $ref: '#/components/schemas/InfluxqlJsonResponse' examples: influxql-chunk_size_2: value: | {"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]} {"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]} application/x-msgpack: schema: format: binary type: string text/csv: schema: $ref: '#/components/schemas/InfluxqlCsvResponse' description: Query results headers: Content-Encoding: description: Lists encodings (usually compression algorithms) that have been applied to the response payload. schema: default: identity description: | The content coding: - `gzip`: compressed data - `identity`: unmodified, uncompressed data. enum: - gzip - identity type: string Trace-Id: description: The trace ID, if generated, of the request. schema: description: Trace ID of a request. type: string '429': description: | #### InfluxDB Cloud: - returns this error if a **read** or **write** request exceeds your plan's [adjustable service quotas](/influxdb/clustered/account-management/limits/#adjustable-service-quotas) or if a **delete** request exceeds the maximum [global limit](/influxdb/clustered/account-management/limits/#global-limits) - returns `Retry-After` header that describes when to try the write again. headers: Retry-After: description: A non-negative decimal integer indicating the seconds to delay after the response is received. schema: format: int32 type: integer default: content: application/json: schema: $ref: '#/components/schemas/Error' description: Error processing query summary: Query using the InfluxDB v1 HTTP API tags: - Query /write: post: operationId: PostLegacyWrite parameters: - $ref: '#/components/parameters/TraceSpan' - description: The InfluxDB 1.x username to authenticate the request. in: query name: u schema: type: string - description: The InfluxDB 1.x password to authenticate the request. in: query name: p schema: type: string - description: database to write to. If none exists, InfluxDB creates a database with a default 3-day retention policy. in: query name: db required: true schema: type: string - description: Retention policy name. in: query name: rp schema: type: string - description: Write precision. in: query name: precision schema: type: string - description: When present, its value indicates to the database that compression is applied to the line protocol body. in: header name: Content-Encoding schema: default: identity description: Specifies that the line protocol in the body is encoded with gzip or not encoded with identity. enum: - gzip - identity type: string requestBody: content: text/plain: schema: type: string description: Line protocol body required: true responses: '204': description: Write data is correctly formatted and accepted for writing to the database. '400': description: | Data from the batch was rejected and not written. 
The response body indicates if a partial write occurred or all data was rejected. If a partial write occurred, then some points from the batch are written and queryable. The response body contains details about the [rejected points](/influxdb/clustered/write-data/troubleshoot/#troubleshoot-rejected-points), up to 100 points. content: application/json: examples: rejectedAllPoints: summary: Rejected all points value: code: invalid line: 2 message: 'no data written, errors encountered on line(s): error message for first rejected point error message for second rejected point error message for Nth rejected point (up to 100 rejected points)' partialWriteErrorWithRejectedPoints: summary: Partial write rejects some points value: code: invalid line: 2 message: 'partial write has occurred, errors encountered on line(s): error message for first rejected point error message for second rejected point error message for Nth rejected point (up to 100 rejected points)' schema: $ref: '#/components/schemas/LineProtocolError' '401': content: application/json: schema: $ref: '#/components/schemas/Error' description: Token doesn't have sufficient permissions to write to this database or the database doesn't exist. '403': content: application/json: schema: $ref: '#/components/schemas/Error' description: No token was sent and they are required. '413': content: application/json: schema: $ref: '#/components/schemas/LineProtocolLengthError' description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written. '429': description: Token is temporarily over quota. The Retry-After header describes when to try the write again. headers: Retry-After: description: A non-negative decimal integer indicating the seconds to delay after the response is received. schema: format: int32 type: integer '503': description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again. headers: Retry-After: description: A non-negative decimal integer indicating the seconds to delay after the response is received. schema: format: int32 type: integer default: content: application/json: schema: $ref: '#/components/schemas/Error' description: Internal server error description: | Writes data to a database. Use this InfluxDB v1-compatible endpoint to send data in [line protocol](/influxdb/clustered/reference/syntax/line-protocol/) format to InfluxDB using v1 API parameters and authorization. InfluxDB does the following when you send a write request: 1. Validates the request 2. If successful, attempts to [ingest the data](/influxdb/clustered/reference/internals/durability/#data-ingest); _error_ otherwise. 3. If successful, responds with _success_ (HTTP `204` status code), acknowledging that the data is written and queryable; _error_ otherwise. To ensure that InfluxDB handles writes in the order you request them, wait for a success response (HTTP `2xx` status code) before you send the next request. 
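For example, the following sketch writes a point using the v1 `db` parameter and the `Basic` authentication scheme--InfluxDB Clustered ignores the username part of the credential and checks that the password is an authorized [database token](/influxdb/clustered/admin/tokens/#database-tokens). Host, database, and token are placeholders:

```sh
# Sketch: write line protocol with the InfluxDB v1-compatible /write endpoint.
# The username part of the credential is ignored; the password must be a
# database token with write permission.
# Placeholders: cluster-host.com, DATABASE_NAME, DATABASE_TOKEN.
curl --request POST "https://cluster-host.com/write?db=DATABASE_NAME&precision=s" \
  --user "":"DATABASE_TOKEN" \
  --data-binary 'home,room=kitchen temp=72 1641024000'
```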
#### Related guides - [Write data with the InfluxDB API](/influxdb/clustered/get-started/write/) - [Optimize writes to InfluxDB](/influxdb/clustered/write-data/best-practices/optimize-writes/) - [Troubleshoot issues writing data](/influxdb/clustered/write-data/troubleshoot/) summary: Write data using the InfluxDB v1 HTTP API tags: - Write components: parameters: TraceSpan: description: OpenTracing span context example: baggage: key: value span_id: '1' trace_id: '1' in: header name: Zap-Trace-Span required: false schema: type: string responses: AuthorizationError: content: application/json: examples: tokenNotAuthorized: summary: Token is not authorized to access a resource value: code: unauthorized message: unauthorized access schema: properties: code: description: | The HTTP status code description. Default is `unauthorized`. enum: - unauthorized readOnly: true type: string message: description: A human-readable message that may contain detail about the error. readOnly: true type: string description: | Unauthorized. The error may indicate one of the following: * The `Authorization: Token` header is missing or malformed. * The API token value is missing from the header. * The token doesn't have sufficient permissions to write to or query the database. BadRequestError: content: application/json: examples: orgProvidedNotFound: summary: The org or orgID passed doesn't own the token passed in the header value: code: invalid message: 'failed to decode request body: organization not found' schema: $ref: '#/components/schemas/Error' description: | Bad request. The response body contains details about the error. GeneralServerError: content: application/json: schema: $ref: '#/components/schemas/Error' description: Non 2XX error response from server. InternalServerError: content: application/json: schema: $ref: '#/components/schemas/Error' description: | Internal server error. The server encountered an unexpected situation. ResourceNotFoundError: content: application/json: examples: bucket-not-found: summary: database name not found value: code: not found message: database "air_sensor" not found org-not-found: summary: Organization name not found value: code: not found message: organization name "my-org" not found orgID-not-found: summary: Organization ID not found value: code: not found message: organization not found schema: $ref: '#/components/schemas/Error' description: | Not found. A requested resource was not found. The response body contains the requested resource type and the name value (if you passed it)--for example: - `"organization name \"my-org\" not found"` - `"organization not found"`: indicates you passed an ID that did not match an organization. ServerError: content: application/json: schema: $ref: '#/components/schemas/Error' description: Non 2XX error response from server. schemas: AddResourceMemberRequestBody: properties: id: description: | The ID of the user to add to the resource. type: string name: description: | The name of the user to add to the resource. 
type: string required: - id type: object AnalyzeQueryResponse: properties: errors: items: properties: character: type: integer column: type: integer line: type: integer message: type: string type: object type: array type: object BadStatement: description: A placeholder for statements for which no correct statement nodes can be created properties: text: description: Raw source text type: string type: $ref: '#/components/schemas/NodeType' type: object BooleanLiteral: description: Represents boolean values properties: type: $ref: '#/components/schemas/NodeType' value: type: boolean type: object ConstantVariableProperties: properties: type: enum: - constant type: string values: items: type: string type: array DBRP: properties: bucketID: description: | A database ID. Identifies the database used as the target for the translation. type: string database: description: | A database name. Identifies the InfluxDB v1 database. type: string default: description: | If set to `true`, this DBRP mapping is the default retention policy for the database (specified by the `database` property's value). type: boolean id: description: | The resource ID that InfluxDB uses to uniquely identify the database retention policy (DBRP) mapping. readOnly: true type: string links: $ref: '#/components/schemas/Links' orgID: description: | An organization ID. Identifies the [organization](/influxdb/clustered/reference/glossary/#organization) that owns the mapping. type: string retention_policy: description: | A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. Identifies the InfluxDB v1 retention policy mapping. type: string virtual: description: Indicates an autogenerated, virtual mapping based on the database name. Currently only available in OSS. type: boolean required: - id - orgID - bucketID - database - retention_policy - default type: object DBRPCreate: properties: bucketID: description: | A database ID. Identifies the database used as the target for the translation. type: string database: description: | A database name. Identifies the InfluxDB v1 database. type: string default: description: | Set to `true` to use this DBRP mapping as the default retention policy for the database (specified by the `database` property's value). type: boolean org: description: | An organization name. Identifies the [organization](/influxdb/clustered/reference/glossary/#organization) that owns the mapping. type: string orgID: description: | An organization ID. Identifies the [organization](/influxdb/clustered/reference/glossary/#organization) that owns the mapping. type: string retention_policy: description: | A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. Identifies the InfluxDB v1 retention policy mapping. type: string required: - bucketID - database - retention_policy type: object DBRPGet: properties: content: $ref: '#/components/schemas/DBRP' required: true type: object DBRPUpdate: properties: default: description: | Set to `true` to use this DBRP mapping as the default retention policy for the database (specified by the `database` property's value). To remove the default mapping, set to `false`. type: boolean retention_policy: description: | A [retention policy](/influxdb/v1.8/concepts/glossary/#retention-policy-rp) name. Identifies the InfluxDB v1 retention policy mapping. 
type: string DBRPs: properties: content: items: $ref: '#/components/schemas/DBRP' type: array DateTimeLiteral: description: Represents an instant in time with nanosecond precision in [RFC3339Nano date/time format](/influxdb/clustered/reference/glossary/#rfc3339nano-timestamp). properties: type: $ref: '#/components/schemas/NodeType' value: format: date-time type: string type: object DecimalPlaces: description: Indicates whether decimal places should be enforced, and how many digits it should show. properties: digits: description: The number of digits after decimal to display format: int32 type: integer isEnforced: description: Indicates whether decimal point setting should be enforced type: boolean type: object DeletePredicateRequest: description: The delete predicate request. properties: predicate: description: | An expression in [delete predicate syntax](/influxdb/clustered/reference/syntax/delete-predicate/). example: tag1="value1" and (tag2="value2" and tag3!="value3") type: string start: description: | A timestamp ([RFC3339 date/time format](/influxdb/clustered/reference/glossary/#rfc3339-timestamp)). The earliest time to delete from. format: date-time type: string stop: description: | A timestamp ([RFC3339 date/time format](/influxdb/clustered/reference/glossary/#rfc3339-timestamp)). The latest time to delete from. format: date-time type: string required: - start - stop type: object Dialect: description: | Options for tabular data output. Default output is [annotated CSV](/influxdb/clustered/reference/syntax/annotated-csv/#csv-response-format) with headers. For more information about tabular data **dialect**, see [W3 metadata vocabulary for tabular data](https://www.w3.org/TR/2015/REC-tabular-metadata-20151217/#dialect-descriptions). properties: annotations: description: | Annotation rows to include in the results. An _annotation_ is metadata associated with an object (column) in the data model. #### Related guides - See [Annotated CSV annotations](/influxdb/clustered/reference/syntax/annotated-csv/#annotations) for examples and more information. For more information about **annotations** in tabular data, see [W3 metadata vocabulary for tabular data](https://www.w3.org/TR/2015/REC-tabular-data-model-20151217/#columns). items: enum: - group - datatype - default type: string type: array uniqueItems: true commentPrefix: default: '#' description: The character prefixed to comment strings. Default is a number sign (`#`). maxLength: 1 minLength: 0 type: string dateTimeFormat: default: RFC3339 description: | The format for timestamps in results. Default is [`RFC3339` date/time format](/influxdb/clustered/reference/glossary/#rfc3339-timestamp). To include nanoseconds in timestamps, use `RFC3339Nano`. #### Example formatted date/time values | Format | Value | |:------------|:----------------------------| | `RFC3339` | `"2006-01-02T15:04:05Z07:00"` | | `RFC3339Nano` | `"2006-01-02T15:04:05.999999999Z07:00"` | enum: - RFC3339 - RFC3339Nano type: string delimiter: default: ',' description: The separator used between cells. Default is a comma (`,`). maxLength: 1 minLength: 1 type: string header: default: true description: If true, the results contain a header row. type: boolean type: object Duration: description: A pair consisting of length of time and the unit of time measured. It is the atomic unit from which all duration literals are composed. 
properties: magnitude: type: integer type: $ref: '#/components/schemas/NodeType' unit: type: string type: object DurationLiteral: description: Represents the elapsed time between two instants as an int64 nanosecond count with syntax of golang's time.Duration properties: type: $ref: '#/components/schemas/NodeType' values: description: Duration values items: $ref: '#/components/schemas/Duration' type: array type: object Error: properties: code: $ref: '#/components/schemas/ErrorCode' description: code is the machine-readable error code. enum: - internal error - not implemented - not found - conflict - invalid - unprocessable entity - empty value - unavailable - forbidden - too many requests - unauthorized - method not allowed - request too large - unsupported media type readOnly: true type: string err: description: Stack of errors that occurred during processing of the request. Useful for debugging. readOnly: true type: string message: description: Human-readable message. readOnly: true type: string op: description: Describes the logical code operation when the error occurred. Useful for debugging. readOnly: true type: string required: - code ErrorCode: description: code is the machine-readable error code. enum: - internal error - not implemented - not found - conflict - invalid - unprocessable entity - empty value - unavailable - forbidden - too many requests - unauthorized - method not allowed - request too large - unsupported media type readOnly: true type: string Field: properties: alias: description: Alias overrides the field name in the returned response. Applies only if type is `func` type: string args: description: Args are the arguments to the function items: $ref: '#/components/schemas/Field' type: array type: description: '`type` describes the field type. `func` is a function. `field` is a field reference.' enum: - func - field - integer - number - regex - wildcard type: string value: description: value is the value of the field. Meaning of the value is implied by the `type` key type: string type: object File: description: Represents a source from a single file type: object Flags: additionalProperties: true type: object FloatLiteral: description: Represents floating point numbers according to the double representations defined by the IEEE-754-1985 properties: type: $ref: '#/components/schemas/NodeType' value: type: number type: object InfluxqlCsvResponse: description: CSV Response to InfluxQL Query example: | name,tags,time,test_field,test_tag test_measurement,,1603740794286107366,1,tag_value test_measurement,,1603740870053205649,2,tag_value test_measurement,,1603741221085428881,3,tag_value type: string InfluxqlJsonResponse: description: | The JSON response for an InfluxQL query. A response contains the collection of results for a query. `results` is an array of resultset objects. If the response is chunked, the `transfer-encoding` response header is set to `chunked` and each resultset object is sent in a separate JSON object. properties: results: description: | A resultset object that contains the `statement_id` and the `series` array. Except for `statement_id`, all properties are optional and omitted if empty. If a property is not present, it is assumed to be `null`. items: properties: error: type: string partial: description: | True if the resultset is not complete--the response data is chunked; otherwise, false or omitted. type: boolean series: description: | An array of series objects--the results of the query. 
A series of rows shares the same group key returned from the execution of a statement. If a property is not present, it is assumed to be `null`. items: properties: columns: description: An array of column names items: type: string type: array name: description: The name of the series type: string partial: description: | True if the series is not complete--the response data is chunked; otherwise, false or omitted. type: boolean tags: additionalProperties: type: string description: | A map of tag key-value pairs. If a tag key is not present, it is assumed to be `null`. type: object values: description: | An array of rows, where each row is an array of values. items: items: {} type: array type: array type: object type: array statement_id: description: | An integer that represents the statement's position in the query. If statement results are buffered in memory, `statement_id` is used to combine statement results. type: integer type: object oneOf: - required: - statement_id - error - required: - statement_id - series type: array type: object IntegerLiteral: description: Represents integer numbers properties: type: $ref: '#/components/schemas/NodeType' value: type: string type: object IsOnboarding: properties: allowed: description: | If `true`, the InfluxDB instance hasn't had initial setup; `false` otherwise. type: boolean type: object Label: properties: id: readOnly: true type: string name: type: string orgID: readOnly: true type: string properties: additionalProperties: type: string description: | Key-value pairs associated with this label. To remove a property, send an update with an empty value (`""`) for the key. example: color: ffb3b3 description: this is a description type: object type: object LabelCreateRequest: properties: name: type: string orgID: type: string properties: additionalProperties: type: string description: | Key-value pairs associated with this label. To remove a property, send an update with an empty value (`""`) for the key. example: color: ffb3b3 description: this is a description type: object required: - orgID - name type: object LabelMapping: description: A _label mapping_ contains a `label` ID to attach to a resource. properties: labelID: description: | A label ID. Specifies the label to attach. type: string required: - labelID type: object LabelResponse: properties: label: $ref: '#/components/schemas/Label' links: $ref: '#/components/schemas/Links' type: object LabelUpdate: properties: name: type: string properties: additionalProperties: description: | Key-value pairs associated with this label. To remove a property, send an update with an empty value (`""`) for the key. type: string example: color: ffb3b3 description: this is a description type: object type: object Labels: items: $ref: '#/components/schemas/Label' type: array LabelsResponse: properties: labels: $ref: '#/components/schemas/Labels' links: $ref: '#/components/schemas/Links' type: object LanguageRequest: description: Flux query to be analyzed. properties: query: description: | The Flux query script to be analyzed. type: string required: - query type: object LatLonColumn: description: Object type for key and column definitions properties: column: description: Column to look up Lat/Lon type: string key: description: Key to determine whether the column is tag/field type: string required: - key - column type: object Limit: description: These are org limits similar to those configured in/by quartz. 
properties: bucket: properties: maxBuckets: type: integer maxRetentionDuration: description: Max database retention duration in nanoseconds. 0 is unlimited. type: integer required: - maxBuckets - maxRetentionDuration type: object check: properties: maxChecks: type: integer required: - maxChecks type: object dashboard: properties: maxDashboards: type: integer required: - maxDashboards type: object features: properties: allowDelete: description: allow delete predicate endpoint type: boolean type: object notificationEndpoint: properties: blockedNotificationEndpoints: description: comma separated list of notification endpoints example: http,pagerduty type: string required: - blockNotificationEndpoints type: object notificationRule: properties: blockedNotificationRules: description: comma separated list of notification rules example: http,pagerduty type: string maxNotifications: type: integer required: - maxNotifications - blockNotificationRules type: object orgID: type: string rate: properties: cardinality: description: Allowed organization total cardinality. 0 is unlimited. type: integer concurrentDeleteRequests: description: Allowed organization concurrent outstanding delete requests. type: integer concurrentReadRequests: description: Allowed concurrent queries. 0 is unlimited. type: integer concurrentWriteRequests: description: Allowed concurrent writes. 0 is unlimited. type: integer deleteRequestsPerSecond: description: Allowed organization delete request rate. type: integer queryTime: description: Query Time in nanoseconds type: integer readKBs: description: Query limit in kb/sec. 0 is unlimited. type: integer writeKBs: description: Write limit in kb/sec. 0 is unlimited. type: integer required: - readKBs - queryTime - concurrentReadRequests - writeKBs - concurrentWriteRequests - cardinality type: object stack: properties: enabled: type: boolean required: - enabled type: object task: properties: maxTasks: type: integer required: - maxTasks type: object timeout: properties: queryUnconditionalTimeoutSeconds: type: integer queryidleWriteTimeoutSeconds: type: integer required: - queryUnconditionalTimeoutSeconds - queryidleWriteTimeoutSeconds type: object required: - rate - bucket - task - dashboard - check - notificationRule - notificationEndpoint type: object LineProtocolError: properties: code: description: Code is the machine-readable error code. enum: - internal error - not found - conflict - invalid - empty value - unavailable readOnly: true type: string err: description: Stack of errors that occurred during processing of the request. Useful for debugging. readOnly: true type: string line: description: First line in the request body that contains malformed data. format: int32 readOnly: true type: integer message: description: Human-readable message. readOnly: true type: string op: description: Describes the logical code operation when the error occurred. Useful for debugging. readOnly: true type: string required: - code LineProtocolLengthError: properties: code: description: Code is the machine-readable error code. enum: - invalid readOnly: true type: string message: description: Human-readable message. readOnly: true type: string required: - code - message Link: description: URI of resource. format: uri readOnly: true type: string Links: description: | URI pointers for additional paged results. 
properties: next: $ref: '#/components/schemas/Link' prev: $ref: '#/components/schemas/Link' self: $ref: '#/components/schemas/Link' required: - self type: object LogEvent: properties: message: description: A description of the event that occurred. example: Halt and catch fire readOnly: true type: string runID: description: The ID of the task run that generated the event. readOnly: true type: string time: description: The time ([RFC3339Nano date/time format](/influxdb/clustered/reference/glossary/#rfc3339nano-timestamp)) that the event occurred. example: 2006-01-02T15:04:05.999999999Z07:00 format: date-time readOnly: true type: string type: object Logs: properties: events: items: $ref: '#/components/schemas/LogEvent' readOnly: true type: array type: object NodeType: description: Type of AST node type: string OnboardingRequest: properties: bucket: type: string limit: $ref: '#/components/schemas/Limit' org: type: string password: type: string retentionPeriodHrs: deprecated: true type: integer retentionPeriodSeconds: type: integer username: type: string required: - username - org - bucket type: object Organization: properties: createdAt: format: date-time readOnly: true type: string defaultStorageType: description: Discloses whether the organization uses TSM or IOx. enum: - tsm - iox type: string description: type: string id: readOnly: true type: string links: example: buckets: /api/v2/buckets?org=myorg dashboards: /api/v2/dashboards?org=myorg labels: /api/v2/orgs/1/labels members: /api/v2/orgs/1/members owners: /api/v2/orgs/1/owners secrets: /api/v2/orgs/1/secrets self: /api/v2/orgs/1 tasks: /api/v2/tasks?org=myorg properties: buckets: $ref: '#/components/schemas/Link' dashboards: $ref: '#/components/schemas/Link' labels: $ref: '#/components/schemas/Link' members: $ref: '#/components/schemas/Link' owners: $ref: '#/components/schemas/Link' secrets: $ref: '#/components/schemas/Link' self: $ref: '#/components/schemas/Link' tasks: $ref: '#/components/schemas/Link' readOnly: true type: object name: type: string status: default: active description: If inactive, the organization is inactive. enum: - active - inactive type: string updatedAt: format: date-time readOnly: true type: string required: - name Organizations: properties: links: $ref: '#/components/schemas/Links' orgs: items: $ref: '#/components/schemas/Organization' type: array type: object Package: description: Represents a complete package source tree. properties: files: description: Package files items: $ref: '#/components/schemas/File' type: array package: description: Package name type: string path: description: Package import path type: string type: $ref: '#/components/schemas/NodeType' type: object PackageClause: description: Defines a package identifier type: object Params: properties: params: additionalProperties: enum: - any - bool - duration - float - int - string - time - uint type: string description: | The `params` keys and value type defined in the script. type: object type: object PasswordResetBody: properties: password: type: string required: - password PatchBucketRequest: description: | An object that contains updated database properties to apply. properties: description: description: | A description of the bucket. type: string name: description: | The name of the bucket. type: string retentionRules: $ref: '#/components/schemas/PatchRetentionRules' type: object PatchOrganizationRequest: description: | An object that contains updated organization properties to apply. 
properties: description: description: | The description of the organization. type: string name: description: | The name of the organization. type: string type: object PatchRetentionRule: properties: everySeconds: default: 2592000 description: | The number of seconds to keep data. Default duration is `2592000` (30 days). `0` represents infinite retention. example: 86400 format: int64 minimum: 0 type: integer shardGroupDurationSeconds: description: | The [shard group duration](/influxdb/clustered/reference/glossary/#shard). The number of seconds that each shard group covers. #### InfluxDB Cloud - Doesn't use `shardGroupDurationsSeconds`. #### Related guides - InfluxDB [shards and shard groups](/influxdb/clustered/reference/internals/shards/) format: int64 type: integer type: default: expire enum: - expire type: string required: - everySeconds type: object PatchRetentionRules: description: Updates to rules to expire or retain data. No rules means no updates. items: $ref: '#/components/schemas/PatchRetentionRule' type: array PipeLiteral: description: Represents a specialized literal value, indicating the left hand value of a pipe expression properties: type: $ref: '#/components/schemas/NodeType' type: object Ready: properties: started: example: '2019-03-13T10:09:33.891196-04:00' format: date-time type: string status: enum: - ready type: string up: example: 14m45.911966424s type: string type: object RegexpLiteral: description: Expressions begin and end with `/` and are regular expressions with syntax accepted by RE2 properties: type: $ref: '#/components/schemas/NodeType' value: type: string type: object RetentionRule: properties: everySeconds: default: 2592000 description: | The duration in seconds for how long data will be kept in the database. The default duration is 2592000 (30 days). 0 represents infinite retention. example: 86400 format: int64 minimum: 0 type: integer shardGroupDurationSeconds: description: | The shard group duration. The duration or interval (in seconds) that each shard group covers. #### InfluxDB Cloud - Does not use `shardGroupDurationsSeconds`. format: int64 type: integer type: default: expire enum: - expire type: string required: - everySeconds type: object RetentionRules: description: | Retention rules to expire or retain data. The InfluxDB `/api/v2` API uses `RetentionRules` to configure the [retention period](/influxdb/clustered/reference/glossary/#retention-period). #### InfluxDB Cloud - `retentionRules` is required. items: $ref: '#/components/schemas/RetentionRule' type: array SecretKeys: properties: secrets: items: type: string type: array type: object SecretKeysResponse: allOf: - $ref: '#/components/schemas/SecretKeys' - properties: links: properties: org: type: string self: type: string readOnly: true type: object type: object Secrets: additionalProperties: type: string example: apikey: abc123xyz StringLiteral: description: Expressions begin and end with double quote marks properties: type: $ref: '#/components/schemas/NodeType' value: type: string type: object Token: properties: token: type: string type: object UnsignedIntegerLiteral: description: Represents integer numbers properties: type: $ref: '#/components/schemas/NodeType' value: type: string type: object WritePrecision: enum: - ms - s - us - ns type: string securitySchemes: BasicAuthentication: description: | ### Basic authentication scheme Use the `Authorization` header with the `Basic` scheme to authenticate v1 API `/write` and `/query` requests. 
When authenticating requests, InfluxDB Clustered checks that the `password` part of the decoded credential is an authorized [database token](/influxdb/clustered/admin/tokens/#database-tokens). InfluxDB Clustered ignores the `username` part of the decoded credential. ### Syntax ```http Authorization: Basic ``` Replace the following: - **`[USERNAME]`**: an optional string value (ignored by InfluxDB Clustered). - **`DATABASE_TOKEN`**: a [database token](/influxdb/clustered/admin/tokens/#database-tokens). - Encode the `[USERNAME]:DATABASE_TOKEN` credential using base64 encoding, and then append the encoded string to the `Authorization: Basic` header. ### Example The following example shows how to use cURL with the `Basic` authentication scheme and a [database token](/influxdb/clustered/admin/tokens/#database-tokens): ```sh ####################################### # Use Basic authentication with a database token # to query the InfluxDB v1 HTTP API ####################################### # Use the --user option with `--user username:DATABASE_TOKEN` syntax ####################################### curl --get "http://cluster-id.a.influxdb.io/query" \ --user "":"DATABASE_TOKEN" \ --data-urlencode "db=DATABASE_NAME" \ --data-urlencode "q=SELECT * FROM MEASUREMENT" ``` Replace the following: - **`DATABASE_NAME`**: your InfluxDB Clustered database - **`DATABASE_TOKEN`**: a [database token](/influxdb/clustered/admin/tokens/#database-tokens) with sufficient permissions to the database scheme: basic type: http QuerystringAuthentication: type: apiKey in: query name: u=&p= description: | Use the Querystring authentication scheme with InfluxDB 1.x API parameters to provide credentials through the query string. ### Query string authentication In the URL, pass the `p` query parameter to authenticate `/write` and `/query` requests. When authenticating requests, InfluxDB Clustered checks that `p` (_password_) is an authorized database token and ignores the `u` (_username_) parameter. ### Syntax ```http https://cluster-id.a.influxdb.io/query/?[u=any]&p=DATABASE_TOKEN https://cluster-id.a.influxdb.io/write/?[u=any]&p=DATABASE_TOKEN ``` ### Example The following example shows how to use cURL with query string authentication and a [database token](/influxdb/clustered/admin/tokens/#database-tokens). ```sh ####################################### # Use an InfluxDB 1.x compatible username and password # to query the InfluxDB v1 HTTP API ####################################### # Use authentication query parameters: # ?p=DATABASE_TOKEN ####################################### curl --get "https://cluster-id.a.influxdb.io/query" \ --data-urlencode "p=DATABASE_TOKEN" \ --data-urlencode "db=DATABASE_NAME" \ --data-urlencode "q=SELECT * FROM MEASUREMENT" ``` Replace the following: - **`DATABASE_NAME`**: your InfluxDB Clustered database - **`DATABASE_TOKEN`**: a [database token](/influxdb/clustered/admin/tokens/#database-tokens) with sufficient permissions to the database BearerAuthentication: type: http scheme: bearer bearerFormat: JWT description: | Use the OAuth Bearer authentication scheme to authenticate to the InfluxDB API. In your API requests, send an `Authorization` header. For the header value, provide the word `Bearer` followed by a space and a database token. ### Syntax ```http Authorization: Bearer INFLUX_TOKEN ``` ### Example ```sh ######################################################## # Use the Bearer token authentication scheme with /api/v2/write # to write data. 
######################################################## curl --request post "https://cluster-id.a.influxdb.io/api/v2/write?bucket=DATABASE_NAME&precision=s" \ --header "Authorization: Bearer DATABASE_TOKEN" \ --data-binary 'home,room=kitchen temp=72 1463683075' ``` For examples and more information, see the following: - [Authenticate API requests](/influxdb/clustered/primers/api/v2/#authenticate-api-requests) - [Manage tokens](/influxdb/clustered/admin/tokens/) TokenAuthentication: description: | Use the Token authentication scheme to authenticate to the InfluxDB API. In your API requests, send an `Authorization` header. For the header value, provide the word `Token` followed by a space and a database token. The word `Token` is case-sensitive. ### Syntax ```http Authorization: Token INFLUX_API_TOKEN ``` ### Example ```sh ######################################################## # Use the Token authentication scheme with /api/v2/write # to write data. ######################################################## curl --request post "https://cluster-id.a.influxdb.io/api/v2/write?bucket=DATABASE_NAME&precision=s" \ --header "Authorization: Token DATABASE_TOKEN" \ --data-binary 'home,room=kitchen temp=72 1463683075' ``` ### Related guides - [Authenticate API requests](/influxdb/clustered/primers/api/v2/#authenticate-api-requests) - [Manage tokens](/influxdb/clustered/admin/tokens/) in: header name: Authorization type: apiKey x-tagGroups: - name: Using the InfluxDB HTTP API tags: - Quick start - Authentication - Headers - Pagination - Response codes - System information endpoints - name: All endpoints tags: - Ping - Query - Write