feature(dedicated): InfluxDB Flight responses and errors (Closes #5114): (#5115)

* feature(dedicated): InfluxDB Flight responses and errors (Closes #5114):

- Describe Flight+gRPC high-level request semantics.
- Describe the relationship between gRPC, Flight, and InfluxDB.
- List error codes used by InfluxDB and how they translate to Flight and gRPC statuses.
- Give example errors, what they mean, and possible causes.

Co-authored-by: Jay Clifford <@Jayclifford345>

* Update content/influxdb/cloud-dedicated/query-data/troubleshoot.md

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* fix(v3): Apply suggestions from @alamb. Refine organization, add stream and RecordBatch sections.

* fix(v3): improve stream, schema, batch

* fix(v3): toc

* fix(v3): troubleshoot flight (Flight responses and errors when querying v3 #5114):

- gRPC server-side streaming and nomenclature.

* fix(v3): Fix evaluation and prototyping #5120

* fix(v3): rename, update v3 Python reference, fix params (closes Flight responses and errors when querying v3 #5114)

* fix(v3): file name

* Update content/influxdb/cloud-dedicated/query-data/execute-queries/troubleshoot.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-dedicated/query-data/execute-queries/troubleshoot.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-serverless/query-data/execute-queries/troubleshoot.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* Update content/influxdb/cloud-serverless/query-data/execute-queries/troubleshoot.md

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>

* fix(v3): add PR suggestions

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
pull/5124/head
Jason Stirnaman 2023-09-05 13:00:29 -05:00 committed by GitHub
parent 8208824e34
commit 4197c01712
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
11 changed files with 922 additions and 224 deletions

View File

@ -80,6 +80,7 @@ tsm|TSM
uint|UINT
uinteger
unescaped
unprocessable
unix
upsample
upsert

View File

@ -36,7 +36,7 @@ best practices for building an application prototype on Cloud Serverless.
- [Use SQL or InfluxQL as your Query Language](#use-sql-or-influxql-as-your-query-language)
- [Stay within the schema limits of InfluxDB Cloud Serverless](#stay-within-the-schema-limits-of-influxdb-cloud-serverless)
- [Keep test and production data separate](#keep-test-and-production-data-separate)
<!-- END TOC -->
## Key differences between InfluxDB Cloud Serverless and Cloud Dedicated
@ -138,7 +138,7 @@ or use 3rd-party tools like Grafana or Prefect--for example:
In addition to the token management UI differences mentioned previously
(there is a UI and API for this with Cloud Serverless, with InfluxDB Cloud
Dedicated you use `influxctl`), there are also differences in the granularity
of token permissions---InfluxDB Cloud Dedicated has a few more permission options.
| Function | InfluxDB Cloud Serverless | InfluxDB Cloud Dedicated |
| :------------------- | :------------------------ | :----------------------- |
@ -194,7 +194,7 @@ The easiest way to avoid using features in InfluxDB Cloud Serverless that don't
exist in Cloud Dedicated is to avoid using the Cloud Serverless UI, except when
managing tokens and buckets.
In order to maintain compatibility with Cloud Dedicated, specifically avoid using the following
InfluxDB Cloud Serverless features:
- The v2 query API and the Flux language
- Administrative APIs
@ -213,7 +213,7 @@ Avoid Flux since it can't be used with InfluxDB Cloud Dedicated.
If you stay within InfluxDB Cloud Serverless limits for tables (measurements)
and columns (time, fields, and tags) within a table, then you won't have any
problems with limits in InfluxDB Cloud Dedicated.
Cloud Dedicated also provides more flexibility by letting you configure limits.
| Description | Limit |
| :--------------------------- | ----: |

View File

@ -5,7 +5,7 @@ list_title: Use the v1 query API and InfluxQL
description: >
Use the InfluxDB v1 HTTP query API to query data in InfluxDB Cloud Dedicated
with InfluxQL.
weight: 401
weight: 301
menu:
influxdb_cloud_dedicated:
parent: Execute queries

View File

@ -0,0 +1,274 @@
---
title: Understand and troubleshoot Flight responses
description: >
Understand responses and troubleshoot errors encountered when querying InfluxDB with Flight+gRPC and Arrow Flight clients.
weight: 401
menu:
influxdb_cloud_dedicated:
name: Understand Flight responses
parent: Execute queries
influxdb/cloud-dedicated/tags: [query, sql, influxql]
---
Learn how to handle responses and troubleshoot errors encountered when querying {{% cloud-name %}} with Flight+gRPC and Arrow Flight clients.
<!-- TOC -->
- [InfluxDB Flight responses](#influxdb-flight-responses)
- [Stream](#stream)
- [Schema](#schema)
- [Example](#example)
- [RecordBatch](#recordbatch)
- [InfluxDB status and error codes](#influxdb-status-and-error-codes)
- [Troubleshoot errors](#troubleshoot-errors)
- [Internal Error: Received RST_STREAM](#internal-error-received-rst_stream)
- [Internal Error: stream terminated by RST_STREAM with NO_ERROR](#internal-error-stream-terminated-by-rst_stream-with-no_error)
- [Invalid Argument: Invalid ticket](#invalid-argument-invalid-ticket)
- [Unauthenticated: Unauthenticated](#unauthenticated-unauthenticated)
- [Unauthorized: Permission denied](#unauthorized-permission-denied)
- [FlightUnavailableError: Could not get default pem root certs](#flightunavailableerror-could-not-get-default-pem-root-certs)
## InfluxDB Flight responses
{{% cloud-name %}} provides an InfluxDB-specific Arrow Flight remote procedure call (RPC) and Flight SQL service that uses gRPC, a high-performance RPC framework, to transport data in Arrow format.
Flight defines a set of [RPC methods](https://arrow.apache.org/docs/format/Flight.html#rpc-methods-and-request-patterns) that servers and clients can use to exchange information.
Flight SQL uses Flight RPC and defines additional methods to query database metadata, execute queries, and manipulate prepared statements.
To learn more about Flight SQL, see [Introducing Apache Arrow Flight SQL: Accelerating Database Access](https://arrow.apache.org/blog/2022/02/16/introducing-arrow-flight-sql/).
To query data or retrieve information about data stored in {{% cloud-name %}}, a Flight client (for example, `influx3` or an InfluxDB v3 client library) sends a request that calls an InfluxDB Flight RPC or Flight SQL service method.
For example, if you call the `influxdb3-python` Python client library `InfluxDBClient3.query()` method, the client in turn calls the `pyarrow.flight.FlightClient.do_get()` method that passes a Flight ticket containing your credentials and query to InfluxDB's Flight [`DoGet(FlightCallOptions, Ticket)` method](https://arrow.apache.org/docs/cpp/api/flight.html#_CPPv4N5arrow6flight12FlightClient5DoGetERK17FlightCallOptionsRK6Ticket).
InfluxDB responds with one of the following:
- A [stream](#stream) in Arrow IPC streaming format
- An [error status code](#influxdb-status-and-error-codes) and an optional `details` field that contains the status and a message that describes the error
### Stream
InfluxDB provides Flight RPC methods and implements server-side streaming for clients to retrieve and download data.
In a gRPC server-side streaming scenario, a client sends an RPC call in a request to a server.
Because the server can return a _stream_ of multiple responses to the client, the client request contains an identifier that the client and server use to keep track of the request and associated responses.
As the server sends responses, they are associated with the corresponding stream on the client side.
An Arrow Flight service, such as InfluxDB, sends a stream in [Arrow IPC streaming format](https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc) that defines the structure of the stream and each response, or _message_, in the stream.
Flight client libraries, such as `pyarrow.flight` and the Go Arrow Flight package, implement an Arrow interface for retrieving the data, schema, and metadata from the stream.
After {{% cloud-name %}} successfully processes a query, it sends a stream that contains the following:
1. A [Schema](#schema) that applies to all record batches in the stream
2. [RecordBatch](#recordbatch) messages with query result data
3. The [request status](#influxdb-status-and-error-codes) (`OK`)
4. Optional: trailing metadata
### Schema
An InfluxDB Flight response stream contains a [Flight schema](https://arrow.apache.org/docs/format/Columnar.html#schema-message) that describes the data type and InfluxDB data element type (timestamp, tag, or field) for columns in the data set.
All data chunks, or record batches, in the same stream have the same schema.
Data transformation tools can use the schema when converting Arrow data to other formats and back to Arrow.
#### Example
Given the following query:
```sql
SELECT co, delete, hum, room, temp, time
FROM home
WHERE time >= now() - INTERVAL '90 days'
ORDER BY time
```
The Python client library outputs the following schema representation:
```py
Schema:
co: int64
-- field metadata --
iox::column::type: 'iox::column_type::field::integer'
delete: string
-- field metadata --
iox::column::type: 'iox::column_type::tag'
hum: double
-- field metadata --
iox::column::type: 'iox::column_type::field::float'
room: string
-- field metadata --
iox::column::type: 'iox::column_type::tag'
temp: double
-- field metadata --
iox::column::type: 'iox::column_type::field::float'
time: timestamp[ns] not null
-- field metadata --
iox::column::type: 'iox::column_type::timestamp'
```
Using PyArrow, you can access the schema through the [`FlightStreamReader.schema`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) attribute.
See [`InfluxDBClient3.query()` examples](/influxdb/cloud-dedicated/reference/client-libraries/v3/python/#influxdbclient3query) for retrieving the schema.
### RecordBatch
[`RecordBatch` messages](https://arrow.apache.org/docs/format/Columnar.html#recordbatch-message) in the {{% cloud-name %}} response stream contain query result data in Arrow format.
When the Flight client receives a stream, it reads each record batch from the stream until there are no more messages to read.
The client considers the request complete when it has received all the messages.
Flight clients and InfluxDB v3 client libraries provide methods for reading record batches, or "data chunks," from a stream.
The InfluxDB v3 Python client library uses the [`pyarrow.flight.FlightStreamReader`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) class and provides the following reader methods:
- `all`: Read all record batches into a `pyarrow.Table`.
- `pandas`: Read all record batches into a `pandas.DataFrame`.
- `chunk`: Read the next batch and metadata, if available.
- `reader`: Convert the `FlightStreamReader` instance into a `RecordBatchReader`.
Flight clients implement Flight interfaces; however, client library classes, methods, and implementations may differ for each language and library.
### InfluxDB status and error codes
In gRPC, every call returns a status object that contains an integer code and a string message.
During a request, the gRPC client and server may each return a status--for example:
- The server fails to process the query and responds with status `internal error` and gRPC status `13`.
- The request is missing a database token; the server responds with status `unauthenticated` and gRPC status `16`.
- The server responds with a stream, but the client loses the connection due to a network failure and returns status `unavailable` (gRPC status `14`).
gRPC defines the integer [status codes](https://grpc.github.io/grpc/core/status_8h.html) and their meanings for servers and clients, and
Arrow Flight defines a `FlightStatusDetail` class and the [error codes](https://arrow.apache.org/docs/format/Flight.html#error-handling) that a Flight RPC service may implement.
While Flight defines the status codes available to servers, a server can choose which status to return for an RPC call.
In error responses, the status `details` field contains an error code that clients can use to determine how to handle the error--for example, whether to display it to users or retry the request.
{{< expand-wrapper >}}
{{% expand "View InfluxDB, Flight, and gRPC status codes" %}}
The following table describes InfluxDB status codes and, if they can appear in gRPC requests, their corresponding gRPC and Flight codes:
| InfluxDB status code | Used for gRPC | gRPC code | Flight code | Description |
|:---------------------|:--------------|:----------|:-----------------|:------------------------------------------------------------------|
| OK | ✓ | 0 | OK | |
| Conflict | ✓ | | | |
| Internal | ✓ | 13 | INTERNAL | An error internal to the service implementation occurred. |
| Invalid | ✓ | 3 | INVALID_ARGUMENT | The client passed an invalid argument to the RPC (for example, bad SQL syntax or a null value as the database name). |
| NotFound | ✓ | 5 | NOT_FOUND | The requested resource (action, data stream) wasn't found. |
| NotImplemented | ✓ | 12 | UNIMPLEMENTED | The RPC is not implemented. |
| RequestCanceled | ✓ | 1 | CANCELLED | The operation was cancelled (either by the client or the server). |
| TooLarge | ✓ | | | |
| Unauthenticated | ✓ | 16 | UNAUTHENTICATED | The client isn't authenticated (credentials are missing or invalid). |
| Unauthorized | ✓ | 7, 16 | UNAUTHORIZED | The client doesn't have permissions for the requested operation (credentials aren't sufficient for the request). |
| Unavailable | ✓ | | UNAVAILABLE | The server isn't available. May be emitted by the client for connectivity reasons. |
| Unknown | ✓ | 2 | UNKNOWN | An unknown error. The default if no other error applies. |
| UnprocessableEntity | | | | |
| EmptyValue | | | | |
| Forbidden | | | | |
| TooManyRequests | | | | |
| MethodNotAllowed | | | | |
| UpstreamServer | | | | |
<!-- Reference: influxdb_iox/service_grpc_influxrpc/src/service#InfluxCode -->
_For a list of gRPC codes that servers and clients may return, see [Status codes and their use in gRPC](https://grpc.github.io/grpc/core/md_doc_statuscodes.html) in the GRPC Core documentation._
{{% /expand %}}
{{< /expand-wrapper >}}
### Troubleshoot errors
#### Internal Error: Received RST_STREAM
**Example**:
```sh
Flight returned internal error, with message: Received RST_STREAM with error code 2. gRPC client debug context: UNKNOWN:Error received from peer ipv4:34.196.233.7:443 {grpc_message:"Received RST_STREAM with error code 2"}
```
**Potential reasons**:
- The connection to the server was reset unexpectedly.
- Network issues between the client and server.
- Server might have closed the connection due to an internal error.
- The client exceeded the server's maximum number of concurrent streams.
<!-- END -->
#### Internal Error: stream terminated by RST_STREAM with NO_ERROR
**Example**:
```sh
pyarrow._flight.FlightInternalError: Flight returned internal error, with message: stream terminated by RST_STREAM with error code: NO_ERROR. gRPC client debug context: UNKNOWN:Error received from peer ipv4:3.123.149.45:443 {created_time:"2023-07-26T14:12:44.992317+02:00", grpc_status:13, grpc_message:"stream terminated by RST_STREAM with error code: NO_ERROR"}. Client context: OK
```
**Potential reasons**:
- The server terminated the stream, but there wasn't any specific error associated with it.
- Possible network disruption, even if it's temporary.
- The server might have reached its maximum capacity or other internal limits.
<!-- END -->
#### Invalid Argument: Invalid ticket
**Example**:
```sh
pyarrow.lib.ArrowInvalid: Flight returned invalid argument error, with message: Invalid ticket. Error: Invalid ticket. gRPC client debug context: UNKNOWN:Error received from peer ipv4:54.158.68.83:443 {created_time:"2023-08-31T17:56:42.909129-05:00", grpc_status:3, grpc_message:"Invalid ticket. Error: Invalid ticket"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reasons**:
- The request is missing the database name or some other required metadata value.
- The request contains bad query syntax.
<!-- END -->
#### Unauthenticated: Unauthenticated
**Example**:
```sh
Flight returned unauthenticated error, with message: unauthenticated. gRPC client debug context: UNKNOWN:Error received from peer ipv4:34.196.233.7:443 {grpc_message:"unauthenticated", grpc_status:16, created_time:"2023-08-28T15:38:33.380633-05:00"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reasons**:
- Token is missing from the request.
- The specified token doesn't exist for the specified organization.
<!-- END -->
#### Unauthorized: Permission denied
**Example**:
```sh
pyarrow._flight.FlightUnauthorizedError: Flight returned unauthorized error, with message: Permission denied. gRPC client debug context: UNKNOWN:Error received from peer ipv4:54.158.68.83:443 {grpc_message:"Permission denied", grpc_status:7, created_time:"2023-08-31T17:51:08.271009-05:00"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reason**:
- The specified token doesn't have read permission for the specified database.
<!-- END -->
#### FlightUnavailableError: Could not get default pem root certs
**Example**:
If unable to locate a root certificate for _gRPC+TLS_, the Flight client returns errors similar to the following:
```sh
UNKNOWN:Failed to load file... filename:"/usr/share/grpc/roots.pem",
children:[UNKNOWN:No such file or directory
...
Could not get default pem root certs...
pyarrow._flight.FlightUnavailableError: Flight returned unavailable error,
with message: empty address list: . gRPC client debug context:
UNKNOWN:empty address list
...
```
**Potential reason**:
- Non-POSIX-compliant systems (such as Windows) need to specify the root certificates in `SslCredentialsOptions` for the gRPC client, because the defaults are configured only for POSIX filesystems.
[Specify the root certificate path](#specify-the-root-certificate-path) for the Flight gRPC client.
For more information about gRPC SSL/TLS client-server authentication, see [Using client-side SSL/TLS](https://grpc.io/docs/guides/auth/#using-client-side-ssltls) in the [gRPC.io Authentication guide](https://grpc.io/docs/guides/auth/).

View File

@ -3,7 +3,7 @@ title: Use visualization tools to query data
list_title: Use visualization tools
description: >
Use visualization tools and SQL or InfluxQL to query data stored in InfluxDB.
weight: 401
weight: 301
menu:
influxdb_cloud_dedicated:
parent: Execute queries

View File

@ -8,7 +8,7 @@ menu:
name: Python
parent: v3 client libraries
identifier: influxdb3-python
influxdb/cloud-dedicated/tags: [python, gRPC, SQL, Flight SQL, client libraries]
influxdb/cloud-dedicated/tags: [python, gRPC, SQL, client libraries]
weight: 201
aliases:
- /influxdb/cloud-dedicated/reference/client-libraries/v3/pyinflux3/
@ -47,6 +47,19 @@ The `influxdb3-python` Python client library wraps the Apache Arrow `pyarrow.fli
in a convenient InfluxDB v3 interface for executing SQL and InfluxQL queries, requesting
server metadata, and retrieving data from {{% cloud-name %}} using the Flight protocol with gRPC.
<!-- TOC -->
- [Installation](#installation)
- [Importing the module](#importing-the-module)
- [API reference](#api-reference)
- [Classes](#classes)
- [Class InfluxDBClient3](#class-influxdbclient3)
- [Class Point](#class-point)
- [Class WriteOptions](#class-writeoptions)
- [Functions](#functions)
- [Constants](#constants)
- [Exceptions](#exceptions)
## Installation
Install the client library and dependencies using `pip`:
@ -72,9 +85,11 @@ Import specific class methods from the module:
from influxdb_client_3 import InfluxDBClient3, Point, WriteOptions
```
- [`influxdb_client_3.InfluxDBClient3`](#class-influxdbclient3): an interface for [initializing a client](#initialization) and interacting with InfluxDB
- [`influxdb_client_3.Point`](#class-point): an interface for constructing a time series data point
- [`influxdb_client_3.WriteOptions`](#class-writeoptions): an interface for configuring write options for the client
- [`influxdb_client_3.InfluxDBClient3`](#class-influxdbclient3): a class for interacting with InfluxDB
- `influxdb_client_3.Point`: a class for constructing a time series data point
- `influxdb_client_3.WriteOptions`: a class for configuring client write options
## API reference
@ -113,52 +128,49 @@ Initializes and returns an `InfluxDBClient3` instance with the following:
- A singleton _write client_ configured for writing to the database.
- A singleton _Flight client_ configured for querying the database.
### Attributes
### Parameters
- **`_org`** (str): The organization name (for {{% cloud-name %}}, set this to an empty string (`""`)).
- **`_database`** (str): The database to use for writing and querying.
- **`_write_client_options`** (dict): Options passed to the write client for writing to InfluxDB.
- **org** (str): The organization name (for {{% cloud-name %}}, set this to an empty string (`""`)).
- **database** (str): The database to use for writing and querying.
- **write_client_options** (dict): Options to use when writing to InfluxDB.
If `None`, writes are [synchronous](#synchronous-writing).
- **`_flight_client_options`** (dict): Options passed to the Flight client for querying InfluxDB.
- **flight_client_options** (dict): Options to use when querying InfluxDB.
#### Batch writing
In batching mode, the client adds the record or records to a batch, and then schedules the batch for writing to InfluxDB.
The client writes the batch to InfluxDB after reaching `_write_client_options.batch_size` or `_write_client_options.flush_interval`.
If a write fails, the client reschedules the write according to the `_write_client_options` retry options.
The client writes the batch to InfluxDB after reaching `write_client_options.batch_size` or `write_client_options.flush_interval`.
If a write fails, the client reschedules the write according to the `write_client_options` retry options.
To use batching mode, pass an instance of `WriteOptions` to the `InfluxDBClient3.write_client_options` argument--for example:
To use batching mode, pass `WriteOptions` as key-value pairs to the client `write_client_options` parameter--for example:
1. Instantiate `WriteOptions()` with defaults or with
`WriteOptions.write_type=WriteType.batching`.
```py
from influxdb_client_3 import WriteOptions
# Initialize batch writing default options (batch size, flush, and retry).
# Returns an influxdb_client.client.write_api.WriteOptions object.
# Create a WriteOptions instance for batch writes with batch size, flush, and retry defaults.
write_options = WriteOptions()
```
2. Call the [`write_client_options()` function](#function-write_client_optionskwargs) to create an options object that uses `write_options` from the preceding step.
2. Pass `write_options` from the preceding step to the `write_client_options` function.
```py
from influxdb_client_3 import write_client_options
# Create a dict of keyword arguments from WriteOptions
wco = write_client_options(WriteOptions=write_options)
```
3. Initialize the client, setting the `write_client_options` argument to the `wco` object from the preceding step.
The output is a dict with `WriteOptions` key-value pairs.
3. Initialize the client, setting the `write_client_options` argument to `wco` from the preceding step.
{{< tabs-wrapper >}}
{{% code-placeholders "DATABASE_(NAME|TOKEN)" %}}
```py
from influxdb_client_3 import InfluxDBClient3
with InfluxDBClient3(token="DATABASE_TOKEN", host="cluster-id.influxdb.io",
org="", database="DATABASE_NAME",
write_client_options=wco) as client:
with InfluxDBClient3(token="DATABASE_TOKEN",
host="{{< influxdb/host >}}",
org="", database="DATABASE_NAME",
write_client_options=wco) as client:
client.write(record=points)
```
@ -184,7 +196,7 @@ Given `_write_client_options=None`, the client uses synchronous mode when writing
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(token="DATABASE_TOKEN",
host="cluster-id.influxdb.io",
host="{{< influxdb/host >}}",
org="",
database="DATABASE_NAME")
```
@ -192,14 +204,15 @@ client = InfluxDBClient3(token="DATABASE_TOKEN",
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}: an {{% cloud-name %}} [database token](/influxdb/cloud-dedicated/admin/tokens/) with read permissions on the databases you want to query
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of your {{% cloud-name %}} [database](/influxdb/cloud-dedicated/admin/databases/)
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}: an {{% cloud-name %}} [database token](/influxdb/cloud-dedicated/admin/tokens/) with read permissions on the specified database
##### Initialize a client for batch writing
The following example shows how to initialize a client for writing and querying the database.
When writing data, the client uses batch mode with default options and
invokes the callback function for the response.
invokes the callback function defined for the response status (`success`, `error`, or `retry`).
{{% code-placeholders "DATABASE_NAME|DATABASE_TOKEN" %}}
```py
@ -232,7 +245,7 @@ invokes the callback function for the response.
retry_callback=retry,
WriteOptions=write_options)
with InfluxDBClient3(token="DATABASE_TOKEN", host="cluster-id.influxdb.io",
with InfluxDBClient3(token="DATABASE_TOKEN", host="{{< influxdb/host >}}",
org="", database="DATABASE_NAME",
write_client_options=wco) as client:
@ -242,36 +255,28 @@ invokes the callback function for the response.
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}:
Your InfluxDB token with READ permissions on the databases you want to query.
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
The name of your InfluxDB database.
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of your {{% cloud-name %}} [database](/influxdb/cloud-dedicated/admin/databases/)
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}: an {{% cloud-name %}} [database token](/influxdb/cloud-dedicated/admin/tokens/) with read permissions on the specified database
### InfluxDBClient3 instance methods
<!-- TOC -->
- [InfluxDBClient3.write](#influxdbclient3write)
- [InfluxDBClient3.write_file](#influxdbclient3write_file)
- [InfluxDBClient3.query](#influxdbclient3query)
- [InfluxDBClient3.close](#influxdbclient3close)
### InfluxDBClient3.write
Writes a record or a list of records to InfluxDB.
A record can be a `Point` object, a dict that represents a point, a line protocol string, or a `DataFrame`.
The client can write using [_batching_ mode](#batch-writing) or [_synchronous_ mode](#synchronous-writing).
##### Attributes
- **`write_precision=`**: `"ms"`, `"s"`, `"us"`, `"ns"`. Default is `"ns"`.
#### Syntax
```py
write(self, record=None, **kwargs)
```
#### Parameters
- **`record`**: A record or list of records to write. A record can be a `Point` object, a dict that represents a point, a line protocol string, or a `DataFrame`.
- **`write_precision=`**: `"ms"`, `"s"`, `"us"`, `"ns"`. Default is `"ns"`.
#### Examples
##### Write a line protocol string
@ -280,11 +285,11 @@ write(self, record=None, **kwargs)
{{% code-placeholders "DATABASE_NAME|DATABASE_TOKEN" %}}
```py
from influxdb_client_3 import InfluxDBClient3
points = "home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000"
client = InfluxDBClient3(token="DATABASE_TOKEN", host="cluster-id.influxdb.io",
database="DATABASE_NAME", org="")
database="DATABASE_NAME")
client.write(record=points, write_precision="s")
```
@ -293,7 +298,9 @@ client.write(record=points, write_precision="s")
##### Write data using points
The following example shows how to create a [`Point`](#class-point), and then write the
The `influxdb_client_3.Point` class provides an interface for constructing a data
point for a measurement and setting fields, tags, and the timestamp for the point.
The following example shows how to create a `Point` object, and then write the
data to InfluxDB.
```py
@ -307,7 +314,7 @@ client.write(point)
`InfluxDBClient3` can serialize a dictionary object into line protocol.
If you pass a `dict` to `InfluxDBClient3.write`, the client expects the `dict` to have the
following _point_ data structure:
following _point_ attributes:
- **measurement** (str): the measurement name
- **tags** (dict): a dictionary of tag key-value pairs
@ -320,22 +327,22 @@ data to InfluxDB.
{{% influxdb/custom-timestamps %}}
{{% code-placeholders "DATABASE_NAME|DATABASE_TOKEN" %}}
```py
from influxdb_client_3 import InfluxDBClient3
from influxdb_client_3 import InfluxDBClient3
# Using point dictionary structure
points = {
"measurement": "home",
"tags": {"room": "Kitchen", "sensor": "K001"},
"fields": {"temp": 72.2, "hum": 36.9, "co": 4},
"time": 1641067200
}
client = InfluxDBClient3(token="DATABASE_TOKEN",
host="cluster-id.influxdb.io",
database="DATABASE_NAME",
org="")
client.write(record=points, write_precision="s")
# Using point dictionary structure
points = {
"measurement": "home",
"tags": {"room": "Kitchen", "sensor": "K001"},
"fields": {"temp": 72.2, "hum": 36.9, "co": 4},
"time": 1641067200
}
client = InfluxDBClient3(token="DATABASE_TOKEN",
host="{{< influxdb/host >}}",
database="DATABASE_NAME",
org="")
client.write(record=points, write_precision="s")
```
{{% /code-placeholders %}}
{{% /influxdb/custom-timestamps %}}
@ -352,20 +359,18 @@ The client can write using [_batching_ mode](#batch-writing) or [_synchronous_ m
write_file(self, file, measurement_name=None, tag_columns=[],
timestamp_column='time', **kwargs)
```
##### Attributes
#### Parameters
- **`file`**: A file containing records to write to InfluxDB.
The following file formats and filename extensions are supported.
The filename must end with one of the supported extensions.
For more information about encoding and formatting data, see the documentation for each supported format.
- **`file`** (str): A path to a file containing records to write to InfluxDB.
The filename must end with one of the following supported extensions.
For more information about encoding and formatting data, see the documentation for each supported format:
| Supported format | File name extension |
|:---------------------------------------------------------------------------|:--------------------|
| [Feather](https://arrow.apache.org/docs/python/feather.html) | `.feather` |
| [Parquet](https://arrow.apache.org/docs/python/parquet.html) | `.parquet` |
| [Comma-separated values](https://arrow.apache.org/docs/python/csv.html) | `.csv` |
| [JSON](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html) | `.json` |
| [ORC](https://arrow.apache.org/docs/python/orc.html) | `.orc` |
- `.feather`: [Feather](https://arrow.apache.org/docs/python/feather.html)
- `.parquet`: [Parquet](https://arrow.apache.org/docs/python/parquet.html)
- `.csv`: [Comma-separated values](https://arrow.apache.org/docs/python/csv.html)
- `.json`: [JSON](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html)
- `.orc`: [ORC](https://arrow.apache.org/docs/python/orc.html)
- **`measurement_name`**: Defines the measurement name for records in the file.
The specified value takes precedence over `measurement` and `iox::measurement` columns in the file.
If no value is specified for the parameter, and a `measurement` column exists in the file, the `measurement` column value is used for the measurement name.
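As an illustrative sketch, the following builds a small CSV that satisfies these requirements (the column names and measurement name are hypothetical; the final `write_file()` call requires a live client, so it's shown as a comment):

```py
import csv
import tempfile

# Sample records with a timestamp column, a tag column, and a field column.
# The column names here are illustrative.
rows = [
    {"time": "2023-08-01T00:00:00Z", "room": "Kitchen", "temp": 22.5},
    {"time": "2023-08-01T01:00:00Z", "room": "Kitchen", "temp": 22.7},
]

# The filename must end with a supported extension, such as .csv.
with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", newline="",
                                 delete=False) as f:
    csv_path = f.name
    writer = csv.DictWriter(f, fieldnames=["time", "room", "temp"])
    writer.writeheader()
    writer.writerows(rows)

# With a configured client, the file could then be written as:
# client.write_file(file=csv_path, measurement_name="home",
#                   tag_columns=["room"], timestamp_column="time")
```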
@ -418,7 +423,7 @@ wco = write_client_options(success_callback=callback.success,
WriteOptions=write_options
)
with InfluxDBClient3(token="DATABASE_TOKEN", host="cluster-id.influxdb.io",
with InfluxDBClient3(token="DATABASE_TOKEN", host="{{< influxdb/host >}}",
org="", database="DATABASE_NAME",
_write_client_options=wco) as client:
@ -438,23 +443,70 @@ Returns all data in the query result as an Arrow table.
#### Syntax
```py
query(self, query, language="sql")
query(self, query, language="sql", mode="all", **kwargs )
```
#### Parameters
- **`query`** (str): the SQL or InfluxQL query to execute.
- **`language`** (str): the query language used in the `query` parameter--`"sql"` or `"influxql"`. Default is `"sql"`.
- **`mode`** (str): specifies what the [`pyarrow.flight.FlightStreamReader`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) will return.
Default is `"all"`.
- `all`: Read the entire contents of the stream and return it as a `pyarrow.Table`.
- `chunk`: Read the next message (a `FlightStreamChunk`) and return `data` and `app_metadata`.
Returns `None` if there are no more messages.
- `pandas`: Read the contents of the stream and return it as a `pandas.DataFrame`.
- `reader`: Convert the `FlightStreamReader` into a [`pyarrow.RecordBatchReader`](https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatchReader.html#pyarrow-recordbatchreader).
- `schema`: Return the schema for all record batches in the stream.
#### Examples
##### Query using SQL
```py
query = "select * from measurement"
reader = client.query(query=query)
table = client.query("SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'")
# Filter columns.
print(table.select(['room', 'temp']))
# Use PyArrow to aggregate data.
print(table.group_by('hum').aggregate([]))
```
##### Query using InfluxQL
```py
query = "select * from measurement"
reader = client.query(query=query, language="influxql")
query = "SELECT * FROM measurement WHERE time >= now() - 90d"
table = client.query(query=query, language="influxql")
# Filter columns.
print(table.select(['room', 'temp']))
```
##### Read all data from the stream and return a pandas DataFrame
```py
query = "SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'"
df = client.query(query=query, mode="pandas")
# Print the pandas DataFrame formatted as a Markdown table.
print(df.to_markdown())
```
##### View the schema for all batches in the stream
```py
table = client.query('''
SELECT *
FROM measurement
WHERE time >= now() - INTERVAL '90 days'
''')
# Get the schema attribute value.
print(table.schema)
```
##### Retrieve the result schema and no data
```py
query = "SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'"
schema = client.query(query=query, mode="schema")
print(schema)
```
### InfluxDBClient3.close
@ -560,8 +612,7 @@ fh.close()
client = InfluxDBClient3(
token="DATABASE_TOKEN",
host="cluster-id.influxdb.io",
org="",
host="{{< influxdb/host >}}",
database="DATABASE_NAME",
flight_client_options=flight_client_options(
tls_root_certs=cert))
@ -575,29 +626,4 @@ client = InfluxDBClient3(
## Exceptions
- `influxdb_client_3.InfluxDBError`: Exception class raised for InfluxDB-related errors
- [`pyarrow._flight.FlightUnavailableError`](#flightunavailableerror-could-not-get-default-pem-root-certs): Exception class raised for Flight gRPC errors
### Query exceptions
#### FlightUnavailableError: Could not get default pem root certs
[Specify the root certificate path](#specify-the-root-certificate-path) for the Flight gRPC client.
Non-POSIX-compliant systems (such as Windows) need to specify the root certificates in SslCredentialsOptions for the gRPC client, since the defaults are only configured for POSIX filesystems.
If unable to locate a root certificate for _gRPC+TLS_, the Flight client returns errors similar to the following:
```sh
UNKNOWN:Failed to load file... filename:"/usr/share/grpc/roots.pem",
children:[UNKNOWN:No such file or directory
...
Could not get default pem root certs...
pyarrow._flight.FlightUnavailableError: Flight returned unavailable error,
with message: empty address list: . gRPC client debug context:
UNKNOWN:empty address list
...
```
For more information about gRPC SSL/TLS client-server authentication, see [Using client-side SSL/TLS](https://grpc.io/docs/guides/auth/#using-client-side-ssltls) in the [gRPC.io Authentication guide](https://grpc.io/docs/guides/auth/).
- `influxdb_client_3.InfluxDBError`: Exception class raised for InfluxDB-related errors
View File
@ -36,7 +36,7 @@ best practices for building an application prototype on Cloud Serverless.
- [Use SQL or InfluxQL as your Query Language](#use-sql-or-influxql-as-your-query-language)
- [Stay within the schema limits of InfluxDB Cloud Serverless](#stay-within-the-schema-limits-of-influxdb-cloud-serverless)
- [Keep test and production data separate](#keep-test-and-production-data-separate)
<!-- END TOC -->
## Key differences between InfluxDB Cloud Serverless and Cloud Dedicated
@ -62,7 +62,7 @@ able to evaluate the Cloud Dedicated administrative features directly.
InfluxDB Cloud Serverless was an upgrade that introduced the InfluxDB 3.0 storage
engine to InfluxData's original InfluxDB Cloud (TSM) multi-tenant solution.
InfluxDB Cloud utilizes the Time-Structured Merge Tree (TSM) storage engine in
which databases were referred to as "buckets".
which databases are referred to as _buckets_.
Cloud Serverless still uses this term.
InfluxDB Cloud Dedicated has only ever used the InfluxDB 3.0 storage engine.
@ -125,8 +125,7 @@ If you use InfluxDB Cloud Serverless as an evaluation platform for
InfluxDB Cloud Dedicated, don't use these features, as they aren't available
on InfluxDB Cloud Dedicated.
With InfluxDB Cloud Dedicated, you can build custom task and alerting solutions
or use 3rd-party tools like Grafana or Prefect--for example:
With InfluxDB Cloud Dedicated, you can build custom task and alerting solutions or use third-party tools like Grafana or Prefect--for example:
- [Send alerts using data in InfluxDB Cloud Serverless](/influxdb/cloud-serverless/process-data/send-alerts/)
- [Downsample data](/influxdb/cloud-serverless/process-data/downsample/)
@ -138,7 +137,7 @@ or use 3rd-party tools like Grafana or Prefect--for example:
In addition to the token management UI differences mentioned previously
(there is a UI and API for this with Cloud Serverless, with InfluxDB Cloud
Dedicated you use `influxctl`), there are also differences in the granularity
of token permissions---InfluxDB Cloud Dedicated has a few more permission options.
of token permissions---InfluxDB Cloud Dedicated has a few more permission options.
| Function | InfluxDB Cloud Serverless | InfluxDB Cloud Dedicated |
| :------------------- | :------------------------ | :----------------------- |
@ -194,13 +193,13 @@ The easiest way to avoid using features in InfluxDB Cloud Serverless that don
exist in Cloud Dedicated is to avoid using the Cloud Serverless UI, except when
managing tokens and buckets.
In order to maintain compatibility with Cloud Dedicated, specifically avoid using the following
InfluxDB Cloud Serverless features:
InfluxDB Cloud Serverless features:
- The v2 query API and the Flux language
- Administrative APIs
- Tasks and alerts from the Cloud Serverless UI (instead use one of the options
mentioned in _[Tasks and alerts differences](#tasks-and-alerts-differences)_).
- InfluxDB dashboards and visualization tools (use 3rd-party visualization tools)
- InfluxDB dashboards and visualization tools (use third-party visualization tools)
### Use SQL or InfluxQL as your Query Language
@ -213,7 +212,7 @@ Avoid Flux since it cant be used with InfluxDB Cloud Dedicated.
If you stay within InfluxDB Cloud Serverless limits for tables (measurements)
and columns (time, fields, and tags) within a table, then you won't have any
problems with limits in InfluxDB Cloud Dedicated.
Cloud Dedicated also provides more flexibility by letting you configure limits.
Cloud Dedicated also provides more flexibility by letting you configure limits.
| Description | Limit |
| :--------------------------- | ----: |
View File
@ -5,7 +5,7 @@ list_title: Use the v1 query API and InfluxQL
description: >
Use the InfluxDB v1 HTTP query API to query data in InfluxDB Cloud Serverless
with InfluxQL.
weight: 401
weight: 301
menu:
influxdb_cloud_serverless:
parent: Execute queries
View File
@ -0,0 +1,289 @@
---
title: Understand and troubleshoot Flight responses
description: >
Understand responses and troubleshoot errors encountered when querying InfluxDB with Flight+gRPC and Arrow Flight clients.
weight: 401
menu:
influxdb_cloud_serverless:
name: Understand Flight responses
parent: Execute queries
influxdb/cloud-serverless/tags: [query, sql, influxql]
---
Learn how to handle responses and troubleshoot errors encountered when querying {{% cloud-name %}} with Flight+gRPC and Arrow Flight clients.
<!-- TOC -->
- [InfluxDB Flight responses](#influxdb-flight-responses)
- [Stream](#stream)
- [Schema](#schema)
- [Example](#example)
- [RecordBatch](#recordbatch)
- [InfluxDB status and error codes](#influxdb-status-and-error-codes)
- [Troubleshoot errors](#troubleshoot-errors)
- [Internal Error: Received RST_STREAM](#internal-error-received-rst_stream)
- [Internal Error: stream terminated by RST_STREAM with NO_ERROR](#internal-error-stream-terminated-by-rst_stream-with-no_error)
- [Invalid Argument Error: bucket <BUCKET_ID> not found](#invalid-argument-error-bucket-bucket_id-not-found)
- [Invalid Argument: Invalid ticket](#invalid-argument-invalid-ticket)
- [Unauthenticated: Unauthenticated](#unauthenticated-unauthenticated)
- [Unauthenticated: read:<BUCKET_ID> is unauthorized](#unauthenticated-readbucket_id-is-unauthorized)
- [FlightUnavailableError: Could not get default pem root certs](#flightunavailableerror-could-not-get-default-pem-root-certs)
## InfluxDB Flight responses
{{% cloud-name %}} provides an InfluxDB-specific Arrow Flight remote procedure call (RPC) and Flight SQL service that uses gRPC, a high-performance RPC framework, to transport data in Arrow format.
Flight defines a set of [RPC methods](https://arrow.apache.org/docs/format/Flight.html#rpc-methods-and-request-patterns) that servers and clients can use to exchange information.
Flight SQL uses Flight RPC and defines additional methods to query database metadata, execute queries, and manipulate prepared statements.
To learn more about Flight SQL, see [Introducing Apache Arrow Flight SQL: Accelerating Database Access](https://arrow.apache.org/blog/2022/02/16/introducing-arrow-flight-sql/).
To query data or retrieve information about data stored in {{% cloud-name %}}, a Flight client (for example, `influx3` or an InfluxDB v3 client library) sends a request that calls an InfluxDB Flight RPC or Flight SQL service method.
For example, if you call the `influxdb3-python` Python client library `InfluxDBClient3.query()` method, the client in turn calls the `pyarrow.flight.FlightClient.do_get()` method that passes a Flight ticket containing your credentials and query to InfluxDB's Flight [`DoGet(FlightCallOptions, Ticket)` method](https://arrow.apache.org/docs/cpp/api/flight.html#_CPPv4N5arrow6flight12FlightClient5DoGetERK17FlightCallOptionsRK6Ticket).
InfluxDB responds with one of the following:
- A [stream](#stream) in Arrow IPC streaming format
- An [error status code](#influxdb-status-and-error-codes) and an optional `details` field that contains the status and a message that describes the error
### Stream
InfluxDB provides Flight RPC methods and implements server-side streaming for clients to retrieve and download data.
In a gRPC server-side streaming scenario, a client sends an RPC call in a request to a server.
Because the server can return a _stream_ of multiple responses to the client, the client request contains an identifier that the client and server use to keep track of the request and associated responses.
As the server sends responses, they are associated with the corresponding stream on the client side.
An Arrow Flight service, such as InfluxDB, sends a stream in [Arrow IPC streaming format](https://arrow.apache.org/docs/format/Columnar.html#serialization-and-interprocess-communication-ipc) that defines the structure of the stream and each response, or _message_, in the stream.
Flight client libraries, such as `pyarrow.flight` and the Go Arrow Flight package, implement an Arrow interface for retrieving the data, schema, and metadata from the stream.
After {{% cloud-name %}} successfully processes a query, it sends a stream that contains the following:
1. A [Schema](#schema) that applies to all record batches in the stream
2. [RecordBatch](#recordbatch) messages with query result data
3. The [request status](#influxdb-status-and-error-codes) (`OK`)
4. Optional: trailing metadata
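Purely to illustrate the read-until-exhausted shape of this stream (this is a plain-Python stand-in, not the Arrow IPC wire format):

```py
# A stand-in for a server-side stream: one schema message, then record
# batches, then a final OK status. Illustrative only.
def stream():
    yield ("schema", {"room": "tag", "temp": "field", "time": "timestamp"})
    yield ("batch", [("Kitchen", 22.5, 1), ("Kitchen", 22.7, 2)])
    yield ("batch", [("Kitchen", 22.9, 3)])
    yield ("status", "OK")

# The client reads messages until the stream is exhausted, accumulating
# record batches under the single schema that applies to all of them.
schema, batches, status = None, [], None
for kind, payload in stream():
    if kind == "schema":
        schema = payload
    elif kind == "batch":
        batches.extend(payload)
    elif kind == "status":
        status = payload

print(status, len(batches))  # OK 3
```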
### Schema
An InfluxDB Flight response stream contains a [Flight schema](https://arrow.apache.org/docs/format/Columnar.html#schema-message) that describes the data type and InfluxDB data element type (timestamp, tag, or field) for columns in the data set.
All data chunks, or record batches, in the same stream have the same schema.
Data transformation tools can use the schema when converting Arrow data to other formats and back to Arrow.
#### Example
Given the following query:
```sql
SELECT co, delete, hum, room, temp, time
FROM home
WHERE time >= now() - INTERVAL '90 days'
ORDER BY time
```
The Python client library outputs the following schema representation:
```py
Schema:
co: int64
-- field metadata --
iox::column::type: 'iox::column_type::field::integer'
delete: string
-- field metadata --
iox::column::type: 'iox::column_type::tag'
hum: double
-- field metadata --
iox::column::type: 'iox::column_type::field::float'
room: string
-- field metadata --
iox::column::type: 'iox::column_type::tag'
temp: double
-- field metadata --
iox::column::type: 'iox::column_type::field::float'
time: timestamp[ns] not null
-- field metadata --
iox::column::type: 'iox::column_type::timestamp'
```
Using PyArrow, you can access the schema through the [`FlightStreamReader.schema`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) attribute.
See [`InfluxDBClient3.query()` examples](/influxdb/cloud-serverless/reference/client-libraries/v3/python/#influxdbclient3query) for retrieving the schema.
### RecordBatch
[`RecordBatch` messages](https://arrow.apache.org/docs/format/Columnar.html#recordbatch-message) in the {{% cloud-name %}} response stream contain query result data in Arrow format.
When the Flight client receives a stream, it reads each record batch from the stream until there are no more messages to read.
The client considers the request complete when it has received all the messages.
Flight clients and InfluxDB v3 client libraries provide methods for reading record batches, or "data chunks," from a stream.
The InfluxDB v3 Python client library uses the [`pyarrow.flight.FlightStreamReader`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) class and provides the following reader methods:
- `all`: Read all record batches into a `pyarrow.Table`.
- `pandas`: Read all record batches into a `pandas.DataFrame`.
- `chunk`: Read the next batch and metadata, if available.
- `reader`: Convert the `FlightStreamReader` instance into a `RecordBatchReader`.
Flight clients implement Flight interfaces; however, client library classes, methods, and implementations may differ for each language and library.
### InfluxDB status and error codes
In gRPC, every call returns a status object that contains an integer code and a string message.
During a request, the gRPC client and server may each return a status--for example:
- The server fails to process the query; responds with status `internal error` and gRPC status `13`.
- The request is missing an API token; the server responds with status `unauthenticated` and gRPC status `16`.
- The server responds with a stream, but the client loses the connection due to a network failure and returns status `unavailable` (gRPC status `14`).
gRPC defines the integer [status codes](https://grpc.github.io/grpc/core/status_8h.html) that servers and clients use, and
Arrow Flight defines a `FlightStatusDetail` class and the [error codes](https://arrow.apache.org/docs/format/Flight.html#error-handling) that a Flight RPC service may implement.
While Flight defines the status codes available for servers, a server can choose which status to return for an RPC call.
In error responses, the status `details` field contains an error code that clients can use to determine how to handle the error (for example, whether to display it to users or retry the request).
{{< expand-wrapper >}}
{{% expand "View InfluxDB, Flight, and gRPC status codes" %}}
The following table describes InfluxDB status codes and, if they can appear in gRPC requests, their corresponding gRPC and Flight codes:
| InfluxDB status code | Used for gRPC | gRPC code | Flight code | Description |
|:---------------------|:--------------|:----------|:-----------------|:------------------------------------------------------------------|
| OK | ✓ | 0 | OK | |
| Conflict | ✓ | | | |
| Internal | ✓ | 13 | INTERNAL | An error internal to the service implementation occurred. |
| Invalid | ✓ | 3 | INVALID_ARGUMENT | The client passed an invalid argument to the RPC (for example, bad SQL syntax or a null value as the database (bucket) name). |
| NotFound | ✓ | 5 | NOT_FOUND | The requested resource (action, data stream) wasn't found. |
| NotImplemented | ✓ | 12 | UNIMPLEMENTED | The RPC is not implemented. |
| RequestCanceled | ✓ | 1 | CANCELLED | The operation was cancelled (either by the client or the server). |
| TooLarge | ✓ | | | |
| Unauthenticated | ✓ | 16 | UNAUTHENTICATED | The client isn't authenticated (credentials are missing or invalid). |
| Unauthorized | ✓ | 7, 16 | UNAUTHORIZED | The client doesn't have permissions for the requested operation (credentials aren't sufficient for the request). |
| Unavailable          | ✓             | 14        | UNAVAILABLE      | The server isn't available. May be emitted by the client for connectivity reasons. |
| Unknown | ✓ | 2 | UNKNOWN | An unknown error. The default if no other error applies. |
| UnprocessableEntity | | | | |
| EmptyValue | | | | |
| Forbidden | | | | |
| TooManyRequests | | | | |
| MethodNotAllowed | | | | |
| UpstreamServer | | | | |
<!-- Reference: influxdb_iox/service_grpc_influxrpc/src/service#InfluxCode -->
_For a list of gRPC codes that servers and clients may return, see [Status codes and their use in gRPC](https://grpc.github.io/grpc/core/md_doc_statuscodes.html) in the GRPC Core documentation._
{{% /expand %}}
{{< /expand-wrapper >}}
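As a sketch of how a client might map these codes to a retry decision (the retryable set below is an illustrative heuristic, not an InfluxDB-defined policy):

```py
# gRPC status codes used by InfluxDB, from the table above.
GRPC_CODES = {
    0: "OK",
    1: "CANCELLED",
    2: "UNKNOWN",
    3: "INVALID_ARGUMENT",
    5: "NOT_FOUND",
    7: "PERMISSION_DENIED",
    12: "UNIMPLEMENTED",
    13: "INTERNAL",
    14: "UNAVAILABLE",
    16: "UNAUTHENTICATED",
}

# Treating only transient conditions as retryable is a common client-side
# heuristic; errors like INVALID_ARGUMENT should be surfaced to users instead.
RETRYABLE = {"UNAVAILABLE", "INTERNAL", "UNKNOWN"}

def should_retry(grpc_code: int) -> bool:
    """Return True if a client might reasonably retry this status."""
    return GRPC_CODES.get(grpc_code, "UNKNOWN") in RETRYABLE

print(should_retry(3), should_retry(13))  # False True
```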
### Troubleshoot errors
#### Internal Error: Received RST_STREAM
**Example**:
```sh
Flight returned internal error, with message: Received RST_STREAM with error code 2. gRPC client debug context: UNKNOWN:Error received from peer ipv4:34.196.233.7:443 {grpc_message:"Received RST_STREAM with error code 2"}
```
**Potential reasons**:
- The connection to the server was reset unexpectedly.
- Network issues between the client and server.
- Server might have closed the connection due to an internal error.
- The client exceeded the server's maximum number of concurrent streams.
<!-- END -->
#### Internal Error: stream terminated by RST_STREAM with NO_ERROR
**Example**:
```sh
pyarrow._flight.FlightInternalError: Flight returned internal error, with message: stream terminated by RST_STREAM with error code: NO_ERROR. gRPC client debug context: UNKNOWN:Error received from peer ipv4:3.123.149.45:443 {created_time:"2023-07-26T14:12:44.992317+02:00", grpc_status:13, grpc_message:"stream terminated by RST_STREAM with error code: NO_ERROR"}. Client context: OK
```
**Potential reasons**:
- The server terminated the stream, but there wasn't any specific error associated with it.
- Possible network disruption, even if it's temporary.
- The server might have reached its maximum capacity or other internal limits.
<!-- END -->
#### Invalid Argument Error: bucket <BUCKET_ID> not found
**Example**:
```sh
ArrowInvalid: Flight returned invalid argument error, with message: bucket "otel5" not found. gRPC client debug context: UNKNOWN:Error received from peer ipv4:3.123.149.45:443 {grpc_message:"bucket \"otel5\" not found", grpc_status:3, created_time:"2023-08-09T16:37:30.093946+01:00"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reason**:
- The specified bucket doesn't exist.
<!-- END -->
#### Invalid Argument: Invalid ticket
**Example**:
```sh
pyarrow.lib.ArrowInvalid: Flight returned invalid argument error, with message: Invalid ticket. Error: Invalid ticket. gRPC client debug context: UNKNOWN:Error received from peer ipv4:54.158.68.83:443 {created_time:"2023-08-31T17:56:42.909129-05:00", grpc_status:3, grpc_message:"Invalid ticket. Error: Invalid ticket"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reasons**:
- The request is missing the bucket name or some other required metadata value.
- The request contains bad query syntax.
<!-- END -->
#### Unauthenticated: Unauthenticated
**Example**:
```sh
Flight returned unauthenticated error, with message: unauthenticated. gRPC client debug context: UNKNOWN:Error received from peer ipv4:34.196.233.7:443 {grpc_message:"unauthenticated", grpc_status:16, created_time:"2023-08-28T15:38:33.380633-05:00"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reasons**:
- Token is missing from the request.
- The specified token doesn't exist for the specified organization.
<!-- END -->
#### Unauthenticated: read:<BUCKET_ID> is unauthorized
**Example**:
```sh
Flight returned unauthenticated error, with message: read:orgs/28d1f2f565460a6c/buckets/756fa4f8c8ba6913 is unauthorized. gRPC client debug context: UNKNOWN:Error received from peer ipv4:54.174.236.48:443 {grpc_message:"read:orgs/28d1f2f565460a6c/buckets/756fa4f8c8ba6913 is unauthorized", grpc_status:16, created_time:"2023-08-28T15:42:04.462655-05:00"}. Client context: IOError: Server never sent a data message. Detail: Internal
```
**Potential reason**:
- The specified token doesn't have read permission for the specified bucket.
<!-- END -->
#### FlightUnavailableError: Could not get default pem root certs
**Example**:
If unable to locate a root certificate for _gRPC+TLS_, the Flight client returns errors similar to the following:
```sh
UNKNOWN:Failed to load file... filename:"/usr/share/grpc/roots.pem",
children:[UNKNOWN:No such file or directory
...
Could not get default pem root certs...
pyarrow._flight.FlightUnavailableError: Flight returned unavailable error,
with message: empty address list: . gRPC client debug context:
UNKNOWN:empty address list
...
```
**Potential reason**:
- Non-POSIX-compliant systems (such as Windows) need to specify the root certificates in SslCredentialsOptions for the gRPC client, since the defaults are only configured for POSIX filesystems.
[Specify the root certificate path](#specify-the-root-certificate-path) for the Flight gRPC client.
For more information about gRPC SSL/TLS client-server authentication, see [Using client-side SSL/TLS](https://grpc.io/docs/guides/auth/#using-client-side-ssltls) in the [gRPC.io Authentication guide](https://grpc.io/docs/guides/auth/).
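A minimal sketch of loading root certificate bytes to pass via `flight_client_options(tls_root_certs=...)` (the PEM content below is a placeholder written to a temporary file, standing in for your system's CA bundle):

```py
import tempfile

# Placeholder PEM content--in practice, use your actual CA bundle file.
pem = (b"-----BEGIN CERTIFICATE-----\n"
       b"placeholder\n"
       b"-----END CERTIFICATE-----\n")

with tempfile.NamedTemporaryFile(suffix=".pem", delete=False) as f:
    f.write(pem)
    cert_path = f.name

# Read the root certificate bytes to pass to the Flight client.
with open(cert_path, "rb") as fh:
    cert = fh.read()

print(cert.startswith(b"-----BEGIN CERTIFICATE-----"))  # True
```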
View File
@ -3,7 +3,7 @@ title: Use visualization tools to query data
list_title: Use visualization tools
description: >
Use visualization tools and SQL or InfluxQL to query data stored in InfluxDB.
weight: 401
weight: 301
menu:
influxdb_cloud_serverless:
parent: Execute queries
View File
@ -9,7 +9,7 @@ menu:
parent: v3 client libraries
identifier: influxdb3-python
weight: 201
influxdb/cloud-serverless/tags: [python, gRPC, SQL, Flight SQL, client libraries]
influxdb/cloud-serverless/tags: [python, gRPC, SQL, client libraries]
aliases:
- /influxdb/cloud-serverless/reference/client-libraries/v3/pyinflux3/
list_code_example: >
@ -42,11 +42,24 @@ InfluxDB client libraries provide configurable batch writing of data to {{% clou
Client libraries can be used to construct line protocol data, transform data from other formats
to line protocol, and batch write line protocol data to InfluxDB HTTP APIs.
InfluxDB v3 client libraries can query {{% cloud-name %}} using SQL.
InfluxDB v3 client libraries can query {{% cloud-name %}} using SQL or InfluxQL.
The `influxdb3-python` Python client library wraps the Apache Arrow `pyarrow.flight` client
in a convenient InfluxDB v3 interface for executing SQL queries, requesting
server metadata, and retrieving data from {{% cloud-name %}} using the Flight protocol with gRPC.
<!-- TOC -->
- [Installation](#installation)
- [Importing the module](#importing-the-module)
- [API reference](#api-reference)
- [Classes](#classes)
- [Class InfluxDBClient3](#class-influxdbclient3)
- [Class Point](#class-point)
- [Class WriteOptions](#class-writeoptions)
- [Functions](#functions)
- [Constants](#constants)
- [Exceptions](#exceptions)
## Installation
Install the client library and dependencies using `pip`:
@ -72,12 +85,11 @@ Import specific class methods from the module:
from influxdb_client_3 import InfluxDBClient3, Point, WriteOptions
```
- [`influxdb_client_3.InfluxDBClient3` class](#class-influxdbclient3): an interface for [initializing
a client](#initialization) and interacting with InfluxDB
- `influxdb_client_3.Point` class: an interface for constructing a time series data
- [`influxdb_client_3.InfluxDBClient3`](#class-influxdbclient3): a class for interacting with InfluxDB
- `influxdb_client_3.Point`: a class for constructing a time series data
point
- `influxdb_client_3.WriteOptions` class: an interface for configuring
write options `influxdb_client_3.InfluxDBClient3` for the client.
- `influxdb_client_3.WriteOptions`: a class for configuring client
write options.
## API reference
@ -108,7 +120,7 @@ Provides an interface for interacting with InfluxDB APIs for writing and queryin
```py
__init__(self, host=None, org=None, database=None, token=None,
_write_client_options=None, _flight_client_options=None, **kwargs)
write_client_options=None, flight_client_options=None, **kwargs)
```
Initializes and returns an `InfluxDBClient3` instance with the following:
@ -116,46 +128,51 @@ Initializes and returns an `InfluxDBClient3` instance with the following:
- A singleton _write client_ configured for writing to the database.
- A singleton _Flight client_ configured for querying the database.
### Attributes
### Parameters
- **org** (str): The organization name (for {{% cloud-name %}}, set this to an empty string (`""`)).
- **database** (str): The database to use for writing and querying.
- **_write_client_options** (dict): Options to use when writing to InfluxDB.
- **write_client_options** (dict): Options to use when writing to InfluxDB.
If `None`, writes are [synchronous](#synchronous-writing).
- **_flight_client_options** (dict): Options to use when querying InfluxDB.
- **flight_client_options** (dict): Options to use when querying InfluxDB.
#### Batch writing
In batching mode, the client adds the record or records to a batch, and then schedules the batch for writing to InfluxDB.
The client writes the batch to InfluxDB after reaching `_write_client_options.batch_size` or `_write_client_options.flush_interval`.
If a write fails, the client reschedules the write according to the `_write_client_options` retry options.
The client writes the batch to InfluxDB after reaching `write_client_options.batch_size` or `write_client_options.flush_interval`.
If a write fails, the client reschedules the write according to the `write_client_options` retry options.
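A toy stand-in for the batch-and-flush behavior described above (not the client's actual implementation, which also flushes on `flush_interval` and retries failed writes):

```py
class BatchWriter:
    """A toy batcher: collects records and flushes when batch_size is reached.
    Illustrative only--batch_size mirrors write_client_options.batch_size."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pending = []
        self.flushed = []  # each entry is one batch sent "to the server"

    def write(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.flushed.append(list(self.pending))
            self.pending.clear()

writer = BatchWriter(batch_size=3)
for i in range(7):
    writer.write(f"home,room=Kitchen temp={20 + i}")
writer.flush()  # flush the remainder, as the client does on close

print([len(b) for b in writer.flushed])  # [3, 3, 1]
```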
To use batching mode, pass an instance of `WriteOptions` for the `InfluxDBClient3._write_client_options` argument--for example:
To use batching mode, pass `WriteOptions` as key-value pairs to the client `write_client_options` parameter--for example:
1. Instantiate `WriteOptions()` with defaults or with
`WriteOptions.write_type=WriteType.batching`.
```py
# Batching with all batch size, flush, and retry defaults
# Create a WriteOptions instance for batch writes with batch size, flush, and retry defaults.
write_options = WriteOptions()
```
2. Call the a `write_client_options` function to create an options object that uses `write_options` from the preceding step.
2. Pass `write_options` from the preceding step to the `write_client_options` function.
```py
wco = write_client_options(WriteOptions=write_options)
```
3. Initialize the client, setting the `_write_client_options` argument to the `wco` object from the preceding step.
The output is a dict with `WriteOptions` key-value pairs.
3. Initialize the client, setting the `write_client_options` argument to `wco` from the preceding step.
{{< tabs-wrapper >}}
{{% code-placeholders "BUCKET_(NAME|TOKEN)|API_TOKEN" %}}
```py
with InfluxDBClient3(token="API_TOKEN", host="cloud2.influxdata.com",
org="", database="BUCKET_NAME",
_write_client_options=wco) as client:
from influxdb_client_3 import InfluxDBClient3
client.write(record=points)
with InfluxDBClient3(token="API_TOKEN",
host="{{< influxdb/host >}}",
org="", database="BUCKET_NAME",
write_client_options=wco) as client:
client.write(record=points)
```
{{% /code-placeholders %}}
{{< /tabs-wrapper >}}
@ -165,19 +182,21 @@ with InfluxDBClient3(token="API_TOKEN", host="cloud2.influxdata.com",
In synchronous mode, the client sends write requests immediately (not batched)
and doesn't retry failed writes.
To use synchronous mode, set `_write_client_options=None` or `_write_client_options.write_type=WriteType.synchronous`.
To use synchronous mode, set `write_client_options=None` or `write_client_options.write_type=WriteType.synchronous`.
### Examples
##### Initialize a client
#### Initialize a client
The following example initializes a client for writing and querying the bucket.
Given `_write_client_options=None`, the client will use synchronous mode when writing data.
Given `write_client_options=None`, the client will use synchronous mode when writing data.
{{% code-placeholders "BUCKET_(NAME|TOKEN)|API_TOKEN" %}}
```py
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(token="API_TOKEN",
host="cloud2.influxdata.com",
host="{{< influxdb/host >}}",
org="",
database="BUCKET_NAME")
```
@ -185,21 +204,23 @@ client = InfluxDBClient3(token="API_TOKEN",
Replace the following:
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}:
Your InfluxDB token with READ permissions on the databases you want to query.
- {{% code-placeholder-key %}}`BUCKET_NAME`{{% /code-placeholder-key %}}:
The name of your InfluxDB database.
- {{% code-placeholder-key %}}`BUCKET_NAME`{{% /code-placeholder-key %}}: the name of your {{% cloud-name %}} [bucket](/influxdb/cloud-serverless/admin/buckets/)
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: an {{% cloud-name %}} [API token](/influxdb/cloud-serverless/admin/tokens/) with read permissions on the specified bucket
##### Initialize a client for batch writing
#### Initialize a client for batch writing
The following example shows how to initialize a client for writing and querying the database.
When writing data, the client will use batch mode with default options and will
invoke the callback function for the response.
When writing data, the client uses batch mode with default options and
invokes the callback function defined for the response status (`success`, `error`, or `retry`).
{{% code-placeholders "BUCKET_NAME|API_TOKEN" %}}
```py
from influxdb_client_3 import (Point,
                               InfluxDBClient3,
                               write_client_options,
                               WriteOptions,
                               InfluxDBError)
points = [Point("home").tag("room", "Kitchen").field("temp", 25.3),
Point("home").tag("room", "Living Room").field("temp", 18.4)]
retry_callback=retry,
WriteOptions=write_options)
with InfluxDBClient3(token="API_TOKEN", host="{{< influxdb/host >}}",
org="ignored", database="BUCKET_NAME",
write_client_options=wco) as client:
client.write(record=points)
```
Replace the following:
- {{% code-placeholder-key %}}`BUCKET_NAME`{{% /code-placeholder-key %}}: the name of your {{% cloud-name %}} [bucket](/influxdb/cloud-serverless/admin/buckets/)
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: an {{% cloud-name %}} [API token](/influxdb/cloud-serverless/admin/tokens/) with read permissions on the specified bucket
### Instance methods
<!-- TOC -->
- [InfluxDBClient3.write](#influxdbclient3write)
- [InfluxDBClient3.write_file](#influxdbclient3write_file)
- [InfluxDBClient3.query](#influxdbclient3query)
- [InfluxDBClient3.close](#influxdbclient3close)
### InfluxDBClient3.write
Writes a record or a list of records to InfluxDB.
A record can be a `Point` object, a dict that represents a point, a line protocol string, or a `DataFrame`.
The client can write using [_batching_ mode](#batch-writing) or [_synchronous_ mode](#synchronous-writing).
#### Syntax
```py
write(self, record=None, **kwargs)
```
#### Parameters
- **`record`**: A record or list of records to write. A record can be a `Point` object, a dict that represents a point, a line protocol string, or a `DataFrame`.
- **`write_precision=`**: `"ms"`, `"s"`, `"us"`, `"ns"`. Default is `"ns"`.
#### Examples
##### Write a line protocol string
{{% influxdb/custom-timestamps %}}
{{% code-placeholders "BUCKET_NAME|API_TOKEN" %}}
```py
from influxdb_client_3 import InfluxDBClient3
points = "home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000"
client = InfluxDBClient3(token="API_TOKEN", host="{{< influxdb/host >}}",
database="BUCKET_NAME")
client.write(record=points, write_precision="s")
```
The following example shows how to create a `Point` object, and then write the
data to InfluxDB.
```py
from influxdb_client_3 import Point, InfluxDBClient3
point = Point("home").tag("room", "Kitchen").field("temp", 72)
...
client.write(point)
```
`InfluxDBClient3` can serialize a dictionary object into line protocol.
If you pass a `dict` to `InfluxDBClient3.write`, the client expects the `dict` to have the
following _point_ attributes:
- **measurement** (str): the measurement name
- **tags** (dict): a dictionary of tag key-value pairs
- **fields** (dict): a dictionary of field key-value pairs
- **time**: the timestamp for the record
{{% influxdb/custom-timestamps %}}
{{% code-placeholders "BUCKET_NAME|API_TOKEN" %}}
```py
from influxdb_client_3 import InfluxDBClient3
# Using point dictionary structure
points = {
"measurement": "home",
"tags": {"room": "Kitchen", "sensor": "K001"},
"fields": {"temp": 72.2, "hum": 36.9, "co": 4},
"time": 1641067200
}
client = InfluxDBClient3(token="API_TOKEN",
host="{{< influxdb/host >}}",
database="BUCKET_NAME",
org="")
client.write(record=points, write_precision="s")
```
{{% /code-placeholders %}}
{{% /influxdb/custom-timestamps %}}
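To illustrate what the client does with this structure, here is a rough, pure-Python sketch of serializing the point dict above into line protocol. `point_dict_to_line_protocol` is hypothetical, omits escaping, and is not the client's actual serializer.

```python
def point_dict_to_line_protocol(point: dict) -> str:
    """Rough sketch only: serialize the point-dict structure to line protocol.
    Escaping of special characters is omitted for brevity."""
    tags = ",".join(f"{k}={v}" for k, v in point.get("tags", {}).items())
    fields = []
    for key, value in point["fields"].items():
        if isinstance(value, bool):
            fields.append(f"{key}={str(value).lower()}")  # booleans: true/false
        elif isinstance(value, int):
            fields.append(f"{key}={value}i")              # integers get an i suffix
        elif isinstance(value, str):
            fields.append(f'{key}="{value}"')             # strings are quoted
        else:
            fields.append(f"{key}={value}")               # floats as-is
    head = point["measurement"] + ("," + tags if tags else "")
    return f'{head} {",".join(fields)} {point["time"]}'

point = {
    "measurement": "home",
    "tags": {"room": "Kitchen", "sensor": "K001"},
    "fields": {"temp": 72.2, "hum": 36.9, "co": 4},
    "time": 1641067200,
}
print(point_dict_to_line_protocol(point))
# home,room=Kitchen,sensor=K001 temp=72.2,hum=36.9,co=4i 1641067200
```

Note how the integer field `co` is written as `4i`, matching line protocol's integer syntax.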
### InfluxDBClient3.write_file

Writes data from a file to InfluxDB.
The client can write using [_batching_ mode](#batch-writing) or [_synchronous_ mode](#synchronous-writing).

#### Syntax

```py
write_file(self, file, measurement_name=None, tag_columns=[],
timestamp_column='time', **kwargs)
```
#### Parameters
- **`file`** (str): A path to a file containing records to write to InfluxDB.
The filename must end with one of the following supported extensions.
For more information about encoding and formatting data, see the documentation for each supported format:
- `.feather`: [Feather](https://arrow.apache.org/docs/python/feather.html)
- `.parquet`: [Parquet](https://arrow.apache.org/docs/python/parquet.html)
- `.csv`: [Comma-separated values](https://arrow.apache.org/docs/python/csv.html)
- `.json`: [JSON](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html)
- `.orc`: [ORC](https://arrow.apache.org/docs/python/orc.html)
- **`measurement_name`**: Defines the measurement name for records in the file.
The specified value takes precedence over `measurement` and `iox::measurement` columns in the file.
If no value is specified for the parameter, and a `measurement` column exists in the file, the `measurement` column value is used for the measurement name.
wco = write_client_options(success_callback=callback.success,
WriteOptions=write_options
)
with InfluxDBClient3(token="API_TOKEN", host="{{< influxdb/host >}}",
org="", database="BUCKET_NAME",
write_client_options=wco) as client:
client.write_file(file='./out.csv', timestamp_column='time',
tag_columns=["provider", "machineID"])
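The file name extension drives the format selection described above. As a rough illustration of that dispatch (`infer_format` and `SUPPORTED_FORMATS` are hypothetical, not the client's internal code):

```python
import os

# Hypothetical mapping of supported extensions to formats,
# mirroring the list in the write_file parameter description.
SUPPORTED_FORMATS = {
    ".feather": "Feather",
    ".parquet": "Parquet",
    ".csv": "CSV",
    ".json": "JSON",
    ".orc": "ORC",
}

def infer_format(filename: str) -> str:
    """Return the format implied by the file name extension,
    or raise ValueError for an unsupported extension."""
    ext = os.path.splitext(filename)[1].lower()
    try:
        return SUPPORTED_FORMATS[ext]
    except KeyError:
        raise ValueError(f"unsupported file extension: {ext!r}")

print(infer_format("./out.csv"))  # CSV
```

Because dispatch is by extension, a file with the wrong extension (for example, Parquet data in a `.csv` file) fails to parse even though the data itself is valid.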
### InfluxDBClient3.query

Returns all data in the query result as an Arrow table.
#### Syntax
```py
query(self, query, language="sql", mode="all", **kwargs)
```
#### Parameters
- **`query`** (str): the SQL or InfluxQL to execute.
- **`language`** (str): the query language used in the `query` parameter--`"sql"` or `"influxql"`. Default is `"sql"`.
- **`mode`** (str): specifies what the [`pyarrow.flight.FlightStreamReader`](https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightStreamReader.html#pyarrow.flight.FlightStreamReader) will return.
Default is `"all"`.
- `all`: Read the entire contents of the stream and return it as a `pyarrow.Table`.
- `chunk`: Read the next message (a `FlightStreamChunk`) and return `data` and `app_metadata`.
Returns `None` if there are no more messages.
- `pandas`: Read the contents of the stream and return it as a `pandas.DataFrame`.
- `reader`: Convert the `FlightStreamReader` into a [`pyarrow.RecordBatchReader`](https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatchReader.html#pyarrow-recordbatchreader).
- `schema`: Return the schema for all record batches in the stream.
#### Examples
##### Query using SQL
```py
table = client.query("SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'")
# Filter columns.
print(table.select(['room', 'temp']))
# Use PyArrow to aggregate data.
print(table.group_by('hum').aggregate([]))
```
##### Query using InfluxQL
```py
query = "SELECT * FROM measurement WHERE time >= -90d"
table = client.query(query=query, language="influxql")
# Filter columns.
print(table.select(['room', 'temp']))
```
##### Read all data from the stream and return a pandas DataFrame
```py
query = "SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'"
df = client.query(query=query, mode="pandas")
# Print the pandas DataFrame formatted as a Markdown table.
print(df.to_markdown())
```
##### View the schema for all batches in the stream
```py
table = client.query('''
  SELECT *
  FROM measurement
  WHERE time >= now() - INTERVAL '90 days'
''')
# Get the schema attribute value.
print(table.schema)
```
##### Retrieve the result schema and no data
```py
query = "SELECT * FROM measurement WHERE time >= now() - INTERVAL '90 days'"
schema = client.query(query=query, mode="schema")
print(schema)
```
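To clarify the `chunk` mode contract described earlier (one message per read, `None` when the stream is exhausted), the loop below drains a chunk source through a `next_chunk` callable. `FakeChunk` and `read_chunks` are hypothetical stand-ins used so the control flow is testable without a server; with a real client, the chunks come from the Flight stream.

```python
class FakeChunk:
    """Hypothetical stand-in for a FlightStreamChunk: carries
    `data` and `app_metadata` attributes."""
    def __init__(self, data):
        self.data = data
        self.app_metadata = None

def read_chunks(next_chunk):
    """Drain a chunk-mode stream: call next_chunk() until it returns None."""
    results = []
    while True:
        chunk = next_chunk()
        if chunk is None:       # no more messages in the stream
            break
        results.append(chunk.data)
    return results

chunks = iter([FakeChunk([1, 2]), FakeChunk([3]), None])
print(read_chunks(lambda: next(chunks)))  # [[1, 2], [3]]
```

Checking for `None` (rather than relying on an exception) is what terminates the loop cleanly once the server has sent its last message.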
### InfluxDBClient3.close
## Class Point

```py
influxdb_client_3.Point
```
Provides an interface for constructing a time series data
point for a measurement, and setting fields, tags, and timestamp.
The following example shows how to create a `Point`, and then write the
data to InfluxDB.
```py
from influxdb_client_3 import Point, InfluxDBClient3

point = Point("home").tag("room", "Kitchen").field("temp", 72)
...
client.write(point)
```
## Class WriteOptions
## Functions
- [influxdb_client_3.write_client_options](#function-write_client_optionskwargs)
- [influxdb_client_3.flight_client_options](#function-flight_client_optionskwargs)
### Function write_client_options(**kwargs)
```py
influxdb_client_3.write_client_options(kwargs)
```
- Takes the following parameters:
- `kwargs`: keyword arguments for `WriteApi`
- Returns a dictionary of write client options.
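As the description above suggests, `write_client_options` packages its keyword arguments into a dictionary that the client later passes to the write API. A hypothetical sketch of that behavior (`write_client_options_sketch` is illustrative, not the library function):

```python
def write_client_options_sketch(**kwargs):
    """Illustrative only: collect keyword arguments into a plain dict
    of write client options, as write_client_options is documented to do."""
    return dict(kwargs)

opts = write_client_options_sketch(success_callback=print,
                                   error_callback=print)
print(sorted(opts))  # ['error_callback', 'success_callback']
```

Because the result is just a dictionary, you can build it once and reuse it across multiple `InfluxDBClient3` instances.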
### Function flight_client_options(**kwargs)
```py
influxdb_client_3.flight_client_options(kwargs)
```
- Takes the following parameters:
- `kwargs`: keyword arguments for `FlightClient`
- Returns a dictionary of Flight client options.
#### Examples
##### Specify the root certificate path
```py
from influxdb_client_3 import InfluxDBClient3, flight_client_options
import certifi
with open(certifi.where(), "r") as fh:
    cert = fh.read()
client = InfluxDBClient3(
token="DATABASE_TOKEN",
host="{{< influxdb/host >}}",
database="DATABASE_NAME",
flight_client_options=flight_client_options(
tls_root_certs=cert))
```
## Constants
- `influxdb_client_3.ASYNCHRONOUS`: Represents asynchronous write mode
- `influxdb_client_3.SYNCHRONOUS`: Represents synchronous write mode
- `influxdb_client_3.WritePrecision`: Enum class that represents write precision
## Exceptions
- `influxdb_client_3.InfluxDBError`: Exception raised for InfluxDB-related errors