diff --git a/content/influxdb/cloud-dedicated/query-data/_index.md b/content/influxdb/cloud-dedicated/query-data/_index.md index 7f0767595..15cc05efb 100644 --- a/content/influxdb/cloud-dedicated/query-data/_index.md +++ b/content/influxdb/cloud-dedicated/query-data/_index.md @@ -2,18 +2,14 @@ title: Query data in InfluxDB Cloud seotitle: Query data stored in InfluxDB Cloud description: > - Learn to query data stored in InfluxDB using SQL, InfluxQL, and Flux using tools - like the InfluxDB user interface and the 'influx' command line interface. + Learn to query data stored in InfluxDB using SQL and InfluxQL. menu: influxdb_cloud_dedicated: name: Query data weight: 4 -influxdb/cloud-dedicated/tags: [query, flux] +influxdb/cloud-dedicated/tags: [query] --- Learn to query data stored in InfluxDB. - - {{< children >}} diff --git a/content/influxdb/cloud-dedicated/reference/glossary.md b/content/influxdb/cloud-dedicated/reference/glossary.md index 3e931af86..d89ebd470 100644 --- a/content/influxdb/cloud-dedicated/reference/glossary.md +++ b/content/influxdb/cloud-dedicated/reference/glossary.md @@ -373,16 +373,6 @@ Related entries: ## G -### group key - -In [Flux](/{{< latest "flux" >}}/), the group key determines the schema and -contents of tables in Flux output. -A group key is a list of columns for which every row in the table has the same value. -Columns with unique values in each row are not part of the group key. 
- -Related entries: -[primary key](#primary-key) - ### gzip gzip is a type of data compression that compress chunks of data, which is diff --git a/content/influxdb/cloud-dedicated/write-data/best-practices/schema-design.md b/content/influxdb/cloud-dedicated/write-data/best-practices/schema-design.md index e70071ee6..669714f55 100644 --- a/content/influxdb/cloud-dedicated/write-data/best-practices/schema-design.md +++ b/content/influxdb/cloud-dedicated/write-data/best-practices/schema-design.md @@ -266,7 +266,6 @@ matching or regular expressions to evaluate the `sensor` tag: {{% code-tabs %}} [SQL](#) [InfluxQL](#) -[Flux](#) {{% /code-tabs %}} {{% code-tab-content %}} @@ -281,18 +280,6 @@ SELECT * FROM home WHERE sensor LIKE '%id-1726ZA%' SELECT * FROM home WHERE sensor =~ /id-1726ZA/ ``` -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental/iox" - -iox.from(bucket: "example-bucket") - |> range(start: -1y) - |> filter(fn: (r) => r._measurement == "home") - |> filter(fn: (r) => r.sensor =~ /id-1726ZA/) -``` - {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} @@ -329,7 +316,6 @@ simple equality expression: {{< code-tabs-wrapper >}} {{% code-tabs %}} [SQL & InfluxQL](#) -[Flux](#) {{% /code-tabs %}} {{% code-tab-content %}} @@ -337,18 +323,6 @@ simple equality expression: SELECT * FROM home WHERE sensor_id = '1726ZA' ``` -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental/iox" - -iox.from(bucket: "example-bucket") - |> range(start: -1y) - |> filter(fn: (r) => r._measurement == "home") - |> filter(fn: (r) => r.sensor_id == "1726ZA") -``` - {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} diff --git a/content/influxdb/cloud-dedicated/write-data/migrate-data/_index.md b/content/influxdb/cloud-dedicated/write-data/migrate-data/_index.md index 94db6c7f4..0203c4c6a 100644 --- a/content/influxdb/cloud-dedicated/write-data/migrate-data/_index.md +++ 
b/content/influxdb/cloud-dedicated/write-data/migrate-data/_index.md
@@ -65,7 +65,7 @@ in more regions around the world.
 
 #### Are you reliant on Flux queries and Flux tasks?
 
-**You should not migrate**. InfluxDB Cloud Dedicated does not support Flux.
+**You should not migrate**. {{% cloud-name %}} doesn't support Flux.
 
 ---
 
diff --git a/content/influxdb/cloud-serverless/admin/buckets/create-bucket.md b/content/influxdb/cloud-serverless/admin/buckets/create-bucket.md
index 2e9df0e50..8bd98518f 100644
--- a/content/influxdb/cloud-serverless/admin/buckets/create-bucket.md
+++ b/content/influxdb/cloud-serverless/admin/buckets/create-bucket.md
@@ -57,19 +57,6 @@ There are two places you can create a bucket in the UI.
    - **Older than** to choose a specific retention period.
 5. Click **Create** to create the bucket.
 
-### Create a bucket in the Data Explorer
-
-1. In the navigation menu on the left, select **Explore** (**Data Explorer**).
-
-{{< nav-icon "data-explorer" >}}
-
-2. In the **From** panel in the Flux Builder, select `+ Create Bucket`.
-3. Enter a **Name** for the bucket.
-4. Select when to **Delete Data**:
-   - **Never** to retain data forever.
-   - **Older than** to choose a specific retention period.
-5. Click **Create** to create the bucket.
-
 {{% /tab-content %}}
 
diff --git a/content/influxdb/cloud-serverless/get-started/query.md b/content/influxdb/cloud-serverless/get-started/query.md
index 1696bbe07..2661720e1 100644
--- a/content/influxdb/cloud-serverless/get-started/query.md
+++ b/content/influxdb/cloud-serverless/get-started/query.md
@@ -773,138 +773,6 @@ RECORD BATCH
 {{% /influxdb/custom-timestamps %}}
 {{% /tab-content %}}
 
-{{% tab-content %}}
-
-
-The [`influx query` command](/influxdb/cloud-serverless/reference/cli/influx/query/)
-uses the InfluxDB `/api/v2/query` endpoint to query InfluxDB.
-This endpoint only accepts Flux queries.
To use SQL with the `influx` CLI, wrap
-your SQL query in Flux and use [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/)
-to query the InfluxDB IOx storage engine with SQL.
-Provide the following:
-
-- **Bucket name** with the `bucket` parameter
-- **SQL query** with the `query` parameter
-
-{{< expand-wrapper >}}
-{{% expand "View `iox.sql()` Flux example" %}}
-```js
-import "experimental/iox"
-
-iox.sql(
-    bucket: "example-bucket",
-    query: "SELECT * FROM measurement"
-)
-```
-{{% /expand %}}
-{{< /expand-wrapper >}}
-
-1. If you haven't already, [download, install, and configure the `influx` CLI](/influxdb/cloud-serverless/tools/influx-cli/).
-2. Use the [`influx query` command](/influxdb/cloud-serverless/reference/cli/influx/query/)
-   to query InfluxDB using Flux.
-
-   **Provide the following**:
-
-   - String-encoded Flux query that uses `iox.sql()` to query the InfluxDB IOx
-     storage engine with SQL.
-   - [Connection and authentication credentials](/influxdb/cloud-serverless/get-started/setup/?t=influx+CLI#configure-authentication-credentials)
-
-{{% influxdb/custom-timestamps %}}
-```sh
-influx query "
-import \"experimental/iox\"
-
-iox.sql(
-    bucket: \"get-started\",
-    query: \"
-        SELECT
-            *
-        FROM
-            home
-        WHERE
-            time >= '2022-01-01T08:00:00Z'
-            AND time <= '2022-01-01T20:00:00Z'
-    \",
-)"
-```
-{{% /influxdb/custom-timestamps %}}
-
-
-{{% /tab-content %}}
-{{% tab-content %}}
-
-
-To query data from InfluxDB using SQL and the InfluxDB HTTP API, send a request
-to the InfluxDB API [`/api/v2/query` endpoint](/influxdb/cloud-serverless/api/#operation/PostQuery)
-using the `POST` request method.
-
-{{< api-endpoint endpoint="http://localhost:8086/api/v2/query" method="post" api-ref="/influxdb/cloud-serverless/api/#operation/PostQuery" >}}
-
-The `/api/v2/query` endpoint only accepts Flux queries.
-To query data with SQL, wrap your SQL query in Flux and use [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/)
-to query the InfluxDB IOx storage engine with SQL.
-
-Provide the following:
-
-- **Bucket name** with the `bucket` parameter
-- **SQL query** with the `query` parameter
-
-{{< expand-wrapper >}}
-{{% expand "View `iox.sql()` Flux example" %}}
-```js
-import "experimental/iox"
-
-iox.sql(
-    bucket: "example-bucket",
-    query: "SELECT * FROM measurement"
-)
-```
-{{% /expand %}}
-{{< /expand-wrapper >}}
-
-Include the following with your request:
-
-- **Headers**:
-  - **Authorization**: Token
-  - **Content-Type**: application/vnd.flux
-  - **Accept**: application/csv
-  - _(Optional)_ **Accept-Encoding**: gzip
-- **Request body**: Flux query as plain text. In the Flux query, use `iox.sql()`
-  and provide your bucket name and your SQL query.
-
-The following example uses cURL and the InfluxDB API to query data with Flux:
-
-{{% influxdb/custom-timestamps %}}
-```sh
-curl --request POST \
-"$INFLUX_HOST/api/v2/query" \
-  --header "Authorization: Token $INFLUX_TOKEN" \
-  --header "Content-Type: application/vnd.flux" \
-  --header "Accept: application/csv" \
-  --data "
-    import \"experimental/iox\"
-
-    iox.sql(
-        bucket: \"get-started\",
-        query: \"
-            SELECT
-                *
-            FROM
-                home
-            WHERE
-                time >= '2022-01-01T08:00:00Z'
-                AND time <= '2022-01-01T20:00:00Z'
-        \",
-    )"
-```
-{{% /influxdb/custom-timestamps %}}
-
-{{% note %}}
-The InfluxDB `/api/v2/query` endpoint returns query results in
-[annotated CSV](/influxdb/cloud-serverless/reference/syntax/annotated-csv/).
-{{% /note %}} - - -{{% /tab-content %}} {{< /tabs-wrapper >}} ### Query results diff --git a/content/influxdb/cloud-serverless/query-data/_index.md b/content/influxdb/cloud-serverless/query-data/_index.md index a25e9e86d..7cfeb12da 100644 --- a/content/influxdb/cloud-serverless/query-data/_index.md +++ b/content/influxdb/cloud-serverless/query-data/_index.md @@ -2,20 +2,16 @@ title: Query data in InfluxDB Cloud seotitle: Query data stored in InfluxDB Cloud description: > - Learn to query data stored in InfluxDB using SQL, InfluxQL, and Flux using tools - like the InfluxDB user interface and the 'influx' command line interface. + Learn to query data stored in InfluxDB using SQL and InfluxQL. menu: influxdb_cloud_serverless: name: Query data weight: 4 -influxdb/cloud-serverless/tags: [query, flux] +influxdb/cloud-serverless/tags: [query] aliases: - /influxdb/cloud-serverless/query-data/execute-queries/influx-api/ --- Learn to query data stored in InfluxDB. - - {{< children >}} diff --git a/content/influxdb/cloud-serverless/query-data/sql/execute-queries/data-explorer.md b/content/influxdb/cloud-serverless/query-data/sql/execute-queries/data-explorer.md index a08910fa3..813e6d1ab 100644 --- a/content/influxdb/cloud-serverless/query-data/sql/execute-queries/data-explorer.md +++ b/content/influxdb/cloud-serverless/query-data/sql/execute-queries/data-explorer.md @@ -19,7 +19,7 @@ Build, execute, and visualize your queries in InfluxDB UI's **Data Explorer**. -Query using saved scripts, the SQL builder, the Flux builder, or by manually editing the query. +Query using saved scripts, the SQL builder, or by manually editing the query. Choose between **visualization types** for your query. ## Query data with SQL and the Data Explorer @@ -71,88 +71,3 @@ After you **Run** your query, Data Explorer displays the results. - Click {{< caps >}}Table{{< /caps >}} for a paginated tabular view of all rows and columns. 
- Click {{< caps >}}Graph{{< /caps >}} to select a *visualization type* and options. - Click {{< caps >}}CSV{{< /caps >}} to download query results in a comma-delimited file. - -## Query data with Flux and the Data Explorer - -Flux is a functional data scripting language designed for querying, -analyzing, and acting on time series data. -See [how to use Flux and SQL to query data](/influxdb/cloud-serverless/query-data/flux-sql/). - -1. In the navigation menu on the left, click **Data Explorer**. - - {{< nav-icon "data-explorer" >}} - -2. Activate the **Switch to old Data Explorer** toggle to display the Flux builder. By default, the Cloud IOx UI displays the **Schema Browser** and the **SQL** script editor for creating queries. - - ![Data Explorer with Flux](/img/influxdb/2-0-data-explorer.png) - -3. Use the bottom panel to create a Flux query: - - Select a bucket to define your data source or select `+ Create Bucket` to add a new bucket. - - Edit your time range with the [time range option](#select-time-range) in the dropdown menu. - - Add filters to narrow your data by selecting attributes or columns in the dropdown menu. - - Select **Group** from the **Filter** dropdown menu to group data into tables. For more about how grouping data in Flux works, see [group()](/flux/v0.x/stdlib/universe/group/). -3. Alternatively, click **Script Editor** to manually edit the query. - To switch back to the query builder, click **Query Builder**. Note that your updates from the Script Editor will not be saved. -4. Use the **Functions** list to review the available Flux functions. - Click a function from the list to add it to your query. -5. Click **Submit** (or press `Control+Enter`) to run your query. You can then preview your graph in the above pane. - To cancel your query while it's running, click **Cancel**. -6. To work on multiple queries at once, click the {{< icon "plus" >}} to add another tab. - - Click the eye icon on a tab to hide or show a query's visualization. 
-   - Click the name of the query in the tab to rename it.
-
-### Visualize your query
-
-- Select an available **visualization type** from the dropdown menu:
-
-    {{< img-hd src="/img/influxdb/2-0-visualizations-dropdown.png" title="Visualization dropdown" />}}
-
-## Control your dashboard cell
-
-To open the cell editor overlay, click the gear icon in the upper right of a cell and select **Configure**.
-
-### View raw data
-
-Toggle the **View Raw Data** {{< icon "toggle" >}} option to see your data in table format instead of a graph. Scroll through raw data using arrows, or click page numbers to find specific tables. [Group keys](/influxdb/cloud-serverless/reference/glossary/#group-key) and [data types](/influxdb/cloud-serverless/reference/glossary/#data-type) are easily identifiable at the top of each column underneath the headings. Use this option when data can't be visualized using a visualization type.
-
-    {{< img-hd src="/img/influxdb/cloud-controls-view-raw-data.png" alt="View raw data" />}}
-
-### Save as CSV
-
-Click the CSV icon to save the cell's contents as a CSV file.
-
-### Manually refresh dashboard
-
-Click the refresh button ({{< icon "refresh" >}}) to manually refresh the dashboard's data.
-
-### Select time range
-
-1. Select from the time range options in the dropdown menu.
-
-    {{< img-hd src="/img/influxdb/2-0-controls-time-range.png" alt="Select time range" />}}
-
-2. Select **Custom Time Range** to enter a custom time range with precision up to nanoseconds.
-   The default time range is 5m.
-
-> The custom time range uses the selected timezone (local time or UTC).
-
-### Query Builder or Script Editor
-
-Click **Query Builder** to use the builder to create a Flux query. Click **Script Editor** to manually edit the query.
- -#### Keyboard shortcuts - -In **Script Editor** mode, the following keyboard shortcuts are available: - -| Key | Description | -|--------------------------------|---------------------------------------------| -| `Control + /` (`⌘ + /` on Mac) | Comment/uncomment current or selected lines | -| `Control + Enter` | Submit query | - -## Save your query as a dashboard cell or task - -- Click **Save as** in the upper right, and then: - - To add your query to a dashboard, click **Dashboard Cell**. - - To save your query as a task, click **Task**. - - To save your query as a variable, click **Variable**. diff --git a/content/influxdb/cloud-serverless/query-data/sql/execute-queries/flux-sql.md b/content/influxdb/cloud-serverless/query-data/sql/execute-queries/flux-sql.md deleted file mode 100644 index 4c82d8d1c..000000000 --- a/content/influxdb/cloud-serverless/query-data/sql/execute-queries/flux-sql.md +++ /dev/null @@ -1,577 +0,0 @@ ---- -title: Use Flux and SQL to query data -description: > - Leverage both the performance of SQL and the flexibility of Flux to query and - process your time series data. -menu: - influxdb_cloud_serverless: - name: Use Flux & SQL - parent: Execute SQL queries -weight: 204 -aliases: - - /influxdb/cloud-serverless/query-data/flux-sql/ -related: - - /influxdb/cloud-serverless/get-started/query/ - - /influxdb/cloud-serverless/query-data/sql/ -influxdb/cloud-serverless/tags: [sql, flux, query] -list_code_example: | - ```js - import "experimental/iox" - - query = " - SELECT * - FROM home - WHERE - time >= '2022-01-01T08:00:00Z' - AND time < '2022-01-01T20:00:00Z' - " - - iox.sql(bucket: "get-started", query: query) - ``` ---- - -InfluxDB Cloud Serverless supports both [Flux](/flux/v0.x/) and -[SQL](/influxdb/cloud-serverless/reference/sql/) query languages. -Flux is a full-featured data scripting language that provides a wide range of -functionality and flexibility. SQL is a proven and performant relational query language. 
- -This guide walks through leveraging the performance of SQL and the flexibility of -Flux when querying your time series data. - -{{% note %}} -#### Sample data - -The query examples below use the -[Get started sample data](/influxdb/cloud-serverless/get-started/write/#write-line-protocol-to-influxdb). -{{% /note %}} - -- [Performance and flexibility](#performance-and-flexibility) -- [What to do in SQL versus Flux?](#what-to-do-in-sql-versus-flux?) -- [Use SQL and Flux together](#use-sql-and-flux-together) - - [Helper functions for SQL in Flux](#helper-functions-for-sql-in-flux) - - [SQL results structure](#sql-results-structure) -- [Process SQL results with Flux](#process-sql-results-with-flux) - - [Group by tags](#group-by-tags) - - [Rename the `time` column to `_time`](#rename-the-time-column-to-_time) - - [Unpivot your data](#unpivot-your-data) - - [Example SQL query with further Flux processing](#example-sql-query-with-further-flux-processing) - -## Performance and flexibility - -Flux was designed and optimized for the -[TSM data model](/influxdb/v2.6/reference/internals/storage-engine/#time-structured-merge-tree-tsm), -which is fundamentally different from IOx. -Because of this, Flux is less performant when querying an IOx-powered bucket. -However, as a full-featured scripting language, Flux gives you the flexibility -to perform a wide range of data processing operations such as statistical -analysis, alerting, HTTP API interactions, and other operations that aren't -supported in SQL. -By using Flux and SQL together, you can benefit from both the performance of SQL -and the flexibility of Flux. - -## What to do in SQL versus Flux? - -We recommend doing as much of your query as possible in SQL for the most -performant queries. -Do any further processing in Flux. - -For optimal performance, the following chain of Flux functions can and should be -performed in SQL: - -{{< flex >}} -{{% flex-content %}} -#### Flux -```js -from(...) - |> range(...) 
-    |> filter(...)
-    |> aggregateWindow(...)
-```
-{{% /flex-content %}}
-
-{{% flex-content %}}
-#### SQL
-```sql
-SELECT
-  DATE_BIN(...) AS _time,
-  avg(...) AS ...
-FROM measurement
-WHERE
-  time >= ...
-  AND time < ...
-GROUP BY _time
-ORDER BY _time
-```
-{{% /flex-content %}}
-{{< /flex >}}
-
-#### Example Flux versus SQL queries
-
-{{< expand-wrapper >}}
-{{% expand "View example basic queries" %}}
-
-{{% influxdb/custom-timestamps %}}
-##### Flux
-```js
-from(bucket: "get-started")
-    |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:00Z)
-    |> filter(fn: (r) => r._measurement == "home")
-```
-
-##### SQL
-```sql
-SELECT *
-FROM home
-WHERE time >= '2022-01-01T08:00:00Z' AND time < '2022-01-01T20:00:00Z'
-```
-{{% /influxdb/custom-timestamps %}}
-
-_For more information about performing basic queries with SQL, see
-[Perform a basic SQL query](/influxdb/cloud-serverless/query-data/sql/basic-query/)._
-
-{{% /expand %}}
-
-{{% expand "View example aggregate queries" %}}
-
-{{% influxdb/custom-timestamps %}}
-##### Flux
-```js
-from(bucket: "get-started")
-    |> range(start: 2022-01-01T08:00:00Z, stop: 2022-01-01T20:00:00Z)
-    |> filter(fn: (r) => r._measurement == "home")
-    |> filter(fn: (r) => r._field == "temp" or r._field == "hum")
-    |> aggregateWindow(every: 2h, fn: mean)
-```
-
-##### SQL
-```sql
-SELECT
-  DATE_BIN(INTERVAL '2 hours', time, '1970-01-01T00:00:00Z'::TIMESTAMP) AS _time,
-  room,
-  avg(temp) AS temp,
-  avg(hum) AS hum
-FROM home
-WHERE
-  time >= '2022-01-01T08:00:00Z'
-  AND time < '2022-01-01T20:00:00Z'
-GROUP BY room, _time
-ORDER BY _time
-```
-{{% /influxdb/custom-timestamps %}}
-
-_For more information about performing aggregate queries with SQL, see
-[Aggregate data with SQL](/influxdb/cloud-serverless/query-data/sql/aggregate-select/)._
-
-{{% /expand %}}
-{{< /expand-wrapper >}}
-
-## Use SQL and Flux together
-
-To use SQL and Flux together and benefit from the strengths of both query languages,
-build a **Flux query** that
uses the [`iox.sql()` function](/flux/v0.x/stdlib/experimental/iox/sql/) -to execute a SQL query. -The SQL query should return the base data set for your query. -If this data needs further processing that can't be done in SQL, those operations -can be done with native Flux. - -{{% note %}} -#### Supported by any InfluxDB 2.x client - -The process below uses the `/api/v2/query` endpoint and can be used to execute -SQL queries against an InfluxDB IOx-powered bucket with an HTTP API request or -with all existing InfluxDB 2.x clients including, but not limited to, the following: - -- InfluxDB 2.x client libraries -- Grafana and Grafana Cloud InfluxDB data source -- Flux VS code extensions -- InfluxDB OSS 2.x dashboards -{{% /note %}} - -1. Import the [`experimental/iox` package](/flux/v0.x/stdlib/experimental/iox/). -2. Use [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/sql/) to execute a SQL - query. Include the following parameters: - - - **bucket**: InfluxDB bucket to query - - **query**: SQL query to execute - -{{% influxdb/custom-timestamps %}} -```js -import "experimental/iox" - -query = " -SELECT * -FROM home -WHERE - time >= '2022-01-01T08:00:00Z' - AND time < '2022-01-01T20:00:00Z' -" - -iox.sql(bucket: "get-started", query: query) -``` -{{% /influxdb/custom-timestamps %}} - -{{% note %}} -#### Escape double quotes in your SQL query - -If your SQL query uses **double-quoted (`""`) identifiers**, you must escape the -double quotes in your SQL query string. 
- -{{< expand-wrapper >}} -{{% expand "View example" %}} -```js -import "experimental/iox" - -query = " -SELECT * -FROM \"home\" -WHERE - \"time\" >= '2022-01-01T08:00:00Z' - AND \"time\" < '2022-01-01T20:00:00Z' -" - -iox.sql(bucket: "get-started", query: query) -``` -{{% /expand %}} -{{< /expand-wrapper >}} -{{% /note %}} - -### Helper functions for SQL in Flux - -The Flux `experimental/iox` package provides the following helper functions for -use with SQL queries in Flux: - -- [iox.sqlInterval()](#ioxsqlinterval) - -#### iox.sqlInterval() - -[`iox.sqlInterval()`](/flux/v0.x/stdlib/experimental/iox/sqlinterval/) converts -a Flux [duration value](/flux/v0.x/data-types/basic/duration/) to a SQL -interval string. For example, `2d12h` converts to `2 days 12 hours`. -This is especially useful when using a Flux duration to downsample data in SQL. - -{{< expand-wrapper >}} -{{% expand "View `iox.sqlInterval()` example" %}} -```js -import "experimental/iox" - -windowPeriod = 2h - -query = " -SELECT - DATE_BIN(INTERVAL '${iox.sqlInterval(d: windowPeriod)}', time, 0::TIMESTAMP) AS _time, - room, - avg(temp) AS temp, - avg(hum) AS hum -FROM home -WHERE - time >= '2022-01-01T08:00:00Z' - AND time < '2022-01-01T20:00:00Z' -GROUP BY room, _time -ORDER BY room, _time -" - -iox.sql(bucket: "get-started", query: query) -``` -{{% /expand %}} -{{< /expand-wrapper >}} - -### SQL results structure - -`iox.sql()` returns a single table containing all the queried data. -Each row has a column for each queried field, tag, and time. -In the context of Flux, SQL results are ungrouped. This is important to understand -if you further process SQL results with Flux. 
-
-The [example query above](#use-sql-and-flux-together) returns:
-
-{{% influxdb/custom-timestamps %}}
-
-| co  | hum  | room        | temp | time                 |
-| --: | ---: | :---------- | ---: | :------------------- |
-| 0   | 35.9 | Kitchen     | 21   | 2022-01-01T08:00:00Z |
-| 0   | 36.2 | Kitchen     | 23   | 2022-01-01T09:00:00Z |
-| 0   | 36.1 | Kitchen     | 22.7 | 2022-01-01T10:00:00Z |
-| 0   | 36   | Kitchen     | 22.4 | 2022-01-01T11:00:00Z |
-| 0   | 35.9 | Living Room | 21.1 | 2022-01-01T08:00:00Z |
-| 0   | 35.9 | Living Room | 21.4 | 2022-01-01T09:00:00Z |
-| 0   | 36   | Living Room | 21.8 | 2022-01-01T10:00:00Z |
-| 0   | 36   | Living Room | 22.2 | 2022-01-01T11:00:00Z |
-
-{{% /influxdb/custom-timestamps %}}
-
-## Process SQL results with Flux
-
-With your base data set returned from `iox.sql()`, you can further process your
-data with Flux to perform actions such as complex data transformations, alerting,
-HTTP requests, etc.
-
-{{% note %}}
-#### For the best performance, limit SQL results
-
-All data returned by `iox.sql()` is loaded into memory and processed there.
-To maximize the overall performance of your Flux query, try to return as little
-data as possible from your SQL query.
-This can be done by downsampling data in your SQL query or by limiting the
-queried time range.
-{{% /note %}}
-
-1. [Group by tags](#group-by-tags)
-1. [Rename the `time` column to `_time`](#rename-the-time-column-to-_time)
-1. [Unpivot your data](#unpivot-your-data)
-
-### Group by tags
-
-The Flux `from()` function returns results grouped by measurement, tag, and field key,
-and much of the Flux language is designed around this data model.
-Because SQL results are ungrouped, to structure results the way many Flux
-functions expect, use [`group()`](/flux/v0.x/stdlib/universe/group/) to group by
-all of your queried tag columns.
-
-{{% note %}}
-Measurements are not stored as a column in the InfluxDB IOx storage engine and
-are not returned by SQL.
-{{% /note %}}
-
-The [Get started sample data](#sample-data) only includes one tag: `room`.
-
-```js
-import "experimental/iox"
-
-iox.sql(...)
-    |> group(columns: ["room"])
-```
-
-_`group()` does not guarantee sort order, so you likely need to use
-[`sort()`](/flux/v0.x/stdlib/universe/sort/) to re-sort your data by time **after**
-performing other transformations._
-
-### Rename the `time` column to `_time`
-
-Many Flux functions expect or require a column named `_time` (with a leading underscore).
-The IOx storage engine stores each point's timestamp in the `time` column (no leading underscore).
-Depending on which Flux functions you use, you may need to rename the `time`
-column to `_time`.
-
-Rename the `time` column in your SQL query with an `AS` clause _**(recommended for performance)**_
-or in Flux with the [`rename()` function](/flux/v0.x/stdlib/universe/rename/).
-
-{{< code-tabs-wrapper >}}
-{{% code-tabs "small" %}}
-[SQL](#)
-[Flux](#)
-{{% /code-tabs %}}
-{{% code-tab-content %}}
-```sql
-SELECT time AS _time
-FROM home
-```
-{{% /code-tab-content %}}
-{{% code-tab-content %}}
-```js
-// ...
-    |> rename(columns: {time: "_time"})
-```
-{{% /code-tab-content %}}
-{{< /code-tabs-wrapper >}}
-
-### Unpivot your data
-
-In the context of Flux, data is considered "pivoted" when each field has its own
-column. Flux generally expects a `_field` column that contains the field key
-and a `_value` column that contains the field value. SQL returns each field as a column.
-Depending on your use case and the type of processing you need to do in Flux,
-you may need to "unpivot" your data.
- -{{< expand-wrapper >}} -{{% expand "View examples of pivoted and unpivoted data" %}} -{{% influxdb/custom-timestamps %}} - -##### Pivoted data (SQL data model) - -| _time | room | temp | hum | -| :------------------- | :------ | ---: | ---: | -| 2022-01-01T08:00:00Z | Kitchen | 21 | 35.9 | -| 2022-01-01T09:00:00Z | Kitchen | 23 | 36.2 | -| 2022-01-01T10:00:00Z | Kitchen | 22.7 | 36.1 | - -##### Unpivoted data (Flux data model) - -| _time | room | _field | _value | -| :------------------- | :------ | :----- | -----: | -| 2022-01-01T08:00:00Z | Kitchen | hum | 35.9 | -| 2022-01-01T09:00:00Z | Kitchen | hum | 36.2 | -| 2022-01-01T10:00:00Z | Kitchen | hum | 36.1 | - -| _time | room | _field | _value | -| :------------------- | :------ | :----- | -----: | -| 2022-01-01T08:00:00Z | Kitchen | temp | 21 | -| 2022-01-01T09:00:00Z | Kitchen | temp | 23 | -| 2022-01-01T10:00:00Z | Kitchen | temp | 22.7 | - -{{% /influxdb/custom-timestamps %}} -{{% /expand %}} -{{< /expand-wrapper >}} - -{{% note %}} -#### Unpivoting data may not be necessary - -Depending on your use case, unpivoting the SQL results may not be necessary. -For Flux queries that already pivot fields into columns, using SQL to return -pivoted results will greatly improve the performance of your query. -{{% /note %}} - -To unpivot SQL results: - -1. Import the `experimental` package. -2. [Ensure you have a `_time` column](#rename-the-time-column-to-_time). -3. Use [`experimental.unpivot()`](/flux/v0.x/stdlib/experimental/unpivot/) to unpivot your data. - -```js -import "experimental" -import "experimental/iox" - -iox.sql(...) - |> group(columns: ["room"]) - |> experimental.unpivot() -``` - -{{% note %}} -`unpivot()` treats columns _not_ in the [group key](/flux/v0.x/get-started/data-model/#group-key) -(other than `_time` and `_measurement`) as fields. Be sure to [group by tags](#group-by-tags) -_before_ unpivoting data. 
-{{% /note %}} - -### Example SQL query with further Flux processing - -{{% influxdb/custom-timestamps %}} -```js -import "experimental" -import "experimental/iox" - -query = " -SELECT - time AS _time, - room, - temp, - hum, - co -FROM home -WHERE - time >= '2022-01-01T08:00:00Z' - AND time <= '2022-01-01T20:00:00Z' -" - -iox.sql(bucket: "get-started", query: query) - |> group(columns: ["room"]) - |> experimental.unpivot() -``` -{{% /influxdb/custom-timestamps %}} - -{{< expand-wrapper >}} -{{% expand "View processed query results" %}} - -{{% influxdb/custom-timestamps %}} - -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Kitchen | co | 0 | -| 2022-01-01T09:00:00Z | Kitchen | co | 0 | -| 2022-01-01T10:00:00Z | Kitchen | co | 0 | -| 2022-01-01T11:00:00Z | Kitchen | co | 0 | -| 2022-01-01T12:00:00Z | Kitchen | co | 0 | -| 2022-01-01T13:00:00Z | Kitchen | co | 1 | -| 2022-01-01T14:00:00Z | Kitchen | co | 1 | -| 2022-01-01T15:00:00Z | Kitchen | co | 3 | -| 2022-01-01T16:00:00Z | Kitchen | co | 7 | -| 2022-01-01T17:00:00Z | Kitchen | co | 9 | -| 2022-01-01T18:00:00Z | Kitchen | co | 18 | -| 2022-01-01T19:00:00Z | Kitchen | co | 22 | -| 2022-01-01T20:00:00Z | Kitchen | co | 26 | - -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Living Room | co | 0 | -| 2022-01-01T09:00:00Z | Living Room | co | 0 | -| 2022-01-01T10:00:00Z | Living Room | co | 0 | -| 2022-01-01T11:00:00Z | Living Room | co | 0 | -| 2022-01-01T12:00:00Z | Living Room | co | 0 | -| 2022-01-01T13:00:00Z | Living Room | co | 0 | -| 2022-01-01T14:00:00Z | Living Room | co | 0 | -| 2022-01-01T15:00:00Z | Living Room | co | 1 | -| 2022-01-01T16:00:00Z | Living Room | co | 4 | -| 2022-01-01T17:00:00Z | Living Room | co | 5 | -| 2022-01-01T18:00:00Z | Living Room | co | 9 | -| 2022-01-01T19:00:00Z | Living Room | co | 14 | -| 2022-01-01T20:00:00Z | Living Room | co | 17 | 
- -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Kitchen | hum | 35.9 | -| 2022-01-01T09:00:00Z | Kitchen | hum | 36.2 | -| 2022-01-01T10:00:00Z | Kitchen | hum | 36.1 | -| 2022-01-01T11:00:00Z | Kitchen | hum | 36 | -| 2022-01-01T12:00:00Z | Kitchen | hum | 36 | -| 2022-01-01T13:00:00Z | Kitchen | hum | 36.5 | -| 2022-01-01T14:00:00Z | Kitchen | hum | 36.3 | -| 2022-01-01T15:00:00Z | Kitchen | hum | 36.2 | -| 2022-01-01T16:00:00Z | Kitchen | hum | 36 | -| 2022-01-01T17:00:00Z | Kitchen | hum | 36 | -| 2022-01-01T18:00:00Z | Kitchen | hum | 36.9 | -| 2022-01-01T19:00:00Z | Kitchen | hum | 36.6 | -| 2022-01-01T20:00:00Z | Kitchen | hum | 36.5 | - -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Living Room | hum | 35.9 | -| 2022-01-01T09:00:00Z | Living Room | hum | 35.9 | -| 2022-01-01T10:00:00Z | Living Room | hum | 36 | -| 2022-01-01T11:00:00Z | Living Room | hum | 36 | -| 2022-01-01T12:00:00Z | Living Room | hum | 35.9 | -| 2022-01-01T13:00:00Z | Living Room | hum | 36 | -| 2022-01-01T14:00:00Z | Living Room | hum | 36.1 | -| 2022-01-01T15:00:00Z | Living Room | hum | 36.1 | -| 2022-01-01T16:00:00Z | Living Room | hum | 36 | -| 2022-01-01T17:00:00Z | Living Room | hum | 35.9 | -| 2022-01-01T18:00:00Z | Living Room | hum | 36.2 | -| 2022-01-01T19:00:00Z | Living Room | hum | 36.3 | -| 2022-01-01T20:00:00Z | Living Room | hum | 36.4 | - -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Kitchen | temp | 21 | -| 2022-01-01T09:00:00Z | Kitchen | temp | 23 | -| 2022-01-01T10:00:00Z | Kitchen | temp | 22.7 | -| 2022-01-01T11:00:00Z | Kitchen | temp | 22.4 | -| 2022-01-01T12:00:00Z | Kitchen | temp | 22.5 | -| 2022-01-01T13:00:00Z | Kitchen | temp | 22.8 | -| 2022-01-01T14:00:00Z | Kitchen | temp | 22.8 | -| 2022-01-01T15:00:00Z | Kitchen | temp | 22.7 | -| 
2022-01-01T16:00:00Z | Kitchen | temp | 22.4 | -| 2022-01-01T17:00:00Z | Kitchen | temp | 22.7 | -| 2022-01-01T18:00:00Z | Kitchen | temp | 23.3 | -| 2022-01-01T19:00:00Z | Kitchen | temp | 23.1 | -| 2022-01-01T20:00:00Z | Kitchen | temp | 22.7 | - -| _time | room | _field | _value | -| :------------------- | :---------- | :----- | -----: | -| 2022-01-01T08:00:00Z | Living Room | temp | 21.1 | -| 2022-01-01T09:00:00Z | Living Room | temp | 21.4 | -| 2022-01-01T10:00:00Z | Living Room | temp | 21.8 | -| 2022-01-01T11:00:00Z | Living Room | temp | 22.2 | -| 2022-01-01T12:00:00Z | Living Room | temp | 22.2 | -| 2022-01-01T13:00:00Z | Living Room | temp | 22.4 | -| 2022-01-01T14:00:00Z | Living Room | temp | 22.3 | -| 2022-01-01T15:00:00Z | Living Room | temp | 22.3 | -| 2022-01-01T16:00:00Z | Living Room | temp | 22.4 | -| 2022-01-01T17:00:00Z | Living Room | temp | 22.6 | -| 2022-01-01T18:00:00Z | Living Room | temp | 22.8 | -| 2022-01-01T19:00:00Z | Living Room | temp | 22.5 | -| 2022-01-01T20:00:00Z | Living Room | temp | 22.2 | - -{{% /influxdb/custom-timestamps %}} -{{% /expand %}} -{{< /expand-wrapper >}} - -With the SQL results restructured into the Flux data model, you can do any further -processing with Flux. For more information about Flux, see the -[Flux documentation](/flux/v0.x/). diff --git a/content/influxdb/cloud-serverless/reference/cli/influx/query/_index.md b/content/influxdb/cloud-serverless/reference/cli/influx/query/_index.md index 747cf5bfa..d2245bd80 100644 --- a/content/influxdb/cloud-serverless/reference/cli/influx/query/_index.md +++ b/content/influxdb/cloud-serverless/reference/cli/influx/query/_index.md @@ -1,8 +1,8 @@ --- title: influx query description: > - The `influx query` command executes a literal Flux query provided as a string - or a literal Flux query contained in a file by specifying the file prefixed with an '@' sign. + The `influx query` command and `/api/v2/query` API endpoint don't work with InfluxDB Cloud Serverless. 
+ Use [SQL](/influxdb/cloud-serverless/query-data/sql/execute-queries/) or [InfluxQL](/influxdb/cloud-serverless/query-data/influxql/) to query an InfluxDB Cloud Serverless bucket. menu: influxdb_cloud_serverless: name: influx query @@ -10,21 +10,20 @@ menu: weight: 101 influxdb/cloud-serverless/tags: [query] related: - - /influxdb/cloud/query-data/ - - /influxdb/cloud/query-data/execute-queries/influx-query/ + - /influxdb/cloud-serverless/query-data/ + - /influxdb/cloud-serverless/query-data/sql/execute-queries/ + - /influxdb/cloud-serverless/query-data/influxql/execute-queries/ - /influxdb/cloud-serverless/reference/cli/influx/#provide-required-authentication-credentials, influx CLI—Provide required authentication credentials - /influxdb/cloud-serverless/reference/cli/influx/#provide-required-authentication-credentials, influx CLI—Provide required authentication credentials metadata: [influx CLI 2.0.0+] updated_in: CLI v2.0.5 +prepend: + block: warn + content: | + #### Command not supported + + The `influx query` command and the InfluxDB `/api/v2/query` API endpoint it uses + don't work with {{% cloud-name %}}. + + Use [SQL](/influxdb/cloud-serverless/query-data/sql/execute-queries/) or [InfluxQL](/influxdb/cloud-serverless/query-data/influxql/execute-queries/) tools to query a {{% cloud-name %}} bucket. --- - -{{% note %}} -#### Use SQL and Flux together - -The `influx query` command and the InfluxDB `/api/v2/query` API endpoint it uses -only support Flux queries. To query an InfluxDB Cloud Serverless bucket powered -by IOx with SQL, use the `iox.sql()` Flux function. For more information, see -[Use Flux and SQL to query data](/influxdb/cloud-serverless/query-data/flux-sql/). 
-{{% /note %}} - -{{< duplicate-oss >}} diff --git a/content/influxdb/cloud-serverless/reference/cli/influx/transpile/_index.md b/content/influxdb/cloud-serverless/reference/cli/influx/transpile/_index.md index 09cf58772..4b6691e96 100644 --- a/content/influxdb/cloud-serverless/reference/cli/influx/transpile/_index.md +++ b/content/influxdb/cloud-serverless/reference/cli/influx/transpile/_index.md @@ -11,12 +11,7 @@ prepend: content: | ### Removed in influx CLI v2.0.5 The `influx transpile` command was removed in **v2.0.5** of the `influx` CLI. - [Use InfluxQL to query InfluxDB](/influxdb/cloud/query-data/influxql/). - For information about manually converting InfluxQL queries to Flux, see: - - - [Get started with Flux](/flux/v0.x/get-started/) - - [Query data with Flux](/influxdb/cloud/query-data/flux/) - - [Migrate continuous queries to Flux tasks](/influxdb/cloud/upgrade/v1-to-cloud/migrate-cqs/) + Use [SQL](/influxdb/cloud-serverless/query-data/sql/execute-queries/) or [InfluxQL](/influxdb/cloud-serverless/query-data/influxql/execute-queries/) tools to query a {{% cloud-name %}} bucket. --- {{< duplicate-oss >}} diff --git a/content/influxdb/cloud-serverless/reference/flux.md b/content/influxdb/cloud-serverless/reference/flux.md deleted file mode 100644 index 7c0a19802..000000000 --- a/content/influxdb/cloud-serverless/reference/flux.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Flux reference documentation -description: > - Learn the Flux syntax and structure used to query InfluxDB. 
-menu: - influxdb_cloud_serverless: - name: Flux reference - parent: Reference -weight: 103 ---- - -All Flux reference material is provided in the Flux documentation: - -View the Flux documentation - -## Flux with the InfluxDB IOx storage engine - -When querying data from an InfluxDB bucket powered by InfluxDB IOx, use the following -input functions: - -- [`iox.from()`](/flux/v0.x/stdlib/experimental/iox/from/): alternative to - [`from()`](/flux/v0.x/stdlib/influxdata/influxdb/from/). -- [`iox.sql()`](/flux/v0.x/stdlib/experimental/iox/sql/): execute a SQL query - with Flux. - -Both IOx-based input functions return pivoted data with a column for each field -in the output. To unpivot the data: - -1. Group by tag columns. -2. Rename the `time` column to `_time`. -3. Use [`experimental.unpivot()`](/flux/v0.x/stdlib/experimental/unpivot/) to - unpivot the data. All columns not in the group key (other than `_time`) are - treated as fields. - -{{< code-tabs-wrapper >}} -{{% code-tabs %}} -[iox.from()](#) -[iox.sql()](#) -{{% /code-tabs %}} -{{% code-tab-content %}} - -```js -import "experimental" -import "experimental/iox" - -iox.from(bucket: "example-bucket", measurement: "example-measurement") - |> range(start: -1d) - |> group(columns: ["tag1", "tag2". "tag3"]) - |> rename(columns: {time: "_time_"}) - |> experimental.unpivot() -``` - -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental" -import "experimental/iox" - -query = "SELECT * FROM \"example-measurement\" WHERE time >= now() - INTERVAL '1 day'" - -iox.sql(bucket: "example-bucket", query: query) - |> group(columns: ["tag1", "tag2". 
"tag3"]) - |> rename(columns: {time: "_time_"}) - |> experimental.unpivot() -``` - -{{% /code-tab-content %}} -{{< /code-tabs-wrapper >}} - -{{% warn %}} -#### Flux performance with InfluxDB IOx - -When querying data from an InfluxDB bucket powered by InfluxDB IOx, using `iox.from()` -is **less performant** than querying a TSM-powered bucket with `from()`. -For better Flux query performance, use `iox.sql()`. -{{% /warn %}} diff --git a/content/influxdb/cloud-serverless/reference/glossary.md b/content/influxdb/cloud-serverless/reference/glossary.md index be4d3c627..bd5e4d42c 100644 --- a/content/influxdb/cloud-serverless/reference/glossary.md +++ b/content/influxdb/cloud-serverless/reference/glossary.md @@ -10,4 +10,1103 @@ menu: influxdb/cloud-serverless/tags: [glossary] --- -{{< duplicate-oss >}} + +[A](#a) | [B](#b) | [C](#c) | [D](#d) | [E](#e) | [F](#f) | [G](#g) | [H](#h) | [I](#i) | [J](#j) | [K](#k) | [L](#l) | [M](#m) | [N](#n) | [O](#o) | [P](#p) | [Q](#q) | [R](#r) | [S](#s) | [T](#t) | [U](#u) | [V](#v) | [W](#w) | X | Y | Z + +## A + +### abstract syntax tree (AST) + +Tree representation of source code that shows the structure, content, and rules +of programming statements and discards additional syntax elements. +The tree is hierarchical, with elements of program statements broken down into their parts. + +For more information about AST design, see [Abstract Syntax Tree on Wikipedia](https://en.wikipedia.org/wiki/Abstract_syntax_tree). + +### agent + +A background process started by (or on behalf of) a user that typically requires user input. + +[Telegraf]({{< latest "telegraf" >}}/) is an agent that requires user input +(a configuration file) to gather metrics from declared input plugins and sends +metrics to declared output plugins, based on the plugins enabled for a configuration. 
+ +Related entries: +[input plugin](#input-plugin), +[output plugin](#output-plugin), +[daemon](#daemon) + +### aggregate + +A function that returns an aggregated value across a set of points. +For a list of available aggregation functions, see [SQL aggregate functions](/influxdb/cloud-serverless/reference/sql/functions/aggregate/). + +Related entries: +[function](#function), +[selector](#selector) + +### aggregator plugin + +Receives metrics from input plugins, creates aggregate metrics, and then passes aggregate metrics to configured output plugins. + +Related entries: +[input plugin](#input-plugin), +[output plugin](#output-plugin), +[processor plugin](#processor-plugin) + +### API + +Application programming interface that facilitates and standardizes communication +between two or more computer programs. + +### argument + +A value passed to a function or command that determines how the process operates. + +Related entries: +[parameter](#parameter) + +## B + +### batch + +A collection of points in line protocol format, separated by newlines (`0x0A`). +Submitting a batch of points using a single HTTP request to the write endpoints +drastically increases performance by reducing the HTTP overhead. +InfluxData typically recommends batch sizes of 5,000-10,000 points. +In some use cases, performance may improve with significantly smaller or larger batches. + +Related entries: +[line protocol](#line-protocol), +[point](#point) + +### batch size + +The number of lines or individual data points in a line protocol batch. +The Telegraf agent sends metrics to output plugins in batches rather than individually. +Batch size controls the size of each write batch that Telegraf sends to the output plugins. + +Related entries: +[output plugin](#output-plugin) + +### bin + +In a cumulative histogram, a bin includes all data points less than or equal to a specified upper bound. +In a normal histogram, a bin includes all data points between the upper and lower bounds.
+Histogram bins are also sometimes referred to as "buckets." + +### boolean + +A data type with two possible values: true or false. +By convention, you can express `true` as the integer `1` and `false` as the integer `0` (zero). + +### bucket + +"Bucket" is the term used in InfluxDB 2.x and _InfluxDB Cloud Serverless_ to refer +to a named location where time series data is stored. +Bucket is synonymous with "database" when using InfluxDB Cloud Dedicated. + +Related entries: +[database](#database) + +## C + +### CSV + +Comma-separated values (CSV) delimits text between commas to separate values. +A CSV file stores tabular data (numbers and text) in plain text. +Each line of the file is a data row. +Each row consists of one or more columns, separated by commas. +The CSV file format is not fully standardized. + +### cardinality + +Cardinality is the number of unique values in a set. +Series cardinality is the number of unique [series](#series) in a bucket as a whole. +With the IOx storage engine, high series cardinality _does not_ affect performance. + +### cluster + +A collection of servers or processes that work together as a single unit. + +### collect + +Collect and write time series data to InfluxDB using line protocol, Telegraf, +the InfluxDB v1 and v2 HTTP APIs, v1 and v2 `influx` command line interface (CLI), +and InfluxDB client libraries. + +### collection interval + +The default global interval for collecting data from each Telegraf input plugin. +The collection interval can be overridden by each individual input plugin's configuration. + +Related entries: +[input plugin](#input-plugin) + +### collection jitter + +Collection jitter prevents every input plugin from collecting metrics simultaneously, +which can have a measurable effect on the system. +For each collection interval, every Telegraf input plugin will sleep for a random +time between zero and the collection jitter before collecting the metrics.
+ +Related entries: +[collection interval](#collection-interval), +[input plugin](#input-plugin) + +### column + +InfluxDB data is stored in tables within rows and columns. +Columns store tag sets, field sets, and time values. +The only required column is _time_, which stores timestamps and is included +in all InfluxDB tables. + +### common log format (CLF) + +A standardized text file format used by the InfluxDB server to create log +entries when generating server log files. + +### compaction + +Compressing time series data to optimize disk usage. + +### continuous query (CQ) + +Continuous queries are a feature of InfluxDB 1.x used to regularly downsample +or process time series data. + +## D + +### daemon + +A background process that runs without user input. + +### dashboard + +A collection of data visualizations used to query and display time series data. +There are many tools designed specifically to create dashboards, including +[Grafana](https://grafana.com), [Apache Superset](https://superset.apache.org/), +[Tableau](https://www.tableau.com/), and others. + +### data model + +A data model organizes elements of data and standardizes how they relate to one +another and to properties of real-world entities. + +For information about the InfluxDB data model, see +[InfluxDB data organization](/influxdb/cloud-serverless/get-started/#data-organization). + +### data service + +Stores time series data and handles writes and queries. + +### data source + +A source from which InfluxDB collects or queries data. + +Related entries: +[bucket](#bucket) + +### data type + +A data type is defined by the values it can take, the programming language used, +or the operations that can be performed on it.
+ +InfluxDB supports the following data types: + +- string +- boolean +- float (64-bit) +- integer (64-bit) +- unsigned integer (64-bit) +- time + +For more information about different data types, see: + +- [line protocol](/influxdb/v2.7/reference/syntax/line-protocol/#data-types-and-format) +- [InfluxQL](/influxdb/v1.8/query_language/spec/#literals) +- [InfluxDB](/influxdb/v2.7/reference/syntax/line-protocol/#data-types-and-format) + +### database + +In InfluxDB Cloud Dedicated, a named location where time series data is stored. +This is equivalent to a _bucket_ in _InfluxDB Cloud Serverless_. + +In InfluxDB 1.x, a database represented a logical container for users, retention +policies, continuous queries, and time series data. +In InfluxDB 2.x, the equivalent of this concept is an InfluxDB [bucket](#bucket). + +Related entries: +[bucket](#bucket), +[retention policy](#retention-policy-rp) + +### date-time + +InfluxDB stores the date-time format for each data point in a timestamp with +nanosecond-precision Unix time. +Specifying a timestamp is optional. +If a timestamp isn't specified for a data point, InfluxDB uses the server’s +local nanosecond timestamp in UTC. + +### downsample + +Aggregating high-resolution data into lower-resolution data to preserve disk space. + +### duration + +A data type that represents a duration of time (1s, 1m, 1h, 1d). +Retention periods are set using durations. + +Related entries: +[retention period](#retention-period) + +## E + +### event + +Metrics gathered at irregular time intervals. + +### expression + +A combination of one or more constants, variables, operators, and functions. + +## F + +### field + +A key-value pair in InfluxDB's data structure that records a data value. +Generally, field values change over time. +Fields are required in InfluxDB's data structure.
+ +Related entries: +[field key](#field-key), +[field set](#field-set), +[field value](#field-value), +[tag](#tag) + +### field key + +The key of the key-value pair. +Field keys are strings. + +Related entries: +[field](#field), +[field set](#field-set), +[field value](#field-value), +[tag key](#tag-key) + +### field set + +The collection of field key-value pairs. + +Related entries: +[field](#field), +[field key](#field-key), +[field value](#field-value), +[point](#point) + +### field value + +The value of a key-value pair. +Field values are the actual data; they can be strings, floats, integers, unsigned integers, or booleans. +A field value is always associated with a timestamp. + +Related entries: +[field](#field), +[field key](#field-key), +[field set](#field-set), +[tag value](#tag-value), +[timestamp](#timestamp) + +### file block + +A file block is a fixed-length chunk of data read into memory when requested by an application. + +### float + +A real number written with a decimal point dividing the integer and fractional parts (`1.0`, `3.14`, `-20.1`). +InfluxDB supports 64-bit float values. + +### flush interval + +The global interval for flushing data from each Telegraf output plugin to its destination. +This value should not be set lower than the collection interval. + +Related entries: +[collection interval](#collection-interval), +[flush jitter](#flush-jitter), +[output plugin](#output-plugin) + +### flush jitter + +Flush jitter prevents every Telegraf output plugin from sending writes +simultaneously, which can overwhelm some data sinks. +Each flush interval, every Telegraf output plugin will sleep for a random time +between zero and the flush jitter before emitting metrics. +Flush jitter smooths out write spikes when running a large number of Telegraf instances.
+ +Related entries: +[flush interval](#flush-interval), +[output plugin](#output-plugin) + +### function + +A function is an operation that performs a specific task. +Functions take input, operate on that input, and then return output. +For a complete list of available SQL functions, see +[SQL functions](/influxdb/cloud-serverless/reference/sql/functions/). + +Related entries: +[aggregate](#aggregate), +[selector](#selector) + +## G + +### gzip + +gzip is a type of data compression that compresses chunks of data; compressed +gzip files are restored by unzipping them. +The gzip file extension is `.gz`. + +## H + +### histogram + +A visual representation of statistical information that uses rectangles to show +the frequency of data items in successive, equal intervals or bins. + +## I + +### identifier + +Identifiers are tokens that refer to specific database objects such as database +names, field keys, measurement names, tag keys, etc. + +Related entries: +[database](#database), +[field key](#field-key), +[measurement](#measurement), +[tag key](#tag-key) + +### influx + +`influx` is a command line interface (CLI) that interacts with {{% cloud-name %}} and the InfluxDB v1.x and v2.x server. + +### influxd + +`influxd` is the InfluxDB OSS v1.x and v2.x daemon that runs the InfluxDB server +and other required processes. + +### InfluxDB + +An open-source time series database (TSDB) developed by InfluxData. +Written in Go and optimized for fast, high-availability storage and retrieval of +time series data in fields such as operations monitoring, application metrics, +Internet of Things sensor data, and real-time analytics. + +### InfluxQL + +The SQL-like query language used to query data in InfluxDB. + +### input plugin + +Telegraf input plugins actively gather metrics and deliver them to the core agent, +where aggregator, processor, and output plugins can operate on the metrics.
+To activate an input plugin, enable and configure it in +Telegraf's configuration file. + +Related entries: +[aggregator plugin](#aggregator-plugin), +[collection interval](#collection-interval), +[output plugin](#output-plugin), +[processor plugin](#processor-plugin) + +### instance + +An entity comprising data on a server (or virtual server in cloud computing). + +### integer + +A whole number that is positive, negative, or zero (`0`, `-5`, `143`). +InfluxDB supports 64-bit integers (minimum: `-9223372036854775808`, maximum: `9223372036854775807`). + +Related entries: +[unsigned integer](#unsigned-integer) + +### IOx + +The IOx storage engine is a real-time, columnar database optimized for time series +data, built in Rust on top of [Apache Arrow](https://arrow.apache.org/) and +[DataFusion](https://arrow.apache.org/datafusion/user-guide/introduction.html). +IOx replaces the [TSM](#tsm) storage engine. + +## J + +### JWT + +Typically, JSON web tokens (JWT) are used to authenticate users between an +identity provider and a service provider. +A server can generate a JWT to assert any business processes. +For example, an "admin" token sent to a client can prove the client is logged in as admin. +Tokens are signed by one party's private key (typically, the server). +Private keys are used by both parties to verify that a token is legitimate. + +JWT uses an open standard specified in [RFC 7519](https://tools.ietf.org/html/rfc7519). + +### Jaeger + +Open source tracing used in distributed systems to monitor and troubleshoot transactions. + +### JSON + +JavaScript Object Notation (JSON) is an open-standard file format that uses +human-readable text to transmit data objects consisting of attribute–value pairs +and array data types. + +## K + +### keyword + +A keyword is reserved by a program because it has special meaning. +Every programming language has a set of keywords (reserved names) that cannot be used as identifiers.
+ +See a list of [SQL keywords](/influxdb/cloud-serverless/reference/sql/#keywords). + +## L + +### line protocol (LP) + +The text-based format for writing points to InfluxDB. +See [line protocol](/influxdb/cloud-serverless/reference/syntax/line-protocol/). + +### literal + +A literal is a value in an expression, such as a number, character, string, function, record, or array. +Literal values are interpreted as defined. + +### load balancing + +Improves workload distribution across multiple computing resources in a network. +Load balancing optimizes resource use, maximizes throughput, minimizes response +time, and avoids overloading a single resource. +Using multiple components with load balancing instead of a single component may +increase reliability and availability. +If requests to any server in a network increase, requests are forwarded to +another server with more capacity. +Load balancing can also refer to the communications channels themselves. + +### logs + +Logs record information. +Event logs describe system events and activity that help to describe and diagnose problems. +Transaction logs describe changes to stored data that help recover data if a +database crashes or other errors occur. + +## M + +### measurement + +The part of InfluxDB's data structure that describes the data stored in associated fields. +Measurements are strings. + +Related entries: +[field](#field), [series](#series) + +### metric + +Data tracked over time. + +### metric buffer + +The metric buffer caches individual metrics when writes are failing for a Telegraf output plugin. +Telegraf will attempt to flush the buffer upon a successful write to the output. +The oldest metrics are dropped first when this buffer fills. + +Related entries: +[output plugin](#output-plugin) + +### missing values + +Denoted by a null value. +Identifies missing information, which may be useful to include in an error message. + +## N + +### node + +An independent process or server in a cluster.
+ +Related entries: +[cluster](#cluster), +[server](#server) + +### now + +The local server's nanosecond timestamp. + +### null + +A data type that represents a missing or unknown value. +Denoted by the `null` value. + +## O + +### operand + +The object or value on either side of an [operator](#operator). + +Related entries: +[operator](#operator) + +### operator + +A symbol that usually represents an action or process. +For example: `+`, `-`, `>`. + +Related entries: +[operand](#operand) + +### organization + +In InfluxDB Cloud Serverless, a workspace for a group of users. +All InfluxDB _resources_ (buckets, members, and so on) belong to an organization. +Organizations are not part of InfluxDB Cloud Dedicated. + +### output plugin + +Telegraf output plugins deliver metrics to their configured destination. +To activate an output plugin, enable and configure the plugin in Telegraf's configuration file. + +Related entries: +[aggregator plugin](#aggregator-plugin), +[flush interval](#flush-interval), +[input plugin](#input-plugin), +[processor plugin](#processor-plugin) + +### owner + +A type of role for a user. +Owners have read/write permissions. +Users can have owner roles for buckets and other resources. + +Role permissions are separate from API token permissions. +For additional information on API tokens, see [token](#token). + +## P + +### parameter + +A key-value pair used to pass information to a function that determines how the +function operates. + +Related entries: +[argument](#argument) + +### pipe + +Method for passing information from one process to another. +For example, an output parameter from one process is input to another process. +Information passed through a pipe is retained until the receiving process reads the information. + +### point + +Single data record identified by its _measurement_, _tag keys_, _tag values_, +_field key_, and _timestamp_. + +In a [series](#series), each point has a unique timestamp.
+If you write a point to a series with a timestamp that matches an existing point, +the field set becomes a union of the old and new field set, where any ties go to +the new field set. + +Related entries: +[measurement](#measurement), +[tag set](#tag-set), +[field set](#field-set), +[timestamp](#timestamp) + +### precision + +The precision configuration setting determines the timestamp precision retained +for input data points. +All incoming timestamps are truncated to the specified precision. +Valid precisions are `ns`, `us` or `µs`, `ms`, and `s`. + +In Telegraf, truncated timestamps are padded with zeros to create a nanosecond timestamp. +Telegraf output plugins emit timestamps in nanoseconds. +For example, if the precision is set to `ms`, the nanosecond epoch timestamp `1480000000123456789` is truncated to `1480000000123` in millisecond precision and padded with zeros to make a new, less precise nanosecond timestamp of `1480000000123000000`. +Telegraf output plugins do not alter the timestamp further. +The precision setting is ignored for service input plugins. + +Related entries: +[aggregator plugin](#aggregator-plugin), +[input plugin](#input-plugin), +[output plugin](#output-plugin), +[processor plugin](#processor-plugin), +[service input plugin](#service-input-plugin) + +### predicate expression + +A predicate expression compares two values and returns `true` or `false` based on +the relationship between the two values. +A predicate expression consists of a left operand, a comparison operator, and a right operand. + +### primary key + +With the InfluxDB IOx storage engine, the primary key is the list of columns +used to uniquely identify each row in a table. +Rows are uniquely identified by their timestamp and tag set. + +### process + +A set of predetermined rules. +A process can refer to instructions being executed by the computer processor or +refer to the act of manipulating data.
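The truncate-and-pad behavior described in the precision entry above can be sketched in Python. This helper is a hypothetical illustration only; it is not part of Telegraf or InfluxDB:

```python
# Truncate a nanosecond epoch timestamp to a coarser precision, then pad
# it back to nanoseconds with zeros, mirroring the behavior described above.
def truncate_timestamp(ns_timestamp: int, precision: str) -> int:
    divisors = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}
    divisor = divisors[precision]
    # Integer division drops the extra digits; multiplying restores
    # nanosecond scale with trailing zeros.
    return (ns_timestamp // divisor) * divisor

print(truncate_timestamp(1480000000123456789, "ms"))  # 1480000000123000000
```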
+ +### processor plugin + +Telegraf processor plugins transform, decorate, and filter metrics collected by +input plugins, passing the transformed metrics to the output plugins. + +Related entries: +[aggregator plugin](#aggregator-plugin), +[input plugin](#input-plugin), +[output plugin](#output-plugin) + +### Prometheus format + +A simple text-based format for exposing metrics and ingesting them into Prometheus. + +## Q + +### query + +A request for information. +An InfluxDB query returns time series data. + +See [Query data in InfluxDB](/influxdb/cloud-serverless/query-data/). + +## R + +### REPL + +A Read-Eval-Print Loop (REPL) is an interactive programming environment where +you type a command and immediately see the result. + +### regular expressions + +Regular expressions (regex or regexp) are patterns used to match character +combinations in strings. + +### rejected points + +In a batch of data, points that InfluxDB couldn't write to a bucket. +Field type conflicts are a common cause of rejected points. + +### retention period + +The [duration](#duration) of time that an {{% cloud-name %}} bucket retains data. +InfluxDB drops points with timestamps older than their bucket's retention period +relative to [now](#now). +The minimum retention period is **one hour**. + +Related entries: +[bucket](#bucket) + +### retention policy (RP) + +Retention policy is an InfluxDB 1.x concept that represents the duration of time +that each data point in the retention policy persists. +The equivalent is [retention period](#retention-period); however, retention periods +are not part of the {{% cloud-name %}} data model. +The retention period describes the behavior of a bucket. + +Related entries: +[retention period](#retention-period) + +### RFC3339 timestamp + +A timestamp that uses the human-readable DateTime format proposed in +[RFC 3339](https://tools.ietf.org/html/rfc3339) (for example: `2020-01-01T00:00:00.00Z`).
+ +Related entries: +[RFC3339Nano timestamp](#rfc3339nano-timestamp), +[timestamp](#timestamp), +[unix timestamp](#unix-timestamp) + +### RFC3339Nano timestamp + +A [Golang representation of the RFC 3339 DateTime format](https://go.dev/src/time/format.go) +that uses nanosecond resolution--for example: +`2006-01-02T15:04:05.999999999Z07:00`. + +InfluxDB clients can return RFC3339Nano timestamps in log events and CSV-formatted +query results. + +Related entries: +[RFC3339 timestamp](#rfc3339-timestamp), +[timestamp](#timestamp), +[unix timestamp](#unix-timestamp) + +## S + +### schema + +How data is organized in InfluxDB. +The fundamentals of the {{% cloud-name %}} schema are buckets, measurements (or _tables_), +tag keys, tag values, and field keys. + +Related entries: +[bucket](#bucket), +[field key](#field-key), +[measurement](#measurement), +[series](#series), +[tag key](#tag-key), +[tag value](#tag-value) + +### secret + +Secrets are key-value pairs that contain information you want to control access +to, such as API keys, passwords, or certificates. + +### selector + +A function that returns a single point from the range of specified points. +See [SQL selector functions](/influxdb/cloud-serverless/reference/sql/functions/selectors/) +for a complete list of available SQL selector functions. + +Related entries: +[aggregate](#aggregate), +[function](#function) + +### series + +A collection of data in the InfluxDB data structure that share a common +_measurement_, _tag set_, and _field key_. + +Related entries: +[field set](#field-set), +[measurement](#measurement), +[tag set](#tag-set) + +### series cardinality + +The number of unique measurement, tag set, and field key combinations in an {{% cloud-name %}} bucket. + +For example, assume that an InfluxDB bucket has one measurement. +The single measurement has two tag keys: `email` and `status`.
+If there are three different `email`s, and each email address is associated with two +different `status`es, the series cardinality for the measurement is 6 +(3 × 2 = 6): + +| email | status | +| :-------------------- | :----- | +| lorr@influxdata.com | start | +| lorr@influxdata.com | finish | +| marv@influxdata.com | start | +| marv@influxdata.com | finish | +| cliff@influxdata.com | start | +| cliff@influxdata.com | finish | + +In some cases, performing this multiplication may overestimate series cardinality +because of the presence of dependent tags. +Dependent tags are scoped by another tag and do not increase series cardinality. +If we add the tag `firstname` to the example above, the series cardinality +would not be 18 (3 × 2 × 3 = 18). +The series cardinality would remain unchanged at 6, as `firstname` is already scoped by the `email` tag: + +| email | status | firstname | +| :------------------- | :----- | :-------- | +| lorr@influxdata.com | start | lorraine | +| lorr@influxdata.com | finish | lorraine | +| marv@influxdata.com | start | marvin | +| marv@influxdata.com | finish | marvin | +| cliff@influxdata.com | start | clifford | +| cliff@influxdata.com | finish | clifford | + +Related entries: +[field key](#field-key), +[measurement](#measurement), +[tag key](#tag-key), +[tag set](#tag-set) + +### series key + +A series key identifies a particular series by measurement, tag set, and field key. + +For example: + +``` +# measurement, tag set, field key +h2o_level, location=santa_monica, h2o_feet +``` + +Related entries: +[series](#series) + +### server + +A computer, virtual or physical, running InfluxDB. + + +### service input plugin + +Telegraf input plugins that run in a passive collection mode while the Telegraf agent is running. +Service input plugins listen on a socket for known protocol inputs, or apply +their own logic to ingested metrics before delivering metrics to the Telegraf agent. 
+
+Related entries:
+[aggregator plugin](#aggregator-plugin),
+[input plugin](#input-plugin),
+[output plugin](#output-plugin),
+[processor plugin](#processor-plugin)
+
+### string
+
+A data type used to represent text.
+
+## T
+
+### TCP
+
+Transmission Control Protocol.
+
+### table
+
+A collection of related data organized in a structured way with a predefined set
+of columns and data types.
+Each row in the table represents a specific record or instance of the data, and
+each column represents a specific attribute or property of the data.
+
+In InfluxDB v3, a table represents a measurement.
+
+Related entries:
+[column](#column),
+[measurement](#measurement),
+[primary key](#primary-key),
+[row](#row)
+
+### tag
+
+The key-value pair in InfluxDB's data structure that records metadata.
+Tags are an optional part of InfluxDB's data structure, but they are useful for
+storing commonly-queried metadata.
+
+Related entries:
+[field](#field),
+[tag key](#tag-key),
+[tag set](#tag-set),
+[tag value](#tag-value)
+
+### tag key
+
+The key of a tag key-value pair.
+Tag keys are strings and store metadata.
+
+Related entries:
+[field key](#field-key),
+[tag](#tag),
+[tag set](#tag-set),
+[tag value](#tag-value)
+
+### tag set
+
+The collection of tag keys and tag values on a point.
+
+Related entries:
+[point](#point),
+[series](#series),
+[tag](#tag),
+[tag key](#tag-key),
+[tag value](#tag-value)
+
+### tag value
+
+The value of a tag key-value pair.
+Tag values are strings and store metadata.
+
+Related entries:
+[tag](#tag),
+[tag key](#tag-key),
+[tag set](#tag-set)
+
+### Telegraf
+
+A plugin-driven agent that collects, processes, aggregates, and writes metrics.
+
+Related entries:
+[Telegraf plugins](/{{< latest "telegraf" >}}/plugins/),
+[Use Telegraf to collect data](/influxdb/cloud-serverless/write-data/telegraf/)
+
+### time (data type)
+
+A data type that represents a single point in time with nanosecond precision.
+
+### time series data
+
+A sequence of data points typically consisting of successive measurements made
+from the same source over a time interval.
+Time series data shows how data evolves over time.
+On a time series data graph, one of the axes is always time.
+Time series data may be regular or irregular.
+Regular time series data changes at constant intervals.
+Irregular time series data changes at non-constant intervals.
+
+### timestamp
+
+The date and time associated with a point.
+Time in InfluxDB is in UTC.
+
+To specify time when writing data, see
+[Elements of line protocol](/influxdb/cloud-serverless/reference/syntax/line-protocol/#elements-of-line-protocol).
+
+Related entries:
+[point](#point),
+[unix timestamp](#unix-timestamp),
+[RFC3339 timestamp](#rfc3339-timestamp)
+
+### token
+
+Tokens provide authorization to perform specific actions in InfluxDB.
+{{% cloud-name %}} uses **API tokens** to authorize read and write access to resources and data.
+
+Related entries:
+[Manage tokens](/influxdb/cloud-serverless/admin/tokens/)
+
+### TSM (Time Structured Merge tree)
+
+The InfluxDB v1 and v2 data storage format that allows greater compaction and
+higher write and read throughput than B+ or LSM tree implementations.
+The TSM storage engine has been replaced by [IOx](#iox).
+
+Related entries:
+[IOx](#iox)
+
+## U
+
+### UDP
+
+User Datagram Protocol (UDP) is a communication protocol that sends packets of
+information without verifying that they are received.
+The sender doesn't wait for confirmation and continues to send the next packets,
+which lets computers communicate more quickly.
+UDP is used when speed is desirable and error correction is not necessary.
+
+### unix epoch
+
+The date and time from which Unix system times are measured.
+The Unix epoch is `1970-01-01T00:00:00Z`.
+
+### unix timestamp
+
+Counts time since **Unix Epoch (1970-01-01T00:00:00Z UTC)** in specified units ([precision](#precision)).
+Specify timestamp precision when [writing data to InfluxDB](/influxdb/cloud-serverless/write-data/). +InfluxDB supports the following unix timestamp precisions: + +| Precision | Description | Example | +|:--------- |:----------- |:------- | +| `ns` | Nanoseconds | `1577836800000000000` | +| `us` | Microseconds | `1577836800000000` | +| `ms` | Milliseconds | `1577836800000` | +| `s` | Seconds | `1577836800` | + +
+
+The examples above represent 2020-01-01T00:00:00Z UTC.
+
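The relationships in the precision table can be sketched with a short example (a hedged illustration using only the Python standard library, not taken from the InfluxDB API): each precision differs from the next by a factor of 1,000, and all four example values encode the same instant.

```python
from datetime import datetime, timezone

# Unix timestamp precisions differ by factors of 1,000.
ns = 1577836800000000000   # nanoseconds
us = ns // 1_000           # microseconds: 1577836800000000
ms = ns // 1_000_000       # milliseconds: 1577836800000
s = ns // 1_000_000_000    # seconds:      1577836800

# Convert the seconds-precision value back to an RFC3339 string.
rfc3339 = datetime.fromtimestamp(s, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
print(rfc3339)  # 2020-01-01T00:00:00Z
```

When writing data, the precision you declare tells InfluxDB which of these units your raw integer timestamps use.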
+
+Related entries:
+[timestamp](#timestamp),
+[RFC3339 timestamp](#rfc3339-timestamp)
+
+### unsigned integer
+
+A whole number that is positive or zero (`0`, `143`). Also known as a "uinteger."
+InfluxDB supports 64-bit unsigned integers (minimum: `0`, maximum: `18446744073709551615`).
+
+Related entries:
+[integer](#integer)
+
+### user
+
+InfluxDB users are granted permission to access InfluxDB.
+
+## V
+
+### values per second
+
+The preferred measurement of the rate at which data are persisted to InfluxDB.
+Write speeds are generally quoted in values per second.
+
+To calculate the values per second rate, multiply the number of points written
+per second by the number of values stored per point.
+For example, if the points have four fields each, and a batch of 5000 points is
+written 10 times per second, the values per second rate is:
+
+**4 field values per point** × **5000 points per batch** × **10 batches per second** = **200,000 values per second**
+
+Related entries:
+[batch](#batch),
+[field](#field),
+[point](#point)
+
+### variable
+
+A storage location (identified by a memory address) paired with an associated
+symbolic name (an identifier).
+A variable contains some known or unknown quantity of information referred to as a value.
+
+### variable assignment
+
+A statement that sets or updates the value stored in a variable.
+
+## W
+
+### WAL (Write Ahead Log) - enterprise
+
+The temporary cache for recently written points.
+To reduce the frequency that permanent storage files are accessed, InfluxDB
+caches new points in the WAL until their total size or age triggers a flush to
+more permanent storage. This allows for efficient batching of writes into the TSM.
+
+Points in the WAL can be queried and persist through a system reboot.
+On process start, all points in the WAL must be flushed before the system accepts new writes.
+ +Related entries: +[tsm](#tsm-time-structured-merge-tree) + +### windowing + +Grouping data based on specified time intervals. +This is also referred to as "time binning" or "date binning." + diff --git a/content/influxdb/cloud-serverless/reference/syntax/annotated-csv/_index.md b/content/influxdb/cloud-serverless/reference/syntax/annotated-csv/_index.md index 396358b7b..e6a36b227 100644 --- a/content/influxdb/cloud-serverless/reference/syntax/annotated-csv/_index.md +++ b/content/influxdb/cloud-serverless/reference/syntax/annotated-csv/_index.md @@ -1,8 +1,7 @@ --- title: Annotated CSV description: > - The InfluxDB `/api/v2/query` API returns query results in annotated CSV format. - You can write data to InfluxDB using annotated CSV and the `influx write` command. + You can write data to InfluxDB using annotated CSV and the InfluxDB HTTP API. weight: 103 menu: influxdb_cloud_serverless: @@ -12,8 +11,7 @@ related: - /influxdb/cloud-serverless/reference/syntax/annotated-csv/extended/ --- -The InfluxDB `/api/v2/query` API returns query results in annotated CSV format. -You can also write data to InfluxDB using annotated CSV and the `influx write` command, +You can write data to InfluxDB using annotated CSV and the InfluxDB HTTP API or [upload a CSV file](/influxdb/cloud-serverless/write-data/csv/user-interface) in the InfluxDB UI. CSV tables must be encoded in UTF-8 and Unicode Normal Form C as defined in [UAX15](http://www.unicode.org/reports/tr15/). @@ -111,29 +109,22 @@ Subsequent columns contain annotation values as shown in the table below. | Annotation name | Values | Description | |:-------- |:--------- | :------- | -| **datatype** | a [data type](#data-types) or [line protocol element](#line-protocol-elements) | Describes the type of data or which line protocol element the column represents. | -| **group** | boolean flag `true` or `false` | Indicates the column is part of the group key. 
| +| **datatype** | a [data type](#data-types) or [line protocol element](#line-protocol-elements) | Describes the type of data or which line protocol element the column represents. | | | **default** | a value of the column's data type | Value to use for rows with an empty value. | -{{% note %}} -To encode a table with its [group key](/influxdb/cloud-serverless/reference/glossary/#group-key), -the `datatype`, `group`, and `default` annotations must be included. -If a table has no rows, the `default` annotation provides the group key values. -{{% /note %}} - ## Data types -| Datatype | Flux type | Description | -| :-------- | :--------- | :---------- | -| boolean | bool | "true" or "false" | -| unsignedLong | uint | unsigned 64-bit integer | -| long | int | signed 64-bit integer | -| double | float | IEEE-754 64-bit floating-point number | -| string | string | UTF-8 encoded string | -| base64Binary | bytes | base64 encoded sequence of bytes as defined in RFC 4648 | -| dateTime | time | instant in time, may be followed with a colon : and a description of the format (number, RFC3339, RFC3339Nano) | -| duration | duration | length of time represented as an unsigned 64-bit integer number of nanoseconds | +| Datatype | Description | +| :-------- | :---------- | +| boolean | "true" or "false" | +| unsignedLong | unsigned 64-bit integer | +| long | signed 64-bit integer | +| double | IEEE-754 64-bit floating-point number | +| string | UTF-8 encoded string | +| base64Binary | base64 encoded sequence of bytes as defined in RFC 4648 | +| dateTime | instant in time, may be followed with a colon : and a description of the format (number, RFC3339, RFC3339Nano) | +| duration | length of time represented as an unsigned 64-bit integer number of nanoseconds | ## Line protocol elements diff --git a/content/influxdb/cloud-serverless/write-data/best-practices/schema-design.md b/content/influxdb/cloud-serverless/write-data/best-practices/schema-design.md index 882c2aadf..cbd203a21 
100644 --- a/content/influxdb/cloud-serverless/write-data/best-practices/schema-design.md +++ b/content/influxdb/cloud-serverless/write-data/best-practices/schema-design.md @@ -266,7 +266,6 @@ matching or regular expressions to evaluate the `sensor` tag: {{% code-tabs %}} [SQL](#) [InfluxQL](#) -[Flux](#) {{% /code-tabs %}} {{% code-tab-content %}} @@ -281,18 +280,6 @@ SELECT * FROM home WHERE sensor LIKE '%id-1726ZA%' SELECT * FROM home WHERE sensor =~ /id-1726ZA/ ``` -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental/iox" - -iox.from(bucket: "example-bucket") - |> range(start: -1y) - |> filter(fn: (r) => r._measurement == "home") - |> filter(fn: (r) => r.sensor =~ /id-1726ZA/) -``` - {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} @@ -329,7 +316,6 @@ simple equality expression: {{< code-tabs-wrapper >}} {{% code-tabs %}} [SQL & InfluxQL](#) -[Flux](#) {{% /code-tabs %}} {{% code-tab-content %}} @@ -337,18 +323,6 @@ simple equality expression: SELECT * FROM home WHERE sensor_id = '1726ZA' ``` -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental/iox" - -iox.from(bucket: "example-bucket") - |> range(start: -1y) - |> filter(fn: (r) => r._measurement == "home") - |> filter(fn: (r) => r.sensor_id == "1726ZA") -``` - {{% /code-tab-content %}} {{< /code-tabs-wrapper >}} @@ -362,20 +336,9 @@ in measurement names, tag keys, and field keys. - [SQL keywords](/influxdb/cloud-serverless/reference/sql/#keywords) - [InfluxQL keywords](/influxdb/cloud-serverless/reference/syntax/influxql/spec/#keywords) -- [Flux keywords](/{{< latest "flux" >}}/spec/lexical-elements/#keywords) When using SQL or InfluxQL to query measurements, tags, and fields with special characters or keywords, you have to wrap these keys in **double quotes**. -In Flux, if using special characters in tag keys, you have to use -[bracket notation](/{{< latest "flux" >}}/data-types/composite/record/#bracket-notation) -to reference those columns. 
- -{{< code-tabs-wrapper >}} -{{% code-tabs %}} -[SQL & InfluxQL](#) -[Flux](#) -{{% /code-tabs %}} -{{% code-tab-content %}} ```sql SELECT @@ -385,18 +348,3 @@ FROM WHERE "tag@1-23" = 'ABC' ``` - -{{% /code-tab-content %}} -{{% code-tab-content %}} - -```js -import "experimental/iox" - -iox.from(bucket: "example-bucket") - |> range(start: -1y) - |> filter(fn: (r) => r._measurement == "example-measurement") - |> filter(fn: (r) => r["tag@1-23"] == "ABC") -``` - -{{% /code-tab-content %}} -{{< /code-tabs-wrapper >}} diff --git a/content/influxdb/cloud-serverless/write-data/migrate-data/_index.md b/content/influxdb/cloud-serverless/write-data/migrate-data/_index.md index 6297cadd4..845c98807 100644 --- a/content/influxdb/cloud-serverless/write-data/migrate-data/_index.md +++ b/content/influxdb/cloud-serverless/write-data/migrate-data/_index.md @@ -9,6 +9,9 @@ menu: parent: Write data weight: 104 alt_engine: /influxdb/cloud/migrate-data/ +aliases: + - /influxdb/cloud-serverless/reference/flux/ + - /influxdb/cloud-serverless/query-data/sql/execute-queries/flux-sql/ --- Migrate data to InfluxDB Cloud Serverless powered by InfluxDB IOx from other @@ -69,12 +72,6 @@ in more regions around the world. storage engine. Flux is optimized to work with the TSM storage engine, but these optimizations do not apply to the on-disk structure of InfluxDB IOx. -To maintain performant Flux queries against the IOx storage engine, you need to -update Flux queries to use a mixture of both SQL and Flux—SQL to query the base -dataset and Flux to perform other transformations that SQL does not support. -For information about using SQL and Flux together for performant queries, see -[Use Flux and SQL to query data](/influxdb/cloud-serverless/query-data/flux-sql/). - --- ## Data migration guides