This guide describes how to manually rebalance an InfluxDB Enterprise cluster.
+Rebalancing a cluster involves two primary goals:
+
+
Evenly distribute
+shards across all data nodes in the
+cluster
+
Ensure that every
shard is on n nodes, where n is determined by the retention policy’s
replication factor
+
+
Rebalancing a cluster is essential for cluster health.
+Perform a rebalance if you add a new data node to your cluster.
+The proper rebalance path depends on the purpose of the new data node.
+If you added a data node to expand the disk size of the cluster or increase
+write throughput, follow the steps in
+Rebalance Procedure 1.
+If you added a data node to increase data availability for queries and query
+throughput, follow the steps in
+Rebalance Procedure 2.
+
Requirements
+
The following sections assume that you already added a new data node to the
+cluster, and they use the
+influxd-ctl tool available on
+all meta nodes.
+
+
+
+
+
Stop writing data before rebalancing
+
Before you begin, stop writing historical data to InfluxDB.
Historical data have timestamps that occur at any time in the past.
+Performing a rebalance while writing historical data can lead to data loss.
+
+
+
+
+
+
Risks of rebalancing with future data
+
Truncating shards that contain data with future timestamps (such as forecast or prediction data)
+can lead to overlapping shards and data duplication.
+For more information, see truncate-shards and future data
+or contact InfluxData support.
+
+
Rebalance Procedure 1: Rebalance a cluster to create space
+
For demonstration purposes, the next steps assume that you added a third
+data node to a previously two-data-node cluster that has a
+replication factor of
+two.
+This rebalance procedure is applicable for different cluster sizes and
+replication factors, but some of the specific, user-provided values will depend
+on that cluster size.
+
Rebalance Procedure 1 focuses on how to rebalance a cluster after adding a
+data node to expand the total disk capacity of the cluster.
+In the next steps, you will safely move shards from one of the two original data
+nodes to the new data node.
+
Step 1: Truncate Hot Shards
+
Hot shards are shards that currently receive writes.
Performing any action on a hot shard can lead to data inconsistency within the
cluster, which requires manual intervention from the user.
+
+
+
+
+
Risks of rebalancing with future data
+
Truncating shards that contain data with future timestamps (such as forecast or prediction data)
+can lead to overlapping shards and data duplication.
+For more information, see truncate-shards and future data
+or contact InfluxData support.
+
+
To prevent data inconsistency, truncate shards before moving any shards
+across data nodes.
+The following command truncates all hot shards and creates new shards to write data to:
+
+
+
influxd-ctl truncate-shards
+
The expected output of this command is:
+
+
+
Truncated shards.
+
New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them.
+Previous writes are stored in cold shards.
+
After truncating shards, you can redistribute cold shards without data inconsistency.
+Hot and new shards are evenly distributed and require no further intervention.
+
Step 2: Identify Cold Shards
+
In this step, you identify the cold shards that you will copy to the new data node
+and remove from one of the original two data nodes.
+
The following command lists every shard in your cluster:

influxd-ctl show-shards

The sample output includes three shards.
+The first two shards are cold shards.
+The timestamp in the End column occurs in the past (assume that the current
+time is just after 2017-01-26T18:05:36.418734949Z), and the shards’ owners
+are the two original data nodes: enterprise-data-01:8088 and
+enterprise-data-02:8088.
The second shard is the truncated shard; truncated shards have an asterisk (*)
on the timestamp in the End column.
+
The third shard is the newly-created hot shard; the timestamp in the End
+column is in the future (again, assume that the current time is just after
+2017-01-26T18:05:36.418734949Z), and the shard’s owners include one of the
+original data nodes (enterprise-data-02:8088) and the new data node
+(enterprise-data-03:8088).
+That hot shard and any subsequent shards require no attention during
+the rebalance process.
+
Identify the cold shards that you’d like to move from one of the original two
+data nodes to the new data node.
+Take note of the cold shard’s ID (for example: 22) and the TCP address of
+one of its owners in the Owners column (for example:
+enterprise-data-01:8088).
+
+
+
+
+
To determine the size of shards in
+your cluster, enter the following command:
+
+
+
find /var/lib/influxdb/data/ -mindepth 3 -type d -exec du -h {} \;
+
+
In general, we recommend moving larger shards to the new data node to increase the
+available disk space on the original data nodes.
+
Moving shards will impact network traffic.
+
Step 3: Copy Cold Shards
+
Next, copy the relevant cold shards to the new data node with the following syntax.
Repeat this command for every cold shard that you’d like to move to the
new data node:

influxd-ctl copy-shard <source_TCP_address> <destination_TCP_address> <shard_ID>

Where source_TCP_address is the address that you noted in step 2,
destination_TCP_address is the TCP address of the new data node, and shard_ID
is the ID of the shard that you noted in step 2.
+
The expected output of the command is:
+
+
+
Copied shard <shard_ID> from <source_TCP_address> to <destination_TCP_address>
+
Step 4: Confirm the Copied Shards
+
Confirm that the TCP address of the new data node appears in the Owners column
+for every copied shard:
+
+
+
influxd-ctl show-shards
+
The expected output shows that the copied shard now has three owners.
In addition, verify that the copied shards appear in the new data node’s shard
+directory and match the shards in the source data node’s shard directory.
+Shards are located in
+/var/lib/influxdb/data/<database>/<retention_policy>/<shard_ID>.
+
Here’s an example of the correct output for shard 22:
+
+
+
# On the source data node (enterprise-data-01)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
+# On the new data node (enterprise-data-03)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
It is essential that every copied shard appears on the new data node both
+in the influxd-ctl show-shards output and in the shard directory.
+If a shard does not pass both of the tests above, please repeat step 3.
+
Step 5: Remove Unnecessary Cold Shards
+
Next, remove the copied shard from the original data node with the following command.
Repeat this command for every cold shard that you’d like to remove from one of
the original data nodes:

influxd-ctl remove-shard <source_TCP_address> <shard_ID>

Removing a shard is an irrecoverable, destructive action; please be
cautious with this command.
That’s it.
+You’ve successfully rebalanced your cluster; you expanded the available disk
+size on the original data nodes and increased the cluster’s write throughput.
+
Rebalance Procedure 2: Rebalance a cluster to increase availability
+
For demonstration purposes, the next steps assume that you added a third
+data node to a previously two-data-node cluster that has a
+replication factor of
+two.
+This rebalance procedure is applicable for different cluster sizes and
+replication factors, but some of the specific, user-provided values will depend
+on that cluster size.
+
Rebalance Procedure 2 focuses on how to rebalance a cluster to improve availability
+and query throughput.
+In the next steps, you will increase the retention policy’s replication factor and
+safely copy shards from one of the two original data nodes to the new data node.
+
Step 1: Update the Retention Policy
+
Update
+every retention policy to have a replication factor of three.
+This step ensures that the system automatically distributes all newly-created
+shards across the three data nodes in the cluster.
+
The following query increases the replication factor to three.
+Run the query on any data node for each retention policy and database.
+Here, we use InfluxDB’s CLI to execute the query:
+
+
+
ALTER RETENTION POLICY "<retention_policy_name>" ON "<database_name>" REPLICATION 3
+
A successful ALTER RETENTION POLICY query returns no results.
+Use the
+SHOW RETENTION POLICIES query
+to verify the new replication factor.
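For example, to inspect the retention policies on a database named food_data (a hypothetical name), run:

SHOW RETENTION POLICIES ON "food_data"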
Step 2: Truncate Hot Shards

Hot shards are shards that currently receive writes.
Performing any action on a hot shard can lead to data inconsistency within the
cluster, which requires manual intervention from the user.
+
+
+
+
+
Risks of rebalancing with future data
+
Truncating shards that contain data with future timestamps (such as forecast or prediction data)
+can lead to overlapping shards and data duplication.
+For more information, see truncate-shards and future data
+or contact InfluxData support.
+
+
To prevent data inconsistency, truncate shards before copying any shards
+to the new data node.
+The following command truncates all hot shards and creates new shards to write data to:
+
+
+
influxd-ctl truncate-shards
+
The expected output of this command is:
+
+
+
Truncated shards.
+
New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them.
+Previous writes are stored in cold shards.
+
After truncating shards, you can redistribute cold shards without data inconsistency.
+Hot and new shards are evenly distributed and require no further intervention.
+
Step 3: Identify Cold Shards
+
In this step, you identify the cold shards that you will copy to the new data node.
+
The following command lists every shard in your cluster:
+
+
+
influxd-ctl show-shards
+
The sample output includes three shards.
+The first two shards are cold shards.
+The timestamp in the End column occurs in the past (assume that the current
+time is just after 2017-01-26T18:05:36.418734949Z), and the shards’ owners
+are the two original data nodes: enterprise-data-01:8088 and
+enterprise-data-02:8088.
The second shard is the truncated shard; truncated shards have an asterisk (*)
on the timestamp in the End column.
+
The third shard is the newly-created hot shard; the timestamp in the End
+column is in the future (again, assume that the current time is just after
+2017-01-26T18:05:36.418734949Z), and the shard’s owners include all three
+data nodes: enterprise-data-01:8088, enterprise-data-02:8088, and
+enterprise-data-03:8088.
+That hot shard and any subsequent shards require no attention during
+the rebalance process.
+
Identify the cold shards that you’d like to copy from one of the original two
+data nodes to the new data node.
+Take note of the cold shard’s ID (for example: 22) and the TCP address of
+one of its owners in the Owners column (for example:
+enterprise-data-01:8088).
+
Step 4: Copy Cold Shards
+
Next, copy the relevant cold shards to the new data node with the following syntax.
Repeat this command for every cold shard that you’d like to move to the
new data node:

influxd-ctl copy-shard <source_TCP_address> <destination_TCP_address> <shard_ID>

Where source_TCP_address is the address that you noted in step 3,
destination_TCP_address is the TCP address of the new data node, and shard_ID
is the ID of the shard that you noted in step 3.
+
The expected output of the command is:
+
+
+
Copied shard <shard_ID> from <source_TCP_address> to <destination_TCP_address>
+
Step 5: Confirm the Rebalance
+
Confirm that the TCP address of the new data node appears in the Owners column
+for every copied shard:
+
+
+
influxd-ctl show-shards
+
The expected output shows that the copied shard now has three owners.
In addition, verify that the copied shards appear in the new data node’s shard
+directory and match the shards in the source data node’s shard directory.
+Shards are located in
+/var/lib/influxdb/data/<database>/<retention_policy>/<shard_ID>.
+
Here’s an example of the correct output for shard 22:
+
+
+
# On the source data node (enterprise-data-01)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
+# On the new data node (enterprise-data-03)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
That’s it.
+You’ve successfully rebalanced your cluster and increased data availability for
+queries and query throughput.
Thank you for being part of our community!
We welcome and encourage your feedback and bug reports for this documentation.
To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
your Docker deployments. For example, if using Docker to run InfluxDB v2,
replace the latest version tag with a specific version tag in your docker
pull command.
Important
+Authentication must be enabled before authorization can be managed.
+If authentication is not enabled, permissions will not be enforced.
+See “Enable authentication”.
Outside of creating users,
+we recommend operators do not mix and match InfluxQL
+with other authorization management methods (Chronograf and the API).
+Doing so may lead to inconsistencies in user permissions.
+
+
+
+
This page shows examples of basic user and permission management using InfluxQL statements.
+However, only a subset of Enterprise permissions can be managed with InfluxQL.
+Using InfluxQL, you can perform the following actions:
+
+
Create new users and assign them either the admin role or no role.

Grant READ and/or WRITE permissions (READ, WRITE, or ALL) to users.

Revoke permissions from users.

Grant or revoke access to specific databases for individual users.
+
+
However, InfluxDB Enterprise offers an expanded set of permissions.
+You can use the Meta API and Chronograf to access and assign these more granular permissions to individual users.
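As an illustration, the actions listed above map to InfluxQL statements like the following (the user names todd and admin_todd and the database mydb are hypothetical examples):

CREATE USER "todd" WITH PASSWORD 'changeit'
CREATE USER "admin_todd" WITH PASSWORD 'changeit' WITH ALL PRIVILEGES
GRANT READ ON "mydb" TO "todd"
REVOKE READ ON "mydb" FROM "todd"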
When authentication is enabled,
+a new non-admin user has no access to any database
+until they are specifically granted privileges to a database
+by an admin user.
+
Non-admin users can SHOW
+the databases for which they have ReadData or WriteData permissions.
The user value must be wrapped in double quotes if
+it starts with a digit, is an InfluxQL keyword, contains a hyphen,
+or includes any special characters (for example: !@#$%^&*()-).
+
The password string must be wrapped in single quotes.
Do not include the single quotes when authenticating requests.
We recommend avoiding the single quote (') and backslash (\) characters in passwords.
For passwords that include these characters, escape the special character with a backslash
(for example, \') when creating the password and when submitting authentication requests.

Repeating the exact CREATE USER statement is idempotent.
If any values change, the database will return a duplicate user error.
Flux, at its core, is a scripting language designed specifically for working with data.
+This guide walks through a handful of simple expressions and how they are handled in Flux.
+
Simple expressions
+
Flux is a scripting language that supports basic expressions.
+For example, simple addition:
+
+
+
> 1 + 1
2
+
Variables
+
Assign an expression to a variable using the assignment operator, =.
+
+
+
s = "this is a string"
i = 1 // an integer
f = 2.0 // a floating point number
+
+
Type the name of a variable to print its value:
+
+
+
> s
this is a string
> i
1
> f
2
+
Records
+
Flux also supports records. Each value in a record can be a different data type.
+
+
+
o = {name: "Jim", age: 42, "favorite color": "red"}
+
Use dot notation to access a property of a record:
Use bracket notation to reference record properties with special or
+white space characters in the property key.
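For example, using the record o defined above:

> o.name
Jim
> o["favorite color"]
red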
+
+
+
+
Lists
+
Flux supports lists. List values must be the same type.
+
+
+
> n = 4
> l = [1, 2, 3, n]
> l
[1, 2, 3, 4]
+
Functions
+
Flux uses functions for most of its heavy lifting.
+Below is a simple function that squares a number, n.
+
+
+
> square = (n) => n * n
> square(n: 3)
9
+
+
+
Flux does not support positional arguments or parameters.
+Parameters must always be named when calling a function.
+
+
+
Pipe-forward operator
+
Flux uses the pipe-forward operator (|>) extensively to chain operations together.
+After each function or operation, Flux returns a table or collection of tables containing data.
+The pipe-forward operator pipes those tables into the next function where they are further processed or manipulated.
+
+
+
data |> someFunction() |> anotherFunction()
+
Real-world application of basic syntax
+
This likely seems familiar if you’ve already been through the other getting started guides.
+Flux’s syntax is inspired by JavaScript and other functional scripting languages.
+As you begin to apply these basic principles in real-world use cases such as creating data stream variables,
+custom functions, etc., the power of Flux and its ability to query and process data will become apparent.
+
The examples below provide both multi-line and single-line versions of each input command.
+Carriage returns in Flux aren’t necessary, but do help with readability.
+Both single- and multi-line commands can be copied and pasted into the influx CLI running in Flux mode.
These variables can be used in other functions, such as join(), while keeping the syntax minimal and flexible.
+
Define custom functions
+
Create a function that returns the n rows in the input stream with the highest _values.
To do this, pass the input stream (tables) and the number of results to return (n) into a custom function.
Then use Flux’s sort() and limit() functions to find the top n results in the data set.
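A sketch of such a function, following the pattern described above (the function name topN is our own):

topN = (tables=<-, n) =>
    tables
        |> sort(columns: ["_value"], desc: true)
        |> limit(n: n)

It can then be used like any built-in transformation, for example: data |> topN(n: 10).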
InfluxDB can handle hundreds of thousands of data points per second. Working with that much data over a long period of time can create storage concerns.
+A natural solution is to downsample the data; keep the high precision raw data for only a limited time, and store the lower precision, summarized data longer.
+This guide describes how to automate the process of downsampling data and expiring old data using InfluxQL. To downsample and retain data using Flux and InfluxDB 2.0,
+see Process data with InfluxDB tasks.
+
Definitions
+
+
+
Continuous query (CQ) is an InfluxQL query that runs automatically and periodically within a database.
+CQs require a function in the SELECT clause and must include a GROUP BY time() clause.
+
+
+
Retention policy (RP) is the part of the InfluxDB data structure that describes how long InfluxDB keeps data.
InfluxDB compares your local server’s timestamp to the timestamps on your data and deletes data older than the RP’s DURATION.
A single database can have several RPs, and RPs are unique per database.
+
+
+
This guide doesn’t go into detail about the syntax for creating and managing CQs and RPs or tasks.
+If you’re new to these concepts, we recommend reviewing the following:
This section uses fictional real-time data to track the number of food orders
to a restaurant via phone and via website at ten-second intervals.
We store this data in a database or bucket called food_data, in
the measurement orders, and
in the fields phone and website.
Assume that, in the long run, we’re only interested in the average number of orders by phone
+and by website at 30 minute intervals.
+In the next steps, we use RPs and CQs to:
+
+
Automatically aggregate the ten-second resolution data to 30-minute resolution data
+
Automatically delete the raw, ten-second resolution data that are older than two hours
+
Automatically delete the 30-minute resolution data that are older than 52 weeks
+
+
Database preparation
+
We perform the following steps before writing the data to the database
+food_data.
+We do this before inserting any data because CQs only run against recent
+data; that is, data with timestamps that are no older than now() minus
+the FOR clause of the CQ, or now() minus the GROUP BY time() interval if
+the CQ has no FOR clause.
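The recency rule above can be sketched in Python (illustrative arithmetic only, not InfluxDB code):

```python
from datetime import datetime, timedelta

def cq_covers(point_time, now, group_by_interval, for_clause=None):
    """Return True if a CQ would still process a point written at point_time.

    A CQ only runs against data no older than now() minus the FOR clause,
    or, when there is no FOR clause, now() minus the GROUP BY time() interval.
    """
    window = for_clause if for_clause is not None else group_by_interval
    return point_time >= now - window

now = datetime(2016, 5, 14, 0, 59, 59)
# A point 20 minutes old falls inside a 30-minute GROUP BY time() window
print(cq_covers(now - timedelta(minutes=20), now, timedelta(minutes=30)))  # True
```

This is why the RPs and CQ must exist before any data are written: points older than the window are never picked up by the CQ.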
+
1. Create the database
+
+
+
CREATE DATABASE "food_data"
+
2. Create a two-hour DEFAULT retention policy
+
InfluxDB writes to the DEFAULT retention policy if we do not supply an explicit RP when
+writing a point to the database.
+We make the DEFAULT RP keep data for two hours, because we want InfluxDB to
+automatically write the incoming ten-second resolution data to that RP.
CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT

That query creates an RP called two_hours that exists in the database
food_data.
two_hours keeps data for a DURATION of two hours (2h) and it’s the DEFAULT
RP for the database food_data.
+
+
+
The replication factor (REPLICATION 1) is a required parameter but must always
+be set to 1 for single node instances.
+
+
+
+
+
+
Note: When we created the food_data database in step 1, InfluxDB
+automatically generated an RP named autogen and set it as the DEFAULT
+RP for the database.
+The autogen RP has an infinite retention period.
+With the query above, the RP two_hours replaces autogen as the DEFAULT RP
+for the food_data database.
+
+
+
3. Create a 52-week retention policy
+
Next we want to create another retention policy that keeps data for 52 weeks and is not the
+DEFAULT retention policy (RP) for the database.
+Ultimately, the 30-minute rollup data will be stored in this RP.
CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1

That query creates a retention policy (RP) called a_year that exists in the database
food_data.
The a_year setting keeps data for a DURATION of 52 weeks (52w).
+Leaving out the DEFAULT argument ensures that a_year is not the DEFAULT
+RP for the database food_data.
+That is, write and read operations against food_data that do not specify an
+RP will still go to the two_hours RP (the DEFAULT RP).
+
4. Create the continuous query
+
Now that we’ve set up our RPs, we want to create a continuous query (CQ) that will automatically
+and periodically downsample the ten-second resolution data to the 30-minute
+resolution, and then store those results in a different measurement with a different
+retention policy.
CREATE CONTINUOUS QUERY "cq_30m" ON "food_data" BEGIN
  SELECT mean("website") AS "mean_website", mean("phone") AS "mean_phone"
  INTO "a_year"."downsampled_orders"
  FROM "orders"
  GROUP BY time(30m)
END

That query creates a CQ called cq_30m in the database food_data.
+cq_30m tells InfluxDB to calculate the 30-minute average of the two fields
+website and phone in the measurement orders and in the DEFAULT RP
+two_hours.
+It also tells InfluxDB to write those results to the measurement
+downsampled_orders in the retention policy a_year with the field keys
+mean_website and mean_phone.
+InfluxDB will run this query every 30 minutes for the previous 30 minutes.
+
+
+
Note: Notice that we fully qualify (that is, we use the syntax
+"<retention_policy>"."<measurement>") the measurement in the INTO
+clause.
+InfluxDB requires that syntax to write data to an RP other than the DEFAULT
+RP.
+
+
+
Results
+
With the new CQ and two new RPs, food_data is ready to start receiving data.
+After writing data to our database and letting things run for a bit, we see
+two measurements: orders and downsampled_orders.
The data in orders are the raw, ten-second resolution data that reside in the
+two-hour RP.
+The data in downsampled_orders are the aggregated, 30-minute resolution data
+that are subject to the 52-week RP.
+
Notice that the first timestamps in downsampled_orders are older than the first
+timestamps in orders.
+This is because InfluxDB has already deleted data from orders with timestamps
+that are older than our local server’s timestamp minus two hours (assume we
+executed the SELECT queries at 2016-05-14T00:59:59Z).
+InfluxDB will only start dropping data from downsampled_orders after 52 weeks.
+
+
+
Notes:
+
+
+
+
Notice that we fully qualify (that is, we use the syntax
+"<retention_policy>"."<measurement>") downsampled_orders in
+the second SELECT statement. We must specify the RP in that query to SELECT
+data that reside in an RP other than the DEFAULT RP.
+
+
+
+
+
+
+
By default, InfluxDB checks to enforce an RP every 30 minutes.
+Between checks, orders may have data that are older than two hours.
+The rate at which InfluxDB checks to enforce an RP is a configurable setting,
+see
+Database Configuration.
+
+
Using a combination of RPs and CQs, we’ve successfully set up our database to
+automatically keep the high precision raw data for a limited time, create lower
+precision data, and store that lower precision data for a longer period of time.
+Now that you have a general understanding of how these features can work
+together, check out the detailed documentation on CQs and RPs
+to see all that they can do for you.
Continuous queries (CQ) are InfluxQL queries that run automatically and
periodically on real-time data and store query results in a
specified measurement.
Note: Notice that the cq_query does not require a time range in a WHERE clause.
+InfluxDB automatically generates a time range for the cq_query when it executes the CQ.
+Any user-specified time ranges in the cq_query’s WHERE clause will be ignored
+by the system.
+
+
+
Schedule and coverage
+
Continuous queries operate on real-time data.
They use the local server’s timestamp, the GROUP BY time() interval, and
the InfluxDB database’s preset time boundaries to determine when to execute and what time
range to cover in the query.
+
CQs execute at the same interval as the cq_query’s GROUP BY time() interval,
+and they run at the start of the InfluxDB database’s preset time boundaries.
+If the GROUP BY time() interval is one hour, the CQ executes at the start of
+every hour.
+
When the CQ executes, it runs a single query for the time range between
+now() and now() minus the
+GROUP BY time() interval.
+If the GROUP BY time() interval is one hour and the current time is 17:00,
+the query’s time range is between 16:00 and 16:59.999999999.
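The boundary arithmetic described above can be sketched in Python (illustrative only, not InfluxDB’s actual scheduler; the midnight-aligned boundary is a simplifying assumption for intervals that divide a day evenly):

```python
from datetime import datetime, timedelta

def cq_window(now, interval):
    """Return the (start, end) time range a CQ covers when it fires.

    The CQ fires at preset time boundaries (here: midnight-aligned
    multiples of the GROUP BY time() interval) and queries the range
    [end - interval, end).
    """
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    # Floor `now` to the most recent interval boundary
    intervals_elapsed = (now - midnight) // interval
    end = midnight + interval * intervals_elapsed
    return end - interval, end

# With a one-hour interval and the current time 17:00, the covered
# range is 16:00 to 16:59.999999999 (that is, [16:00, 17:00))
start, end = cq_window(datetime(2016, 8, 28, 17, 0), timedelta(hours=1))
```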
+
Examples of basic syntax
+
The examples below use the following sample data in the transportation
+database.
+The measurement bus_data stores 15-minute resolution data on the number of bus
+passengers and complaints:
CREATE CONTINUOUS QUERY "cq_basic" ON "transportation" BEGIN
  SELECT mean("passengers")
  INTO "average_passengers"
  FROM "bus_data"
  GROUP BY time(1h)
END

cq_basic calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
+
cq_basic executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_basic runs a single query that covers the time range between
+now() and now() minus the GROUP BY time() interval, that is, the time
+range between now() and one hour prior to now().
+
Annotated log output on the morning of August 28, 2016:
CREATE CONTINUOUS QUERY "cq_basic_rp" ON "transportation" BEGIN
  SELECT mean("passengers")
  INTO "transportation"."three_weeks"."average_passengers"
  FROM "bus_data"
  GROUP BY time(1h)
END

cq_basic_rp calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the transportation database,
+the three_weeks RP, and the average_passengers measurement.
+
cq_basic_rp executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_basic_rp runs a single query that covers the time range between
+now() and now() minus the GROUP BY time() interval, that is, the time
+range between now() and one hour prior to now().
+
Annotated log output on the morning of August 28, 2016:
cq_basic_rp uses CQs and retention policies to automatically downsample data
+and keep those downsampled data for an alternative length of time.
+See the Downsampling and Data Retention
+guide for an in-depth discussion about this CQ use case.
+
Automatically downsampling a database with backreferencing
+
Use a function with a wildcard (*) and INTO query’s
+backreferencing syntax
+to automatically downsample data from all measurements and numerical fields in
+a database.
CREATE CONTINUOUS QUERY "cq_basic_br" ON "transportation" BEGIN
  SELECT mean(*)
  INTO "downsampled_transportation"."autogen".:MEASUREMENT
  FROM /.*/
  GROUP BY time(30m),*
END

cq_basic_br calculates the 30-minute average of passengers and complaints
+from every measurement in the transportation database (in this case, there’s only the
+bus_data measurement).
+It stores the results in the downsampled_transportation database.
+
cq_basic_br executes at 30-minute intervals, the same interval as the
GROUP BY time() interval.
+Every 30 minutes, cq_basic_br runs a single query that covers the time range
+between now() and now() minus the GROUP BY time() interval, that is,
+the time range between now() and 30 minutes prior to now().
+
Annotated log output on the morning of August 28, 2016:
cq_basic_offset calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement.
+
cq_basic_offset executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+The 15-minute offset interval forces the CQ to execute 15 minutes after the
+default execution time; cq_basic_offset executes at 8:15 instead of 8:00.
+
Every hour, cq_basic_offset runs a single query that covers the time range
+between now() and now() minus the GROUP BY time() interval, that is, the
+time range between now() and one hour prior to now().
+The 15-minute offset interval shifts the generated preset time boundaries forward in the
+CQ’s WHERE clause; cq_basic_offset queries between 7:15 and 8:14.999999999 instead of 7:00 and 7:59.999999999.
+
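This behavior corresponds to a GROUP BY time() clause with a 15-minute offset interval as its second argument:

```sql
CREATE CONTINUOUS QUERY "cq_basic_offset" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(1h,15m)
END
```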
Annotated log output on the morning of August 28, 2016:
+
+
+
>
+At **8:15** `cq_basic_offset` executes a query with the time range `time >= '7:15' AND time < '8:15'`.
+`cq_basic_offset` writes one point to the `average_passengers` measurement:
+>
+ name: average_passengers
+ ------------------------
+ time mean
+ 2016-08-28T07:15:00Z 7.75
+>
+At **9:15** `cq_basic_offset` executes a query with the time range `time >= '8:15' AND time < '9:15'`.
+`cq_basic_offset` writes one point to the `average_passengers` measurement:
+>
+ name: average_passengers
+ ------------------------
+ time mean
+ 2016-08-28T08:15:00Z 16.75
Notice that the timestamps are for 7:15 and 8:15 instead of 7:00 and 8:00.
+
Common issues with basic syntax
+
Handling time intervals with no data
+
CQs do not write any results for a time interval if no data fall within that
+time range.
+
Note that the basic syntax does not support using
+fill()
+to change the value reported for intervals with no data.
+Basic syntax CQs ignore fill() if it’s included in the CQ query.
+A possible workaround is to use the
+advanced CQ syntax.
+
Resampling previous time intervals
+
The basic CQ runs a single query that covers the time range between now()
+and now() minus the GROUP BY time() interval.
+See the advanced syntax for how to configure the query’s
+time range.
+
Backfilling results for older data
+
CQs operate on real-time data, that is, data with timestamps that occur
+relative to now().
+Use a basic
+INTO query
+to backfill results for data with older timestamps.
+
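For example, a one-off INTO query can backfill hourly averages for an older window (the time range shown here is illustrative):

```sql
SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
WHERE time >= '2016-08-28T00:00:00Z' AND time < '2016-08-28T06:00:00Z'
GROUP BY time(1h)
```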
Missing tags in the CQ results
+
By default, all
+INTO queries
+convert any tags in the source measurement to fields in the destination
+measurement.
+
Include GROUP BY * in the CQ to preserve tags in the destination measurement.
+
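A sketch using the transportation examples (the CQ name cq_keep_tags is hypothetical); the trailing * in the GROUP BY clause preserves all tags in the destination measurement:

```sql
CREATE CONTINUOUS QUERY "cq_keep_tags" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(1h), *
END
```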
Advanced syntax
+
+
+
CREATE CONTINUOUS QUERY <cq_name> ON <database_name>
+RESAMPLE EVERY <interval> FOR <interval>
+BEGIN
+ <cq_query>
+END
CQs operate on real-time data. With the advanced syntax, CQs use the local
+server’s timestamp, the information in the RESAMPLE clause, and the InfluxDB
+server’s preset time boundaries to determine when to execute and what time range to
+cover in the query.
+
CQs execute at the same interval as the EVERY interval in the RESAMPLE
+clause, and they run at the start of InfluxDB’s preset time boundaries.
+If the EVERY interval is two hours, InfluxDB executes the CQ at the top of
+every other hour.
+
When the CQ executes, it runs a single query for the time range between
+now() and now() minus the FOR interval in the RESAMPLE clause.
+If the FOR interval is two hours and the current time is 17:00, the query’s
+time range is between 15:00 and 16:59.999999999.
+
Both the EVERY interval and the FOR interval accept
+duration literals.
+The RESAMPLE clause works with either or both of the EVERY and FOR intervals
+configured.
+CQs default to the relevant
+basic syntax behavior
+if the EVERY interval or FOR interval is not provided (see the first issue in
+Common Issues with Advanced Syntax
+for an anomalous case).
+
Examples of advanced syntax
+
The examples below use the following sample data in the transportation database.
+The measurement bus_data stores 15-minute resolution data on the number of bus
+passengers:
cq_advanced_every calculates the one-hour average of passengers
+from the bus_data measurement and stores the results in the
+average_passengers measurement in the transportation database.
+
cq_advanced_every executes at 30-minute intervals, the same interval as the
+EVERY interval.
+Every 30 minutes, cq_advanced_every runs a single query that covers the time
+range for the current time bucket, that is, the one-hour time bucket that
+intersects with now().
+
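A definition consistent with this behavior pairs RESAMPLE EVERY 30m with a one-hour GROUP BY time() interval:

```sql
CREATE CONTINUOUS QUERY "cq_advanced_every" ON "transportation"
RESAMPLE EVERY 30m
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(1h)
END
```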
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_every calculates the result for the 8:00 time interval
+twice.
+First, it runs at 8:30 and calculates the average for every available data point
+between 8:00 and 9:00 (8, 15, and 15).
+Second, it runs at 9:00 and calculates the average for every available data
+point between 8:00 and 9:00 (8, 15, 15, and 17).
+Because of the way InfluxDB
+handles duplicate points
+, the second result simply overwrites the first result.
+
Configuring time ranges for resampling
+
Use a FOR interval in the RESAMPLE clause to specify the length of the CQ’s
+time range.
cq_advanced_for calculates the 30-minute average of passengers
+from the bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
+
cq_advanced_for executes at 30-minute intervals, the same interval as the
+GROUP BY time() interval.
+Every 30 minutes, cq_advanced_for runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and one hour prior to now().
+
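A definition consistent with this behavior pairs RESAMPLE FOR 1h with a 30-minute GROUP BY time() interval:

```sql
CREATE CONTINUOUS QUERY "cq_advanced_for" ON "transportation"
RESAMPLE FOR 1h
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(30m)
END
```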
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_for will calculate the result for every time interval
+twice.
+The CQ calculates the average for the 7:30 time interval at 8:00 and at 8:30,
+and it calculates the average for the 8:00 time interval at 8:30 and 9:00.
cq_advanced_every_for calculates the 30-minute average of
+passengers from the bus_data measurement and stores the results in the
+average_passengers measurement in the transportation database.
+
cq_advanced_every_for executes at one-hour intervals, the same interval as the
+EVERY interval.
+Every hour, cq_advanced_every_for runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and 90 minutes prior to now().
+
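A definition consistent with this behavior pairs RESAMPLE EVERY 1h FOR 90m with a 30-minute GROUP BY time() interval:

```sql
CREATE CONTINUOUS QUERY "cq_advanced_every_for" ON "transportation"
RESAMPLE EVERY 1h FOR 90m
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(30m)
END
```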
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_every_for will calculate the result for every time
+interval twice.
+The CQ calculates the average for the 7:30 interval at 8:00 and 9:00.
Configuring CQ time ranges and filling empty results
+
Use a FOR interval and fill() to change the value reported for time
+intervals with no data.
+Note that at least one data point must fall within the FOR interval for fill()
+to operate.
+If no data fall within the FOR interval the CQ writes no points to the
+destination measurement.
cq_advanced_for_fill calculates the one-hour average of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
+Where possible, it writes the value 1000 for time intervals with no results.
+
cq_advanced_for_fill executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_advanced_for_fill runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and two hours prior to now().
+
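A definition consistent with this behavior pairs RESAMPLE FOR 2h with a one-hour GROUP BY time() interval and fill(1000):

```sql
CREATE CONTINUOUS QUERY "cq_advanced_for_fill" ON "transportation"
RESAMPLE FOR 2h
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(1h) fill(1000)
END
```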
Annotated log output on the morning of August 28, 2016:
+
+
+
>
+At **6:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '4:00' AND time < '6:00'`.
+`cq_advanced_for_fill` writes nothing to `average_passengers`; `bus_data` has no data
+that fall within that time range.
+>
+At **7:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '5:00' AND time < '7:00'`.
+`cq_advanced_for_fill` writes two points to `average_passengers`:
+>
+    name: average_passengers
+    ------------------------
+    time                   mean
+    2016-08-28T05:00:00Z   1000   <------ fill(1000)
+    2016-08-28T06:00:00Z   3      <------ average of 2 and 4
+>
+[...]
+>
+At **11:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '9:00' AND time < '11:00'`.
+`cq_advanced_for_fill` writes two points to `average_passengers`:
+>
+    name: average_passengers
+    ------------------------
+    time                   mean
+    2016-08-28T09:00:00Z   20     <------ average of 20
+    2016-08-28T10:00:00Z   1000   <------ fill(1000)
+>
+
At 12:00, cq_advanced_for_fill executes a query with the time range WHERE time >= '10:00' AND time < '12:00'.
+cq_advanced_for_fill writes nothing to average_passengers; bus_data has no data
+that fall within that time range.
Note: fill(previous) doesn’t fill the result for a time interval if the
+previous value is outside the query’s time range.
+See Frequently Asked Questions
+for more information.
+
+
+
Common issues with advanced syntax
+
If the EVERY interval is greater than the GROUP BY time() interval
+
If the EVERY interval is greater than the GROUP BY time() interval, the CQ
+executes at the same interval as the EVERY interval and runs a single query
+that covers the time range between now() and now() minus the EVERY
+interval (not between now() and now() minus the GROUP BY time() interval).
+
For example, if the GROUP BY time() interval is 5m and the EVERY interval
+is 10m, the CQ executes every ten minutes.
+Every ten minutes, the CQ runs a single query that covers the time range
+between now() and now() minus the EVERY interval, that is, the time
+range between now() and ten minutes prior to now().
+
This behavior is intentional and prevents the CQ from missing data between
+execution times.
+
If the FOR interval is less than the execution interval
+
If the FOR interval is less than the GROUP BY time() interval or, if
+specified, the EVERY interval, InfluxDB returns the following error:
To avoid missing data between execution times, the FOR interval must be equal
+to or greater than the GROUP BY time() interval or, if specified, the EVERY
+interval.
+
Currently, this is the intended behavior.
+GitHub Issue #6963
+outlines a feature request for CQs to support gaps in data coverage.
Drop the idle_hands CQ from the telegraf database:
+
+
+
DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
+
Altering continuous queries
+
CQs cannot be altered once they’re created.
+To change a CQ, you must DROP and reCREATE it with the updated settings.
+
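For example, to change the GROUP BY time() interval of the cq_basic example, drop it and recreate it with the new setting (the 30-minute interval shown here is illustrative):

```sql
DROP CONTINUOUS QUERY "cq_basic" ON "transportation"

CREATE CONTINUOUS QUERY "cq_basic" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(30m)
END
```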
Continuous query statistics
+
If query-stats-enabled is set to true in your influxdb.conf or with the INFLUXDB_CONTINUOUS_QUERIES_QUERY_STATS_ENABLED environment variable, InfluxDB writes data to _internal recording when each continuous query ran and how long it took.
+Information about CQ configuration settings is available in the Configuration documentation.
+
+
+
Note: _internal houses internal system data and is meant for internal use.
+The structure of and data stored in _internal can change at any time.
+Use of this data falls outside the scope of official InfluxData support.
+
+
+
Continuous query use cases
+
Downsampling and Data Retention
+
Use CQs with InfluxDB database
+retention policies
+(RPs) to mitigate storage concerns.
+Combine CQs and RPs to automatically downsample high precision data to a lower
+precision and remove the dispensable, high precision data from the database.
Shorten query runtimes by pre-calculating expensive queries with CQs.
+Use a CQ to automatically downsample commonly queried, high precision data to a
+lower precision.
+Queries on lower precision data require fewer resources and return faster.
+
Tip: Pre-calculate queries for your preferred graphing tool to accelerate
+the population of graphs and dashboards.
+
Substituting for a HAVING clause
+
InfluxQL does not support HAVING clauses.
+Get the same functionality by creating a CQ to aggregate the data and querying
+the CQ results to apply the HAVING clause.
+
+
+
Note: InfluxQL supports subqueries which also offer similar functionality to HAVING clauses.
+See Data Exploration for more information.
+
+
+
Example
+
InfluxDB does not accept the following query with a HAVING clause.
+The query calculates the average number of bees at 30 minute intervals and
+requests averages that are greater than 20.
This step performs the mean("bees") part of the query above.
+Because this step creates a CQ, you only need to execute it once.
+
The following CQ automatically calculates the average number of bees at
+30-minute intervals and writes those averages to the mean_bees field in the
+aggregate_bees measurement.
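A sketch of that CQ and the follow-up query (the CQ name bee_cq, the database my_db, and the source measurement farm are hypothetical; mean_bees and aggregate_bees come from the description above):

```sql
-- Runs once; continuously writes 30-minute averages into aggregate_bees:
CREATE CONTINUOUS QUERY "bee_cq" ON "my_db"
BEGIN
  SELECT mean("bees") AS "mean_bees" INTO "aggregate_bees" FROM "farm"
  GROUP BY time(30m)
END

-- Then apply the HAVING-style condition when querying the CQ results:
SELECT "mean_bees" FROM "aggregate_bees" WHERE "mean_bees" > 20
```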
Some InfluxQL functions
+support nesting
+of other functions.
+Most do not.
+If your function does not support nesting, you can get the same functionality using a CQ to calculate
+the inner-most function.
+Then simply query the CQ results to calculate the outer-most function.
+
+
+
Note: InfluxQL supports subqueries which also offer the same functionality as nested functions.
+See Data Exploration for more information.
+
+
+
Example
+
InfluxDB does not accept the following query with a nested function.
+The query calculates the number of non-null values
+of bees at 30 minute intervals and the average of those counts:
This step performs the count("bees") part of the nested function above.
+Because this step creates a CQ you only need to execute it once.
+
The following CQ automatically calculates the number of non-null values of bees at 30 minute intervals
+and writes those counts to the count_bees field in the aggregate_bees measurement.
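A sketch of that CQ and the follow-up query (the CQ name bee_cq, the database my_db, and the source measurement farm are hypothetical; count_bees and aggregate_bees come from the description above):

```sql
-- Runs once; continuously writes 30-minute counts into aggregate_bees:
CREATE CONTINUOUS QUERY "bee_cq" ON "my_db"
BEGIN
  SELECT count("bees") AS "count_bees" INTO "aggregate_bees" FROM "farm"
  GROUP BY time(30m)
END

-- Then apply the outer-most function to the CQ results:
SELECT mean("count_bees") FROM "aggregate_bees"
```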
To see how to combine two InfluxDB features, CQs and retention policies,
+to periodically downsample data and automatically expire the dispensable high
+precision data, see Downsampling and data retention.
+
Kapacitor, InfluxData’s data processing engine, can do the same work as
+continuous queries in InfluxDB databases.
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command. For example:
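A pinned pull might look like this (2.7 is an illustrative tag; substitute the exact version you run):

```sh
# Instead of the moving tag:
#   docker pull influxdb:latest
# pin an explicit version tag:
docker pull influxdb:2.7
```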
InfluxQL is an SQL-like query language for interacting with data in InfluxDB.
+The following sections detail InfluxQL’s SELECT statement and useful query syntax
+for exploring your data.
$ influx -precision rfc3339 -database NOAA_water_database
+Connected to http://localhost:8086 version 1.12.2
+InfluxDB shell 1.12.2
+>
+
Next, get acquainted with this subsample of the data in the h2o_feet measurement:
+
name: h2o_feet
+
+time                   level description      location       water_level
+----                   -----------------      --------       -----------
+2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
+2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
+2015-08-18T00:06:00Z   between 6 and 9 feet   coyote_creek   8.005
+2015-08-18T00:06:00Z   below 3 feet           santa_monica   2.116
+2015-08-18T00:12:00Z   between 6 and 9 feet   coyote_creek   7.887
+2015-08-18T00:12:00Z   below 3 feet           santa_monica   2.028
+
The data in the h2o_feet measurement
+occur at six-minute time intervals.
+The measurement has one tag key
+(location) which has two tag values:
+coyote_creek and santa_monica.
+The measurement also has two fields:
+level description stores string field values
+and water_level stores float field values.
+All of these data are in the NOAA_water_database database.
+
+
+
Disclaimer: The level description field isn’t part of the original NOAA data - we snuck it in there for the sake of having a field key with a special character and string field values.
+
+
+
The basic SELECT statement
+
The SELECT statement queries data from a particular measurement or measurements.
SELECT "<field_key>","<field_key>"
+ Returns more than one field.
+
SELECT "<field_key>","<tag_key>"
+ Returns a specific field and tag.
+The SELECT clause must specify at least one field when it includes a tag.
+
SELECT "<field_key>"::field,"<tag_key>"::tag
+ Returns a specific field and tag.
+The ::[field | tag] syntax specifies the identifier’s type.
+Use this syntax to differentiate between field keys and tag keys that have the same name.
The FROM clause supports several formats for specifying one or more measurements:
+
FROM <measurement_name>
+
+Returns data from a single measurement.
+If you’re using the CLI, InfluxDB queries the measurement in the
+USEd
+database and the DEFAULT retention policy.
+If you’re using the InfluxDB API, InfluxDB queries the
+measurement in the database specified in the db query string parameter
+and the DEFAULT retention policy.
+
FROM <measurement_name>,<measurement_name>
+
+Returns data from more than one measurement.
+
FROM <database_name>.<retention_policy_name>.<measurement_name>
+
+Returns data from a fully qualified measurement.
+Fully qualify a measurement by specifying its database and retention policy.
+
FROM <database_name>..<measurement_name>
+
+Returns data from a measurement in a user-specified database and the DEFAULT
+retention policy.
Identifiers must be double quoted if they contain characters other than [A-z,0-9,_], if they
+begin with a digit, or if they are an InfluxQL keyword.
+While not always necessary, we recommend that you double quote identifiers.
If you’re using the CLI, be sure to enter
+USE NOAA_water_database before you run the query.
+The CLI queries the data in the USEd database and the
+DEFAULT retention policy.
+If you’re using the InfluxDB API, be sure to set the
+db query string parameter
+to NOAA_water_database.
+If you do not set the rp query string parameter, the InfluxDB API automatically
+queries the database’s DEFAULT retention policy.
+
Select specific tags and fields from a single measurement
The query selects the level description field, the location tag, and the
+water_level field.
+Note that the SELECT clause must specify at least one field when it includes
+a tag.
+
Select specific tags and fields from a single measurement, and provide their identifier type
The query selects the level description field, the location tag, and the
+water_level field from the h2o_feet measurement.
+The ::[field | tag] syntax specifies if the
+identifier is a field or tag.
+Use ::[field | tag] to differentiate between an identical field key and tag key.
+That syntax is not required for most use cases.
The query multiplies water_level’s field values by two and adds four to those
+values.
+Note that InfluxDB follows the standard order of operations.
+See Mathematical Operators
+for more on supported operators.
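Written out, the query described above looks like this:

```sql
SELECT ("water_level" * 2) + 4 FROM "h2o_feet"
```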
The query selects data in the NOAA_water_database, the autogen retention
+policy, and the measurement h2o_feet.
+
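Such a fully qualified query looks like this:

```sql
SELECT * FROM "NOAA_water_database"."autogen"."h2o_feet"
```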
In the CLI, fully qualify a measurement to query data in a database other
+than the USEd database and in a retention policy other than the
+DEFAULT retention policy.
+In the InfluxDB API, fully qualify a measurement in place of using the db
+and rp query string parameters if desired.
+
Select all data from a measurement in a particular database
The query selects data in the NOAA_water_database, the DEFAULT retention
+policy, and the h2o_feet measurement.
+The .. indicates the DEFAULT retention policy for the specified database.
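For example:

```sql
SELECT * FROM "NOAA_water_database".."h2o_feet"
```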
+
In the CLI, specify the database to query data in a database other than the
+USEd database.
+In the InfluxDB API, specify the database in place of using the db query
+string parameter if desired.
+
Common issues with the SELECT statement
+
Selecting tag keys in the SELECT clause
+
A query requires at least one field key
+in the SELECT clause to return data.
+If the SELECT clause only includes a single tag key or several tag keys, the
+query returns an empty response.
+This behavior is a result of how the system stores data.
+
Example
+
The following query returns no data because it specifies a single tag key (location) in
+the SELECT clause:
+
+
+
SELECT "location" FROM "h2o_feet"
+
To return any data associated with the location tag key, the query’s SELECT
+clause must include at least one field key (water_level):
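For example:

```sql
SELECT "water_level","location" FROM "h2o_feet"
```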
Tired of reading? Check out this InfluxQL Short:
+
+
+
+
Syntax
+
+
+
SELECT_clause FROM_clause WHERE <conditional_expression> [(AND|OR) <conditional_expression> [...]]
+
The WHERE clause supports conditional_expressions on fields, tags, and
+timestamps.
+
+
+
Note InfluxDB does not support using OR in the WHERE clause to specify multiple time ranges. For example, InfluxDB returns an empty response for the following query:
+
+
+
> SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
The WHERE clause supports comparisons against string, boolean, float,
+and integer field values.
+
Single quote string field values in the WHERE clause.
+Queries with unquoted string field values or double quoted string field values
+will not return any data and, in most cases,
+will not return an error.
Single quote tag values in
+the WHERE clause.
+Queries with unquoted tag values or double quoted tag values will not return
+any data and, in most cases,
+will not return an error.
The query returns data from the h2o_feet measurement with field values of
+level description that equal the below 3 feet string.
+InfluxQL requires single quotes around string field values in the WHERE
+clause.
+
Select data that have a specific field key-value and perform basic arithmetic
The query returns data from the h2o_feet measurement with field values of
+water_level plus two that are greater than 11.10.
+Note that InfluxDB follows the standard order of operations.
+See Mathematical Operators
+for more on supported operators.
The query returns data from the h2o_feet measurement where the
+tag key location is set to santa_monica.
+InfluxQL requires single quotes around tag values in the WHERE clause.
+
Select data that have specific field key-values and tag key-values
The query returns data from the h2o_feet measurement where the tag key
+location is not set to santa_monica and where the field values of
+water_level are either less than -0.59 or greater than 9.95.
+The WHERE clause supports the operators AND and OR, and supports
+separating logic with parentheses.
+
Select data that have specific timestamps
+
+
+
SELECT * FROM "h2o_feet" WHERE time > now() - 7d
+
The query returns data from the h2o_feet measurement that have timestamps
+within the past seven days.
+The Time Syntax section on this page
+offers in-depth information on supported time syntax in the WHERE clause.
+
Common issues with the WHERE clause
+
A WHERE clause query unexpectedly returns no data
+
In most cases, this issue is the result of missing single quotes around
+tag values
+or string field values.
+Queries with unquoted or double quoted tag values or string field values will
+not return any data and, in most cases, will not return an error.
+
The first two queries in the code block below attempt to specify the tag value
+santa_monica without any quotes and with double quotes.
+Those queries return no results.
+The third query single quotes santa_monica (this is the supported syntax)
+and returns the expected results.
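The three queries described above look like this; only the single-quoted form returns data:

```sql
> SELECT "water_level" FROM "h2o_feet" WHERE "location" = santa_monica

> SELECT "water_level" FROM "h2o_feet" WHERE "location" = "santa_monica"

> SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica'
```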
The first two queries in the code block below attempt to specify the string
+field value at or greater than 9 feet without any quotes and with double
+quotes.
+The first query returns an error because the string field value includes
+white spaces.
+The second query returns no results.
+The third query single quotes at or greater than 9 feet (this is the
+supported syntax) and returns the expected results.
+
+
+
>SELECT "level description" FROM "h2o_feet" WHERE "level description" = at or greater than 9 feet
+
+ERR: error parsing query: found than, expected ; at line 1, char 86
+
+>SELECT "level description" FROM "h2o_feet" WHERE "level description" = "at or greater than 9 feet"
+
+>SELECT "level description" FROM "h2o_feet" WHERE "level description" = 'at or greater than 9 feet'
+
+name: h2o_feet
+--------------
+time                    level description
+2015-08-26T04:00:00Z    at or greater than 9 feet
+[...]
+2015-09-15T22:42:00Z    at or greater than 9 feet
The query uses an InfluxQL function
+to calculate the average water_level for each
+tag value of location in
+the h2o_feet measurement.
+InfluxDB returns results in two series: one for each tag value of location.
+
+
+
Note: In InfluxDB, epoch 0 (1970-01-01T00:00:00Z) is often used as a null timestamp equivalent.
+If you request a query that has no timestamp to return, such as an aggregation function with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
The query uses an InfluxQL function to calculate the average index for
+each combination of the location tag and the randtag tag in the
+h2o_quality measurement.
+Separate multiple tags with a comma in the GROUP BY clause.
The query uses an InfluxQL function
+to calculate the average index for every possible
+tag combination in the h2o_quality
+measurement.
+
Note that the query results are identical to the results of the query in Example 2
+where we explicitly specified the location and randtag tag keys.
+This is because the h2o_quality measurement only has two tag keys.
+
GROUP BY time intervals
+
GROUP BY time() queries group query results by a user-specified time interval.
Basic GROUP BY time() queries require an InfluxQL function
+in the SELECT clause and a time range in the
+WHERE clause.
+Note that the GROUP BY clause must come after the WHERE clause.
+
time(time_interval)
+
The time_interval in the GROUP BY time() clause is a
+duration literal.
+It determines how InfluxDB groups query results over time.
+For example, a time_interval of 5m groups query results into five-minute
+time groups across the time range specified in the WHERE clause.
+
fill(<fill_option>)
+
fill(<fill_option>) is optional.
+It changes the value reported for time intervals that have no data.
+See GROUP BY time intervals and fill()
+for more information.
+
Coverage:
+
Basic GROUP BY time() queries rely on the time_interval and on the InfluxDB database’s
+preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
+
Examples of basic syntax
+
The examples below use the following subsample of the sample data:
The query uses an InfluxQL function
+to count the number of water_level points with the tag
+location = coyote_creek and it groups results into 12-minute intervals.
+
The result for each timestamp
+represents a single 12 minute interval.
+The count for the first timestamp covers the raw data between 2015-08-18T00:00:00Z
+and up to, but not including, 2015-08-18T00:12:00Z.
+The count for the second timestamp covers the raw data between 2015-08-18T00:12:00Z
+and up to, but not including, 2015-08-18T00:24:00Z.
+
Group query results into 12-minute intervals and by a tag key
The query uses an InfluxQL function
+to count the number of water_level points.
+It groups results by the location tag and into 12-minute intervals.
+Note that the time interval and the tag key are separated by a comma in the
+GROUP BY clause.
+
The query returns two series of results: one for each
+tag value of the location tag.
+The result for each timestamp represents a single 12 minute interval.
+The count for the first timestamp covers the raw data between 2015-08-18T00:00:00Z
+and up to, but not including, 2015-08-18T00:12:00Z.
+The count for the second timestamp covers the raw data between 2015-08-18T00:12:00Z
+and up to, but not including, 2015-08-18T00:24:00Z.
+
Common issues with basic syntax
+
Unexpected timestamps and values in query results
+
With the basic syntax, InfluxDB relies on the GROUP BY time() interval
+and on the system’s preset time boundaries to determine the raw data included
+in each time interval and the timestamps returned by the query.
+In some cases, this can lead to unexpected results.
The following query covers a 12-minute time range and groups results into 12-minute time intervals, but it returns two results:
+
+
+
>SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time < '2015-08-18T00:18:00Z' GROUP BY time(12m)
+
+name: h2o_feet
+time                    count
+----                    -----
+2015-08-18T00:00:00Z    1     <----- Note that this timestamp occurs before the start of the query's time range
+2015-08-18T00:12:00Z    1
+
Explanation:
+
InfluxDB uses preset round-number time boundaries for GROUP BY intervals that are
+independent of any time conditions in the WHERE clause.
+When it calculates the results, all returned data must occur within the query’s
+explicit time range but the GROUP BY intervals will be based on the preset
+time boundaries.
+
The table below shows the preset time boundary, the relevant GROUP BY time() interval, the
+points included, and the returned timestamp for each GROUP BY time()
+interval in the results.
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
+| --- | --- | --- | --- | --- |
+| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:12:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:12:00Z | 8.005 | 2015-08-18T00:00:00Z |
+| 2 | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:24:00Z | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:18:00Z | 7.887 | 2015-08-18T00:12:00Z |
+
+
+
+
The first preset 12-minute time boundary begins at 00:00 and ends just before
+00:12.
+Only one raw point (8.005) falls both within the query’s first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 12-minute time boundary begins at 00:12 and ends just before
+00:24.
+Only one raw point (7.887) falls both within the query’s second GROUP BY time() interval and in that
+second time boundary.
+
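The preset boundaries are round-number multiples of the GROUP BY time() interval counted from the Unix epoch. A minimal Python sketch of that floor operation (an illustration of the boundary arithmetic, not InfluxDB's actual implementation):

```python
from datetime import datetime, timedelta, timezone

def preset_boundary(ts, interval):
    """Floor a timestamp to the preset GROUP BY time() boundary:
    a round-number multiple of the interval since the Unix epoch."""
    seconds = int(ts.timestamp())
    step = int(interval.total_seconds())
    return datetime.fromtimestamp(seconds - seconds % step, tz=timezone.utc)

# The point at 00:06 falls in the preset 12-minute boundary that starts at
# 00:00, which is why the query above returns the timestamp 00:00:
ts = datetime(2015, 8, 18, 0, 6, tzinfo=timezone.utc)
print(preset_boundary(ts, timedelta(minutes=12)).isoformat())
# 2015-08-18T00:00:00+00:00
```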
The advanced GROUP BY time() syntax allows users to shift
+the start time of the InfluxDB database’s preset time boundaries.
+Example 3
+in the Advanced Syntax section continues with the query shown here;
+it shifts forward the preset time boundaries by six minutes such that
+InfluxDB returns:
Advanced GROUP BY time() queries require an InfluxQL function
+in the SELECT clause and a time range in the
+WHERE clause.
+Note that the GROUP BY clause must come after the WHERE clause.
The offset_interval is a
+duration literal.
+It shifts forward or back the InfluxDB database’s preset time boundaries.
+The offset_interval can be positive or negative.
+
fill(<fill_option>)
+
fill(<fill_option>) is optional.
+It changes the value reported for time intervals that have no data.
+See GROUP BY time intervals and fill()
+for more information.
+
Coverage:
+
Advanced GROUP BY time() queries rely on the time_interval, the offset_interval,
+and on the InfluxDB database’s preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
+
Examples of advanced syntax
+
The examples below use the following subsample of the sample data:
The query uses an InfluxQL function
+to calculate the average water_level, grouping results into 18 minute
+time intervals, and offsetting the preset time boundaries by six minutes.
+
The time boundaries and returned timestamps for the query without the offset_interval adhere to the InfluxDB database’s preset time boundaries. Let’s first examine the results without the offset:
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:18:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z | 8.005, 7.887 | 2015-08-18T00:00:00Z |
| 2 | time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:36:00Z | <--- same | 7.762, 7.635, 7.5 | 2015-08-18T00:18:00Z |
| 3 | time >= 2015-08-18T00:36:00Z AND time < 2015-08-18T00:54:00Z | <--- same | 7.372, 7.234, 7.11 | 2015-08-18T00:36:00Z |
| 4 | time >= 2015-08-18T00:54:00Z AND time < 2015-08-18T01:12:00Z | time = 2015-08-18T00:54:00Z | 6.982 | 2015-08-18T00:54:00Z |
+
+
+
+
The first preset 18-minute time boundary begins at 00:00 and ends just before
+00:18.
+Two raw points (8.005 and 7.887) fall both within the first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 18-minute time boundary begins at 00:18 and ends just before
+00:36.
+Three raw points (7.762, 7.635, and 7.5) fall both within the second GROUP BY time() interval and in that
+second time boundary. In this case, the boundary time range and the interval’s time range are the same.
+
The fourth preset 18-minute time boundary begins at 00:54 and ends just before
+01:12.
+One raw point (6.982) falls both within the fourth GROUP BY time() interval and in that
+fourth time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
| Time Interval Number | Offset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:24:00Z | <--- same | 8.005, 7.887, 7.762 | 2015-08-18T00:06:00Z |
| 2 | time >= 2015-08-18T00:24:00Z AND time < 2015-08-18T00:42:00Z | <--- same | 7.635, 7.5, 7.372 | 2015-08-18T00:24:00Z |
| 3 | time >= 2015-08-18T00:42:00Z AND time < 2015-08-18T01:00:00Z | <--- same | 7.234, 7.11, 6.982 | 2015-08-18T00:42:00Z |
| 4 | time >= 2015-08-18T01:00:00Z AND time < 2015-08-18T01:18:00Z | NA | NA | NA |
+
+
+
+
The six-minute offset interval shifts forward the preset boundary’s time range
+such that the boundary time ranges and the relevant GROUP BY time() interval time ranges are
+always the same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the GROUP BY time() interval time range.
+
Note that offset_interval forces the fourth time boundary to be outside
+the query’s time range so the query returns no results for that last interval.
+
Group query results into 18-minute intervals and shift the preset time boundaries back
The query uses an InfluxQL function
+to calculate the average water_level, grouping results into 18-minute
+time intervals and offsetting the preset time boundaries by -12 minutes.
+
+
+
Note: The query in Example 2 returns the same results as the query in Example 1, but
+the query in Example 2 uses a negative offset_interval instead of a positive
+offset_interval.
+There are no performance differences between the two queries; feel free to choose the most
+intuitive option when deciding between a positive and negative offset_interval.
+
+
+
The time boundaries and returned timestamps for the query without the offset_interval adhere to the InfluxDB database’s preset time boundaries. Let’s first examine the results without the offset:
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:18:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z | 8.005, 7.887 | 2015-08-18T00:00:00Z |
| 2 | time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:36:00Z | <--- same | 7.762, 7.635, 7.5 | 2015-08-18T00:18:00Z |
| 3 | time >= 2015-08-18T00:36:00Z AND time < 2015-08-18T00:54:00Z | <--- same | 7.372, 7.234, 7.11 | 2015-08-18T00:36:00Z |
| 4 | time >= 2015-08-18T00:54:00Z AND time < 2015-08-18T01:12:00Z | time = 2015-08-18T00:54:00Z | 6.982 | 2015-08-18T00:54:00Z |
+
+
+
+
The first preset 18-minute time boundary begins at 00:00 and ends just before
+00:18.
+Two raw points (8.005 and 7.887) fall both within the first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 18-minute time boundary begins at 00:18 and ends just before
+00:36.
+Three raw points (7.762, 7.635, and 7.5) fall both within the second GROUP BY time() interval and in that
+second time boundary. In this case, the boundary time range and the interval’s time range are the same.
+
The fourth preset 18-minute time boundary begins at 00:54 and ends just before
+01:12.
+One raw point (6.982) falls both within the fourth GROUP BY time() interval and in that
+fourth time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
| Time Interval Number | Offset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-17T23:48:00Z AND time < 2015-08-18T00:06:00Z | NA | NA | NA |
| 2 | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:24:00Z | <--- same | 8.005, 7.887, 7.762 | 2015-08-18T00:06:00Z |
| 3 | time >= 2015-08-18T00:24:00Z AND time < 2015-08-18T00:42:00Z | <--- same | 7.635, 7.5, 7.372 | 2015-08-18T00:24:00Z |
| 4 | time >= 2015-08-18T00:42:00Z AND time < 2015-08-18T01:00:00Z | <--- same | 7.234, 7.11, 6.982 | 2015-08-18T00:42:00Z |
+
+
+
+
The negative 12-minute offset interval shifts back the preset boundary’s time range
+such that the boundary time ranges and the relevant GROUP BY time() interval time ranges are always the
+same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the GROUP BY time() interval time range.
+
Note that offset_interval forces the first time boundary to be outside
+the query’s time range so the query returns no results for that first interval.
+
Group query results into 12-minute intervals and shift the preset time boundaries forward
The query uses an InfluxQL function
+to count the number of water_level points, grouping results into 12-minute
+time intervals and offsetting the preset time boundaries by six minutes.
+
The time boundaries and returned timestamps for the query without the offset_interval adhere to the InfluxDB database’s preset time boundaries. Let’s first examine the results without the offset:
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:12:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:12:00Z | 8.005 | 2015-08-18T00:00:00Z |
| 2 | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:24:00Z | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:18:00Z | 7.887 | 2015-08-18T00:12:00Z |
+
+
+
+
The first preset 12-minute time boundary begins at 00:00 and ends just before
+00:12.
+Only one raw point (8.005) falls both within the query’s first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 12-minute time boundary begins at 00:12 and ends just before
+00:24.
+Only one raw point (7.887) falls both within the query’s second GROUP BY time() interval and in that
+second time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
| Time Interval Number | Offset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
| --- | --- | --- | --- | --- |
| 1 | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z | <--- same | 8.005, 7.887 | 2015-08-18T00:06:00Z |
| 2 | time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:30:00Z | NA | NA | NA |
+
+
+
+
The six-minute offset interval shifts forward the preset boundary’s time range
+such that the preset boundary time range and the relevant GROUP BY time() interval time range are the
+same.
+With the offset, the query returns a single result, and the timestamp returned
+matches both the start of the boundary time range and the start of the GROUP BY time() interval
+time range.
+
Note that offset_interval forces the second time boundary to be outside
+the query’s time range so the query returns no results for that second interval.
+
GROUP BY time intervals and fill()
+
fill() changes the value reported for time intervals that have no data.
By default, a GROUP BY time() interval with no data reports null as its
+value in the output column.
+Note that fill() must go at the end of the GROUP BY clause if you’re
+GROUP(ing) BY several things (for example, both tags and a time interval).
+
fill_option
+
Any numerical value
+Reports the given numerical value for time intervals with no data.
+
+linear
+Reports the results of linear interpolation for time intervals with no data.
+
+none
+Reports no timestamp and no value for time intervals with no data.
+
+null
+Reports null for time intervals with no data but returns a timestamp. This is the same as the default behavior.
+
+previous
+Reports the value from the previous time interval for time intervals with no data.
fill(previous) changes the value reported for the time interval with no data to 3.235,
+the value from the previous time interval.
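A fill(previous) query of the kind described above might look like this sketch; the measurement and time range are assumptions, and the final 12-minute interval is assumed to contain no data:

```sql
SELECT MAX("water_level") FROM "h2o_feet"
WHERE time >= '2015-09-18T16:24:00Z' AND time <= '2015-09-18T16:54:00Z'
GROUP BY time(12m) fill(previous)
```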
+
+
+
+
+
+
+
Common issues with fill()
+
Queries with fill() when no data fall within the query’s time range
+
Currently, queries ignore fill() if no data fall within the query’s time range.
+This is the expected behavior. An open
+feature request on GitHub
+proposes that fill() should force a return of values even if the query’s time
+range covers no data.
+
Example
+
The following query returns no data because water_level has no points within
+the query’s time range.
+Note that fill(800) has no effect on the query results.
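Reconstructed from that description, the query might look like the following sketch (the time range is assumed to contain no water_level points):

```sql
SELECT MAX("water_level") FROM "h2o_feet"
WHERE time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z'
GROUP BY time(12m) fill(800)
```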
Queries with fill(previous) when the previous result falls outside the query’s time range
+
fill(previous) doesn’t fill the result for a time interval if the previous
+value is outside the query’s time range.
+
Example
+
The following query covers the time range between 2015-09-18T16:24:00Z and 2015-09-18T16:54:00Z.
+Note that fill(previous) fills the result for 2015-09-18T16:36:00Z with the
+result from 2015-09-18T16:24:00Z.
The next query shortens the time range in the previous query.
+It now covers the time between 2015-09-18T16:36:00Z and 2015-09-18T16:54:00Z.
+Note that fill(previous) doesn’t fill the result for 2015-09-18T16:36:00Z with the
+result from 2015-09-18T16:24:00Z; the result for 2015-09-18T16:24:00Z is outside the query’s
+shorter time range.
fill(linear) when the previous or following result falls outside the query’s time range
+
fill(linear) doesn’t fill the result for a time interval with no data if the
+previous result or the following result is outside the query’s time range.
+
Example
+
The following query covers the time range between 2016-11-11T21:24:00Z and
+2016-11-11T22:06:00Z. Note that fill(linear) fills the results for the
+2016-11-11T21:36:00Z time interval and the 2016-11-11T21:48:00Z time interval
+using the values from the 2016-11-11T21:24:00Z time interval and the
+2016-11-11T22:00:00Z time interval.
The next query shortens the time range in the previous query.
+It now covers the time between 2016-11-11T21:36:00Z and 2016-11-11T22:06:00Z.
+Note that fill(linear) doesn’t fill the results for the 2016-11-11T21:36:00Z
+time interval and the 2016-11-11T21:48:00Z time interval; the result for
+2016-11-11T21:24:00Z is outside the query’s shorter time range and InfluxDB
+cannot perform the linear interpolation.
The INTO clause supports several formats for specifying a measurement:
+
INTO <measurement_name>
+
+Writes data to the specified measurement.
+If you’re using the CLI, InfluxDB writes the data to the measurement in the
+USEd
+database and the DEFAULT retention policy.
+If you’re using the InfluxDB API, InfluxDB writes the data to the
+measurement in the database specified in the db query string parameter
+and the DEFAULT retention policy.
+
INTO <database_name>.<retention_policy_name>.<measurement_name>
+
+Writes data to a fully qualified measurement.
+Fully qualify a measurement by specifying its database and retention policy.
+
INTO <database_name>..<measurement_name>
+
+Writes data to a measurement in a user-specified database and the DEFAULT
+retention policy.
+
INTO <database_name>.<retention_policy_name>.:MEASUREMENT FROM /<regular_expression>/
+
+Writes data to all measurements in the user-specified database and
+retention policy that match the regular expression in the FROM clause.
+:MEASUREMENT is a backreference to each measurement matched in the FROM clause.
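As an illustration of the backreference form, the following sketch copies every measurement in one database into another; it matches the copy operation described next, and both the destination database and its retention policy must already exist:

```sql
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT
FROM "NOAA_water_database"."autogen"./.*/
GROUP BY *
```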
Directly renaming a database in InfluxDB is not possible, so a common use for the INTO clause is to move data from one database to another.
+The query above writes all data in the NOAA_water_database and autogen retention policy to the copy_NOAA_water_database database and the autogen retention policy.
+
The backreference syntax (:MEASUREMENT) maintains the source measurement names in the destination database.
+Note that both the copy_NOAA_water_database database and its autogen retention policy must exist prior to running the INTO query.
+See Database Management
+for how to manage databases and retention policies.
+
The GROUP BY * clause preserves tags in the source database as tags in the destination database.
+The following query does not maintain the series context for tags; tags will be stored as fields in the destination database (copy_NOAA_water_database):
When moving large amounts of data, we recommend sequentially running INTO queries for different measurements and using time boundaries in the WHERE clause.
+This prevents your system from running out of memory.
+The codeblock below provides sample syntax for those queries:
+
+
+
SELECT *
+INTO <destination_database>.<retention_policy_name>.<measurement_name>
+FROM <source_database>.<retention_policy_name>.<measurement_name>
+WHERE time > now() - 100w AND time < now() - 90w GROUP BY *
+
+SELECT *
+INTO <destination_database>.<retention_policy_name>.<measurement_name>
+FROM <source_database>.<retention_policy_name>.<measurement_name>
+WHERE time > now() - 90w AND time < now() - 80w GROUP BY *
+
+SELECT *
+INTO <destination_database>.<retention_policy_name>.<measurement_name>
+FROM <source_database>.<retention_policy_name>.<measurement_name>
+WHERE time > now() - 80w AND time < now() - 70w GROUP BY *
The query writes its results to a new measurement: h2o_feet_copy_1.
+If you’re using the CLI, InfluxDB writes the data to
+the USEd database and the DEFAULT retention policy.
+If you’re using the InfluxDB API, InfluxDB writes the
+data to the database and retention policy specified in the db and rp
+query string parameters.
+If you do not set the rp query string parameter, the InfluxDB API automatically
+writes the data to the database’s DEFAULT retention policy.
+
The response shows the number of points (7605) that InfluxDB writes to h2o_feet_copy_1.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
Write the results of a query to a fully qualified measurement
The query writes its results to a new measurement: h2o_feet_copy_2.
+InfluxDB writes the data to the where_else database and to the autogen
+retention policy.
+Note that both where_else and autogen must exist prior to running the INTO
+query.
+See Database Management
+for how to manage databases and retention policies.
+
The response shows the number of points (7605) that InfluxDB writes to h2o_feet_copy_2.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
Write aggregated results to a measurement (downsampling)
The query aggregates data using an
+InfluxQL function and a GROUP BY time() clause.
+It also writes its results to the all_my_averages measurement.
+
The response shows the number of points (3) that InfluxDB writes to all_my_averages.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
The query is an example of downsampling: taking higher precision data,
+aggregating those data to a lower precision, and storing the lower precision
+data in the database.
+Downsampling is a common use case for the INTO clause.
+
Write aggregated results for more than one measurement to a different database (downsampling with backreferencing)
The query aggregates data using an
+InfluxQL function and a GROUP BY time() clause.
+It aggregates data in every measurement that matches the regular expression
+in the FROM clause and writes the results to measurements with the same name in the
+where_else database and the autogen retention policy.
+Note that both where_else and autogen must exist prior to running the INTO
+query.
+See Database management
+for how to manage databases and retention policies.
+
The response shows the number of points (5) that InfluxDB writes to the where_else
+database and the autogen retention policy.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
The query is an example of downsampling with backreferencing.
+It takes higher precision data from more than one measurement,
+aggregates those data to a lower precision, and stores the lower precision
+data in the database.
+Downsampling with backreferencing is a common use case for the INTO clause.
+
Common issues with the INTO clause
+
Missing data
+
If an INTO query includes a tag key in the SELECT clause, the query converts tags in the current
+measurement to fields in the destination measurement.
+This can cause InfluxDB to overwrite points that were previously differentiated
+by a tag value.
+Note that this behavior does not apply to queries that use the TOP() or BOTTOM() functions.
+The
+Frequently Asked Questions
+document describes that behavior in detail.
+
To preserve tags in the current measurement as tags in the destination measurement,
+GROUP BY the relevant tag key or GROUP BY * in the INTO query.
+
Automating queries with the INTO clause
+
The INTO clause section in this document shows how to manually implement
+queries with an INTO clause.
+See the Continuous Queries
+documentation for how to automate INTO clause queries on realtime data.
+Among other uses,
+Continuous Queries automate the downsampling process.
+
ORDER BY time DESC
+
By default, InfluxDB returns results in ascending time order; the first point
+returned has the oldest timestamp and
+the last point returned has the most recent timestamp.
+ORDER BY time DESC reverses that order such that InfluxDB returns the points
+with the most recent timestamps first.
ORDER BY time DESC must appear after the GROUP BY clause
+if the query includes a GROUP BY clause.
+ORDER BY time DESC must appear after the WHERE clause
+if the query includes a WHERE clause and no GROUP BY clause.
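For example, a query of the shape described below might be (measurement, field, and tag assumed from the sample data):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
ORDER BY time DESC
```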
The query returns the points with the most recent timestamps from the
+h2o_feet measurement first.
+Without ORDER BY time DESC, the query would return 2015-08-18T00:00:00Z
+first and 2015-09-18T21:42:00Z last.
+
Return the newest points first and include a GROUP BY time() clause
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+ORDER BY time DESC returns the most recent 12-minute time intervals
+first.
+
Without ORDER BY time DESC, the query would return
+2015-08-18T00:00:00Z first and 2015-08-18T00:36:00Z last.
+
The LIMIT and SLIMIT clauses
+
LIMIT and SLIMIT limit the number of
+points and the number of
+series returned per query.
N specifies the number of points to return from the specified measurement.
+If N is greater than the number of points in a measurement, InfluxDB returns
+all points from that series.
+
Note that the LIMIT clause must appear in the order outlined in the syntax above.
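A query of the kind described next might look like this sketch (the time range is an assumption based on the sample data):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) LIMIT 2
```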
The query uses an InfluxQL function
+and a GROUP BY clause
+to calculate the average water_level for each tag and for each twelve-minute
+interval in the query’s time range.
+LIMIT 2 requests the two oldest twelve-minute averages (determined by timestamp).
+
Note that without LIMIT 2, the query would return four points per series;
+one for each twelve-minute interval in the query’s time range.
N specifies the number of series to return from the specified measurement.
+If N is greater than the number of series in a measurement, InfluxDB returns
+all series from that measurement.
+
There is an ongoing issue that requires queries with SLIMIT to include GROUP BY *.
+Note that the SLIMIT clause must appear in the order outlined in the syntax above.
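A query of the kind described next might look like this sketch; note the GROUP BY * required by the ongoing issue mentioned above (the time range is an assumption based on the sample data):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) SLIMIT 1
```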
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+SLIMIT 1 requests a single series associated with the h2o_feet measurement.
+
Note that without SLIMIT 1, the query would return results for the two series
+associated with the h2o_feet measurement: location=coyote_creek and
+location=santa_monica.
+
LIMIT and SLIMIT
+
LIMIT <N1> followed by SLIMIT <N2> returns the first <N1> points from <N2> series in the specified measurement.
N1 specifies the number of points to return per measurement.
+If N1 is greater than the number of points in a measurement, InfluxDB returns all points from that measurement.
+
N2 specifies the number of series to return from the specified measurement.
+If N2 is greater than the number of series in a measurement, InfluxDB returns all series from that measurement.
+
There is an ongoing issue that requires queries with LIMIT and SLIMIT to include GROUP BY *.
+Note that the LIMIT and SLIMIT clauses must appear in the order outlined in the syntax above.
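A combined query of the kind described next might look like this sketch (the time range is an assumption based on the sample data; GROUP BY * is required by the ongoing issue mentioned above):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) LIMIT 2 SLIMIT 1
```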
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+LIMIT 2 requests the two oldest twelve-minute averages (determined by
+timestamp) and SLIMIT 1 requests a single series
+associated with the h2o_feet measurement.
+
Note that without LIMIT 2 SLIMIT 1, the query would return four points
+for each of the two series associated with the h2o_feet measurement.
+
The OFFSET and SOFFSET clauses
+
OFFSET and SOFFSET paginate points and series returned.
Note: InfluxDB returns no results if the WHERE clause includes a time
+range and the OFFSET clause would cause InfluxDB to return points with
+timestamps outside of that time range.
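The pagination described next might be expressed as the following sketch (field and measurement names assumed from the sample data):

```sql
SELECT "water_level","location" FROM "h2o_feet" LIMIT 3 OFFSET 3
```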
The query returns the fourth, fifth, and sixth points from the h2o_feet measurement.
+If the query did not include OFFSET 3, it would return the first, second,
+and third points from that measurement.
This example is pretty involved, so here’s the clause-by-clause breakdown:
+
The SELECT clause specifies an InfluxQL function.
+The FROM clause specifies a single measurement.
+The WHERE clause specifies the time range for the query.
+The GROUP BY clause groups results by all tags (*) and into 12-minute intervals.
+The ORDER BY time DESC clause returns results in descending timestamp order.
+The LIMIT 2 clause limits the number of points returned to two.
+The OFFSET 2 clause excludes the first two averages from the query results.
+The SLIMIT 1 clause limits the number of series returned to one.
+
Without OFFSET 2, the query would return the first two averages of the query results:
N specifies the number of series to paginate.
+The SOFFSET clause requires an SLIMIT clause.
+Using the SOFFSET clause without an SLIMIT clause can cause inconsistent
+query results.
+There is an ongoing issue that requires queries with SLIMIT to include GROUP BY *.
+
+
+
Note: InfluxDB returns no results if the SOFFSET clause paginates
+through more than the total number of series.
The query returns data for the series associated with the h2o_feet
+measurement and the location = santa_monica tag.
+Without SOFFSET 1, the query returns data for the series associated with the
+h2o_feet measurement and the location = coyote_creek tag.
This example is pretty involved, so here’s the clause-by-clause breakdown:
+
The SELECT clause specifies an InfluxQL function.
+The FROM clause specifies a single measurement.
+The WHERE clause specifies the time range for the query.
+The GROUP BY clause groups results by all tags (*) and into 12-minute intervals.
+The ORDER BY time DESC clause returns results in descending timestamp order.
+The LIMIT 2 clause limits the number of points returned to two.
+The OFFSET 2 clause excludes the first two averages from the query results.
+The SLIMIT 1 clause limits the number of series returned to one.
+The SOFFSET 1 clause paginates the series returned.
+
Without SOFFSET 1, the query would return the results for a different series:
By default, InfluxDB stores and returns timestamps in UTC.
+The tz() clause includes the UTC offset or, if applicable, the UTC Daylight Savings Time (DST) offset to the query’s returned timestamps.
+The returned timestamps must be in RFC3339 format for the UTC offset or UTC DST to appear.
+The time_zone parameter follows the TZ syntax in the Internet Assigned Numbers Authority time zone database and it requires single quotes.
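For example, a tz() query might look like this sketch; the measurement, tag, time range, and the America/Chicago zone are assumptions chosen for illustration:

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
  AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:18:00Z'
tz('America/Chicago')
```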
Currently, InfluxDB does not support using OR with absolute time in the WHERE
+clause. See the Frequently Asked Questions
+document and the GitHub Issue
+for more information.
+
rfc3339_date_time_string
+
+
+
'YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ'
+
.nnnnnnnnn is optional and is set to .000000000 if not included.
+The RFC3339 date-time string requires single quotes.
+
rfc3339_like_date_time_string
+
+
+
'YYYY-MM-DD HH:MM:SS.nnnnnnnnn'
+
HH:MM:SS.nnnnnnnnn is optional and is set to 00:00:00.000000000 if not included.
+The RFC3339-like date-time string requires single quotes.
+
epoch_time
+
Epoch time is the amount of time that has elapsed since 00:00:00
+Coordinated Universal Time (UTC), Thursday, 1 January 1970.
+
By default, InfluxDB assumes that all epoch timestamps are in nanoseconds.
+Include a duration literal
+at the end of the epoch timestamp to indicate a precision other than nanoseconds.
+
Basic arithmetic
+
All timestamp formats support basic arithmetic.
+Add (+) or subtract (-) a time from a timestamp with a duration literal.
+Note that InfluxQL requires a whitespace between the + or - and the
+duration literal.
+
Examples
+
Specify a time range with RFC3339 date-time strings
The query returns data with timestamps between August 18, 2015 at 00:00:00.000000000 and
+August 18, 2015 at 00:12:00.
+The nanosecond specification in the first timestamp (.000000000)
+is optional.
+
Note that the single quotes around the RFC3339 date-time strings are required.
+
Specify a time range with RFC3339-like date-time strings
The query returns data with timestamps between August 18, 2015 at 00:00:00 and August 18, 2015
+at 00:12:00.
+The first date-time string does not include a time; InfluxDB assumes the time
+is 00:00:00.
+
Note that the single quotes around the RFC3339-like date-time strings are
+required.
The query returns data with timestamps that occur between August 18, 2015
+at 00:00:00 and August 18, 2015 at 00:12:00.
+By default InfluxDB assumes epoch timestamps are in nanoseconds.
+
Specify a time range with second-precision epoch timestamps
The query returns data with timestamps that occur between August 18, 2015
+at 00:00:00 and August 18, 2015 at 00:12:00.
+The s duration literal at the
+end of the epoch timestamps indicates that the epoch timestamps are in seconds.
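A query matching that description might look like this sketch; 1439856000 and 1439856720 are the second-precision epoch equivalents of 2015-08-18T00:00:00Z and 2015-08-18T00:12:00Z (the measurement and tag are assumptions based on the sample data):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
  AND time >= 1439856000s AND time <= 1439856720s
```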
+
Perform basic arithmetic on an RFC3339-like date-time string
The query returns data with timestamps that occur at least six minutes after
+September 18, 2015 at 21:24:00.
+Note that the whitespace between the + and 6m is required.
The query returns data with timestamps that occur at least six minutes before
+September 18, 2015 at 21:24:00.
+Note that the whitespace between the - and 6m is required.
+
Relative time
+
Use now() to query data with timestamps relative to the server’s current timestamp.
now() is the Unix time of the server at the time the query is executed on that server.
+The whitespace between - or + and the duration literal is required.
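A relative-time query of the kind described next might look like this sketch (measurement and lower bound assumed from the sample data):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time > '2015-09-18T21:18:00Z' AND time < now() + 1000d
```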
The query returns data with timestamps that occur between September 18, 2015
+at 21:18:00 and 1000 days from now().
+The whitespace between + and 1000d is required.
+
Common issues with time syntax
+
Using OR to select multiple time intervals
+
InfluxDB does not support using the OR operator in the WHERE clause to specify multiple time intervals.
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
+
Example
+
Use the CLI to write a point to the NOAA_water_database that occurs after now():
Note that the WHERE clause must provide an alternative upper bound to
+override the default now() upper bound. The following query merely resets
+the lower bound to now() such that the query’s time range is between
+now() and now():
Currently, InfluxQL does not support using regular expressions to match
+non-string field values in the
+WHERE clause,
+databases, and
+retention policies.
+
+
+
Note: Regular expression comparisons are more computationally intensive than exact
+string comparisons; queries with regular expressions are not as performant
+as those without.
The query selects all field keys
+and tag keys that include an l.
+Note that the regular expression in the SELECT clause must match at least one
+field key in order to return results for a tag key that matches the regular
+expression.
+
Currently, there is no syntax to distinguish between regular expressions for
+field keys and regular expressions for tag keys in the SELECT clause.
+The syntax /<regular_expression>/::[field | tag] is not supported.
+
Use a regular expression to specify measurements in the FROM clause
The query uses an InfluxQL function
+to calculate the average degrees for every measurement in the NOAA_water_database
+database that contains the word temperature.
+
Use a regular expression to specify tag values in the WHERE clause
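A query matching the description below might look like this sketch (measurement, field, and the /[m]/ pattern are assumptions based on the sample data):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE "location" =~ /[m]/ AND "water_level" > 3
```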
The query uses an InfluxQL function
+to calculate the average water_level where the tag value of location
+includes an m and water_level is greater than three.
+
Use a regular expression to specify a tag with no value in the WHERE clause
+
+
+
SELECT * FROM "h2o_feet" WHERE "location" !~ /./
+
The query selects all data from the h2o_feet measurement where the location
+tag has no value.
+Every data point in the NOAA_water_database has a tag value for location.
+
It’s possible to perform this same query without a regular expression.
+See the
+Frequently Asked Questions
+document for more information.
+
Use a regular expression to specify a field value in the WHERE clause
The query uses an InfluxQL function
+to calculate the average water_level for all data where the field value of
+level description includes the word between.
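A sketch of the query described above (note the double-quoted field key, which contains a special character):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE "level description" =~ /between/
```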
+
Use a regular expression to specify tag keys in the GROUP BY clause
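A query of this form groups results by every tag key that matches the regular expression (a reconstruction; the index field is assumed from the sample h2o_quality measurement):

```sql
SELECT FIRST("index") FROM "h2o_quality" GROUP BY /l/
```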
Field values can be floats, integers, strings, or booleans.
+The :: syntax allows users to specify the field’s type in a query.
+
+
+
Note: Generally, it is not necessary to specify the field value
+type in the SELECT clause.
+In most cases, InfluxDB rejects any writes that attempt to write a field value
+to a field that previously accepted field values of a different type.
+
+
+
It is possible for field value types to differ across shard groups.
+In these cases, it may be necessary to specify the field value type in the
+SELECT clause.
+Please see the
+Frequently Asked Questions
+document for more information on how InfluxDB handles field value type discrepancies.
+
Syntax
+
+
+
SELECT_clause <field_key>::<type> FROM_clause
+
type can be float, integer, string, or boolean.
+In most cases, InfluxDB returns no data if the field_key does not store data of the specified
+type. See Cast Operations for more information.
The query returns values of the water_level field key that are floats.
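A sketch of the query described above:

```sql
SELECT "water_level"::float FROM "h2o_feet" LIMIT 4
```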
+
Cast operations
+
The :: syntax allows users to perform basic cast operations in queries.
+Currently, InfluxDB supports casting field values from integers to
+floats or from floats to integers.
+
Syntax
+
+
+
SELECT_clause <field_key>::<type> FROM_clause
+
type can be float or integer.
+
InfluxDB returns no data if the query attempts to cast an integer or float to a
+string or boolean.
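For example, a query of this form casts float water_level values to integers (a sketch against the sample data):

```sql
SELECT "water_level"::integer FROM "h2o_feet" LIMIT 4
```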
The h2o_feet measurement in the NOAA_water_database is part of two series.
+The first series is made up of the h2o_feet measurement and the location = coyote_creek tag.
+The second series is made up of the h2o_feet measurement and the location = santa_monica tag.
+
The following query automatically merges those two series when it calculates the average water_level:
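A sketch of that query:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
```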
A subquery is a query that is nested in the FROM clause of another query.
+Use a subquery to apply a query as a condition in the enclosing query.
+Subqueries offer functionality similar to nested functions and SQL
+HAVING clauses.
+
Syntax
+
+
+
SELECT_clause FROM ( SELECT_statement ) [...]
+
InfluxDB performs the subquery first and the main query second.
+
The main query surrounds the subquery and requires at least the SELECT clause and the FROM clause.
+The main query supports all clauses listed in this document.
+
The subquery appears in the main query’s FROM clause, and it requires surrounding parentheses.
+The subquery supports all clauses listed in this document.
+
InfluxQL supports multiple nested subqueries per main query.
+Sample syntax for multiple subqueries:
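In outline, the nesting takes this shape (a reconstruction; InfluxDB processes the innermost SELECT_statement first):

```sql
SELECT_clause FROM ( SELECT_clause FROM ( SELECT_statement ) [...] ) [...]
```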
To improve the performance of InfluxQL queries with time-bound subqueries,
+apply the WHERE time clause to the outer query instead of the inner query.
+For example, the following queries return the same results, but the query with
+time bounds on the outer query is more performant than the query with time
+bounds on the inner query:
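The pair of equivalent queries might look like this (a sketch; the measurement m, the field raw_value, and the time range are hypothetical):

```sql
-- Time bounds on the outer query (more performant)
SELECT inner_value AS value FROM (
  SELECT raw_value AS inner_value FROM m
)
WHERE time >= '2019-07-19T21:00:00Z' AND time <= '2019-07-19T22:00:00Z'

-- Time bounds on the inner query
SELECT inner_value AS value FROM (
  SELECT raw_value AS inner_value FROM m
  WHERE time >= '2019-07-19T21:00:00Z' AND time <= '2019-07-19T22:00:00Z'
)
```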
Next, InfluxDB performs the main query and calculates the sum of those maximum values: 9.964 + 7.205 = 17.169.
+Notice that the main query specifies max, not water_level, as the field key in the SUM() function.
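The full statement described above might be written as (a reconstruction against the sample data):

```sql
SELECT SUM("max") FROM (
  SELECT MAX("water_level") FROM "h2o_feet" GROUP BY "location"
)
```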
+
Calculate the MEAN() difference between two fields
The query returns the average of the differences between the number of cats and dogs in the pet_daycare measurement.
+
InfluxDB first performs the subquery.
+The subquery calculates the difference between the values in the cats field and the values in the dogs field,
+and it names the output column difference:
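A sketch of the full statement:

```sql
SELECT MEAN("difference") FROM (
  SELECT "cats" - "dogs" AS "difference" FROM "pet_daycare"
)
```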
Next, InfluxDB performs the main query and calculates the average of those differences.
+Notice that the main query specifies difference as the field key in the MEAN() function.
+
Calculate several MEAN() values and place a condition on those mean values
The query returns all mean values of the water_level field that are greater than five.
+
InfluxDB first performs the subquery.
+The subquery calculates MEAN() values of water_level from 2015-08-18T00:00:00Z through 2015-08-18T00:30:00Z and groups the results into 12-minute intervals.
+It also names the output column all_the_means:
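A sketch of the full statement:

```sql
SELECT "all_the_means" FROM (
  SELECT MEAN("water_level") AS "all_the_means" FROM "h2o_feet"
  WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z'
  GROUP BY time(12m)
)
WHERE "all_the_means" > 5
```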
Next, InfluxDB performs the main query and returns only those mean values that are greater than five.
+Notice that the main query specifies all_the_means as the field key in the SELECT clause.
The query returns the sum of the derivative of average water_level values for each tag value of location.
+
InfluxDB first performs the subquery.
+The subquery calculates the derivative of average water_level values taken at 12-minute intervals.
+It performs that calculation for each tag value of location and names the output column water_level_derivative:
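A sketch of the full statement:

```sql
SELECT SUM("water_level_derivative") AS "sum_derivative" FROM (
  SELECT DERIVATIVE(MEAN("water_level")) AS "water_level_derivative" FROM "h2o_feet"
  WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z'
  GROUP BY time(12m), "location"
)
GROUP BY "location"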
Next, InfluxDB performs the main query and calculates the sum of the water_level_derivative values for each tag value of location.
+Notice that the main query specifies water_level_derivative, not water_level or derivative, as the field key in the SUM() function.
+
Common issues with subqueries
+
Multiple SELECT statements in a subquery
+
InfluxQL supports multiple nested subqueries per main query:
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command–for example:
If you’re looking for SHOW queries (for example, SHOW DATABASES or SHOW RETENTION POLICIES), see Schema Exploration.
+
The examples in the sections below use the InfluxDB Command Line Interface (CLI).
+You can also execute the commands using the InfluxDB API; simply send a GET request to the /query endpoint and include the command in the URL parameter q.
+For more on using the InfluxDB API, see Querying data.
+
+
+
Note: When authentication is enabled, only admin users can execute most of the commands listed on this page.
+See the documentation on authentication and authorization for more information.
The WITH, DURATION, REPLICATION, SHARD DURATION, PAST LIMIT,
+FUTURE LIMIT, and NAME clauses are optional and create a single retention policy associated with the created database. If you do not specify one of the clauses after WITH, the relevant behavior defaults to the autogen retention policy settings.
+The created retention policy automatically serves as the database’s default retention policy.
+For more information about those clauses, see
+Retention Policy Management.
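Taken together, the clauses described above give CREATE DATABASE roughly this shape (a reconstruction; all bracketed clauses are optional):

```sql
CREATE DATABASE <database_name>
[WITH
  [DURATION <duration>]
  [REPLICATION <n>]
  [SHARD DURATION <duration>]
  [PAST LIMIT <duration>]
  [FUTURE LIMIT <duration>]
  [NAME <retention-policy-name>]
]
```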
+
A successful CREATE DATABASE query returns an empty result.
+If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
+
Examples
+
Create a database
+
+
+
CREATE DATABASE "NOAA_water_database"
+
The query creates a database called NOAA_water_database.
+By default, InfluxDB also creates the autogen retention policy and associates it with the NOAA_water_database.
+
Create a database with a specific retention policy
+
+
+
CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
+
The query creates a database called NOAA_water_database.
+It also creates a default retention policy for NOAA_water_database with a DURATION of three days, a replication factor of one, a shard group duration of one hour, and with the name liquid.
+
Delete a database with DROP DATABASE
+
The DROP DATABASE query deletes all of the data, measurements, series, continuous queries, and retention policies from the specified database.
+The query takes the following form:
+
+
+
DROP DATABASE <database_name>
+
Drop the database NOAA_water_database:
+
+
+
DROP DATABASE "NOAA_water_database"
+
A successful DROP DATABASE query returns an empty result.
+If you attempt to drop a database that does not exist, InfluxDB does not return an error.
+
Drop series from the index with DROP SERIES
+
The DROP SERIES query deletes all points from a series in a database,
+and it drops the series from the index.
+
The query takes the following form, where you must specify either the FROM clause or the WHERE clause:
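The form might be sketched as follows (a reconstruction):

```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
```

The DELETE examples that follow remove points from series but, unlike DROP SERIES, do not drop the series from the index.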
Delete all data associated with the measurement h2o_feet:
+
+
+
DELETE FROM "h2o_feet"
+
Delete all data associated with the measurement h2o_quality and where the tag randtag equals 3:
+
+
+
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
+
Delete all data in the database that occur before January 01, 2020:
+
+
+
DELETE WHERE time < '2020-01-01'
+
Delete all data associated with the measurement h2o_feet in retention policy one_day:
+
+
+
DELETE FROM "one_day"."h2o_feet"
+
A successful DELETE query returns an empty result.
+
Things to note about DELETE:
+
+
DELETE supports regular expressions
+in the FROM clause when specifying measurement names and in the WHERE clause
+when specifying tag values. It does not support regular expressions for the
+retention policy in the FROM clause.
+If deleting a series in a retention policy, DELETE requires that you define
+only one retention policy in the FROM clause.
+
DELETE does not support fields
+in the WHERE clause.
+
If you need to delete points in the future, you must specify that time period
+as DELETE SERIES runs for time < now() by default.
+
+
Delete measurements with DROP MEASUREMENT
+
The DROP MEASUREMENT query deletes all data and series from the specified measurement and deletes the
+measurement from the index.
+
The query takes the following form:
+
+
+
DROP MEASUREMENT <measurement_name>
+
Delete the measurement h2o_feet:
+
+
+
DROP MEASUREMENT "h2o_feet"
+
+
+
Note: DROP MEASUREMENT drops all data and series in the measurement.
+It does not drop the associated continuous queries.
+
+
+
A successful DROP MEASUREMENT query returns an empty result.
+
+
+
Currently, InfluxDB does not support regular expressions with DROP MEASUREMENTS.
+See GitHub Issue #4275 for more information.
+
+
+
+
Delete a shard with DROP SHARD
+
The DROP SHARD query deletes a shard. It also drops the shard from the
+metastore.
+The query takes the following form:
+
+
+
DROP SHARD <shard_id_number>
+
Delete the shard with the id 1:
+
+
+
DROP SHARD 1
+
A successful DROP SHARD query returns an empty result.
+InfluxDB does not return an error if you attempt to drop a shard that does not
+exist.
+
Retention policy management
+
The following sections cover how to create, alter, and delete retention policies.
+Note that when you create a database, InfluxDB automatically creates a retention policy named autogen which has infinite retention.
+You may disable its auto-creation in the configuration file.
+
Create retention policies with CREATE RETENTION POLICY
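Based on the clauses described below, the statement takes roughly this form (a reconstruction; bracketed clauses are optional):

```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name>
DURATION <duration> REPLICATION <n>
[SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
```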
The DURATION clause determines how long InfluxDB keeps the data.
+The <duration> is a duration literal
+or INF (infinite).
+The minimum duration for a retention policy is one hour and the maximum
+duration is INF.
+
+
REPLICATION
+
+
+
The REPLICATION clause determines how many independent copies of each point
+are stored in the cluster.
+
+
+
By default, the replication factor n usually equals the number of data nodes. However, if you have four or more data nodes, the default replication factor n is 3.
+
+
+
To ensure data is immediately available for queries, set the replication factor n to less than or equal to the number of data nodes in the cluster.
+
+
+
+
+
Important: If you have four or more data nodes, verify that the database replication factor is correct.
+
+
+
+
Replication factors do not serve a purpose with single node instances.
+
+
SHARD DURATION
+
+
Optional. The SHARD DURATION clause determines the time range covered by a shard group.
+
The <duration> is a duration literal
+and does not support an INF (infinite) duration.
+
By default, the shard group duration is determined by the retention policy’s
+DURATION:
+
+
+
+
+
Retention Policy’s DURATION
+
Shard Group Duration
+
+
+
+
+
< 2 days
+
1 hour
+
+
+
>= 2 days and <= 6 months
+
1 day
+
+
+
> 6 months
+
7 days
+
+
+
+
The minimum allowable SHARD GROUP DURATION is 1h.
+If the CREATE RETENTION POLICY query attempts to set the SHARD GROUP DURATION to less than 1h and greater than 0s, InfluxDB automatically sets the SHARD GROUP DURATION to 1h.
+If the CREATE RETENTION POLICY query attempts to set the SHARD GROUP DURATION to 0s, InfluxDB automatically sets the SHARD GROUP DURATION according to the default settings listed above.
The PAST LIMIT clause defines a time boundary before and relative to now
+in which points written to the retention policy are accepted. If a point has a
+timestamp before the specified boundary, the point is rejected and the write
+request returns a partial write error.
+
For example, if a write request tries to write data to a retention policy with a
+PAST LIMIT 6h and there are points in the request with timestamps older than
+6 hours, those points are rejected.
+
+
+
+
+
PAST LIMIT cannot be changed after it is set.
+This will be fixed in a future release.
+
+
FUTURE LIMIT
+
The FUTURE LIMIT clause defines a time boundary after and relative to now
+in which points written to the retention policy are accepted. If a point has a
+timestamp after the specified boundary, the point is rejected and the write
+request returns a partial write error.
+
For example, if a write request tries to write data to a retention policy with a
+FUTURE LIMIT 6h and there are points in the request with future timestamps
+greater than 6 hours from now, those points are rejected.
+
+
+
+
+
FUTURE LIMIT cannot be changed after it is set.
+This will be fixed in a future release.
+
+
DEFAULT
+
Sets the new retention policy as the default retention policy for the database.
+This setting is optional.
The query creates a retention policy called one_day_only for the database
+NOAA_water_database with a one day duration and a replication factor of one.
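A sketch of that statement:

```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
```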
The query creates the same retention policy as the one in the example above, but
+sets it as the default retention policy for the database.
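A sketch of that statement:

```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1 DEFAULT
```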
+
A successful CREATE RETENTION POLICY query returns an empty response.
+If you attempt to create a retention policy identical to one that already exists, InfluxDB does not return an error.
+If you attempt to create a retention policy with the same name as an existing retention policy but with differing attributes, InfluxDB returns an error.
Modify retention policies with ALTER RETENTION POLICY
+
The ALTER RETENTION POLICY query takes the following form, where you must declare at least one of the retention policy attributes DURATION, REPLICATION, SHARD DURATION, or DEFAULT:
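In outline (a reconstruction; declare at least one of the bracketed attributes):

```sql
ALTER RETENTION POLICY <retention_policy_name> ON <database_name>
[DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [DEFAULT]
```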
A successful DROP RETENTION POLICY query returns an empty result.
+If you attempt to drop a retention policy that does not exist, InfluxDB does not return an error.
In order to explore the query language further, these instructions help you create a database,
+download and write data to that database within your InfluxDB installation.
+The sample data is then used and referenced in Data Exploration,
+Schema Exploration, and Functions.
+
Creating a database
+
If you’ve installed InfluxDB locally, the influx command should be available via the command line.
+Executing influx will start the CLI and automatically connect to the local InfluxDB instance
+(assuming you have already started the server with service influxdb start or by running influxd directly).
+The output should look like this:
+
+
+
$ influx -precision rfc3339
+Connected to http://localhost:8086 version 1.12.2
+InfluxDB shell 1.12.2
+>
+
+
+
Notes:
+
+
+
+
The InfluxDB API runs on port 8086 by default.
+Therefore, influx will connect to port 8086 and localhost by default.
+If you need to alter these defaults, run influx --help.
+
The -precision argument specifies the format/precision of any returned timestamps.
+In the example above, rfc3339 tells InfluxDB to return timestamps in RFC3339 format (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ).
+
+
The command line is now ready to take input in the form of the Influx Query Language (a.k.a InfluxQL) statements.
+To exit the InfluxQL shell, type exit and hit return.
+
A fresh install of InfluxDB has no databases (apart from the system _internal),
+so creating one is our first task.
+You can create a database with the CREATE DATABASE <db-name> InfluxQL statement,
+where <db-name> is the name of the database you wish to create.
+Names of databases can contain any unicode character as long as the string is double-quoted.
+Names can also be left unquoted if they contain only ASCII letters,
+digits, or underscores and do not begin with a digit.
+
Throughout the query language exploration, we’ll use the database name NOAA_water_database:
+
+
+
CREATE DATABASE NOAA_water_database
+exit
+
Download and write the data to InfluxDB
+
From your terminal, download the text file that contains the data in line protocol format:
Note that the measurements average_temperature, h2o_pH, h2o_quality, and h2o_temperature contain fictional data.
+Those measurements serve to illuminate query functionality in Schema Exploration.
+
The h2o_feet measurement is the only measurement that contains the NOAA data.
+Please note that the level description field isn’t part of the original NOAA data - we snuck it in there for the sake of having a field key with a special character and string field values.
The syntax is specified using Extended Backus-Naur Form (“EBNF”).
+EBNF is the same notation used in the Go programming language specification,
+which can be found here.
+
+
+
Production = production_name "=" [ Expression ] "." .
+Expression = Alternative { "|" Alternative } .
+Alternative = Term { Term } .
+Term = production_name | token [ "…" token ] | Group | Option | Repetition .
+Group = "(" Expression ")" .
+Option = "[" Expression "]" .
+Repetition = "{" Expression "}" .
+
Notation operators in order of increasing precedence:
+
+
+
| alternation
+() grouping
+[] option (0 or 1 times)
+{} repetition (0 to n times)
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM FUTURE GRANT GRANTS GROUP GROUPS
+IN INF INSERT INTO KEY KEYS
+KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
+OFFSET ON ORDER PASSWORD PAST POLICY
+POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
+RESAMPLE RETENTION REVOKE SELECT SERIES SET
+SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
+SUBSCRIPTIONS TAG TO USER USERS VALUES
+WHERE WITH WRITE
+
If you use an InfluxQL keyword as an
+identifier, you must
+double quote that identifier in every query.
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals are not currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents are not currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (i.e., \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by a duration unit listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
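The duration units are reproduced here for reference (the list itself follows the InfluxQL specification; the example literals are illustrative):

```sql
-- ns       nanoseconds
-- u or µ   microseconds
-- ms       milliseconds
-- s        seconds
-- m        minutes
-- h        hours
-- d        days
-- w        weeks
--
-- Example duration literals: 10s, 1h, 1h30m (mixed units)
```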
The date and time literal format is not specified in EBNF like the rest of this document.
+It is specified using Go’s date / time parsing format, which is a reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL does not support using regular expressions to match
+non-string field values in the
+WHERE clause,
+databases, and
+retention policies.
+
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon.
-- Set default retention policy for mydb to 1h.cpu.
+ALTER RETENTION POLICY "1h.cpu" ON "mydb" DEFAULT
+
+-- Change duration and replication factor.
+-- REPLICATION (replication factor) not valid for OSS instances.
+ALTER RETENTION POLICY "policy1" ON "somedb" DURATION 1h REPLICATION 4
-- selects from DEFAULT retention policy and writes into 6_months retention policy
+CREATE CONTINUOUS QUERY "10m_event_count"
+ON "db_name"
+BEGIN
+  SELECT count("value")
+  INTO "6_months"."events"
+  FROM "events"
+  GROUP BY time(10m)
+END;
+
+-- this selects from the output of one continuous query in one retention policy and outputs to another series in another retention policy
+CREATE CONTINUOUS QUERY "1h_event_count"
+ON "db_name"
+BEGIN
+  SELECT sum("count") AS "count"
+  INTO "2_years"."events"
+  FROM "6_months"."events"
+  GROUP BY time(1h)
+END;
+
+-- this customizes the resample interval so the interval is queried every 10s and intervals are resampled until 2m after their start time
+-- when resample is used, at least one of "EVERY" or "FOR" must be used
+CREATE CONTINUOUS QUERY "cpu_mean"
+ON "db_name"
+RESAMPLE EVERY 10s FOR 2m
+BEGIN
+  SELECT mean("value")
+  INTO "cpu_mean"
+  FROM "cpu"
+  GROUP BY time(1m)
+END;
Replication factors do not serve a purpose with single node instances.
+
+
Examples
+
+
+
-- Create a database called foo
+CREATE DATABASE "foo"
+
+-- Create a database called bar with a new DEFAULT retention policy and specify
+-- the duration, replication, shard group duration, and name of that retention policy
+CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
+
+-- Create a database called mydb with a new DEFAULT retention policy and specify
+-- the name of that retention policy
+CREATE DATABASE "mydb" WITH NAME "myrp"
+
+-- Create a database called bar with a new retention policy named "myrp", and
+-- specify the duration, past and future limits, and name of that retention policy
+CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
Replication factors do not serve a purpose with single node instances.
+
+
Examples
+
+
+
-- Create a retention policy.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2
+
+-- Create a retention policy and set it as the DEFAULT.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFAULT
+
+-- Create a retention policy and specify the shard group duration.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
+
+-- Create a retention policy and specify past and future limits.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
+
CREATE SUBSCRIPTION
+
Subscriptions tell InfluxDB to send all the data it receives to Kapacitor.
-- Create a SUBSCRIPTION on database 'mydb' and retention policy 'autogen' that sends data to 'example.com:9090' via UDP.
+CREATE SUBSCRIPTION "sub0" ON "mydb"."autogen" DESTINATIONS ALL 'udp://example.com:9090'
+
+-- Create a SUBSCRIPTION on database 'mydb' and retention policy 'autogen' that round robins the data to 'h1.example.com:9090' and 'h2.example.com:9090'.
+CREATE SUBSCRIPTION "sub0" ON "mydb"."autogen" DESTINATIONS ANY 'udp://h1.example.com:9090', 'udp://h2.example.com:9090'
-- Create a normal database user.
+CREATE USER "jdoe" WITH PASSWORD '1337password'
+
+-- Create an admin user.
+-- Note: Unlike the GRANT statement, the "PRIVILEGES" keyword is required here.
+CREATE USER "jdoe" WITH PASSWORD '1337password' WITH ALL PRIVILEGES
+
+
+
Note: The password string must be wrapped in single quotes.
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL does not support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
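A sketch of the statement (the measurement, field, and time range come from the sample NOAA data):

```sql
EXPLAIN SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z'
GROUP BY time(12m)
```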
Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
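A sketch of the statement (the measurement and field come from the sample NOAA data):

```sql
EXPLAIN ANALYZE SELECT COUNT("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z'
GROUP BY time(12m)
```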
Note: EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or CSV is not accounted for.
+
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
+Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and the required memory.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access (in InfluxDB Enterprise, shards may be on remote nodes).
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
+create_iterator node represents work done by the local influxd instance: a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes 3 cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of blocks decoded and their size (in bytes) on disk. The following block types are supported:
-- grant admin privileges
+GRANT ALL TO "jdoe"
+
+-- grant read access to a database
+GRANT READ ON "mydb" TO "jdoe"
+
KILL QUERY
+
Stop currently-running query.
+
+
+
kill_query_statement = "KILL QUERY" query_id .
+
Where query_id is the query ID, displayed in the SHOW QUERIES output as qid.
+
+
+
InfluxDB Enterprise clusters: To kill queries on a cluster, you need to specify the query ID (qid) and the TCP host (for example, myhost:8088),
+available in the SHOW QUERIES output.
+
+
+
+
+
+
KILL QUERY <qid> ON "<tcp_host>"
+
+
+
Examples
+
+-- kill query with qid of 36 on the local host
+KILL QUERY 36
+
+
+
-- kill query on InfluxDB Enterprise cluster
+KILL QUERY 53 ON "myhost:8088"
Refers to the group of commands used to estimate or count exactly the cardinality of measurements, series, tag keys, tag key values, and field keys.
+
The SHOW CARDINALITY commands are available in two variations: estimated and exact. Estimated values are calculated using sketches and are a safe default for all cardinality sizes. Exact values are counts directly from TSM (Time-Structured Merge Tree) data, but are expensive to run for high cardinality data. Unless required, use the estimated variety.
+
Filtering by time is only supported when Time Series Index (TSI) is enabled on a database.
+
See the specific SHOW CARDINALITY commands for details:
Estimates or counts exactly the cardinality of the field key set for the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when Time Series Index (TSI) is enabled and time is not supported in the WHERE clause.
+
+
+
+
+
show_field_key_cardinality_stmt = "SHOW FIELD KEY CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
+
+show_field_key_exact_cardinality_stmt = "SHOW FIELD KEY EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
+
Examples
+
+
+
-- show estimated cardinality of the field key set of current database
+SHOW FIELD KEY CARDINALITY
+-- show exact cardinality on field key set of specified database
+SHOW FIELD KEY EXACT CARDINALITY ON mydb
+
SHOW FIELD KEYS
+
+
+
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
+SHOW FIELD KEYS
+
+-- show field keys and field value data types from specified measurement
+SHOW FIELD KEYS FROM "cpu"
+
SHOW GRANTS
+
+
+
show_grants_stmt = "SHOW GRANTS FOR" user_name .
+
Example
+
+
+
-- show grants for jdoe
+SHOW GRANTS FOR "jdoe"
+
SHOW MEASUREMENT CARDINALITY
+
Estimates or counts exactly the cardinality of the measurement set for the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled and time is not supported in the WHERE clause.
-- show estimated cardinality of measurement set on current database
+SHOW MEASUREMENT CARDINALITY
+-- show exact cardinality of measurement set on specified database
+SHOW MEASUREMENT EXACT CARDINALITY ON mydb
-- show all measurements
+SHOW MEASUREMENTS
+
+-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
+SHOW MEASUREMENTS WHERE "region" = 'uswest' AND "host" = 'serverA'
+
+-- show measurements that start with 'h2o'
+SHOW MEASUREMENTS WITH MEASUREMENT =~ /h2o.*/
+
SHOW QUERIES
+
+
+
show_queries_stmt = "SHOW QUERIES" .
+
Example
+
+
+
-- show all currently-running queries
+SHOW QUERIES
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is not supported in the WHERE clause.
-- show estimated cardinality of the series on current database
+SHOW SERIES CARDINALITY
+-- show estimated cardinality of the series on specified database
+SHOW SERIES CARDINALITY ON mydb
+-- show exact series cardinality
+SHOW SERIES EXACT CARDINALITY
+-- show exact series cardinality on specified database
+SHOW SERIES EXACT CARDINALITY ON mydb
id column: Shard IDs that belong to the specified database and retention policy.
+
shard_group column: Group number that a shard belongs to. Shards in the same shard group have the same start_time and end_time. This interval indicates how long the shard is active, and the expiry_time column shows when the shard group expires. No timestamps appear under expiry_time if the retention policy duration is set to infinite.
+
owners column: Shows the data nodes that own a shard. The number of nodes that own a shard is equal to the replication factor. In this example, the replication factor is 3, so 3 nodes own each shard.
+
+
SHOW STATS
+
Returns detailed statistics on the available (enabled) components of an InfluxDB node.
+
Statistics returned by SHOW STATS are stored in memory and reset to zero when the node is restarted,
+but SHOW STATS is triggered every 10 seconds to populate the _internal database.
+
The SHOW STATS command does not list index memory usage;
+use the SHOW STATS FOR 'indexes' command instead.
For the specified component (<component>), the command returns available statistics.
+For the runtime component, the command returns an overview of memory usage by the InfluxDB system,
+using the Go runtime package.
+
SHOW STATS FOR 'indexes'
+
Returns an estimate of memory use of all indexes.
+Index memory use is not reported with SHOW STATS because it is a potentially expensive operation.
+
SHOW SUBSCRIPTIONS
+
+
+
show_subscriptions_stmt = "SHOW SUBSCRIPTIONS" .
+
Example
+
+
+
SHOW SUBSCRIPTIONS
+
SHOW TAG KEY CARDINALITY
+
Estimates or counts exactly the cardinality of tag key set on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled and time is not supported in the WHERE clause.
-- show all tag keys
+SHOW TAG KEYS
+
+-- show all tag keys from the cpu measurement
+SHOW TAG KEYS FROM "cpu"
+
+-- show all tag keys from the cpu measurement where the region key = 'uswest'
+SHOW TAG KEYS FROM "cpu" WHERE "region" = 'uswest'
+
+-- show all tag keys where the host key = 'serverA'
+SHOW TAG KEYS WHERE "host" = 'serverA'
-- show all tag values across all measurements for the region tag
+SHOW TAG VALUES WITH KEY = "region"
+
+-- show tag values from the cpu measurement for the region tag
+SHOW TAG VALUES FROM "cpu" WITH KEY = "region"
+
+-- show tag values across all measurements for all tag keys that do not include the letter c
+SHOW TAG VALUES WITH KEY !~ /.*c.*/
+
+-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
+SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region","host") WHERE "service" = 'redis'
+
SHOW TAG VALUES CARDINALITY
+
Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled.
-- show estimated tag key values cardinality for a specified tag key
+SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
+-- show exact tag key values cardinality for a specified tag key
+SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
Use comments with InfluxQL statements to describe your queries.
+
+
A single line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span several lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
+ Thank you for being part of our community!
 We welcome and encourage your feedback and bug reports for InfluxDB and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command. For example:
The influx command line interface (CLI) provides an interactive shell for the HTTP API associated with influxd.
+Use influx to write data (manually or from a file), query data interactively, view query output in different formats, and manage resources in InfluxDB.
If you install InfluxDB via a package manager, the CLI is installed at /usr/bin/influx ( on macOS).
+
To access the CLI, first launch the influxd database process and then launch influx in your terminal.
+
+
+
influx
+
If successfully connected to an InfluxDB node, the output is the following:
+
+
+
Connected to http://localhost:8086 version 1.12.2
+InfluxDB shell version: 1.12.2
+>
+
The versions of InfluxDB and the CLI should be identical. If not, parsing issues can occur with queries.
+
In the prompt, you can enter InfluxQL queries as well as CLI-specific commands.
+Enter help to get a list of available commands.
+Use Ctrl+C to cancel a long-running InfluxQL query.
+
Environment Variables
+
The following environment variables can be used to configure settings used by the influx client. They can be specified in lower or upper case; the upper case version takes precedence.
+
HTTP_PROXY
+
Defines the proxy server to use for HTTP.
+
Value format: [protocol://]<host>[:port]
+
+
+
HTTP_PROXY=http://localhost:1234
+
HTTPS_PROXY
+
Defines the proxy server to use for HTTPS. Takes precedence over HTTP_PROXY for HTTPS.
+
Value format: [protocol://]<host>[:port]
+
+
+
HTTPS_PROXY=https://localhost:1443
+
NO_PROXY
+
List of host names that should not go through any proxy. If set to an asterisk '*' only, it matches all hosts.
+
Value format: comma-separated list of hosts
+
+
+
NO_PROXY=123.45.67.89,123.45.67.90
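Put together, a session might export these variables before launching the CLI. A minimal sketch; the proxy and host names below are placeholders, not values from this guide:

```shell
# Route CLI traffic through an HTTPS proxy, but connect to an
# internal InfluxDB host directly. All host names are placeholders.
export HTTPS_PROXY=https://proxy.example.com:1443
export NO_PROXY=influxdb.internal.example.com,localhost
# influx -ssl -host influxdb.internal.example.com   # then launch the CLI
```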
+
influx arguments
+
Arguments specify connection, write, import, and output options for the CLI session.
+
influx provides the following arguments:
+
-h, -help
+List influx arguments
+
-compressed
+Set to true if the import file is compressed.
+Use with -import.
+
-consistency 'any|one|quorum|all'
+Set the write consistency level.
+
-database 'database name'
+The database to which influx connects.
+
-execute 'command'
+Execute an InfluxQL command and quit.
+See -execute.
+
-format 'json|csv|column'
+Specifies the format of the server responses.
+See -format.
+
-host 'host name'
+The host to which influx connects.
+By default, InfluxDB runs on localhost.
-password 'password'
+The password influx uses to connect to the server.
+influx will prompt for a password if you leave it blank (-password '').
+Alternatively, set the password for the CLI with the INFLUX_PASSWORD environment
+variable.
+
-path
+The path to the file to import.
+Use with -import.
+
-port 'port #'
+The port to which influx connects.
+By default, InfluxDB runs on port 8086.
+
-pps
+How many points per second the import will allow.
+By default, pps is zero and influx will not throttle importing.
+Use with -import.
+
-precision 'rfc3339|h|m|s|ms|u|ns'
+Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds).
+Precision defaults to nanoseconds.
+
+
+
Note: Setting the precision to rfc3339 (-precision rfc3339) works with the -execute option, but it does not work with the -import option. All other precision formats (e.g., h, m, s, ms, u, and ns) work with the -execute and -import options.
+
+
+
-pretty
+Turns on pretty print for the json format.
+
-ssl
+Use HTTPS for requests.
+
-unsafeSsl
+Disables SSL certificate verification.
+Use when connecting over HTTPS with a self-signed certificate.
+
-username 'username'
+The username that influx uses to connect to the server.
+Alternatively, set the username for the CLI with the INFLUX_USERNAME environment variable.
+
-version
+Display the InfluxDB version and exit.
+
The following sections provide detailed examples for some arguments, including -execute, -format, and -import.
Optional: DDL (Data Definition Language): Contains the InfluxQL commands for creating the relevant database and managing the retention policy.
+If your database and retention policy already exist, your file can skip this section.
+
DML (Data Manipulation Language): Context metadata that specifies the database and (if desired) retention policy for the import and contains the data in line protocol.
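As a sketch, a complete import file combining both sections might look like the following; the database, retention policy, measurement, and timestamps are illustrative:

```
# DDL
CREATE DATABASE pirates
CREATE RETENTION POLICY oneday ON pirates DURATION 1d REPLICATION 1

# DML
# CONTEXT-DATABASE: pirates
# CONTEXT-RETENTION-POLICY: oneday

treasures,captain_id=dread_pirate_roberts value=801 1439856000
treasures,captain_id=flint value=29 1439856000
```

Because the timestamps are in seconds, import the file with influx -import -path=pirates.txt -precision=s.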
Note: For large datasets, influx writes out a status message every 100,000 points.
+For example:
+
2015/08/21 14:48:01 Processed 3100000 lines.
+Time elapsed: 56.740578415s.
+Points per second (PPS): 54634
+
+
+
+
Keep the following in mind when using -import:
+
+
To throttle the import, use -pps to set the number of points per second to ingest. By default, pps is zero and influx does not throttle importing.
+
To import a file compressed with gzip (GNU zip), include the -compressed flag.
+
Include timestamps in the data file.
+If points don’t include a timestamp, InfluxDB assigns the same timestamp to those points, which can result in unintended duplicate points or overwrites.
+
If your data file contains more than 5,000 points, consider splitting it into smaller files to write data to InfluxDB in batches.
+We recommend writing points in batches of 5,000 to 10,000 for optimal performance.
+Writing smaller batches increases the number of HTTP requests, which can negatively impact performance.
+By default, the HTTP request times out after five seconds. Although InfluxDB continues attempting to write the points after a timeout, you won’t receive confirmation of a successful write.
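One way to batch an oversized file is to split it before importing. A sketch, assuming a POSIX split utility is available; the file names are placeholders and the actual import command is commented out:

```shell
# Break a large line-protocol file into 5,000-line batches.
seq 1 12000 > all_points.txt          # stand-in for a large data file
split -l 5000 all_points.txt batch_   # creates batch_aa, batch_ab, batch_ac
for f in batch_*; do
  # influx -import -path="$f" -precision=s
  echo "queued $f"
done
```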
Enter help in the CLI for a partial list of the available commands.
+
Commands
+
The list below offers a brief discussion of each command.
+We provide detailed information on insert at the end of this section.
+
auth
+Prompts you for your username and password.
+influx uses those credentials when querying a database.
+Alternatively, set the username and password for the CLI with the
+INFLUX_USERNAME and INFLUX_PASSWORD environment variables.
+
chunked
+Turns on chunked responses from the server when issuing queries.
+This setting is enabled by default.
+
chunk size <size>
+Sets the size of the chunked responses.
+The default size is 10,000.
+Setting it to 0 resets chunk size to its default value.
+
clear [ database | db | retention policy | rp ]
+Clears the current context for the database or retention policy.
+
connect <host:port>
+Connect to a different server without exiting the shell.
+By default, influx connects to localhost:8086.
+If you do not specify either the host or the port, influx assumes the default setting for the missing attribute.
+
consistency <level>
+Sets the write consistency level: any, one, quorum, or all.
+
Ctrl+C
+Terminates the currently running query. Useful when an interactive query is taking too long to respond
+because it is trying to return too much data.
+
exit | quit | Ctrl+D
+Quits the influx shell.
+
format <format>
+Specifies the format of the server responses: json, csv, or column.
+See the description of -format for examples of each format.
+
history
+Displays your command history.
+To use the history while in the shell, simply use the “up” arrow.
+influx stores your last 1,000 commands in your home directory in .influx_history.
+
insert
+Write data using line protocol.
+See insert.
+
precision <format>
+Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds).
+Precision defaults to nanoseconds.
+
pretty
+Turns on pretty print for the json format.
+
settings
+Outputs the current settings for the shell including the Host, Username, Database, Retention Policy, Pretty status, Chunked status, Chunk Size, Format, and Write Consistency.
+
use [ "<database_name>" | "<database_name>"."<retention policy_name>" ]
+Sets the current database and/or retention policy.
+Once influx sets the current database and/or retention policy, there is no need to specify that database and/or retention policy in queries.
+If you do not specify the retention policy, influx automatically queries the used database’s DEFAULT retention policy.
+
Write data to InfluxDB with insert
+
Enter insert followed by the data in line protocol to write data to InfluxDB.
+Use insert into <retention policy> <line protocol> to write data to a specific retention policy.
+
Write data to a single field in the measurement treasures with the tag captain_id = pirate_king.
+influx automatically writes the point to the database’s DEFAULT retention policy.
+
+
+
INSERT treasures,captain_id=pirate_king value=2
+
Write the same point to the already-existing retention policy oneday:
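Using the insert into form described above, that write looks like this:

```sql
INSERT INTO oneday treasures,captain_id=pirate_king value=2
```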
This page documents errors, their descriptions, and, where applicable,
+common resolutions.
+
+
+
Disclaimer: This document does not contain an exhaustive list of all possible InfluxDB errors.
+
+
+
+
error: database name required
+
The database name required error occurs when certain SHOW queries do
+not specify a database.
+Specify a database with an ON clause in the SHOW query, with USE <database_name> in the
+CLI, or with the db query string parameter in
+the InfluxDB API request.
+
The relevant SHOW queries include SHOW RETENTION POLICIES, SHOW SERIES,
+SHOW MEASUREMENTS, SHOW TAG KEYS, SHOW TAG VALUES, and SHOW FIELD KEYS.
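For example, either of the following avoids the error; mydb is a placeholder database name:

```sql
SHOW RETENTION POLICIES ON "mydb"
SHOW MEASUREMENTS ON "mydb"
```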
The max series per database exceeded error occurs when a write causes the
+number of series in a database to
+exceed the maximum allowable series per database.
+The maximum allowable series per database is controlled by the
+max-series-per-database
+setting in the [data] section of the configuration
+file.
+
The information in the < > shows the measurement and the tag set of the series
+that exceeded max-series-per-database.
+
By default max-series-per-database is set to one million.
+Changing the setting to 0 allows an unlimited number of series per database.
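As a sketch, the setting lives under the [data] section of the configuration file:

```toml
[data]
  # default is 1000000; 0 allows an unlimited number of series per database
  max-series-per-database = 0
```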
+
error parsing query: found < >, expected identifier at line < >, char < >
+
InfluxQL syntax
+
The expected identifier error occurs when InfluxDB anticipates an identifier
+in a query but doesn’t find it.
+Identifiers are tokens that refer to continuous query names, database names,
+field keys, measurement names, retention policy names, subscription names,
+tag keys, and user names.
+The error is often a gentle reminder to double-check your query’s syntax.
Query 2 is missing a measurement name between FROM and WHERE.
+
InfluxQL keywords
+
In some cases the expected identifier error occurs when one of the
+identifiers in the query is an
+InfluxQL Keyword.
+To successfully query an identifier that’s also a keyword, enclose that
+identifier in double quotes.
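For example, duration is an InfluxQL keyword; double-quoting it lets it serve as a field key. The measurement name here is hypothetical:

```sql
SELECT "duration" FROM "calls"
```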
error parsing query: found < >, expected string at line < >, char < >
+
The expected string error occurs when InfluxDB anticipates a string
+but doesn’t find it.
+In most cases, the error is a result of forgetting to quote the password
+string in the CREATE USER statement.
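For example, single-quoting the password string avoids the error; the user name and password are illustrative:

```sql
CREATE USER "todd" WITH PASSWORD 'password4todd'
```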
error parsing query: mixing aggregate and non-aggregate queries is not supported
+
The mixing aggregate and non-aggregate error occurs when a SELECT statement
+includes both an aggregate function
+and a standalone field key or
+tag key.
+
Aggregate functions return a single calculated value and there is no obvious
+single value to return for any unaggregated fields or tags.
+
Example
+
Raw data:
+
The peg measurement has two fields (square and round) and one tag
+(force):
Query 1 includes an aggregate function and a standalone field.
+
mean("square") returns a single aggregated value calculated from the four values
+of square in the peg measurement, and there is no obvious single field value
+to return from the four unaggregated values of the round field.
Query 2 includes an aggregate function and a standalone tag.
+
mean("square") returns a single aggregated value calculated from the four values
+of square in the peg measurement, and there is no obvious single tag value
+to return from the four unaggregated values of the force tag.
invalid operation: time and \*influxql.VarRef are not compatible
+
The time and \*influxql.VarRef are not compatible error occurs when
+date-time strings are double quoted in queries.
+Date-time strings require single quotes.
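For example, the following query single-quotes the date-time string; the measurement and field names are illustrative:

```sql
SELECT "water_level" FROM "h2o_feet" WHERE time > '2015-08-18T23:00:01Z'
```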
The bad timestamp error occurs when the
+line protocol includes a
+timestamp in a format other than a UNIX timestamp.
+
Example
+
+
+
> INSERT pineapple value=1 '2015-08-18T23:00:00Z'
+ERR: {"error":"unable to parse 'pineapple value=1 '2015-08-18T23:00:00Z'': bad timestamp"}
+
The line protocol above uses an RFC3339
+timestamp.
+Replace the timestamp with a UNIX timestamp to avoid the error and successfully
+write the point to InfluxDB:
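For the point above, 2015-08-18T23:00:00Z is 1439938800000000000 nanoseconds since the UNIX epoch:

```sql
INSERT pineapple value=1 1439938800000000000
```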
In some cases, the bad timestamp error occurs with more general syntax errors
+in the InfluxDB line protocol.
+Line protocol is whitespace sensitive; misplaced spaces can cause InfluxDB
+to assume that a field or tag is an invalid timestamp.
+
Example
+
Write 1
+
+
+
> INSERT hens location=2 value=9
+ERR: {"error":"unable to parse 'hens location=2 value=9': bad timestamp"}
+
The line protocol in Write 1 separates the hens measurement from the location=2
+tag with a space instead of a comma.
+InfluxDB assumes that the value=9 field is the timestamp and returns an error.
+
Use a comma instead of a space between the measurement and tag to avoid the error:
+
+
+
INSERT hens,location=2 value=9
+
Write 2
+
+
+
> INSERT cows,name=daisy milk_prod=3 happy=3
+ERR: {"error":"unable to parse 'cows,name=daisy milk_prod=3 happy=3': bad timestamp"}
+
The line protocol in Write 2 separates the milk_prod=3 field and the
+happy=3 field with a space instead of a comma.
+InfluxDB assumes that the happy=3 field is the timestamp and returns an error.
+
Use a comma instead of a space between the two fields to avoid the error:
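The corrected write separates the two fields with a comma:

```sql
INSERT cows,name=daisy milk_prod=3,happy=3
```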
The time outside range error occurs when the timestamp in the
+InfluxDB line protocol
+falls outside the valid time range for InfluxDB.
+
The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
write failed for shard < >: engine: cache maximum memory size exceeded
+
The cache maximum memory size exceeded error occurs when the cached
+memory size increases beyond the
+cache-max-memory-size setting
+in the configuration file.
+
By default, cache-max-memory-size is set to 512mb.
+This value is fine for most workloads, but is too small for larger write volumes
+or for datasets with higher series cardinality.
+If you have lots of RAM you could set it to 0 to disable the cached memory
+limit and never get this error.
+You can also examine the memBytes field in the cache measurement in the
+_internal database
+to get a sense of how big the caches are in memory.
+
already killed
+
The already killed error occurs when a query has already been killed, but
+there are subsequent kill attempts before the query has exited.
+When a query is killed, it may not exit immediately.
+It will be in the killed state, which means the signal has been sent, but the
+query itself has not hit an interrupt point.
This error occurs when fields in an imported measurement have inconsistent data types. Make sure all fields in a measurement have the same data type, such as float64, int64, and so on.
This error occurs when an imported data point is older than the specified retention policy and dropped. Verify the correct retention policy is specified in the import file.
+
Unnamed import file
+
Error:reading standard input: /path/to/directory: is a directory
+
This error occurs when the -import command doesn’t include the name of an import file. Specify the file to import, for example: $ influx -import -path=<file_name>.txt -precision=s
+
Docker container cannot read host files
+
Error:open /path/to/file: no such file or directory
+
This error occurs when the Docker container cannot read files on the host machine. To make host machine files readable, complete the following procedure.
+
Make host machine files readable to Docker
+
+
+
Create a directory, and then copy files to import into InfluxDB to this directory.
+
+
+
When you launch the Docker container, mount the new directory on the InfluxDB container by running the following command:
+
docker run -v /dir/path/on/host:/dir/path/in/container
+
+
+
Verify the Docker container can read host machine files by running the following command:
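A sketch of such a verification command; the image tag and directory paths are placeholders:

```
docker run -v /dir/path/on/host:/dir/path/in/container influxdb:1.8 \
  ls /dir/path/in/container
```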
This page addresses frequent sources of confusion and places where InfluxDB
+behaves in an unexpected way relative to other database systems.
+Where applicable, it links to outstanding issues on GitHub.
On System V operating systems logs are stored under /var/log/influxdb/.
+
On systemd operating systems you can access the logs using journalctl.
+Use journalctl -u influxdb to view the logs in the journal or journalctl -u influxdb > influxd.log to print the logs to a text file. With systemd, log retention depends on your system’s journald settings.
+
What is the relationship between shard group durations and retention policies?
+
InfluxDB stores data in shard groups.
+A single shard group covers a specific time interval; InfluxDB determines that time interval by looking at the DURATION of the relevant retention policy (RP).
+The table below outlines the default relationship between the DURATION of an RP and the time interval of a shard group:

| RP DURATION | Shard group interval |
| --- | --- |
| < 2 days | 1 hour |
| >= 2 days and <= 6 months | 1 day |
| > 6 months | 7 days |
Why aren’t data dropped after I’ve altered a retention policy?
+
Several factors explain why data may not be immediately dropped after a
+retention policy (RP) change.
+
The first and most likely cause is that, by default, InfluxDB checks to enforce
+an RP every 30 minutes.
+You may need to wait for the next RP check for InfluxDB to drop data that are
+outside the RP’s new DURATION setting.
+The 30 minute interval is
+configurable.
+
Second, altering both the DURATION and SHARD DURATION of an RP can result in
+unexpected data retention.
+InfluxDB stores data in shard groups which cover a specific RP and time
+interval.
+When InfluxDB enforces an RP it drops entire shard groups, not individual data
+points.
+InfluxDB cannot divide shard groups.
+
If the RP’s new DURATION is less than the old SHARD DURATION and InfluxDB is
+currently writing data to one of the old, longer shard groups, the system is
+forced to keep all of the data in that shard group.
+This occurs even if some of the data in that shard group are outside of the new
+DURATION.
+InfluxDB will drop that shard group once all of its data is outside the new
+DURATION.
+The system will then begin writing data to shard groups that have the new,
+shorter SHARD DURATION preventing any further unexpected data retention.
+
Why does InfluxDB fail to parse microsecond units in the configuration file?
+
The syntax for specifying microsecond duration units differs for
+configuration
+settings, writes, queries, and setting the precision in the InfluxDB
+Command Line Interface (CLI).
+The table below shows the supported syntax for each category:
+
| Syntax | Configuration File | InfluxDB API Writes | All Queries | CLI Precision Command |
| --- | --- | --- | --- | --- |
| u | ❌ | 👍 | 👍 | 👍 |
| us | 👍 | ❌ | ❌ | ❌ |
| µ | ❌ | ❌ | 👍 | ❌ |
| µs | 👍 | ❌ | ❌ | ❌ |
If a configuration option specifies the u or µ syntax, InfluxDB fails to start and reports the following error in the logs:
+
+
+
run: parse config: time: unknown unit [µ|u] in duration [<integer>µ|<integer>u]
+
Does InfluxDB have a file system size limit?
+
InfluxDB works within the file system size restrictions of Linux and Windows. Some storage providers and file systems have size restrictions; for example:
+
+
Amazon EBS volume limits size to ~16TB
+
Linux ext3 file system limits size ~16TB
+
Linux ext4 file system limits size to ~1EB (with file size limit ~16TB)
+
+
If you anticipate growing over 16TB per volume/file system, we recommend finding a provider and distribution that supports your storage requirements.
+
How do I use the InfluxDB CLI to return human readable timestamps?
+
When you first connect to the CLI, specify the rfc3339 precision:
+
+
+
influx -precision rfc3339
+
Alternatively, specify the precision once you’ve already connected to the CLI:
+
+
+
$ influx
+Connected to http://localhost:8086 version 0.xx.x
+InfluxDB shell 0.xx.x
+> precision rfc3339
+>
How can a non-admin user USE a database in the InfluxDB CLI?
+
In versions prior to v1.3, non-admin users could not execute a USE <database_name> query in the CLI even if they had READ and/or WRITE permissions on that database.
+
Starting with version 1.3, non-admin users can execute the USE <database_name> query for databases on which they have READ and/or WRITE permissions.
+If a non-admin user attempts to USE a database on which the user doesn’t have READ and/or WRITE permissions, the system returns an error:
+
+
+
ERR: Database <database_name> doesn't exist. Run SHOW DATABASES for a list of existing databases.
+
+
+
Note that the SHOW DATABASES query returns only those databases on which the non-admin user has READ and/or WRITE permissions.
+
+
+
How do I write to a non-DEFAULT retention policy with the InfluxDB CLI?
+
Use the syntax INSERT INTO [<database>.]<retention_policy> <line_protocol> to write data to a non-DEFAULT retention policy using the CLI.
+(Specifying the database and retention policy this way is only allowed with the CLI.
+Writes over HTTP must specify the database and optionally the retention policy with the db and rp query parameters.)
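For example, with a hypothetical database pirates and retention policy oneday:

```sql
INSERT INTO pirates.oneday treasures,captain_id=pirate_king value=2
```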
Note that you will need to fully qualify the measurement to query data in the non-DEFAULT retention policy. Fully qualify the measurement with the syntax:
+
+
+
"<database>"."<retention_policy>"."<measurement>"
+
How do I cancel a long-running query?
+
You can cancel a long-running interactive query from the CLI using Ctrl+C. To stop another long-running query that appears in the SHOW QUERIES output,
+use the KILL QUERY command.
+
Why can’t I query Boolean field values?
+
Acceptable Boolean syntax differs for data writes and data queries.
+
+
| Boolean syntax | Writes | Queries |
| --- | --- | --- |
| t, f | 👍 | ❌ |
| T, F | 👍 | ❌ |
| true, false | 👍 | 👍 |
| True, False | 👍 | 👍 |
| TRUE, FALSE | 👍 | 👍 |
For example, SELECT * FROM "hamlet" WHERE "bool"=True returns all points with bool set to TRUE, but SELECT * FROM "hamlet" WHERE "bool"=T returns nothing.
+
+
+
How does InfluxDB handle field type discrepancies across shards?
+
Field values can be floats, integers, strings, or Booleans.
+Field value types cannot differ within a
+shard, but they can differ across shards.
+
The SELECT statement
+
The
+SELECT statement
+returns all field values if all values have the same type.
+If field value types differ across shards, InfluxDB first performs any
+applicable cast
+operations and then returns all values with the type that occurs first in the
+following list: float, integer, string, Boolean.
+
If your data have field value type discrepancies, use the syntax
+<field_key>::<type> to query the different data types.
+
Example
+
The measurement just_my_type has a single field called my_field.
+my_field has four field values across four different shards, and each value has
+a different data type (float, integer, string, and Boolean).
+
SELECT * returns only the float and integer field values.
+Note that InfluxDB casts the integer value to a float in the response.
SELECT <field_key>::<type> [...] returns all value types.
+InfluxDB outputs each value type in its own column with incremented column names.
+Where possible, InfluxDB casts field values to another type;
+it casts the integer 7 to a float in the first column, and it
+casts the float 9.879034 to an integer in the second column.
+InfluxDB cannot cast floats or integers to strings or Booleans.
SHOW FIELD KEYS returns every data type, across every shard, associated with
+the field key.
+
Example
+
The measurement just_my_type has a single field called my_field.
+my_field has four field values across four different shards, and each value has
+a different data type (float, integer, string, and Boolean).
+SHOW FIELD KEYS returns all four data types:
What are the minimum and maximum integers that InfluxDB can store?
+
InfluxDB stores all integers as signed int64 data types.
+The minimum and maximum valid values for int64 are -9223372036854775808 and 9223372036854775807.
+See Go builtins for more information.
+
Values close to but within those limits may lead to unexpected results; some functions and operators convert the int64 data type to float64 during calculation which can cause overflow issues.
+
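To illustrate the overflow caveat, a short Python sketch: float64 carries only 53 bits of mantissa, so int64 values near the limits cannot be represented exactly.

```python
INT64_MIN = -2**63          # -9223372036854775808
INT64_MAX = 2**63 - 1       #  9223372036854775807

# Converting the max int64 to a float rounds it up past the valid range.
as_float = float(INT64_MAX)
print(int(as_float) == INT64_MAX)      # False
print(int(as_float) == INT64_MAX + 1)  # True
```

This is the same loss of precision that can surface when InfluxDB converts int64 to float64 during a calculation.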
What are the minimum and maximum timestamps that InfluxDB can store?
+
The minimum timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
+
Timestamps outside that range return a parsing error.
+
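These bounds follow from the int64 nanosecond representation; a Python sketch converting the limits back to RFC3339 timestamps (splitting whole seconds from nanoseconds to avoid float rounding):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def ns_to_rfc3339(ns: int) -> str:
    # Split into whole seconds and leftover nanoseconds.
    seconds, nanos = divmod(ns, 10**9)
    ts = EPOCH + timedelta(seconds=seconds)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

print(ns_to_rfc3339(-9223372036854775806))  # 1677-09-21T00:12:43.145224194Z
print(ns_to_rfc3339(9223372036854775806))   # 2262-04-11T23:47:16.854775806Z
```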
How can I tell what type of data is stored in a field?
+
The SHOW FIELD KEYS query returns the field’s data type; see the example above.
+
Can I change a field’s data type?
+
Currently, InfluxDB offers very limited support for changing a field’s data type.
+
The <field_key>::<type> syntax supports casting field values from integers to
+floats or from floats to integers.
+See Cast Operations
+for an example.
+There is no way to cast a float or integer to a string or Boolean (or vice versa).
+
We list possible workarounds for changing a field’s data type below.
+Note that these workarounds will not update data that have already been
+written to the database.
+
Write the data to a different field
+
The simplest workaround is to begin writing the new data type to a different field in the same
+series.
+
Work the shard system
+
Field value types cannot differ within a
+shard but they can differ across
+shards.
+
Users looking to change a field’s data type can use the SHOW SHARDS query
+to identify the end_time of the current shard.
+InfluxDB will accept writes with a different data type to an existing field if the point has a timestamp
+that occurs after that end_time.
Why does my query return epoch 0 as the timestamp?
+
In InfluxDB, epoch 0 (1970-01-01T00:00:00Z) is often used as a null timestamp equivalent.
+If you request a query that has no timestamp to return, such as an aggregation function with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
For information on how to use a subquery as a substitute for nested functions, see
+Data exploration.
+
What determines the time intervals returned by GROUP BY time() queries?
+
The time intervals returned by GROUP BY time() queries conform to the InfluxDB database’s preset time
+buckets or to the user-specified offset interval.
+
Example
+
Preset time buckets
+
The following query calculates the average value of sunflowers between
+6:15pm and 7:45pm and groups those averages into one hour intervals:
The results below show how InfluxDB maintains its preset time buckets.
+
In this example, the 6pm hour is a preset bucket and the 7pm hour is a preset bucket.
+The average for the 6pm time bucket does not include data prior to 6:15pm because of the WHERE time clause,
+but any data included in the average for the 6pm time bucket must occur in the 6pm hour.
+The same goes for the 7pm time bucket; any data included in the average for the 7pm
+time bucket must occur in the 7pm hour.
+The dotted lines show the points that make up each average.
+
Note that while the first timestamp in the results is 2016-08-29T18:00:00Z,
+the query results in that bucket do not include data with timestamps that occur before the start of the
+WHERE time clause (2016-08-29T18:15:00Z).
The following query calculates the average value of sunflowers between
+6:15pm and 7:45pm and groups those averages into one hour intervals.
+It also offsets the InfluxDB database’s preset time buckets by 15 minutes.
In this example, the user-specified
+offset interval
+shifts the InfluxDB database’s preset time buckets forward by 15 minutes.
+The average for the 6pm time bucket now includes data between 6:15pm and 7pm, and
+the average for the 7pm time bucket includes data between 7:15pm and 8pm.
+The dotted lines show the points that make up each average.
+
Note that the first timestamp in the result is 2016-08-29T18:15:00Z
+instead of 2016-08-29T18:00:00Z.
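The bucket boundaries can be sketched with simple integer arithmetic (seconds since midnight here stand in for full timestamps):

```python
HOUR = 3600

def bucket_start(ts: int, offset: int = 0) -> int:
    """Floor ts into one-hour buckets whose boundaries are shifted by offset."""
    return ((ts - offset) // HOUR) * HOUR + offset

six_twenty_pm = 18 * 3600 + 20 * 60   # a point at 18:20

print(bucket_start(six_twenty_pm))            # 64800 -> 18:00 bucket
print(bucket_start(six_twenty_pm, 15 * 60))   # 65700 -> 18:15 bucket
```

With no offset the point falls in the preset 18:00 bucket; with a 15-minute offset it falls in the shifted 18:15 bucket, matching the query results above.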
InfluxDB automatically queries data in a database’s default retention policy (RP). If your data is stored in another RP, you must specify the RP in your query to get results.
+
No field key in the SELECT clause
+
A query requires at least one field key in the SELECT clause. If the SELECT clause includes only tag keys, the query returns an empty response. For more information, see Data exploration.
+
SELECT query includes GROUP BY time()
+
If your SELECT query includes a GROUP BY time() clause, only data points between 1677-09-21 00:12:43.145224194 and now() are returned. Therefore, if any of your data points occur after now(), specify an alternative upper bound in your time interval.
+
(By default, most SELECT queries query data with timestamps between 1677-09-21 00:12:43.145224194 and 2262-04-11T23:47:16.854775806Z UTC.)
+
Tag and field key with the same name
+
Avoid using the same name for a tag and field key. If you inadvertently add the same name for a tag and field key, and then query both keys together, the query results show the second key queried (tag or field) appended with _1 (also visible as the column header in Chronograf). To query a tag or field key appended with _1, you must drop the appended _1 and include the syntax ::tag or ::field.
Write the following points to create both a field and tag key with the same name, leaves:
+
+
+
# create the `leaves` tag key
+INSERT grape,leaves=species leaves=6
+
+# create the `leaves` field key
+INSERT grape leaves=5
+
+
+
If you view both keys, you’ll notice that neither key includes _1:
+
+
+
# show the `leaves` tag key
+SHOW TAG KEYS
+
+name: grape
+tagKey
+------
+leaves
+
+# show the `leaves` field key
+SHOW FIELD KEYS
+
+name: grape
+fieldKey fieldType
+-------- ---------
+leaves   float
+
+
+
If you query the grape measurement, you’ll see the leaves tag key has an appended _1:
+
+
+
# query the `grape` measurement
+SELECT * FROM <database_name>.<retention_policy>."grape"
+
+name: grape
+time                 leaves  leaves_1
+----                 ------  --------
+1574128162128468000  6.00    species
+1574128238044155000  5.00
+
+
+
To query a duplicate key name, you must drop the appended _1 and include ::tag or ::field after the key:
+
+
+
# query duplicate keys using the correct syntax
+SELECT "leaves"::tag, "leaves"::field FROM <database_name>.<retention_policy>."grape"
+
+name: grape
+time                 leaves   leaves_1
+----                 -------  --------
+1574128162128468000  species  6.00
+1574128238044155000           5.00
+
The appended _1 is only a column header for display; therefore, queries that reference leaves_1 don’t return values.
+
+
+
+
+
Warning: If you inadvertently add a duplicate key name, follow the steps
+below to remove a duplicate key. Because of memory
+requirements, if you have large amounts of data, we recommend chunking your data
+(while selecting it) by a specified interval (for example, date range) to fit
+the allotted memory.
Use the following queries to remove a duplicate key:
+
+
+
/* select each field key to keep in the original measurement and send to a temporary
+   measurement; then, group by the tag keys to keep (leave out the duplicate key) */
+SELECT "field_key","field_key2","field_key3"
+INTO <temporary_measurement> FROM <original_measurement>
+WHERE <daterange> GROUP BY "tag_key","tag_key2","tag_key3"
+
+/* verify the field keys and tag keys were successfully moved to the temporary
+   measurement */
+SELECT * FROM "temporary_measurement"
+
+/* drop original measurement (with the duplicate key) */
+DROP MEASUREMENT "original_measurement"
+
+/* move data from temporary measurement back to original measurement you just dropped */
+SELECT * INTO "original_measurement" FROM "temporary_measurement" GROUP BY *
+
+/* verify the field keys and tag keys were successfully moved back to the original
+   measurement */
+SELECT * FROM "original_measurement"
+
+/* drop temporary measurement */
+DROP MEASUREMENT "temporary_measurement"
+
+
+
Why don’t my GROUP BY time() queries return timestamps that occur after now()?
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
+
In the following example, the first query covers data with timestamps between
+2015-09-18T21:30:00Z and now().
+The second query covers data with timestamps between 2015-09-18T21:30:00Z and 180 weeks from now().
+
+
+
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
+
+
+SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
+
Note that the WHERE clause must provide an alternative upper bound to
+override the default now() upper bound. The following query merely resets
+the lower bound to now() such that the query’s time range is between
+now() and now():
Can I perform mathematical operations against timestamps?
+
Currently, it is not possible to execute mathematical operators against timestamp values in InfluxDB.
+Most time calculations must be carried out by the client receiving the query results.
+
There is limited support for using InfluxQL functions against timestamp values.
+The function ELAPSED()
+returns the difference between subsequent timestamps in a single field.
+
Can I identify write precision from returned timestamps?
+
InfluxDB stores all timestamps as nanosecond values, regardless of the write precision supplied.
+It is important to note that when returning query results, the database silently drops trailing zeros from timestamps which obscures the initial write precision.
+
In the example below, the tags precision_supplied and timestamp_supplied show the time precision and timestamp that the user provided at the write.
+Because InfluxDB silently drops trailing zeros on returned timestamps, the write precision is not recognizable in the returned timestamps.
When should I single quote and when should I double quote in queries?
+
Single quote string values (for example, tag values) but do not single quote identifiers (database names, retention policy names, user names, measurement names, tag keys, and field keys).
+
Double quote identifiers if they start with a digit, contain characters other than [A-z,0-9,_], or if they are an InfluxQL keyword.
+Double quotes are not required for identifiers if they don’t fall into one of
+those categories but we recommend double quoting them anyway.
+
Examples:
+
Yes: SELECT bikes_available FROM bikes WHERE station_id='9'
+
Yes: SELECT "bikes_available" FROM "bikes" WHERE "station_id"='9'
+
Yes: SELECT MIN("avgrq-sz") AS "min_avgrq-sz" FROM telegraf
+
Yes: SELECT * from "cr@zy" where "p^e"='2'
+
No: SELECT 'bikes_available' FROM 'bikes' WHERE 'station_id'="9"
+
No: SELECT * from cr@zy where p^e='2'
+
Single quote date time strings. InfluxDB returns an error (ERR: invalid operation: time and *influxql.VarRef are not compatible) if you double quote
+a date time string.
+
Examples:
+
Yes: SELECT "water_level" FROM "h2o_feet" WHERE time > '2015-08-18T23:00:01.232000000Z' AND time < '2015-09-19'
+
No: SELECT "water_level" FROM "h2o_feet" WHERE time > "2015-08-18T23:00:01.232000000Z" AND time < "2015-09-19"
Why am I missing data after creating a new DEFAULT retention policy?
+
When you create a new DEFAULT retention policy (RP) on a database, the data written to the old DEFAULT RP remain in the old RP.
+Queries that do not specify an RP automatically query the new DEFAULT RP so the old data may appear to be missing.
+To query the old data you must fully qualify the relevant data in the query.
+
Example:
+
All of the data in the measurement fleeting fall under the DEFAULT RP called one_hour:
Why is my query with a WHERE OR time clause returning empty results?
+
Currently, InfluxDB does not support using OR in the WHERE clause to specify multiple time ranges.
+InfluxDB returns an empty response if the query’s WHERE clause uses OR
+with time intervals.
fill(previous) doesn’t fill the result for a time bucket if the previous value is outside the query’s time range.
+
In the following example, InfluxDB doesn’t fill the 2016-07-12T16:50:20Z-2016-07-12T16:50:30Z time bucket with the results from the 2016-07-12T16:50:00Z-2016-07-12T16:50:10Z time bucket because the query’s time range does not include the earlier time bucket.
While this is the expected behavior of fill(previous), an open feature request on GitHub proposes that fill(previous) should fill results even when previous values fall outside the query’s time range.
+
Why are my INTO queries missing data?
+
By default, INTO queries convert any tags in the initial data to fields in
+the newly written data.
+This can cause InfluxDB to overwrite points that were previously differentiated by a tag.
+Include GROUP BY * in all INTO queries to preserve tags in the newly written data.
+
Note that this behavior does not apply to queries that use the TOP() or BOTTOM() functions.
+See the TOP() and BOTTOM() documentation for more information.
+
Example
+
Initial data
+
The french_bulldogs measurement includes the color tag and the name field.
An INTO query without a GROUP BY * clause turns the color tag into
+a field in the newly written data.
+In the initial data the nugget point and the rumple points are differentiated only by the color tag.
+Once color becomes a field, InfluxDB assumes that the nugget point and the
+rumple point are duplicate points and it overwrites the nugget point with
+the rumple point.
+
+
+
> SELECT * INTO "all_dogs" FROM "french_bulldogs"
+name: result
+------------
+time                  written
+1970-01-01T00:00:00Z  3
+
+> SELECT * FROM "all_dogs"
+name: all_dogs
+--------------
+time                  color  name
+2016-05-25T00:05:00Z  grey   rumple  <---- no more nugget 🐶
+2016-05-25T00:10:00Z  black  prince
+
INTO query with GROUP BY *
+
An INTO query with a GROUP BY * clause preserves color as a tag in the newly written data.
+In this case, the nugget point and the rumple point remain unique points and InfluxDB does not overwrite any data.
Currently, there is no way to perform cross-measurement math or grouping.
+All data must be under a single measurement to query it together.
+InfluxDB is not a relational database and mapping data across measurements is not currently a recommended schema.
+See GitHub Issue #3552 for a discussion of implementing JOIN in InfluxDB.
+
Does the order of the timestamps matter?
+
No.
+Our tests indicate that there is only a negligible difference between the times
+it takes InfluxDB to complete the following queries:
InfluxDB maintains an in-memory index of every series in the system. As the number of unique series grows, so does the RAM usage. High series cardinality can lead to the operating system killing the InfluxDB process with an out of memory (OOM) exception. See SHOW CARDINALITY to learn about the InfluxQL commands for series cardinality.
+
How can I remove series from the index?
+
To reduce series cardinality, series must be dropped from the index.
+DROP DATABASE,
+DROP MEASUREMENT, and
+DROP SERIES will all remove series from the index and reduce the overall series cardinality.
+
+
+
+Note: DROP commands are usually CPU-intensive, as they frequently trigger a TSM compaction. Issuing DROP queries at a high frequency may significantly impact write and other query throughput.
+
+
+
How do I write integer field values?
+
Add a trailing i to the end of the field value when writing an integer.
+If you do not provide the i, InfluxDB will treat the field value as a float.
+
Writes an integer: value=100i
+Writes a float: value=100
+
How does InfluxDB handle duplicate points?
+
A point is uniquely identified by the measurement name, tag set, and timestamp.
+If you submit a new point with the same measurement, tag set, and timestamp as an existing point, the field set becomes the union of the old field set and the new field set, where any ties go to the new field set.
+This is the intended behavior.
+
For example:
+
Old point: cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 1234567890000000
+
New point: cpu_load,hostname=server02,az=us_west val_1=5.24 1234567890000000
+
After you submit the new point, InfluxDB overwrites val_1 with the new field value and leaves the field val_2 alone:
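The field-set union behaves like a dictionary merge in which the new write wins ties; a minimal Python sketch of the rule:

```python
old_fields = {"val_1": 24.5, "val_2": 7}
new_fields = {"val_1": 5.24}

# The new field set wins ties; fields absent from the new write are kept.
merged = {**old_fields, **new_fields}
print(merged)  # {'val_1': 5.24, 'val_2': 7}
```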
What newline character does the InfluxDB API require?
+
The InfluxDB line protocol relies on line feed (\n, which is ASCII 0x0A) to indicate the end of a line and the beginning of a new line. Files or data that use a newline character other than \n will result in the following errors: bad timestamp, unable to parse.
+
Note that Windows uses carriage return and line feed (\r\n) as the newline character.
+
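If a client on Windows builds line protocol with \r\n line endings, normalizing them before the write avoids the parse errors above. A sketch (the measurement and field names are placeholders):

```python
payload = "cpu,host=server01 value=1\r\ncpu,host=server01 value=2\r\n"

# Line protocol requires a bare \n between points.
normalized = payload.replace("\r\n", "\n")

print("\r" in normalized)  # False
```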
What words and characters should I avoid when writing data to InfluxDB?
+
InfluxQL keywords
+
If you use an InfluxQL keyword as an identifier you will need to double quote that identifier in every query.
+This can lead to non-intuitive errors.
+Identifiers are continuous query names, database names, field keys, measurement names, retention policy names, subscription names, tag keys, and user names.
+
time
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
Write time as a field key
+
+
+
+> INSERT mymeas time=1
+ERR: {"error":"partial write: invalid field name: input field \"time\" on measurement \"mymeas\" is invalid dropped=1"}
+
time is not a valid field key in InfluxDB.
+The system does not write the point and returns a 400.
+
Write time as a tag key and attempt to query it
+
+
+
> INSERT mymeas,time=1 value=1
+ERR: {"error":"partial write: invalid tag key: input tag \"time\" on measurement \"mymeas\" is invalid dropped=1"}
+
time is not a valid tag key in InfluxDB.
+The system does not write the point and returns a 400.
+
Characters
+
To keep regular expressions and quoting simple, avoid using the following characters in identifiers:
+
\ backslash
+^ circumflex accent
+$ dollar sign
+' single quotation mark
+" double quotation mark
+= equal sign
+, comma
+
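When such characters cannot be avoided, line protocol requires escaping them. A sketch of the escaping rules (the helper names are my own, not part of any InfluxDB client library):

```python
def escape_tag(value: str) -> str:
    # Tag keys, tag values, and field keys escape commas, equals signs, and spaces.
    for ch in (",", "=", " "):
        value = value.replace(ch, "\\" + ch)
    return value

def escape_string_field(value: str) -> str:
    # String field values escape backslashes and double quotes.
    return value.replace("\\", "\\\\").replace('"', '\\"')

print(escape_tag("us west"))            # us\ west
print(escape_string_field('say "hi"'))  # say \"hi\"
```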
When should I single quote and when should I double quote when writing data?
+
+
+
Avoid single quoting and double quoting identifiers when writing data via the
+line protocol; see the examples below for how writing identifiers with quotes
+can complicate queries. Identifiers are database names, retention policy
+names, user names, measurement names, tag keys, and field keys.
+
Write with a double-quoted measurement: INSERT "bikes" bikes_available=3
+Applicable query: SELECT * FROM "\"bikes\""
+
Write with a single-quoted measurement: INSERT 'bikes' bikes_available=3
+Applicable query: SELECT * FROM "\'bikes\'"
+
Write with an unquoted measurement: INSERT bikes bikes_available=3
+Applicable query: SELECT * FROM "bikes"
+
+
+
Double quote field values that are strings.
+
Write: INSERT bikes happiness="level 2"
+Applicable query: SELECT * FROM "bikes" WHERE "happiness"='level 2'
+
+
+
Special characters should be escaped with a backslash and not placed in quotes–for example:
+
Write: INSERT wacky va\"ue=4
+Applicable query: SELECT "va\"ue" FROM "wacky"
The tradeoff is that identical points with duplicate timestamps, more likely to occur as precision gets coarser, may overwrite other points.
+
What are the configuration recommendations and schema guidelines for writing sparse, historical data?
+
For users who want to write sparse, historical data to InfluxDB, InfluxData recommends:
+
First, lengthening your retention policy‘s shard group duration to cover several years.
+The default shard group duration is one week and if your data cover several hundred years – well, that’s a lot of shards!
+Having an extremely high number of shards is inefficient for InfluxDB.
+Increase the shard group duration for your data’s retention policy with the ALTER RETENTION POLICY query.
+
Second, temporarily lowering the cache-snapshot-write-cold-duration configuration setting.
+If you’re writing a lot of historical data, the default setting (10m) can cause the system to hold all of your data in cache for every shard.
+Temporarily lowering the cache-snapshot-write-cold-duration setting to 10s while you write the historical data makes the process more efficient.
+
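The scale of the shard-count problem is easy to see with back-of-the-envelope arithmetic (the 200-year span and 52-week duration below are illustrative):

```python
years_of_data = 200
weeks = years_of_data * 52

default_duration_weeks = 1   # default shard group duration: one week
longer_duration_weeks = 52   # e.g. after raising the shard group duration to 52w

print(weeks // default_duration_weeks)  # 10400 shards
print(weeks // longer_duration_weeks)   # 200 shards
```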
Where can I find InfluxDB Enterprise logs?
+
On systemd operating systems, service logs can be accessed using the journalctl command.
The journalctl output can be redirected to print the logs to a text file. With systemd, log retention depends on the system’s journald settings.
+
Why am I seeing a 503 Service Unavailable error in my meta node logs?
+
This is the expected behavior if you haven’t joined the meta node to the
+cluster.
+The 503 errors should stop showing up in the logs once you
+join the meta node to the cluster.
+
Why am I seeing a 409 error in some of my data node logs?
+
When you create a
+Continuous Query (CQ)
+on your cluster, every data node will ask for the CQ lease.
+Only one data node can accept the lease.
+That data node will have a 200 in its logs.
+All other data nodes will be denied the lease and have a 409 in their logs.
+This is the expected behavior.
+
Log output for a data node that is denied the lease:
+
+
+
[meta-http] 2016/09/19 09:08:53 172.31.4.132 - - [19/Sep/2016:09:08:53 +0000] GET /lease?name=continuous_querier&node_id=5 HTTP/1.2 409 105 - InfluxDB Meta Client b00e4943-7e48-11e6-86a6-000000000000 380.542µs
+
Log output for the data node that accepts the lease:
+
+
+
[meta-http] 2016/09/19 09:08:54 172.31.12.27 - - [19/Sep/2016:09:08:54 +0000] GET /lease?name=continuous_querier&node_id=0 HTTP/1.2 200 105 - InfluxDB Meta Client b05a3861-7e48-11e6-86a7-000000000000 8.87547ms
+
Why am I seeing hinted handoff queue not empty errors in my data node logs?
+
+
+
[write] 2016/10/18 10:35:21 write failed for shard 2382 on node 4: hinted handoff queue not empty
+
This error is informational only and does not necessarily indicate a problem in the cluster. It indicates that the node handling the write request currently has data in its local hinted handoff queue for the destination node. Coordinating nodes will not attempt direct writes to other nodes until the hinted handoff queue for the destination node has fully drained. New data is instead appended to the hinted handoff queue. This helps data arrive in chronological order for consistency of graphs and alerts and also prevents unnecessary failed connection attempts between the data nodes. Until the hinted handoff queue is empty, this message will continue to display in the logs. Monitor the size of the hinted handoff queues with ls -lRh /var/lib/influxdb/hh to ensure that they are decreasing in size.
+
Note that for some write consistency settings, InfluxDB may return a write error (500) for the write attempt, even if the points are successfully queued in hinted handoff. Some write clients may attempt to resend those points, leading to duplicate points being added to the hinted handoff queue and lengthening the time it takes for the queue to drain. If the queues are not draining, consider temporarily downgrading the write consistency setting, or pause retries on the write clients until the hinted handoff queues fully drain.
+
Why am I seeing error writing count stats ...: partial write errors in my data node logs?
The _internal database collects per-node and also cluster-wide information about the InfluxDB Enterprise cluster. The cluster metrics are replicated to other nodes using consistency=all. For a write consistency of all, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the _internal writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist.
+
Why am I seeing queue is full errors in my data node logs?
+
This error indicates that the coordinating node that received the write cannot add the incoming write to the hinted handoff queue for the destination node because it would exceed the maximum size of the queue. This error typically indicates a catastrophic condition for the cluster - one data node may have been offline or unable to accept writes for an extended duration.
+
The controlling configuration settings are in the [hinted-handoff] section of the file. max-size is the total size in bytes per hinted handoff queue. When max-size is exceeded, all new writes for that node are rejected until the queue drops below max-size. max-age is the maximum length of time a point will persist in the queue. Once this limit has been reached, points expire from the queue. The age is calculated from the write time of the point, not the timestamp of the point.
+
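For reference, these settings live under the [hinted-handoff] section of the data node configuration file. The values below are illustrative, not recommendations; check your own configuration for the actual defaults:

```toml
[hinted-handoff]
  # Total bytes allowed per destination node's queue; writes are rejected above this.
  max-size = 10737418240
  # Points older than this (measured from write time) expire from the queue.
  max-age = "168h"
```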
Why am I seeing unable to determine if "hostname" is a meta node when I try to add a meta node with influxd-ctl join?
+
Meta nodes use the /status endpoint to determine the current state of another meta node. A healthy meta node that is ready to join the cluster will respond with a 200 HTTP response code and a JSON string with the following format (assuming the default ports):
If you are getting an error message while attempting to influxd-ctl join a new meta node, it means that the JSON string returned from the /status endpoint is incorrect. This generally indicates that the meta node configuration file is incomplete or incorrect. Inspect the HTTP response with curl -v "http://<hostname>:8091/status" and make sure that the hostname, the bind-address, and the http-bind-address are correctly populated. Also check the license-key or license-path in the configuration file of the meta nodes. Finally, make sure that you specify the http-bind-address port in the join command, e.g. influxd-ctl join hostname:8091.
+
Why is InfluxDB reporting an out of memory (OOM) exception when my system has free memory?
+
mmap is a Unix system call that maps files into memory.
+As the number of shards in an InfluxDB Enterprise cluster increases, the number of memory maps increase.
+If the number of maps exceeds the configured maximum limit, the node reports that it is out of memory.
+
To check the current number of maps the influxd process is using:
+
+
+
wc -l /proc/$(pidof influxd)/maps
+
The max_map_count file contains the maximum number of memory map areas a process may have.
+The default limit is 65536.
+We recommend increasing this to 262144 (four times the default) by running the following:
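On Linux, this limit is typically adjusted through the vm.max_map_count kernel parameter. A common approach (verify the procedure against your distribution's documentation) is:

```shell
# Raise the limit for the running system (requires root)
sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo "vm.max_map_count=262144" | tee -a /etc/sysctl.conf
```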
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
Timestamp: the timestamp for the data point, as a Unix nanosecond timestamp.
+InfluxDB accepts one timestamp per point; specify alternative precisions with the InfluxDB API.
+The timestamp is optional: if it is not included with the point, InfluxDB uses the server’s local nanosecond timestamp in UTC.
+
+
+
+
+
+
Performance tips:
+
+
Before sending data to InfluxDB, sort by tag key to match the results from the
+Go bytes.Compare function.
+
To significantly improve compression, use the coarsest precision possible for timestamps.
+
Use the Network Time Protocol (NTP) to synchronize time between hosts. InfluxDB uses a host’s local time in UTC to assign timestamps to data. If a host’s clock isn’t synchronized with NTP, the data that the host writes to InfluxDB may have inaccurate timestamps.
+
+
+
+
+
Data types
+
+
+Float (field values): Default numerical type. IEEE-754 64-bit floating-point numbers (except NaN or +/- Inf). Examples: 1, 1.0, 1.e+78, 1.E+78.
+
+Integer (field values): Signed 64-bit integers (-9223372036854775808 to 9223372036854775807). Specify an integer with a trailing i on the number. Example: 1i.
+
+String (measurements, tag keys, tag values, field keys, field values): Length limit 64KB.
+
+Boolean (field values): Stores TRUE or FALSE values. TRUE write syntax: [t, T, true, True, TRUE]. FALSE write syntax: [f, F, false, False, FALSE].
+
+Timestamp (timestamps): A Unix time in nanoseconds since January 1, 1970 UTC. Specify alternative precisions with the InfluxDB API. The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z. The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
+
+
Boolean syntax for writes and queries
+
Acceptable Boolean syntax differs for data writes and data queries.
+For more information, see
+Frequently asked questions.
+
Field type discrepancies
+
In a measurement, a field’s type cannot differ in a shard, but can differ across
+shards.
Write the field value -1.234456e+78 as a float to InfluxDB
+
+
+
INSERT mymeas value=-1.234456e+78
+
InfluxDB supports field values specified in scientific notation.
+
Write a field value 1.0 as a float to InfluxDB
+
+
+
INSERT mymeas value=1.0
+
Write the field value 1 as a float to InfluxDB
+
+
+
INSERT mymeas value=1
+
Write the field value 1 as an integer to InfluxDB
+
+
+
INSERT mymeas value=1i
+
Write the field value stringing along as a string to InfluxDB
+
+
+
INSERT mymeas value="stringing along"
+
Always double quote string field values. More on quoting below.
+
Write the field value true as a Boolean to InfluxDB
+
+
+
INSERT mymeas value=true
+
Do not quote Boolean field values.
+The following statement writes true as a string field value to InfluxDB:
+
+
+
INSERTmymeasvalue="true"
+
Attempt to write a string to a field that previously accepted floats
+
If the timestamps on the float and string are stored in the same shard:
+
+
+
>INSERTmymeasvalue=31465934559000000000
+>INSERTmymeasvalue="stringing along"1465934559000000001
+ERR:{"error":"field type conflict: input field \"value\" on measurement \"mymeas\" is type string, already exists as type float"}
+
If the timestamps on the float and string are not stored in the same shard:
Quoting, special characters, and additional naming guidelines

Quoting

| Element | Double quotes | Single quotes |
| ------- | ------------- | ------------- |
| Timestamp | Never | Never |
| Measurements, tag keys, tag values, field keys | Never* | Never* |
| Field values | Double quote string field values. Do not double quote floats, integers, or Booleans. | Never |

* InfluxDB line protocol allows users to double and single quote measurement names, tag keys, tag values, and field keys.
It will, however, assume that the double or single quotes are part of the name, key, or value.
This can complicate query syntax (see the example below).
+
Examples
+
Invalid line protocol - Double quote the timestamp
+
+
+
> INSERT mymeas value=9 "1466625759000000000"
ERR: {"error":"unable to parse 'mymeas value=9 \"1466625759000000000\"': bad timestamp"}

Double quoting (or single quoting) the timestamp yields a bad timestamp error.
+
Semantic error - Double quote a Boolean field value
If you double quote a measurement in line protocol, any queries on that
+measurement require both double quotes and escaped (\) double quotes in the
+FROM clause.
+
Special characters
+
You must use a backslash character \ to escape the following special characters:

In string field values, you must escape:

- double quotes: \" escapes a double quote.
- the backslash character: if you use multiple backslashes, they must be escaped.
  InfluxDB interprets backslashes as follows:
  - \ or \\ interpreted as \
  - \\\ or \\\\ interpreted as \\
  - \\\\\ or \\\\\\ interpreted as \\\, and so on

In tag keys, tag values, and field keys, you must escape:

- commas
- equal signs
- spaces

For example, \, escapes a comma.

In measurements, you must escape:

- commas
- spaces

You do not need to escape other special characters.
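These escaping rules can be sketched as small helper functions. This is an illustrative sketch only; the helper names are hypothetical and not part of any InfluxDB client library:

```python
# Illustrative sketch of the line protocol escaping rules described above.
# These helpers are hypothetical; they are not part of any InfluxDB client.

def escape_tag(s: str) -> str:
    """Escape commas, equal signs, and spaces in tag keys, tag values, and field keys."""
    return s.replace(",", "\\,").replace("=", "\\=").replace(" ", "\\ ")

def escape_measurement(s: str) -> str:
    """Escape commas and spaces in measurement names."""
    return s.replace(",", "\\,").replace(" ", "\\ ")

def escape_string_field_value(s: str) -> str:
    """Escape double quotes in string field values."""
    return s.replace('"', '\\"')

# Example: assemble one line of line protocol from the escaped parts.
line = (
    escape_measurement("my measurement")
    + "," + escape_tag("tag key") + "=" + escape_tag("tag,value")
    + ' value="' + escape_string_field_value('say "hi"') + '"'
)
print(line)  # my\ measurement,tag\ key=tag\,value value="say \"hi\""
```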
+
Examples
+
Write a point with special characters
+
+
+
INSERT "measurement\ with\ quo⚡️es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
+
The system writes a point where the measurement is "measurement with quo⚡️es and emoji", the tag key is tag key with sp🚀ces, the
+tag value is tag,value,with"commas", the field key is field_k\ey and the field value is string field value, only " need be esc🍭ped.
+
Additional naming guidelines
+
# at the beginning of the line is a valid comment character for line protocol.
+InfluxDB will ignore all subsequent characters until the next newline \n.
+
Measurement names, tag keys, tag values, field keys, and field values are
+case sensitive.
+
InfluxDB line protocol accepts
+InfluxQL keywords
+as identifier names.
In general, we recommend avoiding InfluxQL keywords in your schema, as they can cause confusion when querying the data.
+
+
+
Note: Avoid using the reserved keys _field and _measurement. If these keys are included as a tag or field key, the associated point is discarded.
+
+
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
InfluxDB line protocol in practice
+
To learn how to write line protocol to the database, see Tools.
+
Duplicate points
+
A point is uniquely identified by the measurement name, tag set, and timestamp.
If you write a point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field sets, and any conflicts favor the new field set.

If you have a tag key and a field key with the same name in a measurement, one of the keys is returned with _1 appended in query results (and as a column header in Chronograf), for example, location and location_1. To query a duplicate key, drop the _1 and use the InfluxQL ::tag or ::field syntax in your query.
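Such a query might look like the following sketch, assuming a measurement mymeas with both a location tag and a location field (the names are illustrative):

```sql
SELECT "location"::tag, "location"::field FROM "mymeas"
```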
Thank you for being part of our community!
We welcome and encourage your feedback and bug reports for this documentation.
To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
your Docker deployments. For example, if using Docker to run InfluxDB v2,
replace the latest version tag with a specific version tag in your Docker
pull command:

docker pull influxdb:2
The InfluxDB line protocol is a text-based format for writing points to the
+database.
+Points must be in line protocol format for InfluxDB to successfully parse and
+write points (unless you’re using a service plugin).
+
Using fictional temperature data, this page introduces InfluxDB line protocol.
It covers syntax, data types, quoting, and special characters.
The final section, Writing data to InfluxDB,
+describes how to get data into InfluxDB and how InfluxDB handles Line
+Protocol duplicates.
+
Syntax
+
A single line of text in line protocol format represents one data point in InfluxDB.
+It informs InfluxDB of the point’s measurement, tag set, field set, and
+timestamp.
For best performance you should sort tags by key before sending them to the
+database.
+The sort should match the results from the
+Go bytes.Compare function.
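For example, the recommended tag ordering can be produced with a lexicographic byte-order sort. This is an illustrative sketch; the season tag is invented for demonstration:

```python
# Sort tags by key in lexicographic byte order, matching Go's bytes.Compare
# for UTF-8 encoded keys. The season tag is illustrative sample data.
tags = {"season": "summer", "location": "us-midwest"}
sorted_tags = sorted(tags.items(), key=lambda kv: kv[0].encode("utf-8"))
line = "weather," + ",".join("%s=%s" % kv for kv in sorted_tags) + " temperature=82"
print(line)  # weather,location=us-midwest,season=summer temperature=82
```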
+
First whitespace
+
Separate the measurement and the field set or, if you’re including a tag set
+with your data point, separate the tag set and the field set with a whitespace.
+The whitespace is required in line protocol.
+
Valid line protocol with no tag set:
+
+
+
weather temperature=82 1465839830100400200
+
Field set
+
The field(s) for your data point.
+Every data point requires at least one field in line protocol.
+
Separate field key-value pairs with an equals sign = and no spaces:
+
+
+
<field_key>=<field_value>
+
Separate multiple field-value pairs with a comma and no spaces:
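For example:

```
<field_key>=<field_value>,<field_key>=<field_value>
```

A concrete point with two fields (the humidity value here is illustrative, not part of this page's temperature data):

```
weather,location=us-midwest temperature=82,humidity=71 1465839830100400200
```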
Separate the field set and the optional timestamp with a whitespace.
+The whitespace is required in line protocol if you’re including a timestamp.
+
Timestamp
+
The timestamp for your data
+point in nanosecond-precision Unix time.
+The timestamp is optional in line protocol.
+If you do not specify a timestamp for your data point InfluxDB uses the server’s
+local nanosecond timestamp in UTC.
+
In the example, the timestamp is 1465839830100400200 (that’s
+2016-06-13T17:43:50.1004002Z in RFC3339 format).
+The line protocol below is the same data point but without the timestamp.
+When InfluxDB writes it to the database it uses your server’s
+local timestamp instead of 2016-06-13T17:43:50.1004002Z.
+
+
+
weather,location=us-midwest temperature=82
+
Use the InfluxDB API to specify timestamps with a precision other than nanoseconds,
+such as microseconds, milliseconds, or seconds.
+We recommend using the coarsest precision possible as this can result in
+significant improvements in compression.
+See the API Reference for more information.
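As an illustration, the correspondence between the nanosecond timestamp and its RFC3339 form can be checked with a short standard-library script. This is a sketch; ns_to_rfc3339 is a hypothetical helper, and it prints all nine fractional digits rather than trimming trailing zeros:

```python
from datetime import datetime, timedelta, timezone

def ns_to_rfc3339(ns: int) -> str:
    """Convert a nanosecond-precision Unix timestamp to an RFC3339 string."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    seconds, nanos = divmod(ns, 10**9)
    t = epoch + timedelta(seconds=seconds)
    return t.strftime("%Y-%m-%dT%H:%M:%S") + ".%09dZ" % nanos

print(ns_to_rfc3339(1465839830100400200))  # 2016-06-13T17:43:50.100400200Z
```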
+
+
+
Use NTP to synchronize time between hosts
+
Use the Network Time Protocol (NTP) to synchronize time between hosts.
+InfluxDB uses a host’s local time in UTC to assign timestamps to data; if
+hosts’ clocks aren’t synchronized with NTP, the timestamps on the data written
+to InfluxDB can be inaccurate.
Measurements, tag keys, tag values, and field keys are always strings.
+
+
+
Because InfluxDB stores tag values as strings, InfluxDB cannot perform math on
+tag values.
+In addition, InfluxQL functions
+do not accept a tag value as a primary argument.
+It’s a good idea to take into account that information when designing your
+schema.
+
+
+
+
Timestamps are Unix timestamps.
+The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
+As mentioned above, by default, InfluxDB assumes that timestamps have
+nanosecond precision.
+See the API Reference for how to specify
+alternative precisions.
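These bounds are the signed 64-bit nanosecond range narrowed by two at each end, which a short script can confirm. This is an illustrative sketch; timedelta arithmetic carries only microsecond precision, so the printed values drop the trailing nanoseconds:

```python
from datetime import datetime, timedelta, timezone

# InfluxDB timestamps are signed 64-bit nanosecond counts; the two extreme
# values are reserved, so the valid range is +/-(2**63 - 2) nanoseconds.
MAX_NS = 2**63 - 2        # 9223372036854775806
MIN_NS = -(2**63 - 2)     # -9223372036854775806

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# timedelta avoids platform limits of fromtimestamp(); note it only carries
# microsecond precision, so the trailing nanoseconds are truncated here.
max_time = epoch + timedelta(microseconds=MAX_NS // 1000)
min_time = epoch + timedelta(microseconds=MIN_NS // 1000)
print(max_time.isoformat())  # 2262-04-11T23:47:16.854775+00:00
print(min_time.isoformat())  # 1677-09-21T00:12:43.145224+00:00
```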
+
Field values can be floats, integers, strings, or Booleans:
+
+
+
Floats: by default, InfluxDB assumes all numeric field values are floats.
Acceptable Boolean syntax differs for data writes and data
+queries. See Frequently Asked Questions
+for more information.
+
+
+
+
+
+
Within a measurement, a field’s type cannot differ within a
+shard, but it can differ across
+shards. For example, writing an integer to a field that previously accepted
+floats fails if InfluxDB attempts to store the integer in the same shard as the
+floats:
+
+
+
> INSERT weather,location=us-midwest temperature=82 1465839830100400200
> INSERT weather,location=us-midwest temperature=81i 1465839830100400300
ERR: {"error":"field type conflict: input field \"temperature\" on measurement \"weather\" is type int64, already exists as type float"}
+
But, writing an integer to a field that previously accepted floats succeeds if
+InfluxDB stores the integer in a new shard:
This section covers when to use double (") or single (') quotes in line protocol and when to avoid them, moving from never quote to please do quote:
+
+
+
Never double or single quote the timestamp–for example:
+
+
+
> INSERT weather,location=us-midwest temperature=82 "1465839830100400200"
ERR: {"error":"unable to parse 'weather,location=us-midwest temperature=82 \"1465839830100400200\"': bad timestamp"}
+
+
+
Never single quote field values, even if they’re strings–for example:
+
+
+
> INSERT weather,location=us-midwest temperature='too warm'
ERR: {"error":"unable to parse 'weather,location=us-midwest temperature='too warm'': invalid boolean"}
+
+
+
Do not double or single quote measurement names, tag keys, tag values, and field
+keys unless the quotes are part of the name–for example:
Line protocol accepts
+InfluxQL keywords
+as identifier names.
In general, we recommend avoiding InfluxQL keywords in your schema, as they can cause confusion when querying the data.
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
Writing data to InfluxDB
+
Getting data in the database
+
Now that you know all about the InfluxDB line protocol, how do you actually get the
+line protocol to InfluxDB?
+Here, we’ll give two quick examples and then point you to the
+Tools sections for further
+information.
+
InfluxDB API
+
Write data to InfluxDB using the InfluxDB API.
+Send a POST request to the /write endpoint and provide your line protocol in
+the request body:
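As a sketch of what such a request looks like, the following builds (but does not send) a POST to the /write endpoint using only the Python standard library; the mydb database name and localhost address are illustrative assumptions:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_write_request(host: str, db: str, line: str, precision: str = "ns") -> Request:
    """Build a POST to the InfluxDB 1.x /write endpoint with line protocol in the body."""
    query = urlencode({"db": db, "precision": precision})
    url = "http://%s/write?%s" % (host, query)
    return Request(url, data=line.encode("utf-8"), method="POST")

req = build_write_request(
    "localhost:8086", "mydb",
    "weather,location=us-midwest temperature=82 1465839830100400200",
)
print(req.full_url)  # http://localhost:8086/write?db=mydb&precision=ns
# Sending it requires a running InfluxDB server:
# urllib.request.urlopen(req)
```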
For in-depth descriptions of query string parameters, status codes, responses,
+and more examples, see the API Reference.
+
CLI
+
Write data to InfluxDB using the InfluxDB command line interface (CLI).
+Launch the CLI, use the relevant
+database, and put INSERT in
+front of your line protocol:
You can also use the CLI to
+import Line
+Protocol from a file.
+
There are several ways to write data to InfluxDB.
+See the Tools section for more
+on the InfluxDB API, the
+CLI, and the available Service Plugins (
+UDP,
+Graphite,
+CollectD, and
+OpenTSDB).
+
Duplicate points
+
A point is uniquely identified by the measurement name, tag set, and timestamp.
+If you submit line protocol with the same measurement, tag set, and timestamp,
+but with a different field set, the field set becomes the union of the old
+field set and the new field set, where any conflicts favor the new field set.
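This merge can be modeled as a dictionary union in which conflicts favor the newer field set. The following is an illustrative sketch of the behavior, not InfluxDB's implementation, and the field names are invented:

```python
def merge_field_sets(old: dict, new: dict) -> dict:
    """Union of the old and new field sets; conflicts favor the new field set."""
    merged = dict(old)
    merged.update(new)
    return merged

# First write, then a second write to the same series and timestamp:
old = {"temperature": 82.0}
new = {"temperature": 83.0, "humidity": 71.0}
print(merge_field_sets(old, new))  # {'temperature': 83.0, 'humidity': 71.0}
```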
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-example.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-example.png
new file mode 100644
index 000000000..aeb2598e2
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-example.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-selector.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-selector.png
new file mode 100644
index 000000000..bafe4eeb7
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-stacked-graph-selector.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-controls.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-controls.png
new file mode 100644
index 000000000..9f5d4eace
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-controls.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-example.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-example.png
new file mode 100644
index 000000000..1172a1fb4
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-example.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-selector.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-selector.png
new file mode 100644
index 000000000..51bfdaa81
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-step-plot-graph-selector.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-table-controls.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-controls.png
new file mode 100644
index 000000000..9f8ad217b
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-controls.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-table-example.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-example.png
new file mode 100644
index 000000000..687e1d8a2
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-example.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-table-selector.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-selector.png
new file mode 100644
index 000000000..90eeaaeb2
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-table-selector.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-6-viz-types-selector.png b/pr-preview/pr-6948/img/chronograf/1-6-viz-types-selector.png
new file mode 100644
index 000000000..72f463d96
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-6-viz-types-selector.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-data-explorer-icon.png b/pr-preview/pr-6948/img/chronograf/1-7-data-explorer-icon.png
new file mode 100644
index 000000000..b6583e04c
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-data-explorer-icon.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-kapacitor-connection-config.png b/pr-preview/pr-6948/img/chronograf/1-7-kapacitor-connection-config.png
new file mode 100644
index 000000000..719fc9b9c
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-kapacitor-connection-config.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-dashboard.gif b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-dashboard.gif
new file mode 100644
index 000000000..3eb0dad9d
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-dashboard.gif differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-overview.png b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-overview.png
new file mode 100644
index 000000000..3a38ee0eb
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-overview.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-search-filter.gif b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-search-filter.gif
new file mode 100644
index 000000000..0f9ca05f2
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-search-filter.gif differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-specific-time.gif b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-specific-time.gif
new file mode 100644
index 000000000..2250b68b6
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-log-viewer-specific-time.gif differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-protoboard-kubernetes.png b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-kubernetes.png
new file mode 100644
index 000000000..6bd6fe5c9
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-kubernetes.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-protoboard-mysql.png b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-mysql.png
new file mode 100644
index 000000000..43d465cb2
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-mysql.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-protoboard-select.png b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-select.png
new file mode 100644
index 000000000..d5b16ad28
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-select.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-protoboard-system.png b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-system.png
new file mode 100644
index 000000000..111501e82
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-system.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-protoboard-vsphere.png b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-vsphere.png
new file mode 100644
index 000000000..75ed7eeab
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-protoboard-vsphere.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-viz-note-controls.png b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-controls.png
new file mode 100644
index 000000000..1d9ab8145
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-controls.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-viz-note-example.png b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-example.png
new file mode 100644
index 000000000..f6645b0dd
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-example.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-7-viz-note-selector.png b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-selector.png
new file mode 100644
index 000000000..6e16317c9
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-7-viz-note-selector.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-8-ha-architecture.svg b/pr-preview/pr-6948/img/chronograf/1-8-ha-architecture.svg
new file mode 100644
index 000000000..01d334d09
--- /dev/null
+++ b/pr-preview/pr-6948/img/chronograf/1-8-ha-architecture.svg
@@ -0,0 +1,153 @@
+
diff --git a/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v1-connection-config.png b/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v1-connection-config.png
new file mode 100644
index 000000000..e55ee57e1
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v1-connection-config.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v2-connection-config.png b/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v2-connection-config.png
new file mode 100644
index 000000000..322452901
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-8-influxdb-v2-connection-config.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-9-dashboard-cell-add-data.png b/pr-preview/pr-6948/img/chronograf/1-9-dashboard-cell-add-data.png
new file mode 100644
index 000000000..58b4b53b8
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-9-dashboard-cell-add-data.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-9-template-var-title.png b/pr-preview/pr-6948/img/chronograf/1-9-template-var-title.png
new file mode 100644
index 000000000..2bf4f7e04
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-9-template-var-title.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-9-write-data.png b/pr-preview/pr-6948/img/chronograf/1-9-write-data.png
new file mode 100644
index 000000000..7d328a3f8
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-9-write-data.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-9-write-db-rp.png b/pr-preview/pr-6948/img/chronograf/1-9-write-db-rp.png
new file mode 100644
index 000000000..e064a2522
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-9-write-db-rp.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/1-9-write-precision.png b/pr-preview/pr-6948/img/chronograf/1-9-write-precision.png
new file mode 100644
index 000000000..39ead9de8
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/1-9-write-precision.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-no-mgmt.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-no-mgmt.png
new file mode 100644
index 000000000..cc5b86294
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-no-mgmt.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-with-mgmt.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-with-mgmt.png
new file mode 100644
index 000000000..86c72735a
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-dedicated-with-mgmt.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-serverless-connection.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-serverless-connection.png
new file mode 100644
index 000000000..124353952
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/cloud-serverless-connection.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/clustered-connection.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/clustered-connection.png
new file mode 100644
index 000000000..cb19ce9df
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/clustered-connection.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/core-connection.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/core-connection.png
new file mode 100644
index 000000000..285006e20
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/core-connection.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/enterprise-connection.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/enterprise-connection.png
new file mode 100644
index 000000000..8c033a5b5
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/enterprise-connection.png differ
diff --git a/pr-preview/pr-6948/img/chronograf/v1-influxdb3/server-type-dropdown.png b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/server-type-dropdown.png
new file mode 100644
index 000000000..8efeb837a
Binary files /dev/null and b/pr-preview/pr-6948/img/chronograf/v1-influxdb3/server-type-dropdown.png differ
diff --git a/pr-preview/pr-6948/img/cloudformation1.png b/pr-preview/pr-6948/img/cloudformation1.png
new file mode 100644
index 000000000..96c002c2e
Binary files /dev/null and b/pr-preview/pr-6948/img/cloudformation1.png differ
diff --git a/pr-preview/pr-6948/img/cloudformation2.png b/pr-preview/pr-6948/img/cloudformation2.png
new file mode 100644
index 000000000..fd0725987
Binary files /dev/null and b/pr-preview/pr-6948/img/cloudformation2.png differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-6-flapping-dashboard.gif b/pr-preview/pr-6948/img/enterprise/1-6-flapping-dashboard.gif
new file mode 100644
index 000000000..745f94f2b
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-6-flapping-dashboard.gif differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-1.png b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-1.png
new file mode 100644
index 000000000..494abbeaf
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-1.png differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-2.png b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-2.png
new file mode 100644
index 000000000..377c6cfa2
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-2.png differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-3.png b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-3.png
new file mode 100644
index 000000000..f40c70fce
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-3.png differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-4.png b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-4.png
new file mode 100644
index 000000000..22c291ea7
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-6-gcp-intro-4.png differ
diff --git a/pr-preview/pr-6948/img/enterprise/1-8-network-diagram.png b/pr-preview/pr-6948/img/enterprise/1-8-network-diagram.png
new file mode 100644
index 000000000..86ba784d8
Binary files /dev/null and b/pr-preview/pr-6948/img/enterprise/1-8-network-diagram.png differ
diff --git a/pr-preview/pr-6948/img/favicon.png b/pr-preview/pr-6948/img/favicon.png
new file mode 100644
index 000000000..07b99465e
Binary files /dev/null and b/pr-preview/pr-6948/img/favicon.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-aggregate-rate-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-aggregate-rate-output.png
new file mode 100644
index 000000000..4996b7543
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-aggregate-rate-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-derivative-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-derivative-output.png
new file mode 100644
index 000000000..e3d50d0aa
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-derivative-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-difference-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-difference-output.png
new file mode 100644
index 000000000..8fd6b146e
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-difference-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-input.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-input.png
new file mode 100644
index 000000000..a6626685b
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-input.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-output.png
new file mode 100644
index 000000000..3b3b5b5b1
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-increase-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-normalized-input.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-normalized-input.png
new file mode 100644
index 000000000..2c1f278dd
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-counter-normalized-input.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-aggregate-rate-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-aggregate-rate-output.png
new file mode 100644
index 000000000..b4f2028d6
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-aggregate-rate-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-derivative-output.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-derivative-output.png
new file mode 100644
index 000000000..56147d947
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-derivative-output.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-input.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-input.png
new file mode 100644
index 000000000..81681267f
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-gauge-input.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-multiple-quantiles.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-multiple-quantiles.png
new file mode 100644
index 000000000..116aa4423
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-multiple-quantiles.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-quantile.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-quantile.png
new file mode 100644
index 000000000..6fe45e824
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-histogram-quantile.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-prometheus-summary-quantiles.png b/pr-preview/pr-6948/img/flux/0-x-prometheus-summary-quantiles.png
new file mode 100644
index 000000000..833c73bc4
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/0-x-prometheus-summary-quantiles.png differ
diff --git a/pr-preview/pr-6948/img/flux/0-x-water-process-dark.svg b/pr-preview/pr-6948/img/flux/0-x-water-process-dark.svg
new file mode 100644
index 000000000..86945c1e0
--- /dev/null
+++ b/pr-preview/pr-6948/img/flux/0-x-water-process-dark.svg
@@ -0,0 +1,2898 @@
+
+
+
diff --git a/pr-preview/pr-6948/img/flux/0-x-water-process-light.svg b/pr-preview/pr-6948/img/flux/0-x-water-process-light.svg
new file mode 100644
index 000000000..ecd10405d
--- /dev/null
+++ b/pr-preview/pr-6948/img/flux/0-x-water-process-light.svg
@@ -0,0 +1,2548 @@
+
+
+
diff --git a/pr-preview/pr-6948/img/flux/grouping-by-cpu-time.png b/pr-preview/pr-6948/img/flux/grouping-by-cpu-time.png
new file mode 100644
index 000000000..6c4390a9f
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/grouping-by-cpu-time.png differ
diff --git a/pr-preview/pr-6948/img/flux/grouping-by-time.png b/pr-preview/pr-6948/img/flux/grouping-by-time.png
new file mode 100644
index 000000000..dd0f5812e
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/grouping-by-time.png differ
diff --git a/pr-preview/pr-6948/img/flux/grouping-data-set.png b/pr-preview/pr-6948/img/flux/grouping-data-set.png
new file mode 100644
index 000000000..9af7c6914
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/grouping-data-set.png differ
diff --git a/pr-preview/pr-6948/img/flux/simple-unwindowed-data.png b/pr-preview/pr-6948/img/flux/simple-unwindowed-data.png
new file mode 100644
index 000000000..6b84ef467
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/simple-unwindowed-data.png differ
diff --git a/pr-preview/pr-6948/img/flux/simple-windowed-aggregate-data.png b/pr-preview/pr-6948/img/flux/simple-windowed-aggregate-data.png
new file mode 100644
index 000000000..4a16bfd04
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/simple-windowed-aggregate-data.png differ
diff --git a/pr-preview/pr-6948/img/flux/simple-windowed-data.png b/pr-preview/pr-6948/img/flux/simple-windowed-data.png
new file mode 100644
index 000000000..0c3df7288
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/simple-windowed-data.png differ
diff --git a/pr-preview/pr-6948/img/flux/windowed-aggregates-ungrouped.png b/pr-preview/pr-6948/img/flux/windowed-aggregates-ungrouped.png
new file mode 100644
index 000000000..510ec5006
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/windowed-aggregates-ungrouped.png differ
diff --git a/pr-preview/pr-6948/img/flux/windowed-aggregates.png b/pr-preview/pr-6948/img/flux/windowed-aggregates.png
new file mode 100644
index 000000000..9c51ee719
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/windowed-aggregates.png differ
diff --git a/pr-preview/pr-6948/img/flux/windowed-data.png b/pr-preview/pr-6948/img/flux/windowed-data.png
new file mode 100644
index 000000000..8589db679
Binary files /dev/null and b/pr-preview/pr-6948/img/flux/windowed-data.png differ
diff --git a/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-flux.png b/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-flux.png
new file mode 100644
index 000000000..a0a4d1a7d
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-flux.png differ
diff --git a/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-influxql.png
new file mode 100644
index 000000000..53c86108b
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/enterprise-influxdb-v1-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/grafana-sql-insecure-connection.png b/pr-preview/pr-6948/img/grafana/grafana-sql-insecure-connection.png
new file mode 100644
index 000000000..ec39fc576
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/grafana-sql-insecure-connection.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-flux.png b/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-flux.png
new file mode 100644
index 000000000..f3ed30ef9
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-flux.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-influxql.png
new file mode 100644
index 000000000..228c8d8ae
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-cloud-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-flux.png b/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-flux.png
new file mode 100644
index 000000000..71da27b5d
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-flux.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-influxql.png
new file mode 100644
index 000000000..97743afcd
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-v1-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql-flux.png b/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql-flux.png
new file mode 100644
index 000000000..00af8fc4c
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql-flux.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql.png
new file mode 100644
index 000000000..9c66bde2d
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb-v2-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-influxql.png
new file mode 100644
index 000000000..53465f23f
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-sql.png b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-sql.png
new file mode 100644
index 000000000..12add0308
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-dedicated-grafana-sql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-influxql.png
new file mode 100644
index 000000000..93623ea0e
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-sql.png b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-sql.png
new file mode 100644
index 000000000..fd6d01496
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-cloud-serverless-grafana-sql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-influxql.png
new file mode 100644
index 000000000..c7ec0587c
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-sql.png b/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-sql.png
new file mode 100644
index 000000000..1742f749d
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-clustered-grafana-sql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-influxql.png
new file mode 100644
index 000000000..f99f70158
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-sql.png b/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-sql.png
new file mode 100644
index 000000000..34f4db136
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-core-grafana-sql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-influxql.png b/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-influxql.png
new file mode 100644
index 000000000..c4bdd46ed
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-influxql.png differ
diff --git a/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-sql.png b/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-sql.png
new file mode 100644
index 000000000..6c1b5b167
Binary files /dev/null and b/pr-preview/pr-6948/img/grafana/influxdb3-enterprise-grafana-sql.png differ
diff --git a/pr-preview/pr-6948/img/influx-logo-cubo-dark.png b/pr-preview/pr-6948/img/influx-logo-cubo-dark.png
new file mode 100644
index 000000000..d87fc95a4
Binary files /dev/null and b/pr-preview/pr-6948/img/influx-logo-cubo-dark.png differ
diff --git a/pr-preview/pr-6948/img/influx-logo-cubo-white.png b/pr-preview/pr-6948/img/influx-logo-cubo-white.png
new file mode 100644
index 000000000..e5e562624
Binary files /dev/null and b/pr-preview/pr-6948/img/influx-logo-cubo-white.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-3-hw-first-step-1-2.png b/pr-preview/pr-6948/img/influxdb/1-3-hw-first-step-1-2.png
new file mode 100644
index 000000000..8e7ad7f3f
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-3-hw-first-step-1-2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-3-hw-raw-data-1-2.png b/pr-preview/pr-6948/img/influxdb/1-3-hw-raw-data-1-2.png
new file mode 100644
index 000000000..dad78a76c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-3-hw-raw-data-1-2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-3-hw-second-step-1-2.png b/pr-preview/pr-6948/img/influxdb/1-3-hw-second-step-1-2.png
new file mode 100644
index 000000000..8b4745b11
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-3-hw-second-step-1-2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-3-hw-third-step-1-2.png b/pr-preview/pr-6948/img/influxdb/1-3-hw-third-step-1-2.png
new file mode 100644
index 000000000..23f68b913
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-3-hw-third-step-1-2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-apple-variety.png b/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-apple-variety.png
new file mode 100644
index 000000000..dedb91bb3
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-apple-variety.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-hourly-apple-variety.png b/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-hourly-apple-variety.png
new file mode 100644
index 000000000..4c1e9070b
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-5-calc-percentage-hourly-apple-variety.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-add-filter.png b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-add-filter.png
new file mode 100644
index 000000000..349fb9951
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-add-filter.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-cell.png b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-cell.png
new file mode 100644
index 000000000..8ad80ef93
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-cell.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-view-raw.png b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-view-raw.png
new file mode 100644
index 000000000..05bd88dcc
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-7-flux-dashboard-view-raw.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/1-8-tools-vsflux-explore-schema.png b/pr-preview/pr-6948/img/influxdb/1-8-tools-vsflux-explore-schema.png
new file mode 100644
index 000000000..ff64eb1e6
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/1-8-tools-vsflux-explore-schema.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-controls-dark-light-mode.png b/pr-preview/pr-6948/img/influxdb/2-0-controls-dark-light-mode.png
new file mode 100644
index 000000000..8e699ef57
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-controls-dark-light-mode.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-controls-time-range.png b/pr-preview/pr-6948/img/influxdb/2-0-controls-time-range.png
new file mode 100644
index 000000000..7813e6c8c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-controls-time-range.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-controls-timezone.png b/pr-preview/pr-6948/img/influxdb/2-0-controls-timezone.png
new file mode 100644
index 000000000..b8193f12c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-controls-timezone.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-data-explorer.png b/pr-preview/pr-6948/img/influxdb/2-0-data-explorer.png
new file mode 100644
index 000000000..cdb82eafe
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-data-explorer.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-sql-dashboard-variable.png b/pr-preview/pr-6948/img/influxdb/2-0-sql-dashboard-variable.png
new file mode 100644
index 000000000..8632d0913
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-sql-dashboard-variable.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-tools-chronograf-v2-auth.png b/pr-preview/pr-6948/img/influxdb/2-0-tools-chronograf-v2-auth.png
new file mode 100644
index 000000000..b619dcdf4
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-tools-chronograf-v2-auth.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-errors-warnings.png b/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-errors-warnings.png
new file mode 100644
index 000000000..dc82301fe
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-errors-warnings.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-explore-schema.png b/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-explore-schema.png
new file mode 100644
index 000000000..a7f969fcf
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-tools-vsflux-explore-schema.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-variables-data-explorer-view.png b/pr-preview/pr-6948/img/influxdb/2-0-variables-data-explorer-view.png
new file mode 100644
index 000000000..6f27a03e6
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-variables-data-explorer-view.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-Band-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-Band-example.png
new file mode 100644
index 000000000..0977d1bcd
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-Band-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-dropdown.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-dropdown.png
new file mode 100644
index 000000000..5dd1ae686
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-dropdown.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example-8.png
new file mode 100644
index 000000000..d90b599c0
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example.png
new file mode 100644
index 000000000..eb7421584
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-pressure-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-pressure-8.png
new file mode 100644
index 000000000..5c48f4658
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-gauge-pressure-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-linear-static.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-linear-static.png
new file mode 100644
index 000000000..5951bcb39
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-linear-static.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-single-stat-mem-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-single-stat-mem-8.png
new file mode 100644
index 000000000..a02a9b76c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-single-stat-mem-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-smooth-hover.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-smooth-hover.png
new file mode 100644
index 000000000..08fcece53
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-graph-smooth-hover.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-correlation.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-correlation.png
new file mode 100644
index 000000000..945e678fa
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-correlation.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-example.png
new file mode 100644
index 000000000..11f897d4a
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-vs-scatter.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-vs-scatter.png
new file mode 100644
index 000000000..fb2fc9525
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-heatmap-vs-scatter.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-errors.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-errors.png
new file mode 100644
index 000000000..77055b353
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-errors.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-example.png
new file mode 100644
index 000000000..f399ebd19
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-histogram-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example-8.png
new file mode 100644
index 000000000..8a6a2b7dc
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example.png
new file mode 100644
index 000000000..5060bee91
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example-8.png
new file mode 100644
index 000000000..6a2f56892
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example.png
new file mode 100644
index 000000000..16e1d4670
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-single-stat-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-step-example-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-step-example-8.png
new file mode 100644
index 000000000..b22d53453
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-line-graph-step-example-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-circle-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-circle-example.png
new file mode 100644
index 000000000..e9e0664b1
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-circle-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-heat-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-heat-example.png
new file mode 100644
index 000000000..0c8affbbb
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-heat-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-point-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-point-example.png
new file mode 100644
index 000000000..8676ef6e0
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-map-point-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-mosaic-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-mosaic-example.png
new file mode 100644
index 000000000..e08b70da1
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-mosaic-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-correlation.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-correlation.png
new file mode 100644
index 000000000..e1b3df197
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-correlation.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-example.png
new file mode 100644
index 000000000..83e9c7b8e
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-scatter-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example-8.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example-8.png
new file mode 100644
index 000000000..ece3e1fd8
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example-8.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example.png
new file mode 100644
index 000000000..a7bc1577d
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-single-stat-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-example.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-example.png
new file mode 100644
index 000000000..666413266
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-example.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-human-readable.png b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-human-readable.png
new file mode 100644
index 000000000..a554b2351
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-0-visualizations-table-human-readable.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-1-migration-dashboard.png b/pr-preview/pr-6948/img/influxdb/2-1-migration-dashboard.png
new file mode 100644
index 000000000..a1a6b7b67
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-1-migration-dashboard.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-add-connection.png b/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-add-connection.png
new file mode 100644
index 000000000..62362f8cf
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-add-connection.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-influxdb-pane.png b/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-influxdb-pane.png
new file mode 100644
index 000000000..6bf82b9c2
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-1-tools-vsflux-influxdb-pane.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-query-builder.png b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-query-builder.png
new file mode 100644
index 000000000..1ce6383e9
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-query-builder.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-time-range.png b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-time-range.png
new file mode 100644
index 000000000..46ccc7d33
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-time-range.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-variable-select.png b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-variable-select.png
new file mode 100644
index 000000000..a5b25b073
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-get-started-visualize-variable-select.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-1.png b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-1.png
new file mode 100644
index 000000000..f355c4c14
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-1.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-2.png b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-2.png
new file mode 100644
index 000000000..6071275f1
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-3.png b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-3.png
new file mode 100644
index 000000000..83281d657
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-influxql-holtwinters-3.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/2-4-influxql-shell-table-format.png b/pr-preview/pr-6948/img/influxdb/2-4-influxql-shell-table-format.png
new file mode 100644
index 000000000..5b07811c8
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/2-4-influxql-shell-table-format.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/3-0-query-plan-tree.png b/pr-preview/pr-6948/img/influxdb/3-0-query-plan-tree.png
new file mode 100644
index 000000000..fad64092a
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/3-0-query-plan-tree.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-flux.png b/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-flux.png
new file mode 100644
index 000000000..3b20ffbae
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-flux.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-influxql.png b/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-influxql.png
new file mode 100644
index 000000000..87b3de6d4
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/OSS-v1-grafana-product-dropdown-influxql.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/cloud-controls-view-raw-data.png b/pr-preview/pr-6948/img/influxdb/cloud-controls-view-raw-data.png
new file mode 100644
index 000000000..70d7cffcb
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/cloud-controls-view-raw-data.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/cloud-internals-auth.png b/pr-preview/pr-6948/img/influxdb/cloud-internals-auth.png
new file mode 100644
index 000000000..1d3c4f6bf
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/cloud-internals-auth.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/cloud-internals-cluster.png b/pr-preview/pr-6948/img/influxdb/cloud-internals-cluster.png
new file mode 100644
index 000000000..bc616c436
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/cloud-internals-cluster.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/cloudformation1.png b/pr-preview/pr-6948/img/influxdb/cloudformation1.png
new file mode 100644
index 000000000..96c002c2e
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/cloudformation1.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/cloudformation2.png b/pr-preview/pr-6948/img/influxdb/cloudformation2.png
new file mode 100644
index 000000000..fd0725987
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/cloudformation2.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png
new file mode 100644
index 000000000..cb5d04957
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-high-availability.png b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-high-availability.png
new file mode 100644
index 000000000..f43eced79
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-high-availability.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-workload-isolation.png b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-workload-isolation.png
new file mode 100644
index 000000000..06769d342
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/influxdb-3-enterprise-workload-isolation.png differ
diff --git a/pr-preview/pr-6948/img/influxdb/user-icon.png b/pr-preview/pr-6948/img/influxdb/user-icon.png
new file mode 100644
index 000000000..68be783d7
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb/user-icon.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-observability-dashboard.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-observability-dashboard.png
new file mode 100644
index 000000000..ab0d1284d
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-observability-dashboard.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-account-switcher.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-account-switcher.png
new file mode 100644
index 000000000..9c0036cc6
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-account-switcher.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-accounts.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-accounts.png
new file mode 100644
index 000000000..dd6c06987
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-accounts.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-clusters.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-clusters.png
new file mode 100644
index 000000000..0bd998dd0
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-all-clusters.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-autoscaling.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-autoscaling.png
new file mode 100644
index 000000000..f68df6b3c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-autoscaling.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-custom-partitioned-table.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-custom-partitioned-table.png
new file mode 100644
index 000000000..d644a58ba
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-custom-partitioned-table.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database-token.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database-token.png
new file mode 100644
index 000000000..0196eccc0
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database-token.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database.png
new file mode 100644
index 000000000..8dcf698d1
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-database.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-management-token.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-management-token.png
new file mode 100644
index 000000000..f36bd5365
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-management-token.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-custom-partitioning.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-custom-partitioning.png
new file mode 100644
index 000000000..2e1dbc264
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-custom-partitioning.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-default.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-default.png
new file mode 100644
index 000000000..873e1cc7b
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-create-table-default.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-token-options-menu.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-token-options-menu.png
new file mode 100644
index 000000000..daeef3bc8
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-token-options-menu.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-tokens.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-tokens.png
new file mode 100644
index 000000000..463d3293b
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-database-tokens.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-databases.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-databases.png
new file mode 100644
index 000000000..73d71b77e
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-databases.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-delete-database.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-delete-database.png
new file mode 100644
index 000000000..f16e0e4d1
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-delete-database.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-edit-database-token.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-edit-database-token.png
new file mode 100644
index 000000000..87bb83e2c
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-edit-database-token.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-help.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-help.png
new file mode 100644
index 000000000..ec01c4a77
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-help.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-list-databases.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-list-databases.png
new file mode 100644
index 000000000..04a20cf04
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-list-databases.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-login.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-login.png
new file mode 100644
index 000000000..f276e3654
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-login.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-management-tokens.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-management-tokens.png
new file mode 100644
index 000000000..1e6f3b6fb
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-management-tokens.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-overview.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-overview.png
new file mode 100644
index 000000000..25b3497f3
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-overview.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-detail-view.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-detail-view.png
new file mode 100644
index 000000000..22da99d63
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-detail-view.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-list-view.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-list-view.png
new file mode 100644
index 000000000..26ab18414
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-query-log-list-view.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-revoke-database-token.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-revoke-database-token.png
new file mode 100644
index 000000000..2472d5332
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-revoke-database-token.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-tables.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-tables.png
new file mode 100644
index 000000000..060204251
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-tables.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-users.png b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-users.png
new file mode 100644
index 000000000..b74fe5cfe
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-dedicated-admin-ui-users.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-serverless-migration-dashboard.png b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-migration-dashboard.png
new file mode 100644
index 000000000..55078a65d
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-migration-dashboard.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-connect.png b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-connect.png
new file mode 100644
index 000000000..779332b3d
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-connect.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-dashboard.png b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-dashboard.png
new file mode 100644
index 000000000..45817f446
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-dashboard.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-schema.png b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-schema.png
new file mode 100644
index 000000000..32fcde30e
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/cloud-serverless-superset-schema.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/core-mcp-influxdb3-plugin.png b/pr-preview/pr-6948/img/influxdb3/core-mcp-influxdb3-plugin.png
new file mode 100644
index 000000000..a966c96bf
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/core-mcp-influxdb3-plugin.png differ
diff --git a/pr-preview/pr-6948/img/influxdb3/influxdb3-core-enterprise-ingest-path-flow.png b/pr-preview/pr-6948/img/influxdb3/influxdb3-core-enterprise-ingest-path-flow.png
new file mode 100644
index 000000000..1e6daca92
Binary files /dev/null and b/pr-preview/pr-6948/img/influxdb3/influxdb3-core-enterprise-ingest-path-flow.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection01.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection01.png
new file mode 100644
index 000000000..2b6b46781
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection01.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection02.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection02.png
new file mode 100644
index 000000000..d9c7adc47
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection02.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection03.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection03.png
new file mode 100644
index 000000000..a55c95f8f
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection03.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection04.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection04.png
new file mode 100644
index 000000000..3271a3b17
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-add-kapacitor-connection04.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration01.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration01.png
new file mode 100644
index 000000000..51144bdd0
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration01.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration02.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration02.png
new file mode 100644
index 000000000..947abdb88
Binary files /dev/null and b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration02.png differ
diff --git a/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration03.png b/pr-preview/pr-6948/img/kapacitor/1-4-chrono-configuration03.png
new file mode 100644
InfluxQL is designed for working with time series data and includes features specifically for working with time.
+You can review the following ways to work with time and timestamps in your InfluxQL queries:
Currently, InfluxDB does not support using OR with absolute time in the WHERE
+clause. See the Frequently Asked Questions
+document and the GitHub Issue
+for more information.
+
rfc3339_date_time_string
+
+
+
'YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ'
+
.nnnnnnnnn is optional and is set to .000000000 if not included.
+The RFC3339 date-time string requires single quotes.
+
rfc3339_like_date_time_string
+
+
+
'YYYY-MM-DD HH:MM:SS.nnnnnnnnn'
+
HH:MM:SS.nnnnnnnnn is optional and is set to 00:00:00.000000000 if not included.
+The RFC3339-like date-time string requires single quotes.
+
epoch_time
+
Epoch time is the amount of time that has elapsed since 00:00:00
+Coordinated Universal Time (UTC), Thursday, 1 January 1970.
+
By default, InfluxDB assumes that all epoch timestamps are in nanoseconds. Include a duration literal at the end of the epoch timestamp to indicate a precision other than nanoseconds.
+
Basic arithmetic
+
All timestamp formats support basic arithmetic.
+Add (+) or subtract (-) a time from a timestamp with a duration literal.
+Note that InfluxQL requires a whitespace between the + or - and the
+duration literal.
+
Examples
+ Specify a time range with RFC3339 date-time strings
+
The query returns data with timestamps between August 18, 2019 at 00:00:00 and August 18, 2019
+at 00:12:00.
+The first date-time string does not include a time; InfluxDB assumes the time
+is 00:00:00.
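A query matching this description might look like the following sketch, which assumes the NOAA water-level sample data (the measurement `h2o_feet` and the `location` tag value are illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
AND time >= '2019-08-18' AND time <= '2019-08-18T00:12:00Z'
```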
+
Note that the single quotes around the RFC3339-like date-time strings are
+required.
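A time range written with RFC3339-like date-time strings might look like this sketch (measurement and tag names are illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
AND time >= '2019-08-18 00:00:00' AND time <= '2019-08-18 00:12:00'
```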
The query returns data with timestamps that occur between August 1, 2019
+at 00:00:00 and August 19, 2019 at 00:12:00. By default InfluxDB assumes epoch timestamps are in nanoseconds.
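Such a query can be sketched with nanosecond epoch timestamps; the values below correspond to the boundaries described above (measurement name illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time >= 1564617600000000000 AND time <= 1566173520000000000
```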
+ Specify a time range with second-precision epoch timestamps
+
The query returns data with timestamps that occur between August 19, 2019
+at 00:00:00 and August 19, 2019 at 00:12:00.
The s duration literal at the end of the epoch timestamps indicates that the epoch timestamps are in seconds.
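A sketch of such a query, where 1566172800s and 1566173520s are the second-precision boundaries described above (measurement name illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time >= 1566172800s AND time <= 1566173520s
```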
+ Perform basic arithmetic on an RFC3339-like date-time string
+
The query returns data with timestamps that occur at least six minutes after
+September 17, 2019 at 21:24:00.
+Note that the whitespace between the + and 6m is required.
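One way to write such a query (names illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time > '2019-09-17 21:24:00' + 6m
```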
+ Perform basic arithmetic on an epoch timestamp
+
The query returns data with timestamps that occur at least six minutes before
September 18, 2019 at 21:24:00. Note that the whitespace between the - and 6m is required.
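A sketch of such a query, where 1568841840s is September 18, 2019 at 21:24:00 UTC as a second-precision epoch timestamp (measurement name illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time <= 1568841840s - 6m
```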
Relative time
+
Use now() to query data with timestamps relative to the server’s current timestamp.
now() is the Unix time of the server at the time the query is executed on that server.
+The whitespace between - or + and the duration literal is required.
The query returns data with timestamps that occur between September 17, 2019 at 21:18:00 and 1000 days from now(). The whitespace between + and 1000d is required.
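A sketch of such a query (names illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time > '2019-09-17T21:18:00Z' AND time < now() + 1000d
```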
The Time Zone clause
+
Use the tz() clause to return the UTC offset for the specified timezone.
By default, InfluxDB stores and returns timestamps in UTC.
The tz() clause applies the UTC offset, or, if applicable, the UTC Daylight Saving Time (DST) offset, of the specified time zone to the query's returned timestamps. The returned timestamps must be in RFC3339 format for the UTC offset or UTC DST offset to appear.
The time_zone parameter follows the TZ syntax in the Internet Assigned Numbers Authority time zone database and it requires single quotes.
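For example, a sketch that returns timestamps with the America/Chicago UTC offset applied (measurement, tag, and time range are illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
AND time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:18:00Z'
tz('America/Chicago')
```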
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
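An illustrative query that provides such an alternative upper bound (all names and intervals are examples):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= now() AND time <= now() + 12m
GROUP BY time(6m)
```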
+ Thank you for being part of our community!
We welcome and encourage your feedback and bug reports for InfluxDB and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
pull command.
The INTO clause is optional.
+If the command does not include INTO, you must specify the
+database with USE <database_name> when using the InfluxQL shell
+or with the db query string parameter in the
+InfluxDB 1.x compatibility API request.
Delete all data associated with the measurement h2o_feet:
+
+
+
DELETE FROM "h2o_feet"
+
Delete data in a measurement that has a specific tag value
+
Delete all data associated with the measurement h2o_quality and where the tag randtag equals 3:
+
+
+
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
+
Delete data before or after specified time
+
Delete all data in the database that occur before January 01, 2020:
+
+
+
DELETE WHERE time < '2020-01-01'
+
A successful DELETE query returns an empty result.
+
If you need to delete points in the future, you must specify the future time period because DELETE SERIES runs for time < now() by default.
+
Delete future points:
+
+
+
DELETE FROM device_data WHERE "device" = 'sensor1' AND time > now() AND time < '2024-01-14T01:00:00Z'
+
Delete points in the future within a specified time range:
+
+
+
DELETE FROM device_data WHERE "device" = 'sensor15' AND time >= '2024-01-01T12:00:00Z' AND time <= '2025-06-30T11:59:00Z'
+
Delete measurements with DROP MEASUREMENT
+
The DROP MEASUREMENT statement deletes all data and series from the specified measurement and deletes the measurement from the index.
+
Syntax
+
+
+
DROP MEASUREMENT <measurement_name>
+
Example
+
Delete the measurement h2o_feet:
+
+
+
DROP MEASUREMENT "h2o_feet"
+
A successful DROP MEASUREMENT query returns an empty result.
+
+
+
The DROP MEASUREMENT command is very resource intensive. We do not recommend this command for bulk data deletion. Use the DELETE FROM command instead, which is less resource intensive.
The syntax is specified using Extended Backus-Naur Form (“EBNF”).
+EBNF is the same notation used in the Go programming language specification,
+which can be found here.
+
+
+
Production = production_name "=" [ Expression ] "." .
+Expression = Alternative { "|" Alternative } .
+Alternative = Term { Term } .
+Term = production_name | token [ "…" token ] | Group | Option | Repetition .
+Group = "(" Expression ")" .
+Option = "[" Expression "]" .
+Repetition = "{" Expression "}" .
+
Notation operators in order of increasing precedence:
+
+
+
| alternation
+() grouping
+[] option (0 or 1 times)
+{} repetition (0 to n times)
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
identifier, you must
double quote that identifier in every query.
+
The keyword time is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals are not currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents are not currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (i.e., \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by a duration unit listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
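In the same EBNF notation used throughout this document, duration literals can be sketched as follows (unit list per the InfluxQL specification: ns for nanoseconds, u or µ for microseconds, then ms, s, m, h, d, w):

```
duration_lit  = int_lit duration_unit .
duration_unit = "ns" | "u" | "µ" | "ms" | "s" | "m" | "h" | "d" | "w" .
```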
The date and time literal format is not specified in EBNF like the rest of this document.
+It is specified using Go’s date / time parsing format, which is a reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
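Written in that reference-date notation, the date and time literal production is typically given as:

```
time_lit = "2006-01-02 15:04:05.999999" | "2006-01-02" .
```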
The EXPLAIN statement parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
Since InfluxQL does not support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
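A minimal sketch of the statement (query and measurement are illustrative):

```sql
EXPLAIN SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica'
```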
The EXPLAIN ANALYZE statement executes the specified SELECT statement and returns data on the query performance and storage at runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time, planning time, and the iterator and cursor types.
Note: EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or CSV is not accounted for.
+
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and the required memory.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access (in InfluxDB Enterprise, shards may be on remote nodes).
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance: a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes three cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of blocks decoded and their size (in bytes) on disk. The following block types are supported:
Refers to the group of commands used to estimate or count exactly the cardinality of measurements, series, tag keys, tag key values, and field keys.
+
The SHOW CARDINALITY commands are available in two variations: estimated and exact. Estimated values are calculated using sketches and are a safe default for all cardinality sizes. Exact values are counts directly from TSM (Time-Structured Merge Tree) data, but are expensive to run for high cardinality data. Unless required, use the estimated variety.
+
Filtering by time is only supported when Time Series Index (TSI) is enabled on a database.
+
See the specific SHOW CARDINALITY commands for details:
Estimates or counts exactly the cardinality of the field key set for the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
When using these query clauses, the query falls back to an exact count.
Filtering by time is only supported when Time Series Index (TSI) is enabled; otherwise, time is not supported in the WHERE clause.
+
+
+
+
+
show_field_key_cardinality_stmt = "SHOW FIELD KEY CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]

show_field_key_exact_cardinality_stmt = "SHOW FIELD KEY EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
+
Examples
+
+
+
-- show estimated cardinality of the field key set of current database
SHOW FIELD KEY CARDINALITY
-- show exact cardinality on field key set of specified database
SHOW FIELD KEY EXACT CARDINALITY ON mydb
+
SHOW FIELD KEYS
+
+
+
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
SHOW FIELD KEYS

-- show field keys and field value data types from specified measurement
SHOW FIELD KEYS FROM "cpu"
-- show all measurements
SHOW MEASUREMENTS

-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
SHOW MEASUREMENTS WHERE "region" = 'uswest' AND "host" = 'serverA'

-- show measurements that start with 'h2o'
SHOW MEASUREMENTS WITH MEASUREMENT =~ /h2o.*/
Series cardinality is the major factor that affects RAM requirements. For more information, see:
+
+
+
Don’t have too many series. As the number of unique series grows, so does the memory usage. High series cardinality can force the host operating system to kill the InfluxDB process with an out of memory (OOM) exception.
+
+
+
+
+
NOTE: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is not supported in the WHERE clause.
+
+
+
+
SHOW TAG KEY CARDINALITY
+
Estimates or counts exactly the cardinality of tag key set on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
When using these query clauses, the query falls back to an exact count.
Filtering by time is only supported when TSI (Time Series Index) is enabled; otherwise, time is not supported in the WHERE clause.
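Following the pattern of the other cardinality commands, illustrative examples (the database name is a placeholder):

```sql
-- show estimated cardinality of the tag key set of the current database
SHOW TAG KEY CARDINALITY
-- show exact cardinality of the tag key set of a specified database
SHOW TAG KEY EXACT CARDINALITY ON mydb
```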
-- show all tag keys
SHOW TAG KEYS

-- show all tag keys from the cpu measurement
SHOW TAG KEYS FROM "cpu"

-- show all tag keys from the cpu measurement where the region key = 'uswest'
SHOW TAG KEYS FROM "cpu" WHERE "region" = 'uswest'

-- show all tag keys where the host key = 'serverA'
SHOW TAG KEYS WHERE "host" = 'serverA'
-- show all tag values across all measurements for the region tag
SHOW TAG VALUES WITH KEY = "region"

-- show tag values from the cpu measurement for the region tag
SHOW TAG VALUES FROM "cpu" WITH KEY = "region"

-- show tag values across all measurements for all tag keys that do not include the letter c
SHOW TAG VALUES WITH KEY !~ /.*c.*/

-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region", "host") WHERE "service" = 'redis'
+
SHOW TAG VALUES CARDINALITY
+
Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled.
-- show estimated tag key values cardinality for a specified tag key
SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
-- show exact tag key values cardinality for a specified tag key
SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
Use comments with InfluxQL statements to describe your queries.
+
+
A single line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span several lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
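For example (the query and names are illustrative):

```sql
-- this single-line comment ends at the line break
SELECT "water_level" FROM "h2o_feet" /* this multi-line comment
   can span several lines */
```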
Authentication and authorization should not be relied upon to prevent access and protect data from malicious actors.
+If additional security or compliance features are desired, InfluxDB should be run behind a third-party service. If InfluxDB is
+being deployed on a publicly accessible endpoint, we strongly recommend authentication be enabled. Otherwise the data will be
+publicly available to any unauthenticated user.
+
+
Authentication
+
The InfluxDB API and the command line interface (CLI), which connects to the database using the API, include simple, built-in authentication based on user credentials.
+When you enable authentication, InfluxDB only executes HTTP requests that are sent with valid credentials.
+
+
+
+
+
Authentication only occurs at the HTTP request scope.
+Plugins do not currently have the ability to authenticate requests and service
+endpoints (for example, Graphite, collectd, etc.) are not authenticated.
If you enable authentication and have no users, InfluxDB will not enforce authentication
+and will only accept the query that creates a new admin user.
+
+
InfluxDB will enforce authentication once there is an admin user.
+
+
+
Enable authentication in your configuration file
+by setting the auth-enabled option to true in the [http] section:
+
+
+
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true # Set to true
  log-enabled = true
  write-tracing = false
  pprof-enabled = true
  pprof-auth-enabled = true
  debug-pprof-enabled = false
  ping-auth-enabled = true
  https-enabled = true
  https-certificate = "/etc/ssl/influxdb.pem"
+
+
+
+
+
If pprof-enabled is set to true, set pprof-auth-enabled and ping-auth-enabled
+to true to require authentication on profiling and ping endpoints.
+
+
+
+
Restart InfluxDB.
+Once restarted, InfluxDB checks user credentials on every request and only
+processes requests that have valid credentials for an existing user.
+
+
+
Authenticate requests
+
Authenticate with the InfluxDB API
+
There are two options for authenticating with the InfluxDB API.
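As a sketch, the two options (Basic Authentication and URL query parameters) look like the following curl requests, assuming a local instance on port 8086 and the example user todd:

```bash
# Option 1: Basic Authentication
curl -G http://localhost:8086/query \
  -u todd:influxdb4ever \
  --data-urlencode "q=SHOW DATABASES"

# Option 2: credentials as URL query parameters
curl -G "http://localhost:8086/query?u=todd&p=influxdb4ever" \
  --data-urlencode "q=SHOW DATABASES"
```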
+
If you authenticate with both Basic Authentication and the URL query parameters, the user credentials specified in the query parameters take precedence.
+The queries in the following examples assume that the user is an admin user.
+See the section on authorization for the different user types, their privileges, and more on user management.
+
+
+
Note: InfluxDB redacts passwords when you enable authentication.
There are three options for authenticating with the CLI.
+
Authenticate with environment variables
+
Use the INFLUX_USERNAME and INFLUX_PASSWORD environment variables to provide
+authentication credentials to the influx CLI.
+
+
+
export INFLUX_USERNAME=todd
export INFLUX_PASSWORD=influxdb4ever
echo $INFLUX_USERNAME $INFLUX_PASSWORD
todd influxdb4ever

influx
Connected to http://localhost:8086 version 1.12.3
InfluxDB shell 1.12.3
+
Authenticate with CLI flags
+
Use the -username and -password flags to provide authentication credentials
+to the influx CLI.
+
+
+
influx -username todd -password influxdb4ever
+Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell 1.12.3
+
Authenticate with credentials in the influx shell
+
Start the influx shell and run the auth command.
+Enter your username and password when prompted.
+
+
+
> influx
+Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell 1.12.3
+> auth
+username: todd
+password:
+>
+
Authenticate using JWT tokens
+
For a more secure alternative to using passwords, include JWT tokens with requests to the InfluxDB API.
+This is currently only possible through the InfluxDB HTTP API.
Add a shared secret in your InfluxDB configuration file
+
InfluxDB uses the shared secret to encode the JWT signature.
+By default, shared-secret is set to an empty string, in which case no JWT authentication takes place.
+Add a custom shared secret in your InfluxDB configuration file.
+The longer the secret string, the more secure it is:
+
+
+
[http]
+shared-secret = "my super secret pass phrase"
+
Alternatively, to avoid keeping your secret phrase as plain text in your InfluxDB configuration file, set the value with the INFLUXDB_HTTP_SHARED_SECRET environment variable.
+
Generate your JWT token
+
Use an authentication service to generate a secure token using your InfluxDB username, an expiration time, and your shared secret.
+There are online tools, such as https://jwt.io/, that will do this for you.
+
The payload (or claims) of the token must be in the following format:
+
+
+
{
+"username":"myUserName",
+"exp":1516239022
+}
+
+
username - The name of your InfluxDB user.
+
exp - The expiration time of the token in UNIX epoch time.
+For increased security, keep token expiration periods short.
+For testing, you can manually generate UNIX timestamps using https://www.unixtimestamp.com/index.php.
+
+
Encode the payload using your shared secret.
+You can do this with either a JWT library in your own authentication server or by hand at https://jwt.io/.
+
The generated token follows this format: <header>.<payload>.<signature>
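As a self-contained sketch, an HS256 token with the required claims can be built with Python's standard library alone (the username and shared secret here are this guide's example values; a production system would normally use a maintained JWT library):

```python
# Minimal sketch of HS256 JWT generation using only the standard library.
# The username and shared secret are placeholder example values.
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT format requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(username: str, shared_secret: str, ttl_seconds: int = 60) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # The payload must contain the InfluxDB username and a UNIX-epoch expiry.
    payload = {"username": username, "exp": int(time.time()) + ttl_seconds}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    signature = hmac.new(shared_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(signature)}"

token = make_jwt("todd", "my super secret pass phrase")
print(token.count("."))  # 2 -- <header>.<payload>.<signature>
```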
+
Include the token in HTTP requests
+
Include your generated token as part of the Authorization header in HTTP requests.
+Use the Bearer authorization scheme:
+
+
+
Authorization: Bearer <myToken>
+
+
+
+
+
Only unexpired tokens will successfully authenticate. Be sure your token has not expired.
Authenticating Telegraf requests to an InfluxDB instance with
+authentication enabled requires some additional steps.
+In the Telegraf configuration file (/etc/telegraf/telegraf.conf), uncomment
+and edit the username and password settings.
+
+
+
###############################################################################
+# OUTPUT PLUGINS #
+###############################################################################
+
+# ...
+
+[[outputs.influxdb]]
+# ...
+username = "example-username" # Provide your username
+password = "example-password" # Provide your password
+
+# ...
+
Restart Telegraf and you’re all set!
+
Authorization
+
Authorization is only enforced once you’ve enabled authentication.
+By default, authentication is disabled, all credentials are silently ignored, and all users have all privileges.
+
User types and privileges
+
Admin users
+
Admin users have READ and WRITE access to all databases and full access to administrative queries and user management commands.
See below for a complete discussion of the user management commands.
+
Non-admin users
+
Non-admin users can have one of the following three privileges per database:
+
+
READ
+
WRITE
+
ALL (both READ and WRITE access)
+
+
READ, WRITE, and ALL privileges are controlled per user per database. A new non-admin user has no access to any database until they are specifically granted privileges to a database by an admin user.
+Non-admin users can SHOW the databases on which they have READ and/or WRITE permissions.
+
User management commands
+
Admin user management
+
When you enable HTTP authentication, InfluxDB requires you to create at least one admin user before you can interact with the system.
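The statement to create that first admin user follows this shape (a reconstruction; the username and password here are placeholders):

```sql
CREATE USER "admin-todd" WITH PASSWORD 'password4admin' WITH ALL PRIVILEGES
```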
The username must be wrapped in double quotes if it starts with a digit, is an InfluxQL keyword, contains a hyphen, or includes any special characters, for example: !@#$%^&*()-
+
The password string must be wrapped in single quotes.
+Do not include the single quotes when authenticating requests.
+We recommend avoiding the single quote (') and backslash (\) characters in passwords.
+For passwords that include these characters, escape the special character with a backslash (for example, \') when creating the password and when submitting authentication requests.
+
Repeating the exact CREATE USER statement is idempotent. If any values change, the database will return a duplicate user error. See GitHub Issue #6890 for details.
+
+
DROP a user
+
+
+
DROP USER <username>
+
CLI example:
+
+
+
DROP USER "todd"
+
Authentication and authorization HTTP errors
+
Requests with no authentication credentials or incorrect credentials yield the HTTP 401 Unauthorized response.
+
Requests by unauthorized users yield the HTTP 403 Forbidden response.
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for InfluxDB and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command.
Flux, at its core, is a scripting language designed specifically for working with data.
+This guide walks through a handful of simple expressions and how they are handled in Flux.
+
Use the influx CLI
+
Use the influx CLI in “Flux mode” as you follow this guide.
+When started with -type=flux and -path-prefix=/api/v2/query, the influx
+CLI is an interactive read-eval-print-loop (REPL) that supports Flux syntax.
+
Start in the influx CLI in Flux mode
+
+
+
influx -type=flux -path-prefix=/api/v2/query
+
+
+
If using the InfluxData Sandbox, use the ./sandbox enter
+command to enter the influxdb container, where you can start the influx CLI in Flux mode.
+You will also need to specify the host as influxdb to connect to InfluxDB over the Docker network.
+
+
+
+
+
./sandbox enter influxdb
+
+root@9bfc3c08579c:/# influx -host influxdb -type=flux -path-prefix=/api/v2/query
+
Basic Flux syntax
+
The code blocks below provide commands that illustrate the basic syntax of Flux.
+Run these commands in the influx CLI’s Flux REPL.
+
Simple expressions
+
Flux is a scripting language that supports basic expressions.
+For example, simple addition:
+
+
+
> 1 + 1
+2
+
Variables
+
Assign an expression to a variable using the assignment operator, =.
+
+
+
s = "this is a string"
+i = 1 // an integer
+f = 2.0 // a floating point number
+
+
Type the name of a variable to print its value:
+
+
+
> s
+this is a string
+> i
+1
+> f
+2
+
Records
+
Flux also supports records. Each value in a record can be a different data type.
+
+
+
o = {name: "Jim", age: 42, "favorite color": "red"}
+
Use dot notation to access the properties of a record:
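For example, using the record defined above in the REPL:

```flux
> o.name
Jim
```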
Use bracket notation to reference record properties with special or
+white space characters in the property key.
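For example, the "favorite color" key in the record above contains a space, so it must be referenced with bracket notation:

```flux
> o["favorite color"]
red
```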
+
+
+
+
Lists
+
Flux supports lists. List values must be the same type.
+
+
+
> n = 4
+> l = [1, 2, 3, n]
+> l
+[1, 2, 3, 4]
+
Functions
+
Flux uses functions for most of its heavy lifting.
+Below is a simple function that squares a number, n.
+
+
+
> square = (n) => n * n
+> square(n: 3)
+9
+
+
+
Flux does not support positional arguments or parameters.
+Parameters must always be named when calling a function.
+
+
+
Pipe-forward operator
+
Flux uses the pipe-forward operator (|>) extensively to chain operations together.
+After each function or operation, Flux returns a table or collection of tables containing data.
+The pipe-forward operator pipes those tables into the next function where they are further processed or manipulated.
+
+
+
data |> someFunction() |> anotherFunction()
+
Real-world application of basic syntax
+
This likely seems familiar if you’ve already been through the other getting started guides.
+Flux’s syntax is inspired by JavaScript and other functional scripting languages.
+As you begin to apply these basic principles in real-world use cases such as creating data stream variables,
+custom functions, etc., the power of Flux and its ability to query and process data will become apparent.
+
The examples below provide both multi-line and single-line versions of each input command.
+Carriage returns in Flux aren’t necessary, but do help with readability.
+Both single- and multi-line commands can be copied and pasted into the influx CLI running in Flux mode.
These variables can be used in other functions, such as join(), while keeping the syntax minimal and flexible.
+
Define custom functions
+
Let’s create a function that returns the top n rows with the highest _values in the input data stream.
+To do this, pass the input stream (tables) and the number of results to return (n) into a custom function.
+Then use Flux’s sort() and limit() functions to find the top n results in the data set.
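A sketch of such a function, using the tables and n parameter names from the text above (the original guide’s exact definition may differ):

```flux
topN = (tables=<-, n) =>
  tables
    |> sort(columns: ["_value"], desc: true)
    |> limit(n: n)
```

The pipe-forward default (tables=<-) lets the function receive piped-forward data, so you can call it as data |> topN(n: 10).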
InfluxDB can handle hundreds of thousands of data points per second. Working with that much data over a long period of time can create storage concerns.
+A natural solution is to downsample the data; keep the high precision raw data for only a limited time, and store the lower precision, summarized data longer.
+This guide describes how to automate the process of downsampling data and expiring old data using InfluxQL. To downsample and retain data using Flux and InfluxDB 2.0,
+see Process data with InfluxDB tasks.
+
Definitions
+
+
+
Continuous query (CQ) is an InfluxQL query that runs automatically and periodically within a database.
+CQs require a function in the SELECT clause and must include a GROUP BY time() clause.
+
+
+
Retention policy (RP) is the part of the InfluxDB data structure that describes how long InfluxDB keeps data.
+InfluxDB compares your local server’s timestamp to the timestamps on your data and deletes data older than the RP’s DURATION.
+A single database can have several RPs and RPs are unique per database.
+
+
+
This guide doesn’t go into detail about the syntax for creating and managing CQs, RPs, or tasks.
+If you’re new to these concepts, we recommend reviewing the CQ and RP documentation first.
This section uses fictional real-time data to track the number of food orders
+to a restaurant via phone and via website at ten second intervals.
+We store this data in a database or bucket called food_data, in
+the measurement orders, and
+in the fields phone and website.
Assume that, in the long run, we’re only interested in the average number of orders by phone
+and by website at 30 minute intervals.
+In the next steps, we use RPs and CQs to:
+
+
Automatically aggregate the ten-second resolution data to 30-minute resolution data
+
Automatically delete the raw, ten-second resolution data that are older than two hours
+
Automatically delete the 30-minute resolution data that are older than 52 weeks
+
+
Database preparation
+
We perform the following steps before writing the data to the database
+food_data.
+We do this before inserting any data because CQs only run against recent
+data; that is, data with timestamps that are no older than now() minus
+the FOR clause of the CQ, or now() minus the GROUP BY time() interval if
+the CQ has no FOR clause.
+
1. Create the database
+
+
+
CREATE DATABASE "food_data"
+
2. Create a two-hour DEFAULT retention policy
+
InfluxDB writes to the DEFAULT retention policy if we do not supply an explicit RP when
+writing a point to the database.
+We make the DEFAULT RP keep data for two hours, because we want InfluxDB to
+automatically write the incoming ten-second resolution data to that RP.
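Based on the description that follows, the statement would be similar to this reconstruction:

```sql
CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
```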
That query creates an RP called two_hours that exists in the database
+food_data.
+two_hours keeps data for a DURATION of two hours (2h) and it’s the DEFAULT
+RP for the database food_data.
+
+
+
The replication factor (REPLICATION 1) is a required parameter but must always
+be set to 1 for single node instances.
+
+
+
+
+
+
Note: When we created the food_data database in step 1, InfluxDB
+automatically generated an RP named autogen and set it as the DEFAULT
+RP for the database.
+The autogen RP has an infinite retention period.
+With the query above, the RP two_hours replaces autogen as the DEFAULT RP
+for the food_data database.
+
+
+
3. Create a 52-week retention policy
+
Next we want to create another retention policy that keeps data for 52 weeks and is not the
+DEFAULT retention policy (RP) for the database.
+Ultimately, the 30-minute rollup data will be stored in this RP.
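Based on the description that follows, the statement would be similar to this reconstruction:

```sql
CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
```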
That query creates a retention policy (RP) called a_year that exists in the database
+food_data.
+The a_year setting keeps data for a DURATION of 52 weeks (52w).
+Leaving out the DEFAULT argument ensures that a_year is not the DEFAULT
+RP for the database food_data.
+That is, write and read operations against food_data that do not specify an
+RP will still go to the two_hours RP (the DEFAULT RP).
+
4. Create the continuous query
+
Now that we’ve set up our RPs, we want to create a continuous query (CQ) that will automatically
+and periodically downsample the ten-second resolution data to the 30-minute
+resolution, and then store those results in a different measurement with a different
+retention policy.
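Based on the description that follows, the CQ would be similar to this reconstruction:

```sql
CREATE CONTINUOUS QUERY "cq_30m" ON "food_data" BEGIN
  SELECT mean("website") AS "mean_website", mean("phone") AS "mean_phone"
  INTO "a_year"."downsampled_orders"
  FROM "orders"
  GROUP BY time(30m)
END
```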
That query creates a CQ called cq_30m in the database food_data.
+cq_30m tells InfluxDB to calculate the 30-minute average of the two fields
+website and phone in the measurement orders and in the DEFAULT RP
+two_hours.
+It also tells InfluxDB to write those results to the measurement
+downsampled_orders in the retention policy a_year with the field keys
+mean_website and mean_phone.
+InfluxDB will run this query every 30 minutes for the previous 30 minutes.
+
+
+
Note: Notice that we fully qualify (that is, we use the syntax
+"<retention_policy>"."<measurement>") the measurement in the INTO
+clause.
+InfluxDB requires that syntax to write data to an RP other than the DEFAULT
+RP.
+
+
+
Results
+
With the new CQ and two new RPs, food_data is ready to start receiving data.
+After writing data to our database and letting things run for a bit, we see
+two measurements: orders and downsampled_orders.
The data in orders are the raw, ten-second resolution data that reside in the
+two-hour RP.
+The data in downsampled_orders are the aggregated, 30-minute resolution data
+that are subject to the 52-week RP.
+
Notice that the first timestamps in downsampled_orders are older than the first
+timestamps in orders.
+This is because InfluxDB has already deleted data from orders with timestamps
+that are older than our local server’s timestamp minus two hours (assume we
+executed the SELECT queries at 2016-05-14T00:59:59Z).
+InfluxDB will only start dropping data from downsampled_orders after 52 weeks.
+
+
+
Notes:
+
+
+
+
Notice that we fully qualify (that is, we use the syntax
+"<retention_policy>"."<measurement>") downsampled_orders in
+the second SELECT statement. We must specify the RP in that query to SELECT
+data that reside in an RP other than the DEFAULT RP.
+
+
+
+
+
+
+
By default, InfluxDB checks to enforce an RP every 30 minutes.
+Between checks, orders may have data that are older than two hours.
+The rate at which InfluxDB checks to enforce an RP is a configurable setting,
+see
+Database Configuration.
+
+
Using a combination of RPs and CQs, we’ve successfully set up our database to
+automatically keep the high precision raw data for a limited time, create lower
+precision data, and store that lower precision data for a longer period of time.
+Now that you have a general understanding of how these features can work
+together, check out the detailed documentation on CQs and RPs
+to see all that they can do for you.
With InfluxDB installed, you’re ready to start working with time series data.
+This guide uses the influx command line interface (CLI), which is included with InfluxDB
+and provides direct access to the database.
+The CLI communicates with InfluxDB through the HTTP API on port 8086.
+
+
+
+
+
Docker users: Access the CLI from your container using:
+
+
+
docker exec -it <container-name> influx
+
+
+
+
+
+
Directly access the API
+
You can also interact with InfluxDB using the HTTP API directly.
+See Writing Data and Querying Data for examples using curl.
+
+
Creating a database
+
After installing InfluxDB locally, the influx command is available from your terminal.
+Running influx starts the CLI and connects to your local InfluxDB instance
+(ensure InfluxDB is running with service influxdb start or influxd).
+To start the CLI and connect to the local InfluxDB instance, run the following command.
+The -precision argument specifies the format and precision of any returned timestamps.
+
+
+
$ influx -precision rfc3339
+Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell 1.12.3
+>
+
The influx CLI connects to localhost on port 8086 by default.
+The timestamp precision rfc3339 tells InfluxDB to return timestamps in RFC3339 format (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ).
+
To view available options for customizing CLI connection parameters or other settings, run influx --help in your terminal.
+
The command line is ready to take input in the form of Influx Query Language (InfluxQL) statements.
+To exit the InfluxQL shell, type exit and hit return.
+
A fresh install of InfluxDB has no databases (apart from the system _internal),
+so creating one is our first task.
+You can create a database with the CREATE DATABASE <db-name> InfluxQL statement,
+where <db-name> is the name of the database you wish to create.
+Names of databases can contain any unicode character as long as the string is double-quoted.
+Names can also be left unquoted if they contain only ASCII letters,
+digits, or underscores and do not begin with a digit.
+
Throughout this guide, we’ll use the database name mydb:
+
+
+
CREATE DATABASE mydb
+
+
+
Note: After hitting enter, a new prompt appears and nothing else is displayed.
+In the CLI, this means the statement was executed and there were no errors to display.
+There will always be an error displayed if something went wrong.
+
+
+
Now that the mydb database is created, we’ll use the SHOW DATABASES statement
+to display all existing databases:
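The output should look similar to the following (a sketch; the _internal database is created automatically):

```
> SHOW DATABASES
name: databases
name
----
_internal
mydb
```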
Note: The _internal database is created and used by InfluxDB to store internal runtime metrics.
+Check it out later to get an interesting look at how InfluxDB is performing under the hood.
+
+
+
Unlike SHOW DATABASES, most InfluxQL statements must operate against a specific database.
+You may explicitly name the database with each query,
+but the CLI provides a convenience statement, USE <db-name>,
+which will automatically set the database for all future requests. For example:
+
+
+
> USE mydb
+Using database mydb
+>
+
Now future commands will only be run against the mydb database.
+
Writing and exploring data
+
Now that we have a database, InfluxDB is ready to accept queries and writes.
+
First, a short primer on the datastore.
+Data in InfluxDB is organized by “time series”,
+which contain a measured value, like “cpu_load” or “temperature”.
+Time series have zero to many points, one for each discrete sample of the metric.
+Points consist of time (a timestamp), a measurement (“cpu_load”, for example),
+at least one key-value field (the measured value itself, e.g.
+“value=0.64”, or “temperature=21.2”), and zero to many key-value tags containing any metadata about the value (e.g.
+“host=server01”, “region=EMEA”, “dc=Frankfurt”).
+
Conceptually you can think of a measurement as an SQL table,
+where the primary index is always time.
+tags and fields are effectively columns in the table.
+tags are indexed, and fields are not.
+The difference is that, with InfluxDB, you can have millions of measurements,
+you don’t have to define schemas up-front, and null values aren’t stored.
+
Points are written to InfluxDB using the InfluxDB line protocol, which follows the following format:
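```
<measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field2-key>=<field2-value>...] [unix-nano-timestamp]
```

For instance, a first point can be written from the CLI like this (the cpu measurement and its tag values are illustrative examples):

```
> INSERT cpu,host=serverA,region=us_west value=0.64
```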
Note: We did not supply a timestamp when writing our point.
+When no timestamp is supplied for a point, InfluxDB assigns the local current timestamp when the point is ingested.
+That means your timestamp will be different.
+
+
+
Let’s try storing another type of data, with two fields in the same measurement:
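For instance, a point with two fields might be written like this (hypothetical measurement and values):

```
> INSERT temperature,machine=unit42,type=assembly external=25,internal=37
```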
Warning: Using * without a LIMIT clause on a large database can cause performance issues.
+You can use Ctrl+C to cancel a query that is taking too long to respond.
+
+
+
+
InfluxQL has many features and keywords that are not covered here,
+including support for Go-style regex. For example:
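As a sketch, a regex in the FROM clause matches all measurements at once:

```sql
> SELECT * FROM /.*/ LIMIT 1
```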
This is all you need to know to write data into InfluxDB and query it back.
+To learn more about the InfluxDB write protocol,
+check out the guide on Writing Data.
+To further explore the query language,
+check out the guide on Querying Data.
+For more information on InfluxDB concepts, check out the Key Concepts page.
Continuous queries (CQ) are InfluxQL queries that run automatically and
+periodically on realtime data and store query results in a
+specified measurement.
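The basic syntax mirrors the advanced form shown later in this guide, without the RESAMPLE clause:

```sql
CREATE CONTINUOUS QUERY <cq_name> ON <database_name>
BEGIN
  <cq_query>
END
```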
Note: Notice that the cq_query does not require a time range in a WHERE clause.
+InfluxDB automatically generates a time range for the cq_query when it executes the CQ.
+Any user-specified time ranges in the cq_query’s WHERE clause will be ignored
+by the system.
+
+
+
Schedule and coverage
+
Continuous queries operate on real-time data.
+They use the local server’s timestamp, the GROUP BY time() interval, and
+InfluxDB database’s preset time boundaries to determine when to execute and what time
+range to cover in the query.
+
CQs execute at the same interval as the cq_query’s GROUP BY time() interval,
+and they run at the start of the InfluxDB database’s preset time boundaries.
+If the GROUP BY time() interval is one hour, the CQ executes at the start of
+every hour.
+
When the CQ executes, it runs a single query for the time range between
+now() and now() minus the
+GROUP BY time() interval.
+If the GROUP BY time() interval is one hour and the current time is 17:00,
+the query’s time range is between 16:00 and 16:59.999999999.
+
Examples of basic syntax
+
The examples below use the following sample data in the transportation
+database.
+The measurement bus_data stores 15-minute resolution data on the number of bus
+passengers and complaints.
cq_basic calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
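A reconstruction of this CQ’s definition, based on the description above (not a verbatim statement):

```sql
CREATE CONTINUOUS QUERY "cq_basic" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data" GROUP BY time(1h)
END
```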
+
cq_basic executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_basic runs a single query that covers the time range between
+now() and now() minus the GROUP BY time() interval, that is, the time
+range between now() and one hour prior to now().
+
Annotated log output on the morning of August 28, 2016:
cq_basic_rp calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the transportation database,
+the three_weeks RP, and the average_passengers measurement.
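A reconstruction of this CQ’s definition, using the fully qualified INTO destination described above:

```sql
CREATE CONTINUOUS QUERY "cq_basic_rp" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "transportation"."three_weeks"."average_passengers" FROM "bus_data" GROUP BY time(1h)
END
```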
+
cq_basic_rp executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_basic_rp runs a single query that covers the time range between
+now() and now() minus the GROUP BY time() interval, that is, the time
+range between now() and one hour prior to now().
+
Annotated log output on the morning of August 28, 2016:
cq_basic_rp uses CQs and retention policies to automatically downsample data
+and keep those downsampled data for an alternative length of time.
+See the Downsampling and Data Retention
+guide for an in-depth discussion about this CQ use case.
+
Automatically downsampling a database with backreferencing
+
Use a function with a wildcard (*) and INTO query’s
+backreferencing syntax
+to automatically downsample data from all measurements and numerical fields in
+a database.
cq_basic_br calculates the 30-minute average of passengers and complaints
+from every measurement in the transportation database (in this case, there’s only the
+bus_data measurement).
+It stores the results in the downsampled_transportation database.
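A reconstruction of this CQ’s definition; the autogen RP in the destination is an assumption, since the description only names the target database:

```sql
CREATE CONTINUOUS QUERY "cq_basic_br" ON "transportation"
BEGIN
  SELECT mean(*) INTO "downsampled_transportation"."autogen".:MEASUREMENT FROM /.*/ GROUP BY time(30m),*
END
```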
+
cq_basic_br executes at 30 minutes intervals, the same interval as the
+GROUP BY time() interval.
+Every 30 minutes, cq_basic_br runs a single query that covers the time range
+between now() and now() minus the GROUP BY time() interval, that is,
+the time range between now() and 30 minutes prior to now().
+
Annotated log output on the morning of August 28, 2016:
cq_basic_offset calculates the average hourly number of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement.
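A reconstruction of this CQ’s definition, with the 15 minute offset interval passed as the second argument to time():

```sql
CREATE CONTINUOUS QUERY "cq_basic_offset" ON "transportation"
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data" GROUP BY time(1h,15m)
END
```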
+
cq_basic_offset executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+The 15 minute offset interval forces the CQ to execute 15 minutes after the
+default execution time; cq_basic_offset executes at 8:15 instead of 8:00.
+
Every hour, cq_basic_offset runs a single query that covers the time range
+between now() and now() minus the GROUP BY time() interval, that is, the
+time range between now() and one hour prior to now().
+The 15 minute offset interval shifts forward the generated preset time boundaries in the
+CQ’s WHERE clause; cq_basic_offset queries between 7:15 and 8:14.999999999 instead of 7:00 and 7:59.999999999.
+
Annotated log output on the morning of August 28, 2016:
+
+
+
>
+At **8:15** `cq_basic_offset` executes a query with the time range `time >= '7:15' AND time < '8:15'`.
+`cq_basic_offset` writes one point to the `average_passengers` measurement:
+>
+ name: average_passengers
+ ------------------------
+ time mean
+ 2016-08-28T07:15:00Z 7.75
+>
+At **9:15** `cq_basic_offset` executes a query with the time range `time >= '8:15' AND time < '9:15'`.
+`cq_basic_offset` writes one point to the `average_passengers` measurement:
+>
+ name: average_passengers
+ ------------------------
+ time mean
+ 2016-08-28T08:15:00Z 16.75
Notice that the timestamps are for 7:15 and 8:15 instead of 7:00 and 8:00.
+
Common issues with basic syntax
+
Handling time intervals with no data
+
CQs do not write any results for a time interval if no data fall within that
+time range.
+
Note that the basic syntax does not support using
+fill()
+to change the value reported for intervals with no data.
+Basic syntax CQs ignore fill() if it’s included in the CQ query.
+A possible workaround is to use the
+advanced CQ syntax.
+
Resampling previous time intervals
+
The basic CQ runs a single query that covers the time range between now()
+and now() minus the GROUP BY time() interval.
+See the advanced syntax for how to configure the query’s
+time range.
+
Backfilling results for older data
+
CQs operate on realtime data, that is, data with timestamps that occur
+relative to now().
+Use a basic
+INTO query
+to backfill results for data with older timestamps.
+
Missing tags in the CQ results
+
By default, all
+INTO queries
+convert any tags in the source measurement to fields in the destination
+measurement.
+
Include GROUP BY * in the CQ to preserve tags in the destination measurement.
+
Advanced syntax
+
+
+
CREATE CONTINUOUS QUERY <cq_name> ON <database_name>
+RESAMPLE EVERY <interval> FOR <interval>
+BEGIN
+ <cq_query>
+END
CQs operate on real-time data. With the advanced syntax, CQs use the local
+server’s timestamp, the information in the RESAMPLE clause, and the InfluxDB
+server’s preset time boundaries to determine when to execute and what time range to
+cover in the query.
+
CQs execute at the same interval as the EVERY interval in the RESAMPLE
+clause, and they run at the start of InfluxDB’s preset time boundaries.
+If the EVERY interval is two hours, InfluxDB executes the CQ at the top of
+every other hour.
+
When the CQ executes, it runs a single query for the time range between
+now() and now() minus the FOR interval in the RESAMPLE clause.
+If the FOR interval is two hours and the current time is 17:00, the query’s
+time range is between 15:00 and 16:59.999999999.
+
Both the EVERY interval and the FOR interval accept
+duration literals.
+The RESAMPLE clause works with either or both of the EVERY and FOR intervals
+configured.
+CQs default to the relevant
+basic syntax behavior
+if the EVERY interval or FOR interval is not provided (see the first issue in
+Common Issues with Advanced Syntax
+for an anomalous case).
+
Examples of advanced syntax
+
The examples below use the following sample data in the transportation database.
+The measurement bus_data stores 15-minute resolution data on the number of bus
+passengers:
cq_advanced_every calculates the one-hour average of passengers
+from the bus_data measurement and stores the results in the
+average_passengers measurement in the transportation database.
+
cq_advanced_every executes at 30-minute intervals, the same interval as the
+EVERY interval.
+Every 30 minutes, cq_advanced_every runs a single query that covers the time
+range for the current time bucket, that is, the one-hour time bucket that
+intersects with now().
+
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_every calculates the result for the 8:00 time interval
+twice.
+First, it runs at 8:30 and calculates the average for every available data point
+between 8:00 and 9:00 (8, 15, and 15).
+Second, it runs at 9:00 and calculates the average for every available data
+point between 8:00 and 9:00 (8, 15, 15, and 17).
+Because of the way InfluxDB
+handles duplicate points
+, the second result simply overwrites the first result.
+
Configuring time ranges for resampling
+
Use a FOR interval in the RESAMPLE clause to specify the length of the CQ’s
+time range.
cq_advanced_for calculates the 30-minute average of passengers
+from the bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
+
cq_advanced_for executes at 30-minute intervals, the same interval as the
+GROUP BY time() interval.
+Every 30 minutes, cq_advanced_for runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and one hour prior to now().
+
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_for will calculate the result for every time interval
+twice.
+The CQ calculates the average for the 7:30 time interval at 8:00 and at 8:30,
+and it calculates the average for the 8:00 time interval at 8:30 and 9:00.
cq_advanced_every_for calculates the 30-minute average of
+passengers from the bus_data measurement and stores the results in the
+average_passengers measurement in the transportation database.
+
cq_advanced_every_for executes at one-hour intervals, the same interval as the
+EVERY interval.
+Every hour, cq_advanced_every_for runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and 90 minutes prior to now().
+
Annotated log output on the morning of August 28, 2016:
Notice that cq_advanced_every_for will calculate the result for every time
+interval twice.
+The CQ calculates the average for the 7:30 interval at 8:00 and 9:00.
Configuring CQ time ranges and filling empty results
+
Use a FOR interval and fill() to change the value reported for time
+intervals with no data.
+Note that at least one data point must fall within the FOR interval for fill()
+to operate.
+If no data fall within the FOR interval the CQ writes no points to the
+destination measurement.
cq_advanced_for_fill calculates the one-hour average of passengers from the
+bus_data measurement and stores the results in the average_passengers
+measurement in the transportation database.
+Where possible, it writes the value 1000 for time intervals with no results.
+
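The CQ described above can be sketched as follows (the exact statement is inferred from the description; the names match the text):

```sql
CREATE CONTINUOUS QUERY "cq_advanced_for_fill" ON "transportation"
RESAMPLE FOR 2h
BEGIN
  SELECT mean("passengers") INTO "average_passengers" FROM "bus_data"
  GROUP BY time(1h) fill(1000)
END
```

With no EVERY interval, the CQ executes at the GROUP BY time() interval (every hour), and the FOR interval of two hours sets the query's time range.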
cq_advanced_for_fill executes at one-hour intervals, the same interval as the
+GROUP BY time() interval.
+Every hour, cq_advanced_for_fill runs a single query that covers the time
+range between now() and now() minus the FOR interval, that is, the time
+range between now() and two hours prior to now().
+
Annotated log output on the morning of August 28, 2016:
+
+
+
>
+At **6:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '4:00' AND time < '6:00'`.
+`cq_advanced_for_fill` writes nothing to `average_passengers`; `bus_data` has no data
+that fall within that time range.
+>
+At **7:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '5:00' AND time < '7:00'`.
+`cq_advanced_for_fill` writes two points to `average_passengers`:
+>
+name: average_passengers
+------------------------
+time                   mean
+2016-08-28T05:00:00Z   1000   <------ fill(1000)
+2016-08-28T06:00:00Z   3      <------ average of 2 and 4
+>
+[...]
+>
+At **11:00**, `cq_advanced_for_fill` executes a query with the time range `WHERE time >= '9:00' AND time < '11:00'`.
+`cq_advanced_for_fill` writes two points to `average_passengers`:
+>
+name: average_passengers
+------------------------
+time                   mean
+2016-08-28T09:00:00Z   20     <------ average of 20
+2016-08-28T10:00:00Z   1000   <------ fill(1000)
+>
+
At 12:00, cq_advanced_for_fill executes a query with the time range WHERE time >= '10:00' AND time < '12:00'.
+cq_advanced_for_fill writes nothing to average_passengers; bus_data has no data
+that fall within that time range.
Note: fill(previous) doesn’t fill the result for a time interval if the
+previous value is outside the query’s time range.
+See Frequently Asked Questions
+for more information.
+
+
+
Common issues with advanced syntax
+
If the EVERY interval is greater than the GROUP BY time() interval
+
If the EVERY interval is greater than the GROUP BY time() interval, the CQ
+executes at the same interval as the EVERY interval and runs a single query
+that covers the time range between now() and now() minus the EVERY
+interval (not between now() and now() minus the GROUP BY time() interval).
+
For example, if the GROUP BY time() interval is 5m and the EVERY interval
+is 10m, the CQ executes every ten minutes.
+Every ten minutes, the CQ runs a single query that covers the time range
+between now() and now() minus the EVERY interval, that is, the time
+range between now() and ten minutes prior to now().
+
This behavior is intentional and prevents the CQ from missing data between
+execution times.
+
If the FOR interval is less than the execution interval
+
If the FOR interval is less than the GROUP BY time() interval or, if
+specified, the EVERY interval, InfluxDB returns the following error:
To avoid missing data between execution times, the FOR interval must be equal
+to or greater than the GROUP BY time() interval or, if specified, the EVERY
+interval.
+
Currently, this is the intended behavior.
+GitHub Issue #6963
+outlines a feature request for CQs to support gaps in data coverage.
Drop the idle_hands CQ from the telegraf database:
+
+
+
DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
+
Altering continuous queries
+
CQs cannot be altered once they’re created.
+To change a CQ, you must DROP and reCREATE it with the updated settings.
+
Continuous query statistics
+
If query-stats-enabled is set to true in your influxdb.conf, or through the INFLUXDB_CONTINUOUS_QUERIES_QUERY_STATS_ENABLED environment variable, data will be written to _internal with information about when continuous queries ran and their duration.
+Information about CQ configuration settings is available in the Configuration documentation.
+
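In influxdb.conf, this setting lives in the [continuous_queries] section (a minimal sketch; see the Configuration documentation for the full list of options):

```toml
[continuous_queries]
  # Write CQ execution time and duration to the _internal database
  query-stats-enabled = true
```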
+
+
Note: _internal houses internal system data and is meant for internal use.
+The structure of and data stored in _internal can change at any time.
+Use of this data falls outside the scope of official InfluxData support.
+
+
+
Continuous query use cases
+
Downsampling and Data Retention
+
Use CQs with InfluxDB database
+retention policies
+(RPs) to mitigate storage concerns.
+Combine CQs and RPs to automatically downsample high precision data to a lower
+precision and remove the dispensable, high precision data from the database.
Shorten query runtimes by pre-calculating expensive queries with CQs.
+Use a CQ to automatically downsample commonly queried, high precision data to a
+lower precision.
+Queries on lower precision data require fewer resources and return faster.
+
Tip: Pre-calculate queries for your preferred graphing tool to accelerate
+the population of graphs and dashboards.
+
Substituting for a HAVING clause
+
InfluxQL does not support HAVING clauses.
+Get the same functionality by creating a CQ to aggregate the data and querying
+the CQ results to apply the HAVING clause.
+
+
+
Note: InfluxQL supports subqueries which also offer similar functionality to HAVING clauses.
+See Data Exploration for more information.
+
+
+
Example
+
InfluxDB does not accept the following query with a HAVING clause.
+The query calculates the average number of bees at 30 minute intervals and
+requests averages that are greater than 20.
This step performs the mean("bees") part of the query above.
+Because this step creates a CQ, you only need to execute it once.
+
The following CQ automatically calculates the average number of bees at
+30-minute intervals and writes those averages to the mean_bees field in the
+aggregate_bees measurement.
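Putting the two steps together (a sketch; the CQ, database, and measurement names here are illustrative, following the field names in the text):

```sql
-- Step 1 (run once): aggregate the data with a CQ
CREATE CONTINUOUS QUERY "bee_cq" ON "mydb"
BEGIN
  SELECT mean("bees") AS "mean_bees" INTO "aggregate_bees" FROM "farm" GROUP BY time(30m)
END

-- Step 2: query the CQ results to get the HAVING-like filter
SELECT "mean_bees" FROM "aggregate_bees" WHERE "mean_bees" > 20
```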
Some InfluxQL functions
+support nesting
+of other functions.
+Most do not.
+If your function does not support nesting, you can get the same functionality using a CQ to calculate
+the inner-most function.
+Then simply query the CQ results to calculate the outer-most function.
+
+
+
Note: InfluxQL supports subqueries which also offer the same functionality as nested functions.
+See Data Exploration for more information.
+
+
+
Example
+
InfluxDB does not accept the following query with a nested function.
+The query calculates the number of non-null values
+of bees at 30 minute intervals and the average of those counts:
This step performs the count("bees") part of the nested function above.
+Because this step creates a CQ you only need to execute it once.
+
The following CQ automatically calculates the number of non-null values of bees at 30 minute intervals
+and writes those counts to the count_bees field in the aggregate_bees measurement.
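Putting the two steps together (a sketch; the CQ, database, and measurement names here are illustrative, following the field names in the text):

```sql
-- Step 1 (run once): calculate the inner-most function with a CQ
CREATE CONTINUOUS QUERY "bee_cq" ON "mydb"
BEGIN
  SELECT count("bees") AS "count_bees" INTO "aggregate_bees" FROM "farm" GROUP BY time(30m)
END

-- Step 2: apply the outer-most function to the CQ results
SELECT mean("count_bees") FROM "aggregate_bees"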
To see how to combine two InfluxDB features, CQs and retention policies,
+to periodically downsample data and automatically expire the dispensable high
+precision data, see Downsampling and data retention.
+
Kapacitor, InfluxData’s data processing engine, can do the same work as
+continuous queries in InfluxDB databases.
InfluxQL is an SQL-like query language for interacting with data in InfluxDB.
+The following sections detail InfluxQL’s SELECT statement and useful query syntax
+for exploring your data.
$ influx -precision rfc3339 -database NOAA_water_database
+Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell 1.12.3
+>
+
Next, get acquainted with this subsample of the data in the h2o_feet measurement:
+
name: h2o_feet
+--------------
+time                   level description      location       water_level
+2015-08-18T00:00:00Z   between 6 and 9 feet   coyote_creek   8.12
+2015-08-18T00:00:00Z   below 3 feet           santa_monica   2.064
+2015-08-18T00:06:00Z   between 6 and 9 feet   coyote_creek   8.005
+2015-08-18T00:06:00Z   below 3 feet           santa_monica   2.116
+2015-08-18T00:12:00Z   between 6 and 9 feet   coyote_creek   7.887
+2015-08-18T00:12:00Z   below 3 feet           santa_monica   2.028
+
The data in the h2o_feet measurement
+occur at six-minute time intervals.
+The measurement has one tag key
+(location) which has two tag values:
+coyote_creek and santa_monica.
+The measurement also has two fields:
+level description stores string field values
+and water_level stores float field values.
+All of these data are in the NOAA_water_database database.
+
+
+
Disclaimer: The level description field isn’t part of the original NOAA data - we snuck it in there for the sake of having a field key with a special character and string field values.
+
+
+
The basic SELECT statement
+
The SELECT statement queries data from a particular measurement or measurements.
SELECT "<field_key>","<field_key>"
+ Returns more than one field.
+
SELECT "<field_key>","<tag_key>"
+ Returns a specific field and tag.
+The SELECT clause must specify at least one field when it includes a tag.
+
SELECT "<field_key>"::field,"<tag_key>"::tag
+ Returns a specific field and tag.
+The ::[field | tag] syntax specifies the identifier’s type.
+Use this syntax to differentiate between field keys and tag keys that have the same name.
The FROM clause supports several formats for specifying a measurement(s):
+
FROM <measurement_name>
+
+Returns data from a single measurement.
+If you’re using the CLI, InfluxDB queries the measurement in the
+USEd
+database and the DEFAULT retention policy.
+If you’re using the InfluxDB API, InfluxDB queries the
+measurement in the database specified in the db query string parameter
+and the DEFAULT retention policy.
+
FROM <measurement_name>,<measurement_name>
+
+Returns data from more than one measurement.
+
FROM <database_name>.<retention_policy_name>.<measurement_name>
+
+Returns data from a fully qualified measurement.
+Fully qualify a measurement by specifying its database and retention policy.
+
FROM <database_name>..<measurement_name>
+
+Returns data from a measurement in a user-specified database and the DEFAULT
+retention policy.
Identifiers must be double quoted if they contain characters other than [A-z,0-9,_], if they
+begin with a digit, or if they are an InfluxQL keyword.
+While not always necessary, we recommend that you double quote identifiers.
If you’re using the CLI be sure to enter
+USE NOAA_water_database before you run the query.
+The CLI queries the data in the USEd database and the
+DEFAULT retention policy.
+If you’re using the InfluxDB API be sure to set the
+db query string parameter
+to NOAA_water_database.
+If you do not set the rp query string parameter, the InfluxDB API automatically
+queries the database’s DEFAULT retention policy.
+
Select specific tags and fields from a single measurement
The query selects the level description field, the location tag, and the
+water_level field.
+Note that the SELECT clause must specify at least one field when it includes
+a tag.
+
Select specific tags and fields from a single measurement, and provide their identifier type
The query selects the level description field, the location tag, and the
+water_level field from the h2o_feet measurement.
+The ::[field | tag] syntax specifies if the
+identifier is a field or tag.
+Use ::[field | tag] to differentiate between an identical field key and tag key.
+That syntax is not required for most use cases.
The query multiplies water_level’s field values by two and adds four to those
+values.
+Note that InfluxDB follows the standard order of operations.
+See Mathematical Operators
+for more on supported operators.
The query selects data in the NOAA_water_database, the autogen retention
+policy, and the measurement h2o_feet.
+
In the CLI, fully qualify a measurement to query data in a database other
+than the USEd database and in a retention policy other than the
+DEFAULT retention policy.
+In the InfluxDB API, fully qualify a measurement in place of using the db
+and rp query string parameters if desired.
+
Select all data from a measurement in a particular database
The query selects data in the NOAA_water_database, the DEFAULT retention
+policy, and the h2o_feet measurement.
+The .. indicates the DEFAULT retention policy for the specified database.
+
In the CLI, specify the database to query data in a database other than the
+USEd database.
+In the InfluxDB API, specify the database in place of using the db query
+string parameter if desired.
+
Common issues with the SELECT statement
+
Selecting tag keys in the SELECT clause
+
A query requires at least one field key
+in the SELECT clause to return data.
+If the SELECT clause only includes a single tag key or several tag keys, the
+query returns an empty response.
+This behavior is a result of how the system stores data.
+
Example
+
The following query returns no data because it specifies a single tag key (location) in
+the SELECT clause:
+
+
+
SELECT "location" FROM "h2o_feet"
+
To return any data associated with the location tag key, the query’s SELECT
+clause must include at least one field key (water_level):
Syntax
+
+
+
SELECT_clause FROM_clause WHERE <conditional_expression> [(AND|OR) <conditional_expression> [...]]
+
The WHERE clause supports conditional_expressions on fields, tags, and
+timestamps.
+
+
+
Note: InfluxDB does not support using OR in the WHERE clause to specify multiple time ranges. For example, InfluxDB returns an empty response for the following query:
+
+
+
> SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
The WHERE clause supports comparisons against string, boolean, float,
+and integer field values.
+
Single quote string field values in the WHERE clause.
+Queries with unquoted string field values or double quoted string field values
+will not return any data and, in most cases,
+will not return an error.
Single quote tag values in
+the WHERE clause.
+Queries with unquoted tag values or double quoted tag values will not return
+any data and, in most cases,
+will not return an error.
The query returns data from the h2o_feet measurement with field values of
+level description that equal the below 3 feet string.
+InfluxQL requires single quotes around string field values in the WHERE
+clause.
+
Select data that have a specific field key-value and perform basic arithmetic
The query returns data from the h2o_feet measurement with field values of
+water_level plus two that are greater than 11.9.
+Note that InfluxDB follows the standard order of operations.
+See Mathematical Operators
+for more on supported operators.
The query returns data from the h2o_feet measurement where the
+tag key location is set to santa_monica.
+InfluxQL requires single quotes around tag values in the WHERE clause.
+
Select data that have specific field key-values and tag key-values
The query returns data from the h2o_feet measurement where the tag key
+location is not set to santa_monica and where the field values of
+water_level are either less than -0.59 or greater than 9.95.
+The WHERE clause supports the operators AND and OR, and supports
+separating logic with parentheses.
+
Select data that have specific timestamps
+
+
+
SELECT * FROM "h2o_feet" WHERE time > now() - 7d
+
The query returns data from the h2o_feet measurement that have timestamps
+within the past seven days.
+The Time Syntax section on this page
+offers in-depth information on supported time syntax in the WHERE clause.
+
Common issues with the WHERE clause
+
A WHERE clause query unexpectedly returns no data
+
In most cases, this issue is the result of missing single quotes around
+tag values
+or string field values.
+Queries with unquoted or double quoted tag values or string field values will
+not return any data and, in most cases, will not return an error.
+
The first two queries in the code block below attempt to specify the tag value
+santa_monica without any quotes and with double quotes.
+Those queries return no results.
+The third query single quotes santa_monica (this is the supported syntax)
+and returns the expected results.
The first two queries in the code block below attempt to specify the string
+field value at or greater than 9 feet without any quotes and with double
+quotes.
+The first query returns an error because the string field value includes
+white spaces.
+The second query returns no results.
+The third query single quotes at or greater than 9 feet (this is the
+supported syntax) and returns the expected results.
+
+
+
> SELECT "level description" FROM "h2o_feet" WHERE "level description" = at or greater than 9 feet
+
+ERR: error parsing query: found than, expected ; at line 1, char 86
+
+> SELECT "level description" FROM "h2o_feet" WHERE "level description" = "at or greater than 9 feet"
+
+> SELECT "level description" FROM "h2o_feet" WHERE "level description" = 'at or greater than 9 feet'
+
+name: h2o_feet
+--------------
+time                   level description
+2015-08-26T04:00:00Z   at or greater than 9 feet
+[...]
+2015-09-15T22:42:00Z   at or greater than 9 feet
The query uses an InfluxQL function
+to calculate the average water_level for each
+tag value of location in
+the h2o_feet measurement.
+InfluxDB returns results in two series: one for each tag value of location.
+
+
+
Note: In InfluxDB, epoch 0 (1970-01-01T00:00:00Z) is often used as a null timestamp equivalent.
+If you request a query that has no timestamp to return, such as an aggregation function with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
The query uses an InfluxQL function to calculate the average index for
+each combination of the location tag and the randtag tag in the
+h2o_quality measurement.
+Separate multiple tags with a comma in the GROUP BY clause.
The query uses an InfluxQL function
+to calculate the average index for every possible
+tag combination in the h2o_quality
+measurement.
+
Note that the query results are identical to the results of the query in Example 2
+where we explicitly specified the location and randtag tag keys.
+This is because the h2o_quality measurement only has two tag keys.
+
GROUP BY time intervals
+
GROUP BY time() queries group query results by a user-specified time interval.
Basic GROUP BY time() queries require an InfluxQL function
+in the SELECT clause and a time range in the
+WHERE clause.
+Note that the GROUP BY clause must come after the WHERE clause.
+
time(time_interval)
+
The time_interval in the GROUP BY time() clause is a
+duration literal.
+It determines how InfluxDB groups query results over time.
+For example, a time_interval of 5m groups query results into five-minute
+time groups across the time range specified in the WHERE clause.
+
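The preset boundaries are round-number buckets aligned to the epoch; the grouping can be sketched in Python (an illustration, not InfluxDB's implementation):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def preset_boundary(ts, interval):
    """Floor a timestamp to InfluxDB's preset GROUP BY time() boundary:
    buckets start at integer multiples of the interval since the epoch."""
    return EPOCH + ((ts - EPOCH) // interval) * interval

# With time(12m), points at 00:06 and 00:11 fall in the bucket that starts
# at 00:00, and a point at 00:13 falls in the bucket that starts at 00:12.
```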
fill(<fill_option>)
+
fill(<fill_option>) is optional.
+It changes the value reported for time intervals that have no data.
+See GROUP BY time intervals and fill()
+for more information.
+
Coverage:
+
Basic GROUP BY time() queries rely on the time_interval and on the InfluxDB database’s
+preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
+
Examples of basic syntax
+
The examples below use the following subsample of the sample data:
The query uses an InfluxQL function
+to count the number of water_level points with the tag
+location = coyote_creek, and it groups results into 12-minute intervals.
+
The result for each timestamp
+represents a single 12 minute interval.
+The count for the first timestamp covers the raw data between 2015-08-18T00:00:00Z
+and up to, but not including, 2015-08-18T00:12:00Z.
+The count for the second timestamp covers the raw data between 2015-08-18T00:12:00Z
+and up to, but not including, 2015-08-18T00:24:00Z.
+
Group query results into 12-minute intervals and by a tag key
The query uses an InfluxQL function
+to count the number of water_level points.
+It groups results by the location tag and into 12 minute intervals.
+Note that the time interval and the tag key are separated by a comma in the
+GROUP BY clause.
+
The query returns two series of results: one for each
+tag value of the location tag.
+The result for each timestamp represents a single 12 minute interval.
+The count for the first timestamp covers the raw data between 2015-08-18T00:00:00Z
+and up to, but not including, 2015-08-18T00:12:00Z.
+The count for the second timestamp covers the raw data between 2015-08-18T00:12:00Z
+and up to, but not including, 2015-08-18T00:24:00Z.
+
Common issues with basic syntax
+
Unexpected timestamps and values in query results
+
With the basic syntax, InfluxDB relies on the GROUP BY time() interval
+and on the system’s preset time boundaries to determine the raw data included
+in each time interval and the timestamps returned by the query.
+In some cases, this can lead to unexpected results.
The following query covers a 12-minute time range and groups results into 12-minute time intervals, but it returns two results:
+
+
+
> SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-08-18T00:06:00Z' AND time < '2015-08-18T00:18:00Z' GROUP BY time(12m)
+
+name: h2o_feet
+time                   count
+----                   -----
+2015-08-18T00:00:00Z   1     <----- Note that this timestamp occurs before the start of the query's time range
+2015-08-18T00:12:00Z   1
+
Explanation:
+
InfluxDB uses preset round-number time boundaries for GROUP BY intervals that are
+independent of any time conditions in the WHERE clause.
+When it calculates the results, all returned data must occur within the query’s
+explicit time range but the GROUP BY intervals will be based on the preset
+time boundaries.
+
The table below shows the preset time boundary, the relevant GROUP BY time() interval, the
+points included, and the returned timestamp for each GROUP BY time()
+interval in the results.
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
+| --- | --- | --- | --- | --- |
+| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:12:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:12:00Z | 8.005 | 2015-08-18T00:00:00Z |
+| 2 | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:24:00Z | time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:18:00Z | 7.887 | 2015-08-18T00:12:00Z |
+
The first preset 12-minute time boundary begins at 00:00 and ends just before
+00:12.
+Only one raw point (8.005) falls both within the query’s first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 12-minute time boundary begins at 00:12 and ends just before
+00:24.
+Only one raw point (7.887) falls both within the query’s second GROUP BY time() interval and in that
+second time boundary.
+
The advanced GROUP BY time() syntax allows users to shift
+the start time of the InfluxDB database’s preset time boundaries.
+Example 3
+in the Advanced Syntax section continues with the query shown here;
+it shifts forward the preset time boundaries by six minutes such that
+InfluxDB returns:
Advanced GROUP BY time() queries require an InfluxQL function
+in the SELECT clause and a time range in the
+WHERE clause.
+Note that the GROUP BY clause must come after the WHERE clause.
The offset_interval is a
+duration literal.
+It shifts forward or back the InfluxDB database’s preset time boundaries.
+The offset_interval can be positive or negative.
+
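The shifted boundaries can be sketched the same way (illustrative Python, not InfluxDB's implementation): subtract the offset, floor to the interval, and add the offset back.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def offset_boundary(ts, interval, offset):
    """Floor ts to a GROUP BY time(time_interval, offset_interval) boundary:
    the preset epoch-aligned buckets shifted by the offset."""
    shifted = ts - EPOCH - offset
    return EPOCH + (shifted // interval) * interval + offset

# time(18m, 6m) puts a point at 00:07 in the bucket that starts at 00:06;
# time(18m, -12m) produces the same bucket.
```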
fill(<fill_option>)
+
fill(<fill_option>) is optional.
+It changes the value reported for time intervals that have no data.
+See GROUP BY time intervals and fill()
+for more information.
+
Coverage:
+
Advanced GROUP BY time() queries rely on the time_interval, the offset_interval,
+and on the InfluxDB database’s preset time boundaries to determine the raw data included in each time interval
+and the timestamps returned by the query.
+
Examples of advanced syntax
+
The examples below use the following subsample of the sample data:
The query uses an InfluxQL function
+to calculate the average water_level, grouping results into 18 minute
+time intervals, and offsetting the preset time boundaries by six minutes.
+
The time boundaries and returned timestamps for the query without the offset_interval adhere to the InfluxDB database’s preset time boundaries. Let’s first examine the results without the offset:
+
+
+
+
| Time Interval Number | Preset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
+| --- | --- | --- | --- | --- |
+| 1 | time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:18:00Z | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z | 8.005,7.887 | 2015-08-18T00:00:00Z |
+| 2 | time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:36:00Z | <--- same | 7.762,7.635,7.5 | 2015-08-18T00:18:00Z |
+| 3 | time >= 2015-08-18T00:36:00Z AND time < 2015-08-18T00:54:00Z | <--- same | 7.372,7.234,7.11 | 2015-08-18T00:36:00Z |
+| 4 | time >= 2015-08-18T00:54:00Z AND time < 2015-08-18T01:12:00Z | time = 2015-08-18T00:54:00Z | 6.982 | 2015-08-18T00:54:00Z |
+
The first preset 18-minute time boundary begins at 00:00 and ends just before
+00:18.
+Two raw points (8.005 and 7.887) fall both within the first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 18-minute time boundary begins at 00:18 and ends just before
+00:36.
+Three raw points (7.762 and 7.635 and 7.5) fall both within the second GROUP BY time() interval and in that
+second time boundary. In this case, the boundary time range and the interval’s time range are the same.
+
The fourth preset 18-minute time boundary begins at 00:54 and ends just before
+1:12:00.
+One raw point (6.982) falls both within the fourth GROUP BY time() interval and in that
+fourth time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
| Time Interval Number | Offset Time Boundary | GROUP BY time() Interval | Points Included | Returned Timestamp |
+| --- | --- | --- | --- | --- |
+| 1 | time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:24:00Z | <--- same | 8.005,7.887,7.762 | 2015-08-18T00:06:00Z |
+| 2 | time >= 2015-08-18T00:24:00Z AND time < 2015-08-18T00:42:00Z | <--- same | 7.635,7.5,7.372 | 2015-08-18T00:24:00Z |
+| 3 | time >= 2015-08-18T00:42:00Z AND time < 2015-08-18T01:00:00Z | <--- same | 7.234,7.11,6.982 | 2015-08-18T00:42:00Z |
+| 4 | time >= 2015-08-18T01:00:00Z AND time < 2015-08-18T01:18:00Z | NA | NA | NA |
+
+
The six-minute offset interval shifts forward the preset boundary’s time range
+such that the boundary time ranges and the relevant GROUP BY time() interval time ranges are
+always the same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the GROUP BY time() interval time range.
+
Note that offset_interval forces the fourth time boundary to be outside
+the query’s time range so the query returns no results for that last interval.
+
Group query results into 18-minute intervals and shift the preset time boundaries back
The query uses an InfluxQL function
+to calculate the average water_level, grouping results into 18 minute
+time intervals, and offsetting the preset time boundaries by -12 minutes.
+
+
+
Note: The query in Example 2 returns the same results as the query in Example 1, but
+the query in Example 2 uses a negative offset_interval instead of a positive
+offset_interval.
+There are no performance differences between the two queries; feel free to choose the most
+intuitive option when deciding between a positive and negative offset_interval.
+
+
+
The time boundaries and returned timestamps for the query without the offset_interval adhere to InfluxDB database’s preset time boundaries. Let’s first examine the results without the offset:
+
+
+
+
Time Interval Number
+
Preset Time Boundary
+
GROUP BY time() Interval
+
Points Included
+
Returned Timestamp
+
+
+
+
+
1
+
time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:18:00Z
+
time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z
+
8.005,7.887
+
2015-08-18T00:00:00Z
+
+
+
2
+
time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:36:00Z
+
<— same
+
7.762,7.635,7.5
+
2015-08-18T00:18:00Z
+
+
+
3
+
time >= 2015-08-18T00:36:00Z AND time < 2015-08-18T00:54:00Z
+
<— same
+
7.372,7.234,7.11
+
2015-08-18T00:36:00Z
+
+
+
4
+
time >= 2015-08-18T00:54:00Z AND time < 2015-08-18T01:12:00Z
+
time = 2015-08-18T00:54:00Z
+
6.982
+
2015-08-18T00:54:00Z
+
+
+
+
The first preset 18-minute time boundary begins at 00:00 and ends just before
+00:18.
+Two raw points (8.005 and 7.887) fall both within the first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 18-minute time boundary begins at 00:18 and ends just before
+00:36.
+Three raw points (7.762, 7.635, and 7.5) fall both within the second GROUP BY time() interval and in that
+second time boundary. In this case, the boundary time range and the interval's time range are the same.
+
The fourth preset 18-minute time boundary begins at 00:54 and ends just before
+1:12:00.
+One raw point (6.982) falls both within the fourth GROUP BY time() interval and in that
+fourth time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
Time Interval Number
+
Offset Time Boundary
+
GROUP BY time() Interval
+
Points Included
+
Returned Timestamp
+
+
+
+
+
1
+
time >= 2015-08-17T23:48:00Z AND time < 2015-08-18T00:06:00Z
+
NA
+
NA
+
NA
+
+
+
2
+
time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:24:00Z
+
<— same
+
8.005,7.887,7.762
+
2015-08-18T00:06:00Z
+
+
+
3
+
time >= 2015-08-18T00:24:00Z AND time < 2015-08-18T00:42:00Z
+
<— same
+
7.635,7.5,7.372
+
2015-08-18T00:24:00Z
+
+
+
4
+
time >= 2015-08-18T00:42:00Z AND time < 2015-08-18T01:00:00Z
+
<— same
+
7.234,7.11,6.982
+
2015-08-18T00:42:00Z
+
+
+
+
The negative 12-minute offset interval shifts back the preset boundary’s time range
+such that the boundary time ranges and the relevant GROUP BY time() interval time ranges are always the
+same.
+With the offset, each interval performs the calculation on three points, and
+the timestamp returned matches both the start of the boundary time range and the
+start of the GROUP BY time() interval time range.
+
Note that offset_interval forces the first time boundary to be outside
+the query’s time range so the query returns no results for that first interval.
+
Group query results into 12 minute intervals and shift the preset time boundaries forward
The query uses an InfluxQL function
+to count the number of water_level points, grouping results into 12 minute
+time intervals, and offsetting the preset time boundaries by six minutes.
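Such a query might be sketched as follows (the location tag and time range are illustrative):

```sql
SELECT COUNT("water_level") FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
AND time >= '2015-08-18T00:06:00Z' AND time < '2015-08-18T00:18:00Z'
GROUP BY time(12m,6m)
```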
+
Let's first examine the results without the offset.
The time boundaries and returned timestamps for the query without the
+offset_interval adhere to the InfluxDB database's preset time boundaries:
+
+
+
+
Time Interval Number
+
Preset Time Boundary
+
GROUP BY time() Interval
+
Points Included
+
Returned Timestamp
+
+
+
+
+
1
+
time >= 2015-08-18T00:00:00Z AND time < 2015-08-18T00:12:00Z
+
time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:12:00Z
+
8.005
+
2015-08-18T00:00:00Z
+
+
+
2
+
+time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:24:00Z
+
+time >= 2015-08-18T00:12:00Z AND time < 2015-08-18T00:18:00Z
+
7.887
+
2015-08-18T00:12:00Z
+
+
+
+
The first preset 12-minute time boundary begins at 00:00 and ends just before
+00:12.
+Only one raw point (8.005) falls both within the query’s first GROUP BY time() interval and in that
+first time boundary.
+Note that while the returned timestamp occurs before the start of the query’s time range,
+the query result excludes data that occur before the query’s time range.
+
The second preset 12-minute time boundary begins at 00:12 and ends just before
+00:24.
+Only one raw point (7.887) falls both within the query’s second GROUP BY time() interval and in that
+second time boundary.
+
The time boundaries and returned timestamps for the query with the
+offset_interval adhere to the offset time boundaries:
+
+
+
+
Time Interval Number
+
Offset Time Boundary
+
GROUP BY time() Interval
+
Points Included
+
Returned Timestamp
+
+
+
+
+
1
+
time >= 2015-08-18T00:06:00Z AND time < 2015-08-18T00:18:00Z
+
<— same
+
8.005,7.887
+
2015-08-18T00:06:00Z
+
+
+
2
+
time >= 2015-08-18T00:18:00Z AND time < 2015-08-18T00:30:00Z
+
NA
+
NA
+
NA
+
+
+
+
The six-minute offset interval shifts forward the preset boundary’s time range
+such that the preset boundary time range and the relevant GROUP BY time() interval time range are the
+same.
+With the offset, the query returns a single result, and the timestamp returned
+matches both the start of the boundary time range and the start of the GROUP BY time() interval
+time range.
+
Note that offset_interval forces the second time boundary to be outside
+the query’s time range so the query returns no results for that second interval.
+
GROUP BY time intervals and fill()
+
fill() changes the value reported for time intervals that have no data.
By default, a GROUP BY time() interval with no data reports null as its
+value in the output column.
+Note that fill() must go at the end of the GROUP BY clause if you’re
+GROUP(ing) BY several things (for example, both tags and a time interval).
+
fill_option
+
Any numerical value
+
+Reports the given numerical value for time intervals with no data.
+
linear
+
+Reports the results of linear interpolation for time intervals with no data.
+
none
+
+
+
+Reports no timestamp and no value for time intervals with no data.
+
null
+
+
+
+Reports null for time intervals with no data but returns a timestamp. This is the same as the default behavior.
+
previous
+
+
+
+Reports the value from the previous time interval for time intervals with no data.
fill(previous) changes the value reported for the time interval with no data to 3.235,
+the value from the previous time interval.
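As a hedged sketch of the syntax, the following query reports 100 (an illustrative fill value) for any 12-minute interval with no data:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
AND time >= '2015-09-18T16:00:00Z' AND time <= '2015-09-18T16:42:00Z'
GROUP BY time(12m) fill(100)
```

Note that fill() goes at the end of the GROUP BY clause.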
+
+
+
+
+
+
+
Common issues with fill()
+
Queries with fill() when no data fall within the query’s time range
+
Currently, queries ignore fill() if no data fall within the query’s time range.
+This is the expected behavior. An open
+feature request on GitHub
+proposes that fill() should force a return of values even if the query’s time
+range covers no data.
+
Example
+
The following query returns no data because water_level has no points within
+the query’s time range.
+Note that fill(800) has no effect on the query results.
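That query might look like the following (the empty time range and fill value are illustrative):

```sql
SELECT MAX("water_level") FROM "h2o_feet"
WHERE time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z'
GROUP BY time(12m) fill(800)
```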
Queries with fill(previous) when the previous result falls outside the query’s time range
+
fill(previous) doesn’t fill the result for a time interval if the previous
+value is outside the query’s time range.
+
Example
+
The following query covers the time range between 2015-09-18T16:24:00Z and 2015-09-18T16:54:00Z.
+Note that fill(previous) fills the result for 2015-09-18T16:36:00Z with the
+result from 2015-09-18T16:24:00Z.
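That query might be sketched as:

```sql
SELECT MAX("water_level") FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
AND time >= '2015-09-18T16:24:00Z' AND time <= '2015-09-18T16:54:00Z'
GROUP BY time(12m) fill(previous)
```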
The next query shortens the time range in the previous query.
+It now covers the time between 2015-09-18T16:36:00Z and 2015-09-18T16:54:00Z.
+Note that fill(previous) doesn’t fill the result for 2015-09-18T16:36:00Z with the
+result from 2015-09-18T16:24:00Z; the result for 2015-09-18T16:24:00Z is outside the query’s
+shorter time range.
fill(linear) when the previous or following result falls outside the query’s time range
+
fill(linear) doesn’t fill the result for a time interval with no data if the
+previous result or the following result is outside the query’s time range.
+
Example
+
The following query covers the time range between 2016-11-11T21:24:00Z and
+2016-11-11T22:06:00Z. Note that fill(linear) fills the results for the
+2016-11-11T21:36:00Z time interval and the 2016-11-11T21:48:00Z time interval
+using the values from the 2016-11-11T21:24:00Z time interval and the
+2016-11-11T22:00:00Z time interval.
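Such a query might be sketched as follows, assuming a hypothetical pond measurement with a tadpoles field:

```sql
SELECT MEAN("tadpoles") FROM "pond"
WHERE time >= '2016-11-11T21:24:00Z' AND time <= '2016-11-11T22:06:00Z'
GROUP BY time(12m) fill(linear)
```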
The next query shortens the time range in the previous query.
+It now covers the time between 2016-11-11T21:36:00Z and 2016-11-11T22:06:00Z.
+Note that fill(linear) doesn't fill the results for the 2016-11-11T21:36:00Z
+time interval and the 2016-11-11T21:48:00Z time interval; the result for
+2016-11-11T21:24:00Z is outside the query’s shorter time range and InfluxDB
+cannot perform the linear interpolation.
The INTO clause supports several formats for specifying a measurement:
+
INTO <measurement_name>
+
+Writes data to the specified measurement.
+If you're using the CLI, InfluxDB writes the data to the measurement in the
+USEd
+database and the DEFAULT retention policy.
+If you're using the InfluxDB API, InfluxDB writes the data to the
+measurement in the database specified in the db query string parameter
+and the DEFAULT retention policy.
+
INTO <database_name>.<retention_policy_name>.<measurement_name>
+
+Writes data to a fully qualified measurement.
+Fully qualify a measurement by specifying its database and retention policy.
+
INTO <database_name>..<measurement_name>
+
+Writes data to a measurement in a user-specified database and the DEFAULT
+retention policy.
+
INTO <database_name>.<retention_policy_name>.:MEASUREMENT FROM /<regular_expression>/
+
+Writes data to all measurements in the user-specified database and
+retention policy that match the regular expression in the FROM clause.
+:MEASUREMENT is a backreference to each measurement matched in the FROM clause.
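Copying an entire database with the backreference syntax might look like the following (the destination database name is illustrative):

```sql
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT
FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *
```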
Directly renaming a database in InfluxDB is not possible, so a common use for the INTO clause is to move data from one database to another.
+The query above writes all data in the NOAA_water_database and autogen retention policy to the copy_NOAA_water_database database and the autogen retention policy.
+
The backreference syntax (:MEASUREMENT) maintains the source measurement names in the destination database.
+Note that both the copy_NOAA_water_database database and its autogen retention policy must exist prior to running the INTO query.
+See Database Management
+for how to manage databases and retention policies.
+
The GROUP BY * clause preserves tags in the source database as tags in the destination database.
+The following query does not maintain the series context for tags; tags will be stored as fields in the destination database (copy_NOAA_water_database):
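A sketch of that query, which omits GROUP BY *:

```sql
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT
FROM "NOAA_water_database"."autogen"./.*/
```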
When moving large amounts of data, to avoid running out of memory, sequentially
+run INTO queries for different measurements and time boundaries.
+Use the WHERE clause to define time boundaries for each query.
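The sequential, time-bounded queries might look like the following (the measurement and day-long boundaries are illustrative):

```sql
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT
FROM "NOAA_water_database"."autogen"."h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time < '2015-08-19T00:00:00Z' GROUP BY *

SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT
FROM "NOAA_water_database"."autogen"."h2o_feet"
WHERE time >= '2015-08-19T00:00:00Z' AND time < '2015-08-20T00:00:00Z' GROUP BY *
```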
+
+
+
INTO queries without time boundaries fail with the error: ERR: no data received.
+
+
+
+
Move large amounts of data with sequential queries
The query writes its results to a new measurement: h2o_feet_copy_1.
+If you're using the CLI, InfluxDB writes the data to
+the USEd database and the DEFAULT retention policy.
+If you’re using the InfluxDB API, InfluxDB writes the
+data to the database and retention policy specified in the db and rp
+query string parameters.
+If you do not set the rp query string parameter, the InfluxDB API automatically
+writes the data to the database’s DEFAULT retention policy.
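The query being described might be sketched as:

```sql
SELECT "water_level" INTO "h2o_feet_copy_1" FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
```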
+
The response shows the number of points (7605) that InfluxDB writes to h2o_feet_copy_1.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
Write the results of a query to a fully qualified measurement
The query writes its results to a new measurement: h2o_feet_copy_2.
+InfluxDB writes the data to the where_else database and to the autogen
+retention policy.
+Note that both where_else and autogen must exist prior to running the INTO
+query.
+See Database Management
+for how to manage databases and retention policies.
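That query might be sketched as:

```sql
SELECT "water_level" INTO "where_else"."autogen"."h2o_feet_copy_2" FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
```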
+
The response shows the number of points (7605) that InfluxDB writes to h2o_feet_copy_2.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
Write aggregated results to a measurement (downsampling)
The query aggregates data using an
+InfluxQL function and a GROUP BY time() clause.
+It also writes its results to the all_my_averages measurement.
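That query might be sketched as follows (the time range is illustrative):

```sql
SELECT MEAN("water_level") INTO "all_my_averages" FROM "h2o_feet"
WHERE "location" = 'coyote_creek'
AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:30:00Z'
GROUP BY time(12m)
```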
+
The response shows the number of points (3) that InfluxDB writes to all_my_averages.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
The query is an example of downsampling: taking higher precision data,
+aggregating those data to a lower precision, and storing the lower precision
+data in the database.
+Downsampling is a common use case for the INTO clause.
+
Write aggregated results for more than one measurement to a different database (downsampling with backreferencing)
The query aggregates data using an
+InfluxQL function and a GROUP BY time() clause.
+It aggregates data in every measurement that matches the regular expression
+in the FROM clause and writes the results to measurements with the same name in the
+where_else database and the autogen retention policy.
+Note that both where_else and autogen must exist prior to running the INTO
+query.
+See Database management
+for how to manage databases and retention policies.
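That query might be sketched as follows (the regular expression and time range are illustrative):

```sql
SELECT MEAN(*) INTO "where_else"."autogen".:MEASUREMENT
FROM /.*/
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:06:00Z'
GROUP BY time(12m)
```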
+
The response shows the number of points (5) that InfluxDB writes to the where_else
+database and the autogen retention policy.
+The timestamp in the response is meaningless; InfluxDB uses epoch 0
+(1970-01-01T00:00:00Z) as a null timestamp equivalent.
+
The query is an example of downsampling with backreferencing.
+It takes higher precision data from more than one measurement,
+aggregates those data to a lower precision, and stores the lower precision
+data in the database.
+Downsampling with backreferencing is a common use case for the INTO clause.
+
Common issues with the INTO clause
+
Missing data
+
If an INTO query includes a tag key in the SELECT clause, the query converts tags in the current
+measurement to fields in the destination measurement.
+This can cause InfluxDB to overwrite points that were previously differentiated
+by a tag value.
+Note that this behavior does not apply to queries that use the TOP() or BOTTOM() functions.
+The
+Frequently Asked Questions
+document describes that behavior in detail.
+
To preserve tags in the current measurement as tags in the destination measurement,
+GROUP BY the relevant tag key or GROUP BY * in the INTO query.
+
Automating queries with the INTO clause
+
The INTO clause section in this document shows how to manually implement
+queries with an INTO clause.
+See the Continuous Queries
+documentation for how to automate INTO clause queries on real-time data.
+Among other uses,
+Continuous Queries automate the downsampling process.
+
ORDER BY time DESC
+
By default, InfluxDB returns results in ascending time order; the first point
+returned has the oldest timestamp and
+the last point returned has the most recent timestamp.
+ORDER BY time DESC reverses that order such that InfluxDB returns the points
+with the most recent timestamps first.
ORDER BY time DESC must appear after the GROUP BY clause
+if the query includes a GROUP BY clause.
+ORDER BY time DESC must appear after the WHERE clause
+if the query includes a WHERE clause and no GROUP BY clause.
The query returns the points with the most recent timestamps from the
+h2o_feet measurement first.
+Without ORDER BY time DESC, the query would return 2015-08-18T00:00:00Z
+first and 2015-09-18T21:42:00Z last.
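That query might be sketched as:

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica' ORDER BY time DESC
```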
+
Return the newest points first and include a GROUP BY time() clause
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+ORDER BY time DESC returns the most recent 12-minute time intervals
+first.
+
Without ORDER BY time DESC, the query would return
+2015-08-18T00:00:00Z first and 2015-08-18T00:36:00Z last.
+
The LIMIT and SLIMIT clauses
+
LIMIT and SLIMIT limit the number of
+points and the number of
+series returned per query.
N specifies the number of points to return from the specified measurement.
+If N is greater than the number of points in a measurement, InfluxDB returns
+all points from that measurement.
+
Note that the LIMIT clause must appear in the order outlined in the syntax above.
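A LIMIT query might be sketched as:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) LIMIT 2
```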
The query uses an InfluxQL function
+and a GROUP BY clause
+to calculate the average water_level for each tag and for each twelve-minute
+interval in the query’s time range.
+LIMIT 2 requests the two oldest twelve-minute averages (determined by timestamp).
+
Note that without LIMIT 2, the query would return four points per series;
+one for each twelve-minute interval in the query’s time range.
N specifies the number of series to return from the specified measurement.
+If N is greater than the number of series in a measurement, InfluxDB returns
+all series from that measurement.
+
There is an ongoing issue that requires queries with SLIMIT to include GROUP BY *.
+Note that the SLIMIT clause must appear in the order outlined in the syntax above.
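An SLIMIT query might be sketched as follows (GROUP BY *,time(12m) works around the issue noted above):

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) SLIMIT 1
```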
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+SLIMIT 1 requests a single series associated with the h2o_feet measurement.
+
Note that without SLIMIT 1, the query would return results for the two series
+associated with the h2o_feet measurement: location=coyote_creek and
+location=santa_monica.
+
LIMIT and SLIMIT
+
LIMIT <N1> followed by SLIMIT <N2> returns the first <N1> points from <N2> series in the specified measurement.
N1 specifies the number of points to return per measurement.
+If N1 is greater than the number of points in a measurement, InfluxDB returns all points from that measurement.
+
N2 specifies the number of series to return from the specified measurement.
+If N2 is greater than the number of series in a measurement, InfluxDB returns all series from that measurement.
+
There is an ongoing issue that requires queries with LIMIT and SLIMIT to include GROUP BY *.
+Note that the LIMIT and SLIMIT clauses must appear in the order outlined in the syntax above.
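Combining the clauses might look like:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:42:00Z'
GROUP BY *,time(12m) LIMIT 2 SLIMIT 1
```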
The query uses an InfluxQL function
+and a time interval in the GROUP BY clause
+to calculate the average water_level for each twelve-minute
+interval in the query’s time range.
+LIMIT 2 requests the two oldest twelve-minute averages (determined by
+timestamp) and SLIMIT 1 requests a single series
+associated with the h2o_feet measurement.
+
Note that without LIMIT 2 SLIMIT 1, the query would return four points
+for each of the two series associated with the h2o_feet measurement.
+
The OFFSET and SOFFSET clauses
+
+OFFSET and SOFFSET paginate the points and series returned.
Note: InfluxDB returns no results if the WHERE clause includes a time
+range and the OFFSET clause would cause InfluxDB to return points with
+timestamps outside of that time range.
The query returns the fourth, fifth, and sixth points from the h2o_feet measurement.
+If the query did not include OFFSET 3, it would return the first, second,
+and third points from that measurement.
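That query might be sketched as:

```sql
SELECT "water_level","location" FROM "h2o_feet" LIMIT 3 OFFSET 3
```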
This example is pretty involved, so here’s the clause-by-clause breakdown:
+
The SELECT clause specifies an InfluxQL function.
+The FROM clause specifies a single measurement.
+The WHERE clause specifies the time range for the query.
+The GROUP BY clause groups results by all tags (*) and into 12-minute intervals.
+The ORDER BY time DESC clause returns results in descending timestamp order.
+The LIMIT 2 clause limits the number of points returned to two.
+The OFFSET 2 clause excludes the first two averages from the query results.
+The SLIMIT 1 clause limits the number of series returned to one.
+
Without OFFSET 2, the query would return the first two averages of the query results:
N specifies the number of series to paginate.
+The SOFFSET clause requires an SLIMIT clause.
+Using the SOFFSET clause without an SLIMIT clause can cause inconsistent
+query results.
+There is an ongoing issue that requires queries with SLIMIT to include GROUP BY *.
+
+
+
Note: InfluxDB returns no results if the SOFFSET clause paginates
+through more than the total number of series.
The query returns data for the series associated with the h2o_feet
+measurement and the location = santa_monica tag.
+Without SOFFSET 1, the query returns data for the series associated with the
+h2o_feet measurement and the location = coyote_creek tag.
This example is pretty involved, so here’s the clause-by-clause breakdown:
+
The SELECT clause specifies an InfluxQL function.
+The FROM clause specifies a single measurement.
+The WHERE clause specifies the time range for the query.
+The GROUP BY clause groups results by all tags (*) and into 12-minute intervals.
+The ORDER BY time DESC clause returns results in descending timestamp order.
+The LIMIT 2 clause limits the number of points returned to two.
+The OFFSET 2 clause excludes the first two averages from the query results.
+The SLIMIT 1 clause limits the number of series returned to one.
+The SOFFSET 1 clause paginates the series returned.
+
Without SOFFSET 1, the query would return the results for a different series:
By default, InfluxDB stores and returns timestamps in UTC.
+The tz() clause includes the UTC offset or, if applicable, the UTC Daylight Savings Time (DST) offset to the query’s returned timestamps.
+The returned timestamps must be in RFC3339 format for the UTC offset or UTC DST to appear.
+The time_zone parameter follows the TZ syntax in the Internet Assigned Numbers Authority time zone database and it requires single quotes.
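A tz() query might be sketched as follows (the time zone string is illustrative):

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE "location" = 'santa_monica'
AND time >= '2015-08-18T00:00:00Z' AND time <= '2015-08-18T00:18:00Z'
tz('America/Chicago')
```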
Currently, InfluxDB does not support using OR with absolute time in the WHERE
+clause. See the Frequently Asked Questions
+document and the GitHub Issue
+for more information.
+
rfc3339_date_time_string
+
+
+
'YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ'
+
.nnnnnnnnn is optional and is set to .000000000 if not included.
+The RFC3339 date-time string requires single quotes.
+
rfc3339_like_date_time_string
+
+
+
'YYYY-MM-DD HH:MM:SS.nnnnnnnnn'
+
HH:MM:SS.nnnnnnnnn is optional and is set to 00:00:00.000000000 if not included.
+The RFC3339-like date-time string requires single quotes.
+
epoch_time
+
Epoch time is the amount of time that has elapsed since 00:00:00
+Coordinated Universal Time (UTC), Thursday, 1 January 1970.
+
By default, InfluxDB assumes that all epoch timestamps are in nanoseconds.
+Include a duration literal
+at the end of the epoch timestamp to indicate a precision other than nanoseconds.
+
Basic arithmetic
+
All timestamp formats support basic arithmetic.
+Add (+) or subtract (-) a time from a timestamp with a duration literal.
+Note that InfluxQL requires a whitespace between the + or - and the
+duration literal.
+
Examples
+
Specify a time range with RFC3339 date-time strings
The query returns data with timestamps between August 18, 2015 at 00:00:00.000000000 and
+August 18, 2015 at 00:12:00.
+The nanosecond specification in the first timestamp (.000000000)
+is optional.
+
Note that the single quotes around the RFC3339 date-time strings are required.
+
Specify a time range with RFC3339-like date-time strings
The query returns data with timestamps between August 18, 2015 at 00:00:00 and August 18, 2015
+at 00:12:00.
+The first date-time string does not include a time; InfluxDB assumes the time
+is 00:00:00.
+
Note that the single quotes around the RFC3339-like date-time strings are
+required.
The query returns data with timestamps that occur between August 18, 2015
+at 00:00:00 and August 18, 2015 at 00:12:00.
+By default InfluxDB assumes epoch timestamps are in nanoseconds.
+
Specify a time range with second-precision epoch timestamps
The query returns data with timestamps that occur between August 18, 2015
+at 00:00:00 and August 18, 2015 at 00:12:00.
+The s duration literal at the
+end of the epoch timestamps indicates that the epoch timestamps are in seconds.
+
Perform basic arithmetic on an RFC3339-like date-time string
The query returns data with timestamps that occur at least six minutes after
+September 18, 2015 at 21:24:00.
+Note that the whitespace between the + and 6m is required.
The query returns data with timestamps that occur at least six minutes before
+September 18, 2015 at 21:24:00.
+Note that the whitespace between the - and 6m is required.
+
Relative time
+
Use now() to query data with timestamps relative to the server’s current timestamp.
now() is the Unix time of the server at the time the query is executed on that server.
+The whitespace between - or + and the duration literal is required.
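The query described next might be sketched as:

```sql
SELECT "water_level" FROM "h2o_feet"
WHERE time > '2015-09-18T21:18:00Z' AND time < now() + 1000d
```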
The query returns data with timestamps that occur between September 18, 2015
+at 21:18:00 and 1000 days from now().
+The whitespace between + and 1000d is required.
+
Common issues with time syntax
+
Using OR to select multiple time intervals
+
InfluxDB does not support using the OR operator in the WHERE clause to specify multiple time intervals.
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
+
Example
+
Use the CLI to write a point to the NOAA_water_database that occurs after now():
Note that the WHERE clause must provide an alternative upper bound to
+override the default now() upper bound. The following query merely resets
+the lower bound to now() such that the query’s time range is between
+now() and now():
Currently, InfluxQL does not support using regular expressions to match
+non-string field values in the
+WHERE clause,
+databases, and
+retention policies.
+
+
+
Note: Regular expression comparisons are more computationally intensive than exact
+string comparisons; queries with regular expressions are not as performant
+as those without.
The query selects all field keys
+and tag keys that include an l.
+Note that the regular expression in the SELECT clause must match at least one
+field key in order to return results for a tag key that matches the regular
+expression.
+
Currently, there is no syntax to distinguish between regular expressions for
+field keys and regular expressions for tag keys in the SELECT clause.
+The syntax /<regular_expression>/::[field | tag] is not supported.
+
Use a regular expression to specify measurements in the FROM clause
The query uses an InfluxQL function
+to calculate the average degrees for every measurement in the NOAA_water_database
+database that contains the word temperature.
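That query might be sketched as:

```sql
SELECT MEAN("degrees") FROM /temperature/
```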
+
Use a regular expression to specify tag values in the WHERE clause
The query uses an InfluxQL function
+to calculate the average water_level where the tag value of location
+includes an m and water_level is greater than three.
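That query might be sketched as:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE "location" =~ /[m]/ AND "water_level" > 3
```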
+
Use a regular expression to specify a tag with no value in the WHERE clause
+
+
+
SELECT * FROM "h2o_feet" WHERE "location" !~ /./
+
The query selects all data from the h2o_feet measurement where the location
+tag has no value.
+Every data point in the NOAA_water_database has a tag value for location.
+
It’s possible to perform this same query without a regular expression.
+See the
+Frequently Asked Questions
+document for more information.
+
Use a regular expression to specify a tag with a value in the WHERE clause
The query uses an InfluxQL function
+to calculate the average water_level for all data where the field value of
+level description includes the word between.
+
Use a regular expression to specify tag keys in the GROUP BY clause
Field values can be floats, integers, strings, or booleans.
+The :: syntax allows users to specify the field’s type in a query.
+
+
+
Note: Generally, it is not necessary to specify the field value
+type in the SELECT clause.
+In most cases, InfluxDB rejects any writes that attempt to write a field value
+to a field that previously accepted field values of a different type.
+
+
+
It is possible for field value types to differ across shard groups.
+In these cases, it may be necessary to specify the field value type in the
+SELECT clause.
+Please see the
+Frequently Asked Questions
+document for more information on how InfluxDB handles field value type discrepancies.
+
Syntax
+
+
+
SELECT_clause <field_key>::<type> FROM_clause
+
type can be float, integer, string, or boolean.
+In most cases, InfluxDB returns no data if the field_key does not store data of the specified
+type. See Cast Operations for more information.
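The syntax in use might look like:

```sql
SELECT "water_level"::float FROM "h2o_feet" LIMIT 4
```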
The query returns values of the water_level field key that are floats.
+
Cast operations
+
The :: syntax allows users to perform basic cast operations in queries.
+Currently, InfluxDB supports casting field values from integers to
+floats or from floats to integers.
+
Syntax
+
+
+
SELECT_clause <field_key>::<type> FROM_clause
+
type can be float or integer.
+
InfluxDB returns no data if the query attempts to cast an integer or float to a
+string or boolean.
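A cast from float to integer might be sketched as:

```sql
SELECT "water_level"::integer FROM "h2o_feet" LIMIT 4
```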
The h2o_feet measurement in the NOAA_water_database is part of two series.
+The first series is made up of the h2o_feet measurement and the location = coyote_creek tag.
+The second series is made up of the h2o_feet measurement and the location = santa_monica tag.
+
The following query automatically merges those two series when it calculates the average water_level:
A subquery is a query that is nested in the FROM clause of another query.
+Use a subquery to apply a query as a condition in the enclosing query.
+Subqueries offer functionality similar to nested functions and SQL
+HAVING clauses.
+
Syntax
+
+
+
SELECT_clause FROM ( SELECT_statement ) [...]
+
InfluxDB performs the subquery first and the main query second.
+
The main query surrounds the subquery and requires at least the SELECT clause and the FROM clause.
+The main query supports all clauses listed in this document.
+
The subquery appears in the main query’s FROM clause, and it requires surrounding parentheses.
+The subquery supports all clauses listed in this document.
+
InfluxQL supports multiple nested subqueries per main query.
+Sample syntax for multiple subqueries:
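Following the single-subquery syntax above, the nesting takes this shape:

```sql
SELECT_clause FROM ( SELECT_clause FROM ( SELECT_statement ) [...] ) [...]
```

InfluxDB evaluates the innermost subquery first and works outward.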
To improve the performance of InfluxQL queries with time-bound subqueries,
+apply the WHERE time clause to the outer query instead of the inner query.
+For example, the following queries return the same results, but the query with
+time bounds on the outer query is more performant than the query with time
+bounds on the inner query:
Next, InfluxDB performs the main query and calculates the sum of those maximum values: 9.964 + 7.205 = 17.169.
+Notice that the main query specifies max, not water_level, as the field key in the SUM() function.
+
Calculate the MEAN() difference between two fields
The query returns the average of the differences between the number of cats and dogs in the pet_daycare measurement.
+
InfluxDB first performs the subquery.
+The subquery calculates the difference between the values in the cats field and the values in the dogs field,
+and it names the output column difference:
Next, InfluxDB performs the main query and calculates the average of those differences.
+Notice that the main query specifies difference as the field key in the MEAN() function.
+
Calculate several MEAN() values and place a condition on those mean values
The query returns all mean values of the water_level field that are greater than five.
+
InfluxDB first performs the subquery.
+The subquery calculates MEAN() values of water_level from 2015-08-18T00:00:00Z through 2015-08-18T00:30:00Z and groups the results into 12-minute intervals.
+It also names the output column all_the_means:
Next, InfluxDB performs the main query and returns only those mean values that are greater than five.
+Notice that the main query specifies all_the_means as the field key in the SELECT clause.
The query returns the sum of the derivative of average water_level values for each tag value of location.
+
InfluxDB first performs the subquery.
+The subquery calculates the derivative of average water_level values taken at 12-minute intervals.
+It performs that calculation for each tag value of location and names the output column water_level_derivative:
Next, InfluxDB performs the main query and calculates the sum of the water_level_derivative values for each tag value of location.
+Notice that the main query specifies water_level_derivative, not water_level or derivative, as the field key in the SUM() function.
+
Common issues with subqueries
+
Multiple SELECT statements in a subquery
+
InfluxQL supports multiple nested subqueries per main query:
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command. For example:
If you’re looking for SHOW queries (for example, SHOW DATABASES or SHOW RETENTION POLICIES), see Schema Exploration.
+
The examples in the sections below use the InfluxDB Command Line Interface (CLI).
+You can also execute the commands using the InfluxDB API; simply send a GET request to the /query endpoint and include the command in the URL parameter q.
+For more on using the InfluxDB API, see Querying data.
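For example, the CLI command SHOW DATABASES could be issued over the API like this (a sketch assuming a local instance on the default port; adjust the host and add authentication parameters as needed):

```shell
curl -G 'http://localhost:8086/query' --data-urlencode "q=SHOW DATABASES"
```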
+
+
+
Note: When authentication is enabled, only admin users can execute most of the commands listed on this page.
+See the documentation on authentication and authorization for more information.
The WITH, DURATION, REPLICATION, SHARD DURATION, FUTURE LIMIT,
+PAST LIMIT, and NAME clauses are optional and create a single
+retention policy
+associated with the created database.
+If you do not specify one of the clauses after WITH, the relevant behavior
+defaults to the autogen retention policy settings.
+The created retention policy automatically serves as the database’s default retention policy.
+For more information about those clauses, see
+Retention Policy Management.
+
A successful CREATE DATABASE query returns an empty result.
+If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
+
Examples
+
Create a database
+
+
+
CREATE DATABASE "NOAA_water_database"
+
The query creates a database called NOAA_water_database.
+By default, InfluxDB also creates the autogen retention policy and associates it with the NOAA_water_database.
+
Create a database with a specific retention policy
+
+
+
CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
+
The query creates a database called NOAA_water_database.
+It also creates a default retention policy for NOAA_water_database with a DURATION of three days, a replication factor of one, a shard group duration of one hour, and with the name liquid.
+
Delete a database with DROP DATABASE
+
The DROP DATABASE query deletes all of the data, measurements, series, continuous queries, and retention policies from the specified database.
+The query takes the following form:
+
+
+
DROP DATABASE <database_name>
+
Drop the database NOAA_water_database:
+
+
+
DROP DATABASE "NOAA_water_database"
+
A successful DROP DATABASE query returns an empty result.
+If you attempt to drop a database that does not exist, InfluxDB does not return an error.
+
Drop series from the index with DROP SERIES
+
The DROP SERIES query deletes all points from a series in a database,
+and it drops the series from the index.
+
The query takes the following form, where you must specify either the FROM clause or the WHERE clause:
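A sketch of the form and a matching example (placeholders in angle brackets; based on the standard InfluxQL DROP SERIES syntax):

```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'

-- drop all series associated with the h2o_feet measurement
DROP SERIES FROM "h2o_feet"
```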
Delete all data associated with the measurement h2o_feet:
+
+
+
DELETE FROM "h2o_feet"
+
Delete all data associated with the measurement h2o_quality and where the tag randtag equals 3:
+
+
+
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
+
Delete all data in the database that occur before January 01, 2020:
+
+
+
DELETE WHERE time < '2020-01-01'
+
Delete all data associated with the measurement h2o_feet in retention policy one_day:
+
+
+
DELETE FROM "one_day"."h2o_feet"
+
A successful DELETE query returns an empty result.
+
Things to note about DELETE:
+
+
DELETE supports
+regular expressions
+in the FROM clause when specifying measurement names and in the WHERE clause
+when specifying tag values. It does not support regular expressions for the
+retention policy in the FROM clause.
+If deleting a series in a retention policy, DELETE requires that you define
+only one retention policy in the FROM clause.
+
DELETE does not support fields in
+the WHERE clause.
+
If you need to delete points in the future, you must specify that time period
+as DELETE SERIES runs for time < now() by default.
+
+
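For example, a sketch of deleting future-dated points by stating the time period explicitly (measurement name drawn from the sample data):

```sql
-- DELETE covers only time < now() by default; name the future range explicitly
DELETE FROM "h2o_feet" WHERE time > now()
```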
Delete measurements with DROP MEASUREMENT
+
The DROP MEASUREMENT query deletes all data and series from the specified measurement and deletes the
+measurement from the index.
+
The query takes the following form:
+
+
+
DROP MEASUREMENT <measurement_name>
+
Delete the measurement h2o_feet:
+
+
+
DROP MEASUREMENT "h2o_feet"
+
+
+
Note: DROP MEASUREMENT drops all data and series in the measurement.
+It does not drop the associated continuous queries.
+
+
+
A successful DROP MEASUREMENT query returns an empty result.
+
+
+
Currently, InfluxDB does not support regular expressions with DROP MEASUREMENT.
+See GitHub Issue #4275 for more information.
+
+
+
+
Delete a shard with DROP SHARD
+
The DROP SHARD query deletes a shard. It also drops the shard from the
+metastore.
+The query takes the following form:
+
+
+
DROP SHARD <shard_id_number>
+
Delete the shard with the id 1:
+
+
+
DROP SHARD 1
+
A successful DROP SHARD query returns an empty result.
+InfluxDB does not return an error if you attempt to drop a shard that does not
+exist.
+
Retention policy management
+
The following sections cover how to create, alter, and delete retention policies.
+Note that when you create a database, InfluxDB automatically creates a retention policy named autogen which has infinite retention.
+You may disable its auto-creation in the configuration file.
+
Create retention policies with CREATE RETENTION POLICY
The DURATION clause determines how long InfluxDB keeps the data.
+The <duration> is a duration literal
+or INF (infinite).
+The minimum duration for a retention policy is one hour and the maximum
+duration is INF.
+
+
REPLICATION
+
+
+
The REPLICATION clause determines how many independent copies of each point
+are stored in the cluster.
+
+
+
By default, the replication factor n usually equals the number of data nodes. However, if you have four or more data nodes, the default replication factor n is 3.
+
+
+
To ensure data is immediately available for queries, set the replication factor n to less than or equal to the number of data nodes in the cluster.
+
+
+
+
+
Important: If you have four or more data nodes, verify that the database replication factor is correct.
+
+
+
+
Replication factors do not serve a purpose with single node instances.
+
+
SHARD DURATION
+
+
Optional. The SHARD DURATION clause determines the time range covered by a shard group.
+
The <duration> is a duration literal
+and does not support an INF (infinite) duration.
+
By default, the shard group duration is determined by the retention policy’s
+DURATION:
+
+
+
+
+
+Retention Policy’s DURATION  | Shard Group Duration
+-----------------------------|---------------------
+< 2 days                     | 1 hour
+>= 2 days and <= 6 months    | 1 day
+> 6 months                   | 7 days
+
The minimum allowable SHARD GROUP DURATION is 1h.
+If the CREATE RETENTION POLICY query attempts to set the SHARD GROUP DURATION to less than 1h and greater than 0s, InfluxDB automatically sets the SHARD GROUP DURATION to 1h.
+If the CREATE RETENTION POLICY query attempts to set the SHARD GROUP DURATION to 0s, InfluxDB automatically sets the SHARD GROUP DURATION according to the default settings listed above.
The FUTURE LIMIT clause defines a time boundary after and relative to now
+in which points written to the retention policy are accepted. If a point has a
+timestamp after the specified boundary, the point is rejected and the write
+request returns a partial write error.
+
For example, if a write request tries to write data to a retention policy with a
+FUTURE LIMIT 6h and there are points in the request with future timestamps
+greater than 6 hours from now, those points are rejected.
+
PAST LIMIT
+
The PAST LIMIT clause defines a time boundary before and relative to now
+in which points written to the retention policy are accepted. If a point has a
+timestamp before the specified boundary, the point is rejected and the write
+request returns a partial write error.
+
For example, if a write request tries to write data to a retention policy with a
+PAST LIMIT 6h and there are points in the request with timestamps older than
+6 hours, those points are rejected.
+
DEFAULT
+
Sets the new retention policy as the default retention policy for the database.
+This setting is optional.
+
Examples
+
Create a retention policy
+
+
+
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
+
The query creates a retention policy called one_day_only for the database
+NOAA_water_database with a one day duration and a replication factor of one.
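To also make it the database's default, the same statement takes the DEFAULT keyword (a sketch consistent with the example above):

```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1 DEFAULT
```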
The query creates the same retention policy as the one in the example above, but
+sets it as the default retention policy for the database.
+
A successful CREATE RETENTION POLICY query returns an empty response.
+If you attempt to create a retention policy identical to one that already exists, InfluxDB does not return an error.
+If you attempt to create a retention policy with the same name as an existing retention policy but with differing attributes, InfluxDB returns an error.
Modify retention policies with ALTER RETENTION POLICY
+
The ALTER RETENTION POLICY query takes the following form, where you must declare at least one of the retention policy attributes DURATION, REPLICATION, SHARD DURATION, FUTURE LIMIT, PAST LIMIT, or DEFAULT:
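A sketch of that form (placeholders in angle brackets; every clause after the database name is optional, but at least one must be present):

```sql
ALTER RETENTION POLICY <retention_policy_name> ON <database_name>
  DURATION <duration> REPLICATION <n> SHARD DURATION <duration>
  FUTURE LIMIT <duration> PAST LIMIT <duration> DEFAULT
```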
Delete the retention policy what_is_time in the NOAA_water_database database:
+
+
+
DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
+
A successful DROP RETENTION POLICY query returns an empty result.
+If you attempt to drop a retention policy that does not exist, InfluxDB does not return an error.
To explore the query language further, these instructions help you create a database, then
+download sample data and write it to that database within your InfluxDB installation.
+The sample data is then used and referenced in Data Exploration,
+Schema Exploration, and Functions.
+
Creating a database
+
If you’ve installed InfluxDB locally, the influx command should be available via the command line.
+Executing influx will start the CLI and automatically connect to the local InfluxDB instance
+(assuming you have already started the server with service influxdb start or by running influxd directly).
+The output should look like this:
+
+
+
$ influx -precision rfc3339
+Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell 1.12.3
+>
+
+
+
Notes:
+
+
+
+
The InfluxDB API runs on port 8086 by default.
+Therefore, influx will connect to port 8086 and localhost by default.
+If you need to alter these defaults, run influx --help.
+
The -precision argument specifies the format/precision of any returned timestamps.
+In the example above, rfc3339 tells InfluxDB to return timestamps in RFC3339 format (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ).
+
+
The command line is now ready to take input in the form of Influx Query Language (a.k.a. InfluxQL) statements.
+To exit the InfluxQL shell, type exit and hit return.
+
A fresh install of InfluxDB has no databases (apart from the system _internal),
+so creating one is our first task.
+You can create a database with the CREATE DATABASE <db-name> InfluxQL statement,
+where <db-name> is the name of the database you wish to create.
+Names of databases can contain any unicode character as long as the string is double-quoted.
+Names can also be left unquoted if they contain only ASCII letters,
+digits, or underscores and do not begin with a digit.
+
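For example (hypothetical database names):

```sql
-- unquoted: ASCII letters, digits, and underscores only, not beginning with a digit
CREATE DATABASE my_database_1

-- double quotes permit any unicode characters
CREATE DATABASE "1-my database ☎"
```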
Throughout the query language exploration, we’ll use the database name NOAA_water_database:
+
+
+
CREATE DATABASE NOAA_water_database
+exit
+
Download and write the data to InfluxDB
+
From your terminal, download the text file that contains the data in line protocol format:
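A sketch of the download and import steps (the URL and CLI flags are assumptions based on the InfluxDB 1.x sample-data tutorial; verify them for your version):

```shell
# download the NOAA sample data in line protocol format
curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt

# write the data to the database with second-level timestamp precision
influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
```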
Note that the measurements average_temperature, h2o_pH, h2o_quality, and h2o_temperature contain fictional data.
+Those measurements serve to illuminate query functionality in Schema Exploration.
+
The h2o_feet measurement is the only measurement that contains the NOAA data.
+Please note that the level description field isn’t part of the original NOAA data - we snuck it in there for the sake of having a field key with a special character and string field values.
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM FUTURE GRANT GRANTS GROUP GROUPS
+IN INF INSERT INTO KEY KEYS
+KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
+OFFSET ON ORDER PASSWORD PAST POLICY
+POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
+RESAMPLE RETENTION REVOKE SELECT SERIES SET
+SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
+SUBSCRIPTIONS TAG TO USER USERS VALUES
+WHERE WITH WRITE
+
If you use an InfluxQL keyword as an
+identifier you will need to
+double quote that identifier in every query.
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+For more information, see Frequently Asked Questions.
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals are not currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents are not currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (i.e., \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by a duration unit listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
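For example, a duration literal with mixed units (illustrative query using the sample data):

```sql
-- 1h30m is one hour and thirty minutes
SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h30m
```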
The date and time literal format is not specified in EBNF like the rest of this document.
+It is specified using Go’s date / time parsing format, which is a reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL does not support using regular expressions to match
+non-string field values in the
+WHERE clause,
+databases, and
+retention policies.
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon.
-- Set default retention policy for mydb to 1h.cpu.
+ALTER RETENTION POLICY "1h.cpu" ON "mydb" DEFAULT
+
+-- Change duration and replication factor.
+-- REPLICATION (replication factor) not valid for OSS instances.
+ALTER RETENTION POLICY "policy1" ON "somedb" DURATION 1h REPLICATION 4
+
+-- Change future and past limits.
+ALTER RETENTION POLICY "policy1" ON "somedb" FUTURE LIMIT 6h PAST LIMIT 6h
-- selects from DEFAULT retention policy and writes into 6_months retention policy
+CREATE CONTINUOUS QUERY "10m_event_count"
+ON "db_name"
+BEGIN
+  SELECT count("value")
+  INTO "6_months"."events"
+  FROM "events"
+  GROUP BY time(10m)
+END;
+
+-- this selects from the output of one continuous query in one retention policy and outputs to another series in another retention policy
+CREATE CONTINUOUS QUERY "1h_event_count"
+ON "db_name"
+BEGIN
+  SELECT sum("count") AS "count"
+  INTO "2_years"."events"
+  FROM "6_months"."events"
+  GROUP BY time(1h)
+END;
+
+-- this customizes the resample interval so the interval is queried every 10s and intervals are resampled until 2m after their start time
+-- when resample is used, at least one of "EVERY" or "FOR" must be used
+CREATE CONTINUOUS QUERY "cpu_mean"
+ON "db_name"
+RESAMPLE EVERY 10s FOR 2m
+BEGIN
+  SELECT mean("value")
+  INTO "cpu_mean"
+  FROM "cpu"
+  GROUP BY time(1m)
+END;
When using both FUTURE LIMIT and PAST LIMIT clauses, FUTURE LIMIT must appear before PAST LIMIT.
+
+
+
+
+
+
Replication factors do not serve a purpose with single node instances.
+
+
Examples
+
+
+
-- Create a database called foo
+CREATE DATABASE "foo"
+
+-- Create a database called bar with a new DEFAULT retention policy and specify
+-- the duration, replication, shard group duration, and name of that retention policy
+CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
+
+-- Create a database called mydb with a new DEFAULT retention policy and specify
+-- the name of that retention policy
+CREATE DATABASE "mydb" WITH NAME "myrp"
+
+-- Create a database called bar with a new retention policy named "myrp", and
+-- specify the duration, future and past limits, and name of that retention policy
+CREATE DATABASE "bar" WITH DURATION 1d FUTURE LIMIT 6h PAST LIMIT 6h NAME "myrp"
When using both FUTURE LIMIT and PAST LIMIT clauses, FUTURE LIMIT must appear before PAST LIMIT.
+
+
+
+
+
+
Replication factors do not serve a purpose with single node instances.
+
+
Examples
+
+
+
-- Create a retention policy.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2
+
+-- Create a retention policy and set it as the DEFAULT.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFAULT
+
+-- Create a retention policy and specify the shard group duration.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
+
+-- Create a retention policy and specify future and past limits.
+CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h FUTURE LIMIT 6h PAST LIMIT 6h
+
CREATE SUBSCRIPTION
+
Subscriptions tell InfluxDB to send all the data it receives to Kapacitor.
-- Create a SUBSCRIPTION on database 'mydb' and retention policy 'autogen' that sends data to 'example.com:9090' via UDP.
+CREATE SUBSCRIPTION "sub0" ON "mydb"."autogen" DESTINATIONS ALL 'udp://example.com:9090'
+
+-- Create a SUBSCRIPTION on database 'mydb' and retention policy 'autogen' that round robins the data to 'h1.example.com:9090' and 'h2.example.com:9090'.
+CREATE SUBSCRIPTION "sub0" ON "mydb"."autogen" DESTINATIONS ANY 'udp://h1.example.com:9090', 'udp://h2.example.com:9090'
-- Create a normal database user.
+CREATE USER "jdoe" WITH PASSWORD '1337password'
+
+-- Create an admin user.
+-- Note: Unlike the GRANT statement, the "PRIVILEGES" keyword is required here.
+CREATE USER "jdoe" WITH PASSWORD '1337password' WITH ALL PRIVILEGES
+
+
+
Note: The password string must be wrapped in single quotes.
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL does not support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
Note: EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or CSV is not accounted for.
+
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
+Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and the required memory.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access (in InfluxDB Enterprise, shards may be on remote nodes).
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance: a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes 3 cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of blocks decoded and their size (in bytes) on disk. The following block types are supported:
Refers to the group of commands used to estimate or count exactly the cardinality of measurements, series, tag keys, tag key values, and field keys.
+
The SHOW CARDINALITY commands are available in two variations: estimated and exact. Estimated values are calculated using sketches and are a safe default for all cardinality sizes. Exact values are counts directly from TSM (Time-Structured Merge Tree) data, but are expensive to run for high cardinality data. Unless required, use the estimated variety.
+
Filtering by time is only supported when Time Series Index (TSI) is enabled on a database.
+
See the specific SHOW CARDINALITY commands for details:
Estimates or counts exactly the cardinality of the field key set for the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when Time Series Index (TSI) is enabled and time is not supported in the WHERE clause.
+
+
+
+
+
+show_field_key_cardinality_stmt = "SHOW FIELD KEY CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
+
+show_field_key_exact_cardinality_stmt = "SHOW FIELD KEY EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
+
Examples
+
+
+
-- show estimated cardinality of the field key set of current database
+SHOW FIELD KEY CARDINALITY
+-- show exact cardinality on field key set of specified database
+SHOW FIELD KEY EXACT CARDINALITY ON mydb
+
SHOW FIELD KEYS
+
+
+
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
+SHOW FIELD KEYS
+
+-- show field keys and field value data types from specified measurement
+SHOW FIELD KEYS FROM "cpu"
+
SHOW GRANTS
+
+
+
show_grants_stmt = "SHOW GRANTS FOR" user_name .
+
Example
+
+
+
-- show grants for jdoe
+SHOW GRANTS FOR "jdoe"
+
SHOW MEASUREMENT CARDINALITY
+
Estimates or counts exactly the cardinality of the measurement set for the current database unless a database is specified using the ON <database> option.
+
+
+
+Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled and time is not supported in the WHERE clause.
-- show estimated cardinality of measurement set on current database
+SHOW MEASUREMENT CARDINALITY
+-- show exact cardinality of measurement set on specified database
+SHOW MEASUREMENT EXACT CARDINALITY ON mydb
-- show all measurements
+SHOW MEASUREMENTS
+
+-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
+SHOW MEASUREMENTS WHERE "region" = 'uswest' AND "host" = 'serverA'
+
+-- show measurements that start with 'h2o'
+SHOW MEASUREMENTS WITH MEASUREMENT =~ /h2o.*/
+
SHOW QUERIES
+
+
+
show_queries_stmt = "SHOW QUERIES" .
+
Example
+
+
+
-- show all currently-running queries
+SHOW QUERIES
ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is not supported in the WHERE clause.
-- show estimated cardinality of the series on current database
+SHOW SERIES CARDINALITY
+-- show estimated cardinality of the series on specified database
+SHOW SERIES CARDINALITY ON mydb
+-- show exact series cardinality
+SHOW SERIES EXACT CARDINALITY
+-- show exact series cardinality on specified database
+SHOW SERIES EXACT CARDINALITY ON mydb
id column: Shard IDs that belong to the specified database and retention policy.
+
+shard_group column: Group number that a shard belongs to. Shards in the same shard group have the same start_time and end_time. This interval indicates how long the shard is active, and the expiry_time column shows when the shard group expires. No timestamps appear under expiry_time if the retention policy duration is set to infinite.
+
owners column: Shows the data nodes that own a shard. The number of nodes that own a shard is equal to the replication factor. In this example, the replication factor is 3, so 3 nodes own each shard.
+
+
SHOW STATS
+
+Returns detailed statistics for the available (enabled) components of an InfluxDB node.
+
Statistics returned by SHOW STATS are stored in memory and reset to zero when the node is restarted,
+but SHOW STATS is triggered every 10 seconds to populate the _internal database.
+
The SHOW STATS command does not list index memory usage;
+use the SHOW STATS FOR 'indexes' command instead.
For the specified component (<component>), the command returns available statistics.
+For the runtime component, the command returns an overview of memory usage by the InfluxDB system,
+using the Go runtime package.
+
SHOW STATS FOR ‘indexes’
+
Returns an estimate of memory use of all indexes.
+Index memory use is not reported with SHOW STATS because it is a potentially expensive operation.
+
SHOW SUBSCRIPTIONS
+
+
+
show_subscriptions_stmt = "SHOW SUBSCRIPTIONS" .
+
Example
+
+
+
SHOW SUBSCRIPTIONS
+
SHOW TAG KEY CARDINALITY
+
Estimates or counts exactly the cardinality of tag key set on the current database unless a database is specified using the ON <database> option.
+
+
+
+
+
ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled and time is not supported in the WHERE clause.
-- show all tag keys
+SHOW TAG KEYS
+
+-- show all tag keys from the cpu measurement
+SHOW TAG KEYS FROM "cpu"
+
+-- show all tag keys from the cpu measurement where the region key = 'uswest'
+SHOW TAG KEYS FROM "cpu" WHERE "region" = 'uswest'
+
+-- show all tag keys where the host key = 'serverA'
+SHOW TAG KEYS WHERE "host" = 'serverA'
+
+-- show specific tag keys
+SHOW TAG KEYS WITH KEY IN ("region", "host")
-- show all tag values across all measurements for the region tag
+SHOW TAG VALUES WITH KEY = "region"
+
+-- show tag values from the cpu measurement for the region tag
+SHOW TAG VALUES FROM "cpu" WITH KEY = "region"
+
+-- show tag values across all measurements for all tag keys that do not include the letter c
+SHOW TAG VALUES WITH KEY !~ /.*c.*/
+
+-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
+SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region", "host") WHERE "service" = 'redis'
+
SHOW TAG VALUES CARDINALITY
+
Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the ON <database> option.
+
+
+
+
+
ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled.
-- show estimated tag key values cardinality for a specified tag key
+SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
+
+-- show exact tag key values cardinality for a specified tag key
+SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
Use comments with InfluxQL statements to describe your queries.
+
+
A single line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span several lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
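Both comment styles can appear in the same query; for example (the measurement and field names here are illustrative):

```sql
-- count values of the water_level field
SELECT COUNT("water_level") FROM "h2o_feet" /* this comment
spans a second line */
```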
Once you understand the language itself, it’s important to know how these
+language constructs are implemented in the query engine. This gives you an
+intuitive sense for how results will be processed and how to create efficient
+queries.
+
The life cycle of a query looks like this:
+
+
+
InfluxQL query string is tokenized and then parsed into an abstract syntax
+tree (AST). This is the code representation of the query itself.
+
+
+
The AST is passed to the QueryExecutor which directs queries to the
+appropriate handlers. For example, queries related to meta data are executed
+by the meta service and SELECT statements are executed by the shards
+themselves.
+
+
+
The query engine then determines the shards that match the SELECT
+statement’s time range. From these shards, iterators are created for each
+field in the statement.
+
+
+
Iterators are passed to the emitter which drains them and joins the resulting
+points. The emitter’s job is to convert simple time/value points into the
+more complex result objects that are returned to the client.
+
+
+
Understanding iterators
+
Iterators are at the heart of the query engine. They provide a simple interface
+for looping over a set of points. For example, this is an iterator over Float
+points:
+
+
+
type FloatIterator interface {
+ Next() *FloatPoint
+}
+
These iterators are created through the IteratorCreator interface:
+
+
+
type IteratorCreator interface {
+ CreateIterator(opt *IteratorOptions) (Iterator, error)
+}
+
The IteratorOptions provide arguments about field selection, time ranges,
+and dimensions that the iterator creator can use when planning an iterator.
+The IteratorCreator interface is used at many levels such as the Shards,
+Shard, and Engine. This allows optimizations to be performed when applicable
+such as returning a precomputed COUNT().
+
Iterators aren’t just for reading raw data from storage though. Iterators can be
+composed so that they provide additional functionality around an input
+iterator. For example, a DistinctIterator can compute the distinct values for
+each time window for an input iterator. Or a FillIterator can generate
+additional points that are missing from an input iterator.
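As an illustrative sketch of this composition pattern (using simplified types, not the actual influxql package, whose point types carry more fields), a wrapping iterator that limits its input might look like:

```go
package main

import "fmt"

// FloatPoint is a simplified (time, value) point.
type FloatPoint struct {
	Time  int64
	Value float64
}

// FloatIterator matches the interface shown above.
type FloatIterator interface {
	Next() *FloatPoint
}

// sliceIterator yields points from an in-memory slice; a stand-in
// for an iterator reading raw data from storage.
type sliceIterator struct {
	points []FloatPoint
	i      int
}

func (it *sliceIterator) Next() *FloatPoint {
	if it.i >= len(it.points) {
		return nil
	}
	p := &it.points[it.i]
	it.i++
	return p
}

// limitIterator wraps any FloatIterator and stops after n points,
// showing how functionality composes around an input iterator.
type limitIterator struct {
	input FloatIterator
	n     int
}

func (it *limitIterator) Next() *FloatPoint {
	if it.n <= 0 {
		return nil
	}
	it.n--
	return it.input.Next()
}

func main() {
	src := &sliceIterator{points: []FloatPoint{{1, 1.5}, {2, 2.5}, {3, 3.5}}}
	it := &limitIterator{input: src, n: 2}
	for p := it.Next(); p != nil; p = it.Next() {
		fmt.Println(p.Time, p.Value)
	}
}
```

Because the wrapper only depends on the FloatIterator interface, any iterator (raw storage, DistinctIterator, FillIterator) can be substituted as its input.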
+
This composition also lends itself well to aggregation. For example, a statement
+such as this:
+
+
+
SELECT MEAN(value) FROM cpu GROUP BY time(10m)
+
In this case, MEAN(value) is a MeanIterator wrapping an iterator from the
+underlying shards. However, we can add an additional iterator to determine
+the derivative of the mean:
+
+
+
SELECT DERIVATIVE(MEAN(value), 20m) FROM cpu GROUP BY time(10m)
+
Understanding cursors
+
A cursor identifies data by shard in tuples (time, value) for a single series (measurement, tag set, and field). The cursor traverses data stored as a log-structured merge-tree and handles deduplication across levels, tombstones for deleted data, and merging the cache (Write Ahead Log). A cursor sorts the (time, value) tuples by time in ascending or descending order.
+
For example, a query that evaluates one field for 1,000 series over 3 shards constructs a minimum of 3,000 cursors (1,000 per shard).
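The level-merging and deduplication behavior can be sketched in simplified form (this is illustrative, not the storage engine's actual code): merge two time-ascending streams, letting the newer level, such as the cache, win on duplicate timestamps.

```go
package main

import "fmt"

// FloatPoint is a simplified (time, value) tuple.
type FloatPoint struct {
	Time  int64
	Value float64
}

// mergeDedup merges two time-ascending streams, preferring the newer
// stream (for example, the in-memory cache) when timestamps collide,
// a simplified version of what a cursor does across LSM levels.
func mergeDedup(older, newer []FloatPoint) []FloatPoint {
	out := make([]FloatPoint, 0, len(older)+len(newer))
	i, j := 0, 0
	for i < len(older) && j < len(newer) {
		switch {
		case older[i].Time < newer[j].Time:
			out = append(out, older[i])
			i++
		case older[i].Time > newer[j].Time:
			out = append(out, newer[j])
			j++
		default: // same timestamp: the newer level wins
			out = append(out, newer[j])
			i++
			j++
		}
	}
	// Append whichever stream has points remaining.
	out = append(out, older[i:]...)
	out = append(out, newer[j:]...)
	return out
}

func main() {
	older := []FloatPoint{{1, 1.0}, {2, 2.0}, {3, 3.0}}
	newer := []FloatPoint{{2, 20.0}, {4, 40.0}}
	fmt.Println(mergeDedup(older, newer))
}
```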
+
Understanding auxiliary fields
+
Because InfluxQL allows users to use selector functions such as FIRST(),
+LAST(), MIN(), and MAX(), the engine must provide a way to return related
+data at the same time with the selected point.
+
For example, in this query:
+
+
+
SELECT FIRST(value), host FROM cpu GROUP BY time(1h)
+
We are selecting the first value that occurs every hour but we also want to
+retrieve the host associated with that point. Since the Point types only
+specify a single typed Value for efficiency, we push the host into the
+auxiliary fields of the point. These auxiliary fields are attached to the point
+until it is passed to the emitter where the fields get split off to their own
+iterator.
+
Built-in iterators
+
InfluxDB provides many helper iterators for building queries:
+
+
+
Merge Iterator - This iterator combines one or more iterators into a single
+new iterator of the same type. This iterator guarantees that all points
+within a window will be output before starting the next window but does not
+provide ordering guarantees within the window. This allows for fast access
+for aggregate queries which do not need stronger sorting guarantees.
+
+
+
Sorted Merge Iterator - This iterator also combines one or more iterators
+into a new iterator of the same type. However, this iterator guarantees
+time ordering of every point. This makes it slower than the MergeIterator
+but this ordering guarantee is required for non-aggregate queries which
+return the raw data points.
+
+
+
Limit Iterator - This iterator limits the number of points per name/tag
+group. This is the implementation of the LIMIT & OFFSET syntax.
+
+
+
Fill Iterator - This iterator injects extra points if they are missing from
+the input iterator. It can provide null points, points with the previous
+value, or points with a specific value.
+
+
+
Buffered Iterator - This iterator provides the ability to “unread” a point
+back onto a buffer so it can be read again next time. This is used extensively
+to provide lookahead for windowing.
+
+
+
Reduce Iterator - This iterator calls a reduction function for each point in
+a window. When the window is complete then all points for that window are
+output. This is used for simple aggregate functions such as COUNT().
+
+
+
Reduce Slice Iterator - This iterator collects all points for a window first
+and then passes them all to a reduction function at once. The results are
+returned from the iterator. This is used for aggregate functions such as
+DERIVATIVE().
+
+
+
Transform Iterator - This iterator calls a transform function for each point
+from an input iterator. This is used for executing binary expressions.
+
+
+
Dedupe Iterator - This iterator only outputs unique points. It is resource
+intensive so it is only used for small queries such as meta query statements.
+
+
+
Call iterators
+
Function calls in InfluxQL are implemented at two levels. Some calls can be
+wrapped at multiple layers to improve efficiency. For example, a COUNT() can
+be performed at the shard level and then multiple CountIterators can be
+wrapped with another CountIterator to compute the count of all shards. These
+iterators can be created using NewCallIterator().
+
Some iterators are more complex or need to be implemented at a higher level.
+For example, the DERIVATIVE() needs to retrieve all points for a window first
+before performing the calculation. This iterator is created by the engine itself
+and is never requested to be created by the lower levels.
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for InfluxDB and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command.
The influx command line interface (CLI) provides an interactive shell for the HTTP API associated with influxd.
+Use influx to write data (manually or from a file), query data interactively, view query output in different formats, and manage resources in InfluxDB.
To access the CLI, first launch the influxd database process and then launch influx in your terminal.
+
+
+
influx
+
If successfully connected to an InfluxDB node, the output is the following:
+
+
+
Connected to http://localhost:8086 version 1.12.3
+InfluxDB shell version: 1.12.3
+>
+
The versions of InfluxDB and the CLI should be identical. If not, parsing issues can occur with queries.
+
In the prompt, you can enter InfluxQL queries as well as CLI-specific commands.
+Enter help to get a list of available commands.
+Use Ctrl+C to cancel a long-running InfluxQL query.
+
Environment Variables
+
The following environment variables can be used to configure settings used by the influx client. They can be specified in lower or upper case; the upper-case version takes precedence.
+
HTTP_PROXY
+
Defines the proxy server to use for HTTP.
+
Value format: [protocol://]<host>[:port]
+
+
+
HTTP_PROXY=http://localhost:1234
+
HTTPS_PROXY
+
Defines the proxy server to use for HTTPS. Takes precedence over HTTP_PROXY for HTTPS.
+
Value format: [protocol://]<host>[:port]
+
+
+
HTTPS_PROXY=https://localhost:1443
+
NO_PROXY
+
List of host names that should not go through any proxy. If set to an asterisk '*' only, it matches all hosts.
+
Value format: comma-separated list of hosts
+
+
+
NO_PROXY=123.45.67.89,123.45.67.90
+
influx Arguments
+
Arguments specify connection, write, import, and output options for the CLI session.
+
influx provides the following arguments:
+
-h, -help
+List influx arguments
+
-compressed
+Set to true if the import file is compressed.
+Use with -import.
+
-consistency 'any|one|quorum|all'
+Set the write consistency level.
+
-database 'database name'
+The database to which influx connects.
+
-execute 'command'
+Execute an InfluxQL command and quit.
+See -execute.
+
-format 'json|csv|column'
+Specifies the format of the server responses.
+See -format.
+
-host 'host name'
+The host to which influx connects.
+By default, InfluxDB runs on localhost.
-password 'password'
+The password influx uses to connect to the server.
+influx will prompt for a password if you leave it blank (-password '').
+Alternatively, set the password for the CLI with the INFLUX_PASSWORD environment
+variable.
+
-path
+The path to the file to import.
+Use with -import.
+
-port 'port #'
+The port to which influx connects.
+By default, InfluxDB runs on port 8086.
+
-pps
+How many points per second the import will allow.
+By default, pps is zero and influx will not throttle importing.
+Use with -import.
+
-precision 'rfc3339|h|m|s|ms|u|ns'
+Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds).
+Precision defaults to nanoseconds.
+
+
+
Note: Setting the precision to rfc3339 (-precision rfc3339) works with the -execute option, but it does not work with the -import option. All other precision formats (e.g., h, m, s, ms, u, and ns) work with the -execute and -import options.
+
+
+
-pretty
+Turns on pretty print for the json format.
+
-ssl
+Use HTTPS for requests.
+
-unsafeSsl
+Disables SSL certificate verification.
+Use when connecting over HTTPS with a self-signed certificate.
+
-username 'username'
+The username that influx uses to connect to the server.
+Alternatively, set the username for the CLI with the INFLUX_USERNAME environment variable.
+
-version
+Display the InfluxDB version and exit.
+
The following sections provide detailed examples for some arguments, including -execute, -format, and -import.
Optional: DDL (Data Definition Language): Contains the InfluxQL commands for creating the relevant database and managing the retention policy.
+If your database and retention policy already exist, your file can skip this section.
+
DML (Data Manipulation Language): Context metadata that specifies the database and (if desired) retention policy for the import and contains the data in line protocol.
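Assembled, an import file might look like the following (the database, retention policy, measurement, and values here are illustrative):

```
# DDL
CREATE DATABASE pirates
CREATE RETENTION POLICY oneday ON pirates DURATION 1d REPLICATION 1

# DML
# CONTEXT-DATABASE: pirates
# CONTEXT-RETENTION-POLICY: oneday

treasures,captain_id=dread_pirate_roberts value=801 1439856000
treasures,captain_id=flint value=29 1439856000
```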
For large datasets, influx writes out a status message every 100,000 points.
+
For example:
+
+
+
2015/08/21 14:48:01 Processed 3100000 lines.
+Time elapsed: 56.740578415s.
+Points per second (PPS): 54634
+
+
Things to note about -import:
+
+
To throttle the import, use -pps to set the number of points per second to ingest. By default, pps is zero and influx does not throttle importing.
+
To import a file compressed with gzip (GNU zip), include the -compressed flag.
+
Include timestamps in the data file.
+If points don’t include a timestamp, InfluxDB assigns the same timestamp to those points, which can result in unintended duplicate points or overwrites.
+
If your data file contains more than 5,000 points, consider splitting it into smaller files to write data to InfluxDB in batches.
+We recommend writing points in batches of 5,000 to 10,000 for optimal performance.
+Writing smaller batches increases the number of HTTP requests, which can negatively impact performance.
+By default, the HTTP request times out after five seconds. Although InfluxDB continues attempting to write the points after a timeout, you won’t receive confirmation of a successful write.
Enter help in the CLI for a partial list of the available commands.
+
Commands
+
The list below offers a brief discussion of each command.
+We provide detailed information on insert at the end of this section.
+
auth
+Prompts you for your username and password.
+influx uses those credentials when querying a database.
+Alternatively, set the username and password for the CLI with the
+INFLUX_USERNAME and INFLUX_PASSWORD environment variables.
+
chunked
+Turns on chunked responses from the server when issuing queries.
+This setting is enabled by default.
+
chunk size <size>
+Sets the size of the chunked responses.
+The default size is 10,000.
+Setting it to 0 resets chunk size to its default value.
+
clear [ database | db | retention policy | rp ]
+Clears the current context for the database or retention policy.
+
connect <host:port>
+Connect to a different server without exiting the shell.
+By default, influx connects to localhost:8086.
+If you do not specify either the host or the port, influx assumes the default setting for the missing attribute.
+
consistency <level>
+Sets the write consistency level: any, one, quorum, or all.
+
Ctrl+C
+Terminates the currently running query. Useful when an interactive query is taking too long to respond
+because it is trying to return too much data.
+
exit, quit, or Ctrl+D
+Quits the influx shell.
+
format <format>
+Specifies the format of the server responses: json, csv, or column.
+See the description of -format for examples of each format.
+
history
+Displays your command history.
+To use the history while in the shell, simply use the “up” arrow.
+influx stores your last 1,000 commands in your home directory in .influx_history.
+
insert
+Write data using line protocol.
+See insert.
+
precision <format>
+Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds).
+Precision defaults to nanoseconds.
+
pretty
+Turns on pretty print for the json format.
+
settings
+Outputs the current settings for the shell including the Host, Username, Database, Retention Policy, Pretty status, Chunked status, Chunk Size, Format, and Write Consistency.
+
use [ "<database_name>" | "<database_name>"."<retention policy_name>" ]
+Sets the current database and/or retention policy.
+Once influx sets the current database and/or retention policy, there is no need to specify that database and/or retention policy in queries.
+If you do not specify the retention policy, influx automatically queries the used database’s DEFAULT retention policy.
+
Write data to InfluxDB with insert
+
Enter insert followed by the data in line protocol to write data to InfluxDB.
+Use insert into <retention policy> <line protocol> to write data to a specific retention policy.
+
Write data to a single field in the measurement treasures with the tag captain_id = pirate_king.
+influx automatically writes the point to the database’s DEFAULT retention policy.
+
+
+
INSERT treasures,captain_id=pirate_king value=2
+
Write the same point to the already-existing retention policy oneday:
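Following the insert into syntax described above (the confirmation line shown assumes the CLI's standard response):

```
> INSERT INTO oneday treasures,captain_id=pirate_king value=2
Using retention policy oneday
```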
This page documents errors, their descriptions, and, where applicable,
+common resolutions.
+
+
+
+
+
Disclaimer: This document does not contain an exhaustive list of all possible InfluxDB errors.
+
+
error: database name required
+
The database name required error occurs when certain SHOW queries do
+not specify a database.
+Specify a database with an ON clause in the SHOW query, with USE <database_name> in the
+CLI, or with the db query string parameter in
+the InfluxDB API request.
+
The relevant SHOW queries include SHOW RETENTION POLICIES, SHOW SERIES,
+SHOW MEASUREMENTS, SHOW TAG KEYS, SHOW TAG VALUES, and SHOW FIELD KEYS.
The max series per database exceeded error occurs when a write causes the
+number of series in a database to
+exceed the maximum allowable series per database.
+The maximum allowable series per database is controlled by the
+max-series-per-database setting in the [data] section of the configuration
+file.
+
The information in the < > shows the measurement and the tag set of the series
+that exceeded max-series-per-database.
+
By default max-series-per-database is set to one million.
+Changing the setting to 0 allows an unlimited number of series per database.
error parsing query: found < >, expected identifier at line < >, char < >
+
InfluxQL syntax
+
The expected identifier error occurs when InfluxDB anticipates an identifier
+in a query but doesn’t find it.
+Identifiers are tokens that refer to continuous query names, database names,
+field keys, measurement names, retention policy names, subscription names,
+tag keys, and user names.
+The error is often a gentle reminder to double-check your query’s syntax.
Query 2 is missing a measurement name between FROM and WHERE.
+
InfluxQL keywords
+
In some cases the expected identifier error occurs when one of the
+identifiers in the query is an
+InfluxQL Keyword.
+To successfully query an identifier that’s also a keyword, enclose that
+identifier in double quotes.
error parsing query: found < >, expected string at line < >, char < >
+
The expected string error occurs when InfluxDB anticipates a string
+but doesn’t find it.
+In most cases, the error is a result of forgetting to quote the password
+string in the CREATE USER statement.
error parsing query: mixing aggregate and non-aggregate queries is not supported
+
The mixing aggregate and non-aggregate error occurs when a SELECT statement
+includes both an aggregate function
+and a standalone field key or
+tag key.
+
Aggregate functions return a single calculated value and there is no obvious
+single value to return for any unaggregated fields or tags.
+
Example
+
Raw data:
+
The peg measurement has two fields (square and round) and one tag
+(force):
Query 1 includes an aggregate function and a standalone field.
+
mean("square") returns a single aggregated value calculated from the four values
+of square in the peg measurement, and there is no obvious single field value
+to return from the four unaggregated values of the round field.
Query 2 includes an aggregate function and a standalone tag.
+
mean("square") returns a single aggregated value calculated from the four values
+of square in the peg measurement, and there is no obvious single tag value
+to return from the four unaggregated values of the force tag.
invalid operation: time and *influxql.VarRef are not compatible
+
+The time and *influxql.VarRef are not compatible error occurs when
+date-time strings are double quoted in queries.
+Date-time strings require single quotes.
The bad timestamp error occurs when the
+line protocol includes a
+timestamp in a format other than a UNIX timestamp.
+
Example
+
+
+
+> INSERT pineapple value=1 '2015-08-18T23:00:00Z'
+ERR: {"error":"unable to parse 'pineapple value=1 '2015-08-18T23:00:00Z'': bad timestamp"}
+
The line protocol above uses an RFC3339
+timestamp.
+Replace the timestamp with a UNIX timestamp to avoid the error and successfully
+write the point to InfluxDB:
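For example (the UNIX timestamp shown is illustrative):

```
INSERT pineapple value=1 1439856000000000000
```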
In some cases, the bad timestamp error occurs with more general syntax errors
+in the InfluxDB line protocol.
+Line protocol is whitespace sensitive; misplaced spaces can cause InfluxDB
+to assume that a field or tag is an invalid timestamp.
+
Example
+
Write 1
+
+
+
+> INSERT hens location=2 value=9
+ERR: {"error":"unable to parse 'hens location=2 value=9': bad timestamp"}
+
The line protocol in Write 1 separates the hens measurement from the location=2
+tag with a space instead of a comma.
+InfluxDB assumes that the value=9 field is the timestamp and returns an error.
+
Use a comma instead of a space between the measurement and tag to avoid the error:
+
+
+
+INSERT hens,location=2 value=9
+
Write 2
+
+
+
+> INSERT cows,name=daisy milk_prod=3 happy=3
+ERR: {"error":"unable to parse 'cows,name=daisy milk_prod=3 happy=3': bad timestamp"}
+
The line protocol in Write 2 separates the milk_prod=3 field and the
+happy=3 field with a space instead of a comma.
+InfluxDB assumes that the happy=3 field is the timestamp and returns an error.
+
Use a comma instead of a space between the two fields to avoid the error:
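Following the same correction pattern:

```
INSERT cows,name=daisy milk_prod=3,happy=3
```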
The time outside range error occurs when the timestamp in the
+InfluxDB line protocol
+falls outside the valid time range for InfluxDB.
+
The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
write failed for shard < >: engine: cache maximum memory size exceeded
+
The cache maximum memory size exceeded error occurs when the cached
+memory size increases beyond the
+cache-max-memory-size setting
+in the configuration file.
+
By default, cache-max-memory-size is set to 512mb.
+This value is fine for most workloads, but is too small for larger write volumes
+or for datasets with higher series cardinality.
+If you have lots of RAM you could set it to 0 to disable the cached memory
+limit and never get this error.
+You can also examine the memBytes field in the cache measurement in the
+_internal database
+to get a sense of how big the caches are in memory.
The already killed error occurs when a query has already been killed, but
+there are subsequent kill attempts before the query has exited.
+When a query is killed, it may not exit immediately.
+It will be in the killed state, which means the signal has been sent, but the
+query itself has not hit an interrupt point.
This error occurs when fields in an imported measurement have inconsistent data types. Make sure all fields in a measurement have the same data type, such as float64, int64, and so on.
This error occurs when an imported data point is older than the specified retention policy and dropped. Verify the correct retention policy is specified in the import file.
+
Unnamed import file
+
+Error: reading standard input: /path/to/directory: is a directory
+
+This error occurs when the -import command doesn’t include the name of an import file. Specify the file to import, for example: $ influx -import -path=<filename>.txt -precision=s
+
Docker container cannot read host files
+
+Error: open /path/to/file: no such file or directory
+
This error occurs when the Docker container cannot read files on the host machine. To make host machine files readable, complete the following procedure.
+
Make host machine files readable to Docker
+
+
+
Create a directory, and then copy files to import into InfluxDB to this directory.
+
+
+
When you launch the Docker container, mount the new directory on the InfluxDB container by running the following command:
+
+
+
docker run -v /dir/path/on/host:/dir/path/in/container
+
+
+
Verify that the Docker container can read host machine files.
This page addresses frequent sources of confusion and places where InfluxDB behaves in an unexpected way relative to other database systems.
+Where applicable, it links to outstanding issues on GitHub.
On System V operating systems logs are stored under /var/log/influxdb/.
+
On systemd operating systems you can access the logs using journalctl.
+Use journalctl -u influxdb to view the logs in the journal or journalctl -u influxdb > influxd.log to print the logs to a text file. With systemd, log retention depends on your system’s journald settings.
+
What is the relationship between shard group durations and retention policies?
+
InfluxDB stores data in shard groups.
+A single shard group covers a specific time interval; InfluxDB determines that time interval by looking at the DURATION of the relevant retention policy (RP).
+The table below outlines the default relationship between the DURATION of an RP and the time interval of a shard group:
+
+RP DURATION               | Shard group interval
+< 2 days                  | 1 hour
+>= 2 days and <= 6 months | 1 day
+> 6 months                | 7 days
Why aren’t data dropped after I’ve altered a retention policy?
+
Several factors explain why data may not be immediately dropped after a
+retention policy (RP) change.
+
The first and most likely cause is that, by default, InfluxDB checks to enforce
+an RP every 30 minutes.
+You may need to wait for the next RP check for InfluxDB to drop data that are
+outside the RP’s new DURATION setting.
+The 30 minute interval is
+configurable.
+
Second, altering both the DURATION and SHARD DURATION of an RP can result in
+unexpected data retention.
+InfluxDB stores data in shard groups which cover a specific RP and time
+interval.
+When InfluxDB enforces an RP it drops entire shard groups, not individual data
+points.
+InfluxDB cannot divide shard groups.
+
If the RP’s new DURATION is less than the old SHARD DURATION and InfluxDB is
+currently writing data to one of the old, longer shard groups, the system is
+forced to keep all of the data in that shard group.
+This occurs even if some of the data in that shard group are outside of the new
+DURATION.
+InfluxDB will drop that shard group once all of its data is outside the new
+DURATION.
+The system will then begin writing data to shard groups that have the new,
+shorter SHARD DURATION preventing any further unexpected data retention.
+
Why does InfluxDB fail to parse microsecond units in the configuration file?
+
The syntax for specifying microsecond duration units differs for configuration settings, writes, queries, and setting the precision in the InfluxDB Command Line Interface (CLI).
+The table below shows the supported syntax for each category:
+
+
+
+
+
Syntax | Configuration File | InfluxDB API Writes | All Queries | CLI Precision Command
+u      | ❌ | 👍 | 👍 | 👍
+us     | 👍 | ❌ | ❌ | ❌
+µ      | ❌ | ❌ | 👍 | ❌
+µs     | 👍 | ❌ | ❌ | ❌
+
If a configuration option specifies the u or µ syntax, InfluxDB fails to start and reports the following error in the logs:
+
+
+
run: parse config: time: unknown unit [µ|u] in duration [<integer>µ|<integer>u]
+
Does InfluxDB have a file system size limit?
+
InfluxDB works within file system size restrictions for Linux and Windows POSIX. Some storage providers and distributions have size restrictions; for example:
+
+
Amazon EBS volume limits size to ~16TB
+
+Linux ext3 file system limits size to ~16TB
+
Linux ext4 file system limits size to ~1EB (with file size limit ~16TB)
+
+
If you anticipate growing over 16TB per volume/file system, we recommend finding a provider and distribution that supports your storage requirements.
+
How do I use the InfluxDB CLI to return human readable timestamps?
+
When you first connect to the CLI, specify the rfc3339 precision:
+
+
+
influx -precision rfc3339
+
Alternatively, specify the precision once you’ve already connected to the CLI:
+
+
+
$ influx
+Connected to http://localhost:8086 version 0.xx.x
+InfluxDB shell 0.xx.x
+> precision rfc3339
+>
How can a non-admin user USE a database in the InfluxDB CLI?
+
In versions prior to v1.3, non-admin users could not execute a USE <database_name> query in the CLI even if they had READ and/or WRITE permissions on that database.
+
Starting with version 1.3, non-admin users can execute the USE <database_name> query for databases on which they have READ and/or WRITE permissions.
+If a non-admin user attempts to USE a database on which the user doesn’t have READ and/or WRITE permissions, the system returns an error:
+
+
+
ERR: Database <database_name> doesn't exist. Run SHOW DATABASES for a list of existing databases.
+
+
+
Note that the SHOW DATABASES query returns only those databases on which the non-admin user has READ and/or WRITE permissions.
+
+
+
How do I write to a non-DEFAULT retention policy with the InfluxDB CLI?
+
Use the syntax INSERT INTO [<database>.]<retention_policy> <line_protocol> to write data to a non-DEFAULT retention policy using the CLI.
+(Specifying the database and retention policy this way is only allowed with the CLI.
+Writes over HTTP must specify the database and optionally the retention policy with the db and rp query parameters.)
Note that you will need to fully qualify the measurement to query data in the non-DEFAULT retention policy. Fully qualify the measurement with the syntax:
+
+
+
"<database>"."<retention_policy>"."<measurement>"
+
How do I cancel a long-running query?
+
You can cancel a long-running interactive query from the CLI using Ctrl+C. To stop any other long-running query that appears in the SHOW QUERIES output,
+use the KILL QUERY command.
+
Why can’t I query Boolean field values?
+
Acceptable Boolean syntax differs for data writes and data queries.
+
+
+
+
Boolean syntax   Writes   Queries
+--------------   ------   -------
+t, f             👍       ❌
+T, F             👍       ❌
+true, false      👍       👍
+True, False      👍       👍
+TRUE, FALSE      👍       👍
For example, SELECT * FROM "hamlet" WHERE "bool"=True returns all points with bool set to TRUE, but SELECT * FROM "hamlet" WHERE "bool"=T returns nothing.
How does InfluxDB handle field type discrepancies across shards?
+
Field values can be floats, integers, strings, or Booleans.
+Field value types cannot differ within a
+shard, but they can differ across shards.
+
The SELECT statement
+
The
+SELECT statement
+returns all field values if all values have the same type.
+If field value types differ across shards, InfluxDB first performs any
+applicable cast
+operations and then returns all values with the type that occurs first in the
+following list: float, integer, string, Boolean.
+
If your data have field value type discrepancies, use the syntax
+<field_key>::<type> to query the different data types.
+
Example
+
The measurement just_my_type has a single field called my_field.
+my_field has four field values across four different shards, and each value has
+a different data type (float, integer, string, and Boolean).
+
SELECT * returns only the float and integer field values.
+Note that InfluxDB casts the integer value to a float in the response.
SELECT <field_key>::<type> [...] returns all value types.
+InfluxDB outputs each value type in its own column with incremented column names.
+Where possible, InfluxDB casts field values to another type;
+it casts the integer 7 to a float in the first column, and it
+casts the float 9.879034 to an integer in the second column.
+InfluxDB cannot cast floats or integers to strings or Booleans.
SHOW FIELD KEYS returns every data type, across every shard, associated with
+the field key.
+
Example
+
The measurement just_my_type has a single field called my_field.
+my_field has four field values across four different shards, and each value has
+a different data type (float, integer, string, and Boolean).
+SHOW FIELD KEYS returns all four data types:
What are the minimum and maximum integers that InfluxDB can store?
+
InfluxDB stores all integers as signed int64 data types.
+The minimum and maximum valid values for int64 are -9223372036854775808 and 9223372036854775807.
+See Go builtins for more information.
+
Values close to but within those limits may lead to unexpected results; some functions and operators convert the int64 data type to float64 during calculation which can cause overflow issues.
+
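The overflow risk is easy to demonstrate outside InfluxDB. The following Python sketch (illustrative only, not InfluxDB code) shows that the int64 maximum cannot survive a round trip through float64:

```python
INT64_MAX = 9223372036854775807   # largest integer InfluxDB stores
INT64_MIN = -9223372036854775808

# float64 has only 53 significand bits, so integers this large are
# rounded when a function or operator converts int64 to float64
roundtrip = int(float(INT64_MAX))

print(roundtrip)               # 9223372036854775808 -- off by one
print(roundtrip == INT64_MAX)  # False
```

The same rounding applies near the negative limit, which is why calculations that pass through float64 can silently change values in this range.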
What are the minimum and maximum timestamps that InfluxDB can store?
+
The minimum timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
+
Timestamps outside that range return a parsing error.
+
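As an illustrative sketch (plain Python; ns_to_datetime is a hypothetical helper, not part of InfluxDB), the nanosecond limits convert to exactly those calendar dates:

```python
from datetime import datetime, timedelta

MIN_TS_NS = -9223372036854775806  # nanoseconds since the Unix epoch
MAX_TS_NS = 9223372036854775806

def ns_to_datetime(ns):
    # Python datetimes resolve microseconds, so truncate the
    # nanosecond timestamp before converting (hypothetical helper)
    return datetime(1970, 1, 1) + timedelta(microseconds=ns // 1000)

print(ns_to_datetime(MIN_TS_NS))  # 1677-09-21 00:12:43.145224
print(ns_to_datetime(MAX_TS_NS))  # 2262-04-11 23:47:16.854775
```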
How can I tell what type of data is stored in a field?
+
Run the SHOW FIELD KEYS query; its results include the data type of each field key.
+
Can I change a field’s data type?
+
Currently, InfluxDB offers very limited support for changing a field’s data type.
+
The <field_key>::<type> syntax supports casting field values from integers to
+floats or from floats to integers.
+See Cast Operations
+for an example.
+There is no way to cast a float or integer to a string or Boolean (or vice versa).
+
We list possible workarounds for changing a field’s data type below.
+Note that these workarounds will not update data that have already been
+written to the database.
+
Write the data to a different field
+
The simplest workaround is to begin writing the new data type to a different field in the same
+series.
+
Work the shard system
+
Field value types cannot differ within a
+shard but they can differ across
+shards.
+
Users looking to change a field’s data type can use the SHOW SHARDS query
+to identify the end_time of the current shard.
+InfluxDB will accept writes with a different data type to an existing field if the point has a timestamp
+that occurs after that end_time.
Why does my query return epoch 0 as the timestamp?
+
In InfluxDB, epoch 0 (1970-01-01T00:00:00Z) is often used as a null timestamp equivalent.
+If you request a query that has no timestamp to return, such as an aggregation function with an unbounded time range, InfluxDB returns epoch 0 as the timestamp.
For information on how to use a subquery as a substitute for nested functions, see
+Data exploration.
+
What determines the time intervals returned by GROUP BY time() queries?
+
The time intervals returned by GROUP BY time() queries conform to the InfluxDB database’s preset time
+buckets or to the user-specified offset interval.
+
Example
+
Preset time buckets
+
The following query calculates the average value of sunflowers between
+6:15pm and 7:45pm and groups those averages into one hour intervals:
The results below show how InfluxDB maintains its preset time buckets.
+
In this example, the 6pm hour is a preset bucket and the 7pm hour is a preset bucket.
+The average for the 6pm time bucket does not include data prior to 6:15pm because of the WHERE time clause,
+but any data included in the average for the 6pm time bucket must occur in the 6pm hour.
+The same goes for the 7pm time bucket; any data included in the average for the 7pm
+time bucket must occur in the 7pm hour.
+The dotted lines show the points that make up each average.
+
Note that while the first timestamp in the results is 2016-08-29T18:00:00Z,
+the query results in that bucket do not include data with timestamps that occur before the start of the
+WHERE time clause (2016-08-29T18:15:00Z).
The following query calculates the average value of sunflowers between
+6:15pm and 7:45pm and groups those averages into one hour intervals.
+It also offsets the InfluxDB database’s preset time buckets by 15 minutes.
In this example, the user-specified
+offset interval
+shifts the InfluxDB database’s preset time buckets forward by 15 minutes.
+The average for the 6pm time bucket now includes data between 6:15pm and 7pm, and
+the average for the 7pm time bucket includes data between 7:15pm and 8pm.
+The dotted lines show the points that make up each average.
+
Note that the first timestamp in the result is 2016-08-29T18:15:00Z
+instead of 2016-08-29T18:00:00Z.
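The preset buckets and the offset can be reproduced with simple modular arithmetic. The following Python sketch is illustrative only; bucket_start is a hypothetical helper, not an InfluxDB API:

```python
def bucket_start(ts, interval, offset=0):
    # Start of the GROUP BY time() bucket containing ts, where ts,
    # interval, and offset share the same unit (seconds here).
    # Preset buckets align to multiples of the interval from the
    # epoch; a user-specified offset shifts that alignment.
    return ts - ((ts - offset) % interval)

HOUR = 3600
ts = 1472494500  # 2016-08-29T18:15:00Z as a Unix timestamp

print(bucket_start(ts, HOUR))           # 1472493600 -> 18:00 preset bucket
print(bucket_start(ts, HOUR, 15 * 60))  # 1472494500 -> 18:15 offset bucket
```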
Why is my query returning no data or partial data?
+
InfluxDB automatically queries data in a database’s default retention policy (RP). If your data is stored in another RP, you must specify the RP in your query to get results.
+
No field key in the SELECT clause
+
A query requires at least one field key in the SELECT clause. If the SELECT clause includes only tag keys, the query returns an empty response. For more information, see Data exploration.
+
SELECT query includes GROUP BY time()
+
If your SELECT query includes a GROUP BY time() clause, only data points between 1677-09-21 00:12:43.145224194 and now() are returned. Therefore, if any of your data points occur after now(), specify an alternative upper bound in your time interval.
+
(By default, most SELECT queries query data with timestamps between 1677-09-21 00:12:43.145224194 and 2262-04-11T23:47:16.854775806Z UTC.)
+
Tag and field key with the same name
+
Avoid using the same name for a tag and field key. If you inadvertently add the same name for a tag and field key, and then query both keys together, the query results show the second key queried (tag or field) appended with _1 (also visible as the column header in Chronograf). To query a tag or field key appended with _1, you must drop the appended _1 and include the syntax ::tag or ::field.
Write the following points to create both a field and tag key with the same name leaves:
+
+
+
# create the `leaves` tag key
+INSERT grape,leaves=species leaves=6
+
+#create the `leaves` field key
+INSERT grape leaves=5
+
+
+
If you view both keys, you’ll notice that neither key includes _1:
+
+
+
# show the `leaves` tag key
+SHOW TAG KEYS
+
+name: grape
+tagKey
+------
+leaves
+
+# show the `leaves` field key
+SHOW FIELD KEYS
+
+name: grape
+fieldKey fieldType
+------ ---------
+leaves float
+
+
+
If you query the grape measurement, you’ll see the leaves tag key has an appended _1:
+
+
+
# query the `grape` measurement
+SELECT * FROM <database_name>.<retention_policy>."grape"
+
+name: grape
+time                 leaves  leaves_1
+----                 ------  --------
+1574128162128468000  6.00    species
+1574128238044155000  5.00
+
+
+
To query a duplicate key name, you must drop the _1 and include ::tag or ::field after the key:
+
+
+
# query duplicate keys using the correct syntax
+SELECT "leaves"::tag, "leaves"::field FROM <database_name>.<retention_policy>."grape"
+
+name: grape
+time                 leaves   leaves_1
+----                 ------   --------
+1574128162128468000  species  6.00
+1574128238044155000           5.00
+
Therefore, queries that reference leaves_1 don’t return values.
+
+
+
+
+
Warning: If you inadvertently add a duplicate key name, follow the steps
+below to remove a duplicate key. Because of memory
+requirements, if you have large amounts of data, we recommend chunking your data
+(while selecting it) by a specified interval (for example, date range) to fit
+the allotted memory.
Use the following queries to remove a duplicate key.
+
+
+
+/* select each field key to keep in the original measurement and send it to a temporary
+   measurement; then, group by the tag keys to keep (leave out the duplicate key) */
+
+SELECT "field_key", "field_key2", "field_key3"
+INTO <temporary_measurement> FROM <original_measurement>
+WHERE <daterange> GROUP BY "tag_key", "tag_key2", "tag_key3"
+
+/* verify the field keys and tag keys were successfully moved to the temporary
+measurement */
+SELECT * FROM "temporary_measurement"
+
+/* drop original measurement (with the duplicate key) */
+DROP MEASUREMENT "original_measurement"
+
+/* move data from temporary measurement back to original measurement you just dropped */
+SELECT * INTO "original_measurement" FROM "temporary_measurement" GROUP BY *
+
+/* verify the field keys and tag keys were successfully moved back to the original
+   measurement */
+SELECT * FROM "original_measurement"
+
+/* drop temporary measurement */
+DROP MEASUREMENT "temporary_measurement"
+
+
+
Why don’t my GROUP BY time() queries return timestamps that occur after now()?
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
+
In the following example, the first query covers data with timestamps between
+2015-09-18T21:30:00Z and now().
+The second query covers data with timestamps between 2015-09-18T21:30:00Z and 180 weeks from now().
Note that the WHERE clause must provide an alternative upper bound to
+override the default now() upper bound. The following query merely resets
+the lower bound to now() such that the query’s time range is between
+now() and now():
Can I perform mathematical operations against timestamps?
+
Currently, it is not possible to execute mathematical operators against timestamp values in InfluxDB.
+Most time calculations must be carried out by the client receiving the query results.
+
There is limited support for using InfluxQL functions against timestamp values.
+The function ELAPSED()
+returns the difference between subsequent timestamps in a single field.
+
Can I identify write precision from returned timestamps?
+
InfluxDB stores all timestamps as nanosecond values, regardless of the write precision supplied.
+It is important to note that when returning query results, the database silently drops trailing zeros from timestamps which obscures the initial write precision.
+
In the example below, the tags precision_supplied and timestamp_supplied show the time precision and timestamp that the user provided at the write.
+Because InfluxDB silently drops trailing zeros on returned timestamps, the write precision is not recognizable in the returned timestamps.
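The loss of the original precision is easy to see with plain arithmetic (illustrative Python, not InfluxDB code):

```python
# A value written with second precision...
supplied_s = 1465839830

# ...is stored internally as nanoseconds
stored_ns = supplied_s * 10**9

# A nanosecond-precision write that happens to end in nine zeros
# produces exactly the same stored timestamp, so once trailing
# zeros are dropped the two writes are indistinguishable
native_ns = 1465839830000000000

print(stored_ns == native_ns)  # True
```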
When should I single quote and when should I double quote in queries?
+
Single quote string values (for example, tag values) but do not single quote identifiers (database names, retention policy names, user names, measurement names, tag keys, and field keys).
+
Double quote identifiers if they start with a digit, contain characters other than [A-z,0-9,_], or if they are an InfluxQL keyword.
+Double quotes are not required for identifiers if they don’t fall into one of
+those categories but we recommend double quoting them anyway.
+
Examples:
+
Yes: SELECT bikes_available FROM bikes WHERE station_id='9'
+
Yes: SELECT "bikes_available" FROM "bikes" WHERE "station_id"='9'
+
Yes: SELECT MIN("avgrq-sz") AS "min_avgrq-sz" FROM telegraf
+
Yes: SELECT * from "cr@zy" where "p^e"='2'
+
No: SELECT 'bikes_available' FROM 'bikes' WHERE 'station_id'="9"
+
No: SELECT * from cr@zy where p^e='2'
+
Single quote date time strings. InfluxDB returns an error (ERR: invalid operation: time and *influxql.VarRef are not compatible) if you double quote
+a date time string.
+
Examples:
+
Yes: SELECT "water_level" FROM "h2o_feet" WHERE time > '2015-08-18T23:00:01.232000000Z' AND time < '2015-09-19'
+
No: SELECT "water_level" FROM "h2o_feet" WHERE time > "2015-08-18T23:00:01.232000000Z" AND time < "2015-09-19"
Why am I missing data after creating a new DEFAULT retention policy?
+
When you create a new DEFAULT retention policy (RP) on a database, the data written to the old DEFAULT RP remain in the old RP.
+Queries that do not specify an RP automatically query the new DEFAULT RP so the old data may appear to be missing.
+To query the old data you must fully qualify the relevant data in the query.
+
Example:
+
All of the data in the measurement fleeting fall under the DEFAULT RP called one_hour:
Why is my query with a WHERE OR time clause returning empty results?
+
Currently, InfluxDB does not support using OR in the WHERE clause to specify multiple time ranges.
+InfluxDB returns an empty response if the query’s WHERE clause uses OR
+with time intervals.
fill(previous) doesn’t fill the result for a time bucket if the previous value is outside the query’s time range.
+
In the following example, InfluxDB doesn’t fill the 2016-07-12T16:50:20Z-2016-07-12T16:50:30Z time bucket with the results from the 2016-07-12T16:50:00Z-2016-07-12T16:50:10Z time bucket because the query’s time range does not include the earlier time bucket.
While this is the expected behavior of fill(previous), an open feature request on GitHub proposes that fill(previous) should fill results even when previous values fall outside the query’s time range.
+
Why are my INTO queries missing data?
+
By default, INTO queries convert any tags in the initial data to fields in
+the newly written data.
+This can cause InfluxDB to overwrite points that were previously differentiated by a tag.
+Include GROUP BY * in all INTO queries to preserve tags in the newly written data.
+
Note that this behavior does not apply to queries that use the TOP() or BOTTOM() functions.
+See the TOP() and BOTTOM() documentation for more information.
+
Example
+
Initial data
+
The french_bulldogs measurement includes the color tag and the name field.
An INTO query without a GROUP BY * clause turns the color tag into
+a field in the newly written data.
+In the initial data, the nugget point and the rumple point are differentiated only by the color tag.
+Once color becomes a field, InfluxDB assumes that the nugget point and the
+rumple point are duplicate points and it overwrites the nugget point with
+the rumple point.
+
+
+
> SELECT * INTO "all_dogs" FROM "french_bulldogs"
+name: result
+------------
+time                  written
+1970-01-01T00:00:00Z  3
+
+> SELECT * FROM "all_dogs"
+name: all_dogs
+--------------
+time                  color  name
+2016-05-25T00:05:00Z  grey   rumple   <---- no more nugget 🐶
+2016-05-25T00:10:00Z  black  prince
+
INTO query with GROUP BY *
+
An INTO query with a GROUP BY * clause preserves color as a tag in the newly written data.
+In this case, the nugget point and the rumple point remain unique points and InfluxDB does not overwrite any data.
How do I query data across measurements?
+
Currently, there is no way to perform cross-measurement math or grouping.
+All data must be under a single measurement to query it together.
+InfluxDB is not a relational database and mapping data across measurements is not currently a recommended schema.
+See GitHub Issue #3552 for a discussion of implementing JOIN in InfluxDB.
+
Does the order of the timestamps matter?
+
No.
+Our tests indicate that there is only a negligible difference between the times
+it takes InfluxDB to complete the following queries:
Why does series cardinality matter?
+
InfluxDB maintains an in-memory index of every series in the system. As the number of unique series grows, so does the RAM usage. High series cardinality can lead to the operating system killing the InfluxDB process with an out of memory (OOM) exception. See SHOW CARDINALITY to learn about the InfluxQL commands for series cardinality.
+
How can I remove series from the index?
+
To reduce series cardinality, series must be dropped from the index.
+DROP DATABASE,
+DROP MEASUREMENT, and
+DROP SERIES will all remove series from the index and reduce the overall series cardinality.
+
+
+
Note: DROP commands are usually CPU-intensive, as they frequently trigger a TSM compaction. Issuing DROP queries at a high frequency may significantly impact write and other query throughput.
+
+
+
How do I write integer field values?
+
Add a trailing i to the end of the field value when writing an integer.
+If you do not provide the i, InfluxDB will treat the field value as a float.
+
Writes an integer: value=100i
+Writes a float: value=100
+
How does InfluxDB handle duplicate points?
+
A point is uniquely identified by the measurement name, tag set, and timestamp.
+If you submit a new point with the same measurement, tag set, and timestamp as an existing point, the field set becomes the union of the old field set and the new field set, where any ties go to the new field set.
+This is the intended behavior.
+
For example:
+
Old point: cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 1234567890000000
+
New point: cpu_load,hostname=server02,az=us_west val_1=5.24 1234567890000000
+
After you submit the new point, InfluxDB overwrites val_1 with the new field value and leaves the field val_2 alone:
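The field-set union behaves like a dictionary merge in which the new point wins ties. An illustrative Python sketch (not InfluxDB code):

```python
# Fields of the existing point (same measurement, tag set, timestamp)
old_fields = {"val_1": 24.5, "val_2": 7}
# Fields of the newly submitted point
new_fields = {"val_1": 5.24}

# The stored field set becomes the union of both, and ties go to
# the new field set: val_1 is overwritten, val_2 is left alone
merged = {**old_fields, **new_fields}

print(merged)  # {'val_1': 5.24, 'val_2': 7}
```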
What newline character does the InfluxDB API require?
+
The InfluxDB line protocol relies on line feed (\n, which is ASCII 0x0A) to indicate the end of a line and the beginning of a new line. Files or data that use a newline character other than \n will result in the following errors: bad timestamp, unable to parse.
+
Note that Windows uses carriage return and line feed (\r\n) as the newline character.
+
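If your payloads may come from Windows tools, a simple normalization pass avoids the parse errors. The following Python sketch is illustrative; InfluxDB itself provides no such helper:

```python
# Line protocol produced on Windows often arrives with \r\n line
# endings, which the parser rejects ("bad timestamp, unable to parse")
payload = ("cpu,host=server01 value=0.5 1465839830100400200\r\n"
           "cpu,host=server02 value=0.7 1465839830100400201\r\n")

# Normalize to the line feed (\n, ASCII 0x0A) the InfluxDB API expects
normalized = payload.replace("\r\n", "\n")

print("\r" in normalized)  # False
```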
What words and characters should I avoid when writing data to InfluxDB?
+
InfluxQL keywords
+
If you use an InfluxQL keyword as an identifier you will need to double quote that identifier in every query.
+This can lead to non-intuitive errors.
+Identifiers are continuous query names, database names, field keys, measurement names, retention policy names, subscription names, tag keys, and user names.
+
time
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
Write time as a field key
+
+
+
+> INSERT mymeas time=1
+ERR: {"error":"partial write: invalid field name: input field \"time\" on measurement \"mymeas\" is invalid dropped=1"}
+
+time is not a valid field key in InfluxDB.
+The system does not write the point and returns a 400.
+
Write time as a tag key and attempt to query it
+
+
+
> INSERT mymeas,time=1 value=1
+ERR: {"error":"partial write: invalid tag key: input tag \"time\" on measurement \"mymeas\" is invalid dropped=1"}
+
+time is not a valid tag key in InfluxDB.
+The system does not write the point and returns a 400.
+
Characters
+
To keep regular expressions and quoting simple, avoid using the following characters in identifiers:
+
\ backslash
+^ circumflex accent
+$ dollar sign
+' single quotation mark
+" double quotation mark
+= equal sign
+, comma
+
When should I single quote and when should I double quote when writing data?
+
+
+
Avoid single quoting and double quoting identifiers when writing data via
+line protocol; see the examples below for how writing identifiers with quotes
+can complicate queries. Identifiers are database names, retention policy
+names, user names, measurement names, tag keys, and field keys.
+Not recommended approaches (complicate queries):
+Write with a double-quoted measurement: INSERT "bikes" bikes_available=3
+Applicable query: SELECT * FROM "\"bikes\""
+
Write with a single-quoted measurement: INSERT 'bikes' bikes_available=3
+Applicable query: SELECT * FROM "\'bikes\'"
+
Recommended approach (simpler queries):
+
Write with an unquoted measurement: INSERT bikes bikes_available=3
+Applicable query: SELECT * FROM "bikes"
+
+
+
Double quote field values that are strings, for example:
+
Write: INSERT bikes happiness="level 2"
+Applicable query: SELECT * FROM "bikes" WHERE "happiness"='level 2'
+
+
+
Special characters should be escaped with a backslash and not placed in quotes, for example:
+
Write: INSERT wacky va\"ue=4
+Applicable query: SELECT "va\"ue" FROM "wacky"
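The quoting and escaping rules for writes can be collected into small helpers. The functions below (escape_tag and escape_string_field are hypothetical names, not part of any InfluxDB client) sketch one way to build a safe line of line protocol:

```python
def escape_tag(s):
    # Commas, equal signs, and spaces must be backslash-escaped in
    # tag keys, tag values, and field keys (hypothetical helper)
    return s.replace(",", "\\,").replace("=", "\\=").replace(" ", "\\ ")

def escape_string_field(s):
    # In string field values, escape backslashes first, then double
    # quotes, and wrap the result in double quotes (hypothetical helper)
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'

line = "bikes,%s=%s happiness=%s" % (
    escape_tag("dock id"), escape_tag("pier 39"),
    escape_string_field('level "2"'))

print(line)  # bikes,dock\ id=pier\ 39 happiness="level \"2\""
```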
The tradeoff is that identical points with duplicate timestamps, more likely to occur as precision gets coarser, may overwrite other points.
+
What are the configuration recommendations and schema guidelines for writing sparse, historical data?
+
For users who want to write sparse, historical data to InfluxDB, InfluxData recommends:
+
First, lengthening your retention policy’s shard group duration to cover several years.
+The default shard group duration is one week and if your data cover several hundred years – well, that’s a lot of shards!
+Having an extremely high number of shards is inefficient for InfluxDB.
+Increase the shard group duration for your data’s retention policy with the ALTER RETENTION POLICY query.
+
Second, temporarily lowering the cache-snapshot-write-cold-duration configuration setting.
+If you’re writing a lot of historical data, the default setting (10m) can cause the system to hold all of your data in cache for every shard.
+Temporarily lowering the cache-snapshot-write-cold-duration setting to 10s while you write the historical data makes the process more efficient.
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command, for example:
The timestamp for the data point. InfluxDB accepts one timestamp per point.
+
+Unix nanosecond timestamp. Specify alternative precisions with the InfluxDB API.
+
+Optional. InfluxDB uses the server’s local nanosecond timestamp in UTC if the timestamp is not included with the point.
+
+
+
+
+
+
Performance tips:
+
+
+
+
Before sending data to InfluxDB, sort by tag key to match the results from the
+Go bytes.Compare function.
+
To significantly improve compression, use the coarsest precision possible for timestamps.
+
Use the Network Time Protocol (NTP) to synchronize time between hosts. InfluxDB uses a host’s local time in UTC to assign timestamps to data. If a host’s clock isn’t synchronized with NTP, the data that the host writes to InfluxDB may have inaccurate timestamps.
+
+
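The tag-sorting tip can be checked in any language. This Python sketch is illustrative; sorting keys on their UTF-8 bytes mirrors the ordering that Go's bytes.Compare produces:

```python
# Sorting tag keys by their UTF-8 bytes reproduces the lexicographic
# order of Go's bytes.Compare (uppercase sorts before lowercase,
# since 'H' is 0x48 and 'h' is 0x68)
tags = {"zone": "us-west", "Host": "server01", "host": "server02"}

sorted_tags = sorted(tags.items(), key=lambda kv: kv[0].encode("utf-8"))

line = "cpu," + ",".join("%s=%s" % kv for kv in sorted_tags) + " value=0.64"
print(line)  # cpu,Host=server01,host=server02,zone=us-west value=0.64
```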
Data types
+
+
+
+
Datatype, element(s), and description:
+
+Float (field values): Default numerical type. IEEE-754 64-bit floating-point numbers (except NaN or +/- Inf). Examples: 1, 1.0, 1.e+78, 1.E+78.
+
+Integer (field values): Signed 64-bit integers (-9223372036854775808 to 9223372036854775807). Specify an integer with a trailing i on the number. Example: 1i.
+
+String (measurements, tag keys, tag values, field keys, field values): Length limit 64KB.
+
+Boolean (field values): Stores TRUE or FALSE values. TRUE write syntax: t, T, true, True, TRUE. FALSE write syntax: f, F, false, False, FALSE.
+
+Timestamp (timestamps): Unix nanosecond timestamp. Specify alternative precisions with the InfluxDB API. The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z. The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
Boolean syntax for writes and queries
+
Acceptable Boolean syntax differs for data writes and data queries.
+For more information, see
+Frequently asked questions.
+
Field type discrepancies
+
In a measurement, a field’s type cannot differ within a shard, but it can differ across
+shards.
Write the field value -1.234456e+78 as a float to InfluxDB
+
+
+
INSERT mymeas value=-1.234456e+78
+
InfluxDB supports field values specified in scientific notation.
+
Write a field value 1.0 as a float to InfluxDB
+
+
+
INSERT mymeas value=1.0
+
Write the field value 1 as a float to InfluxDB
+
+
+
INSERT mymeas value=1
+
Write the field value 1 as an integer to InfluxDB
+
+
+
INSERT mymeas value=1i
+
Write the field value stringing along as a string to InfluxDB
+
+
+
INSERT mymeas value="stringing along"
+
Always double quote string field values. More on quoting below.
+
Write the field value true as a Boolean to InfluxDB
+
+
+
INSERT mymeas value=true
+
Do not quote Boolean field values.
+The following statement writes true as a string field value to InfluxDB:
+
+
+
INSERT mymeas value="true"
+
Attempt to write a string to a field that previously accepted floats
+
If the timestamps on the float and string are stored in the same shard:
+
+
+
> INSERT mymeas value=3 1465934559000000000
+> INSERT mymeas value="stringing along" 1465934559000000001
+ERR:{"error":"field type conflict: input field \"value\" on measurement \"mymeas\" is type string, already exists as type float"}
+
If the timestamps on the float and string are not stored in the same shard:
Quoting, special characters, and additional naming guidelines
+
Quoting
+
+
+
+
Element                                          Double quotes                                                  Single quotes
+-------                                          -------------                                                  -------------
+Timestamp                                        Never                                                          Never
+Measurements, tag keys, tag values, field keys   Never*                                                         Never*
+Field values                                     String field values only; never floats, integers, Booleans    Never
* InfluxDB line protocol allows users to double and single quote measurement names, tag
+keys, tag values, and field keys.
+It will, however, assume that the double or single quotes are part of the name,
+key, or value.
+This can complicate query syntax (see the example below).
+
Examples
+
Invalid line protocol - Double quote the timestamp
+
+
+
> INSERT mymeas value=9 "1466625759000000000"
+ERR:{"error":"unable to parse 'mymeas value=9 \"1466625759000000000\"': bad timestamp"}
+
Double quoting (or single quoting) the timestamp yields a bad timestamp
+error.
+
Semantic error - Double quote a measurement
If you double quote a measurement in line protocol, any queries on that
+measurement require both double quotes and escaped (\) double quotes in the
+FROM clause.
+
Special characters
+
You must use a backslash character \ to escape the following special characters:
+
+
In string field values, you must escape:
+
+
double quotes
+
backslash character
+
+
+
+
For example, \" escapes a double quote.
+
+
+
Note on backslashes:
+
+
+
+
+
If you use multiple backslashes, they must be escaped. InfluxDB interprets backslashes as follows:
+
+
\ or \\ interpreted as \
+
\\\ or \\\\ interpreted as \\
+
\\\\\ or \\\\\\ interpreted as \\\, and so on
+
+
+
+
In tag keys, tag values, and field keys, you must escape:
+
+
commas
+
equal signs
+
spaces
+
+
+
+
For example, \, escapes a comma.
+
+
In measurements, you must escape:
+
+
commas
+
spaces
+
+
+
+
You do not need to escape other special characters.
+
Examples
+
Write a point with special characters
+
+
+
INSERT "measurement\ with\ quo⚡️es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
+
The system writes a point where the measurement is "measurement with quo⚡️es and emoji", the tag key is tag key with sp🚀ces, the
+tag value is tag,value,with"commas", the field key is field_k\ey and the field value is string field value, only " need be esc🍭ped.
+
Additional naming guidelines
+
# at the beginning of the line is a valid comment character for line protocol.
+InfluxDB will ignore all subsequent characters until the next newline \n.
+
Measurement names, tag keys, tag values, field keys, and field values are
+case sensitive.
+
InfluxDB line protocol accepts
+InfluxQL keywords
+as identifier names.
+In general, we recommend avoiding using InfluxQL keywords in your schema as
+it can cause
+confusion when querying the data.
+
+
+
Note: Avoid using the reserved keys _field and _measurement. If these keys are included as a tag or field key, the associated point is discarded.
+
+
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
InfluxDB line protocol in practice
+
To learn how to write line protocol to the database, see Tools.
+
Duplicate points
+
A point is uniquely identified by the measurement name, tag set, and timestamp.
+
+If you write a point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field set, and conflicts favor the new field set.
If you have a tag key and field key with the same name in a measurement, one of the keys will return appended with a _1 in query results (and as a column header in Chronograf). For example, location and location_1. To query a duplicate key, drop the _1 and use the InfluxQL ::tag or ::field syntax in your query, for example:
The InfluxDB line protocol is a text-based format for writing points to the
+database.
+Points must be in line protocol format for InfluxDB to successfully parse and
+write points (unless you’re using a service plugin).
+
Using fictional temperature data, this page introduces InfluxDB line protocol
and its syntax, data types, quoting rules, and special characters and keywords.
The final section, Writing data to InfluxDB,
+describes how to get data into InfluxDB and how InfluxDB handles Line
+Protocol duplicates.
+
Syntax
+
A single line of text in line protocol format represents one data point in InfluxDB.
+It informs InfluxDB of the point’s measurement, tag set, field set, and
+timestamp.
+The following code block shows a sample of line protocol and breaks it into its
+individual components:

weather,location=us-midwest temperature=82 1465839830100400200

Measurement

The name of the measurement
that you want to write your data to.
The measurement is required in line protocol.
+
In the example, the measurement name is weather.
+
Tag set
+
The tag(s) that you want to include
+with your data point.
+Tags are optional in line protocol.
+
+
+
Note: Avoid using the reserved keys _field, _measurement, and time. If reserved keys are included as a tag or field key, the associated point is discarded.
+
+
+
Notice that the measurement and tag set are separated by a comma and no spaces.
+
Separate tag key-value pairs with an equals sign = and no spaces:
+
+
+
<tag_key>=<tag_value>
+
Separate multiple tag-value pairs with a comma and no spaces:
+
+
+
<tag_key>=<tag_value>,<tag_key>=<tag_value>
+
In the example, the tag set consists of one tag: location=us-midwest.
Adding another tag (season=summer) to the example looks like this:

weather,location=us-midwest,season=summer temperature=82 1465839830100400200
When using quotes in tag sets, line protocol supports single and double quotes as described in the following table:
+
+
+
+
Element
+
Double quotes
+
Single quotes
+
+
+
+
+
Measurement
+
Limited*
+
Limited*
+
+
+
Tag key
+
Limited*
+
Limited*
+
+
+
Tag value
+
Limited*
+
Limited*
+
+
+
Field key
+
Limited*
+
Limited*
+
+
+
Field value
+
Strings only
+
Never
+
+
+
Timestamp
+
Never
+
Never
+
+
+
+
*Line protocol accepts double and single quotes in
+measurement names, tag keys, tag values, and field keys, but interprets them as
+part of the name, key, or value.
For best performance you should sort tags by key before sending them to the
+database.
+The sort should match the results from the
+Go bytes.Compare function.
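As a sketch of that ordering, sorting on the UTF-8 encoded key bytes in Python produces the same order as Go's bytes.Compare. The measurement, tags, and field below are the sample data from this page; the helper function itself is illustrative, not part of any client library:

```python
# Sort tags by their UTF-8 key bytes (matches Go's bytes.Compare ordering)
# before assembling a line protocol string.
def build_line(measurement, tags, fields, timestamp):
    tag_str = ",".join(
        f"{k}={v}"
        for k, v in sorted(tags.items(), key=lambda kv: kv[0].encode("utf-8"))
    )
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp}"

line = build_line(
    "weather",
    {"season": "summer", "location": "us-midwest"},  # deliberately unsorted
    {"temperature": 82},
    1465839830100400200,
)
print(line)  # location sorts before season
```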
+
Whitespace I
+
Separate the measurement and the field set or, if you’re including a tag set
+with your data point, separate the tag set and the field set with a whitespace.
+The whitespace is required in line protocol.
+
Valid line protocol with no tag set:
+
+
+
weather temperature=82 1465839830100400200
+
Field set
+
The field(s) for your data point.
+Every data point requires at least one field in line protocol.
+
Separate field key-value pairs with an equals sign = and no spaces:
+
+
+
<field_key>=<field_value>
+
Separate multiple field-value pairs with a comma and no spaces:

<field_key>=<field_value>,<field_key>=<field_value>

Whitespace II

Separate the field set and the optional timestamp with a whitespace.
The whitespace is required in line protocol if you’re including a timestamp.
+
Timestamp
+
The timestamp for your data
+point in nanosecond-precision Unix time.
+The timestamp is optional in line protocol.
+If you do not specify a timestamp for your data point InfluxDB uses the server’s
+local nanosecond timestamp in UTC.
+
In the example, the timestamp is 1465839830100400200 (that’s
+2016-06-13T17:43:50.1004002Z in RFC3339 format).
+The line protocol below is the same data point but without the timestamp.
+When InfluxDB writes it to the database it uses your server’s
+local timestamp instead of 2016-06-13T17:43:50.1004002Z.
+
+
+
weather,location=us-midwest temperature=82
+
Use the InfluxDB API to specify timestamps with a precision other than nanoseconds,
+such as microseconds, milliseconds, or seconds.
+We recommend using the coarsest precision possible as this can result in
+significant improvements in compression.
+See the API Reference for more information.
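For a sense of what coarser precision means, integer division converts the sample nanosecond timestamp from this page down to coarser units. This is plain Python for illustration; no InfluxDB client is assumed:

```python
from datetime import datetime, timezone

ns = 1465839830100400200  # sample timestamp from this page

# Coarser precisions simply drop the sub-unit remainder.
us = ns // 1_000            # microseconds
ms = ns // 1_000_000        # milliseconds
s = ns // 1_000_000_000     # seconds

print(s)  # 1465839830
# The second-precision value round-trips to the RFC3339 date shown above.
print(datetime.fromtimestamp(s, tz=timezone.utc).isoformat())
# 2016-06-13T17:43:50+00:00
```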
+
+
+
Setup Tip:
+
Use the Network Time Protocol (NTP) to synchronize time between hosts.
+InfluxDB uses a host’s local time in UTC to assign timestamps to data; if
+hosts’ clocks aren’t synchronized with NTP, the timestamps on the data written
+to InfluxDB can be inaccurate.
Measurements, tag keys, tag values, and field keys are always strings.
+
+
+
Note:
+Because InfluxDB stores tag values as strings, InfluxDB cannot perform math on
+tag values.
+In addition, InfluxQL functions
+do not accept a tag value as a primary argument.
+It’s a good idea to take into account that information when designing your
+schema.
+
+
+
Timestamps are UNIX timestamps.
+The minimum valid timestamp is -9223372036854775806 or 1677-09-21T00:12:43.145224194Z.
+The maximum valid timestamp is 9223372036854775806 or 2262-04-11T23:47:16.854775806Z.
+As mentioned above, by default, InfluxDB assumes that timestamps have
+nanosecond precision.
+See the API Reference for how to specify
+alternative precisions.
+
Field values can be floats, integers, strings, or Booleans:

Floats - by default, InfluxDB assumes all numerical field values are floats.

Integers - append an i to the field value to tell InfluxDB to store the number as an integer (for example, temperature=82i).

Strings - double quote string field values (for example, temperature="too warm").

Booleans - specify TRUE with t, T, true, True, or TRUE, and FALSE with f, F, false, False, or FALSE.
Note: Acceptable Boolean syntax differs for data writes and data
+queries. See
+Frequently Asked Questions
+for more information.
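As an illustration of these type rules, a small formatter (a sketch, not a client library) renders Python values into their line protocol field forms:

```python
def format_field_value(value):
    """Render a Python value as a line protocol field value (sketch)."""
    if isinstance(value, bool):   # check bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, int):
        return f"{value}i"        # trailing i marks an integer field
    if isinstance(value, float):
        return repr(value)        # floats need no suffix
    # Strings are double quoted; escape embedded double quotes.
    return '"' + str(value).replace('"', '\\"') + '"'

print(format_field_value(82))          # 82i
print(format_field_value(82.0))        # 82.0
print(format_field_value(True))        # true
print(format_field_value("too warm"))  # "too warm"
```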
+
+
+
+
+
Within a measurement, a field’s type cannot differ within a
+shard, but it can differ across
+shards. For example, writing an integer to a field that previously accepted
+floats fails if InfluxDB attempts to store the integer in the same shard as the
+floats:
+
+
+
> INSERT weather,location=us-midwest temperature=82 1465839830100400200
> INSERT weather,location=us-midwest temperature=81i 1465839830100400300
ERR: {"error":"field type conflict: input field \"temperature\" on measurement \"weather\" is type int64, already exists as type float"}
+
But, writing an integer to a field that previously accepted floats succeeds if
+InfluxDB stores the integer in a new shard:
Do not double or single quote measurement names, tag keys, tag values, and field
+keys.
+It is valid line protocol but InfluxDB assumes that the quotes are part of the
+name.
Line protocol accepts
+InfluxQL keywords
+as identifier names.
+In general, we recommend avoiding using InfluxQL keywords in your schema as
+it can cause
+confusion when querying the data.
+
The keyword time is a special case.
+time can be a
+continuous query name,
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
Writing data to InfluxDB
+
Getting data in the database
+
Now that you know all about the InfluxDB line protocol, how do you actually get the
+line protocol to InfluxDB?
+Here, we’ll give two quick examples and then point you to the
+Tools sections for further
+information.
+
InfluxDB API
+
Write data to InfluxDB using the InfluxDB API.
+Send a POST request to the /write endpoint and provide your line protocol in
+the request body:
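As a sketch of that request, the helper below builds the pieces of a POST to the 1.x /write endpoint. The host localhost:8086 and database mydb are assumed placeholders, not values from this page:

```python
# Build the pieces of a POST to the 1.x /write endpoint (sketch; host, port,
# and database name are assumptions for illustration).
def build_write_request(host, db, line_protocol):
    url = f"http://{host}/write"
    params = {"db": db}
    body = line_protocol.encode("utf-8")  # line protocol travels in the request body
    return url, params, body

url, params, body = build_write_request(
    "localhost:8086",
    "mydb",
    "weather,location=us-midwest temperature=82 1465839830100400200",
)
print(url)
# Send it with any HTTP client, for example:
#   import urllib.request
#   req = urllib.request.Request(f"{url}?db={params['db']}", data=body, method="POST")
#   urllib.request.urlopen(req)
```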
For in-depth descriptions of query string parameters, status codes, responses,
+and more examples, see the API Reference.
+
CLI
+
Write data to InfluxDB using the InfluxDB command line interface (CLI).
Launch the CLI, use the relevant
database, and put INSERT in
front of your line protocol:

> INSERT weather,location=us-midwest temperature=82 1465839830100400200
You can also use the CLI to
+import Line
+Protocol from a file.
+
There are several ways to write data to InfluxDB.
+See the Tools section for more
+on the InfluxDB API, the
+CLI, and the available Service Plugins (
+UDP,
+Graphite,
+CollectD, and
+OpenTSDB).
+
Duplicate points

A point is uniquely identified by the measurement name, tag set, and timestamp.
If you submit line protocol with the same measurement, tag set, and timestamp,
but with a different field set, the field set becomes the union of the old
field set and the new field set, where any conflicts favor the new field set.

If a measurement contains a tag key and a field key with the same name, one of
the keys is returned with _1 appended in query results (and as a column header
in Chronograf), for example, location and location_1. To query such a duplicate
key, drop the _1 and use the InfluxQL ::tag or ::field syntax in your query.
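That merge behavior can be modeled as a dictionary union where the newer field set is applied last; a minimal sketch in plain Python:

```python
# Model InfluxDB's duplicate-point behavior: same measurement, tag set, and
# timestamp -> field sets are unioned, and conflicts favor the newer write.
old_fields = {"temperature": 82.0}
new_fields = {"temperature": 83.0, "humidity": 41.0}

merged = {**old_fields, **new_fields}  # the later dict wins on conflicts
print(merged)
# {'temperature': 83.0, 'humidity': 41.0}
```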
Use the influxd upgrade command to upgrade InfluxDB 1.x to InfluxDB 2.8.
+The influxd upgrade command is
+part of the v2 influxd service and provides an in-place upgrade from
+InfluxDB 1.x to InfluxDB 2.8.
+
+
+
To complete the upgrade process, ensure that you are using the
InfluxDB 2.8 influxd binary that includes the
influxd upgrade command.
+
+
+
+
The upgrade process does the following:
+
+
Reads the existing InfluxDB 1.x configuration file and generates an equivalent
+InfluxDB 2.8 configuration file at ~/.influxdbv2/config.toml
+or at a custom path specified with the --v2-config-path flag.
+
Upgrades metadata and storage engine paths to ~/.influxdbv2/meta and
+~/.influxdbv2/engine, respectively (unless otherwise specified).
+
Writes existing data and write ahead log (WAL) files into InfluxDB
2.8 buckets.
Reads existing metadata and migrates non-admin users, passwords, and
permissions into a 1.x authorization-compatible store within ~/.influxdbv2/influxdb.bolt.
+
+
When starting InfluxDB 2.8 after running influxd upgrade,
InfluxDB must build a new time series index (TSI).
Depending on the volume of data present, this may take some time.
+
Important considerations before you begin
+
Before upgrading to InfluxDB 2.8, consider the following guidelines.
+Some or all might apply to your specific installation and use case.
+The sections below contain our recommendations for addressing possible gaps in the upgrade process.
+Consider whether you need to address any of the following before upgrading.
Available operating system, container, and platform support
+
InfluxDB 2.8 is currently available for macOS, Linux, and Windows.
+
+
+
InfluxDB 2.8 requires 64-bit operating systems.
+
+
+
+
Continuous queries
+
Continuous queries are replaced by tasks in InfluxDB 2.8.
+By default, influxd upgrade writes all continuous queries to ~/continuous_queries.txt.
+To convert continuous queries to InfluxDB tasks, see
+Migrate continuous queries to tasks.
+
Supported protocols
+
InfluxDB 2.8 doesn’t directly support the alternate write protocols
+supported in InfluxDB 1.x
+(CollectD, Graphite, OpenTSDB, Prometheus, UDP).
+Use Telegraf to translate these protocols to line protocol.
+
Kapacitor
+
You can continue to use Kapacitor with InfluxDB OSS 2.8 under the following scenarios:
+
+
Kapacitor Batch-style TICKscripts work with the 1.x read compatible API.
+Existing Kapacitor user credentials should continue to work using the 1.x compatibility API.
+
InfluxDB 2.8 has no subscriptions API and does not support Kapacitor stream tasks.
+To continue using stream tasks, write data directly to both InfluxDB and Kapacitor.
+Use Telegraf and its InfluxDB output plugin
+to write to Kapacitor and the InfluxDB v2 output plugin
+to write to InfluxDB v2.
+
+
Example Telegraf configuration
+
+
+
# Write to Kapacitor
[[outputs.influxdb]]
  urls = ["http://localhost:9092"]
  database = "example-db"
  retention_policy = "example-rp"

# Write to InfluxDB 2.8
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "example-db"
  retention_policy = "example-rp"
  username = "v1-auth-username"
  password = "v1-auth-password"
+
User migration
+
influxd upgrade migrates existing 1.x users and their permissions, except the following users:

admin users

users with no privileges

To review 1.x users with admin privileges, run the following against your InfluxDB 1.x instance:
+
+
+
SHOW USERS
+
Users with admin set to true will not be migrated.
+
To review the specific privileges granted to each 1.x user, run the following
+for each user in your InfluxDB 1.x instance:
+
+
+
SHOW GRANTS FOR "<username>"
+
If no grants appear, the user will not be migrated.
+
+
+
+
+
+
+
+
If using an admin user for visualization or Chronograf administrative functions,
+create a new read-only user before upgrading:
+
Create a read-only 1.x user
+
+
+
+
CREATE USER <username> WITH PASSWORD '<password>'
+GRANT READ ON <database> TO "<username>"
+
InfluxDB 2.8 only grants admin privileges to the primary user
+set up during the InfluxDB 2.8 upgrade.
+This provides you the opportunity to reassess who to grant admin permissions to
+when setting up InfluxDB 2.8.
+
Dashboards
+
You can continue to use your existing dashboards and visualization tools with
+InfluxDB 2.8 via the 1.x /query compatibility API.
+The upgrade process creates DBRP mappings
+to ensure existing users can execute InfluxQL queries with the appropriate permissions.
+
However, if your dashboard tool is configured using a user with admin permissions,
+you will need to create a new read-only user with the appropriate database permissions before upgrading.
+This new username and password combination should be used within the data source
+configurations to continue to provide read-only access to the underlying data.
+
Ensure your dashboards are all functioning before upgrading.
+
Other data
+
The 1.x _internal database is not migrated with the influxd upgrade command.
+To collect, store, and monitor similar internal InfluxDB metrics,
+create an InfluxDB 2.8 scraper
+to scrape data from the /metrics endpoint and store them in a bucket.
+
Secure by default
+
InfluxDB 2.8 requires authentication and does not support
+the InfluxDB 1.x auth-enabled = false configuration option.
If you upgrade with auth-enabled = false, the upgrade may appear complete,
+but client requests to InfluxDB 2.8 may be silently ignored
+(you won’t see a notification that the request was denied).
+
In-memory indexing option
+
InfluxDB 2.8 doesn’t support
+in-memory (inmem) indexing.
+The following InfluxDB 1.x configuration options associated with inmem
+indexing are ignored in the upgrade process:
+
+
max-series-per-database
+
max-values-per-tag
+
+
Interactive shell
+
The InfluxDB 2.8 influx CLI includes an interactive
InfluxQL shell for executing InfluxQL queries.
To start an InfluxQL shell:

influx v1 shell
Stop your running InfluxDB 1.x instance.
+Make a backup copy of all 1.x data before upgrading:
+
+
+
+
cp -R .influxdb/ .influxdb_bak/
+
+
+
Use influxd version to ensure you are running InfluxDB 2.8 from the command line.
+The influxd upgrade command is only available in InfluxDB 2.8.
+
+
+
If your 1.x configuration file is at the
+default location, run:
+
+
+
+
influxd upgrade
+
+
+
Upgrade .deb packages
+
When installed from a .deb package, InfluxDB 1.x and 2.x run under the influxdb user.
+If you’ve installed both versions from .deb packages, run the upgrade command
+as the influxdb user:
+
+
+
+
sudo -u influxdb influxd upgrade
+
+
+
+
If your 1.x configuration file is not at the default location, run:
+
+
+
+
influxd upgrade --config-file <path to v1 config file>
+
To store the upgraded 2.8 configuration file in a custom location, include the --v2-config-path flag:
+
+
+
+
influxd upgrade --v2-config-path <destination path for v2 config file>
+
+
+
Follow the prompts to set up a new InfluxDB 2.8 instance.
+
+
+
Welcome to InfluxDB 2.8 upgrade!
+Please type your primary username: <your-username>
+
+Please type your password:
+
+Please type your password again:
+
+Please type your primary organization name: <your-org>
+
+Please type your primary bucket name: <your-bucket>
+
+Please type your retention period in hours.
+Or press ENTER for infinite:
+
+You have entered:
+ Username: <your-username>
+ Organization: <your-org>
+ Bucket: <your-bucket>
+ Retention Period: infinite
+Confirm? (y/n): y
+
+
+
The output of the upgrade prints to standard output.
It is also saved (for troubleshooting and debugging) to a file called
upgrade.log located in the home directory of the user running
influxd upgrade.
+
Post-upgrade
+
Verify 1.x users were migrated to 2.8
+
To verify 1.x users were successfully migrated to 2.8, run
+influx v1 auth list.
+
Add authorizations for external clients
+
If your InfluxDB 1.x instance did not have authentication enabled and the
+influx v1 auth list doesn’t return any users, external clients connected to
+your 1.x instance will not be able to access InfluxDB 2.8,
+which requires authentication.
+
For these external clients to work with InfluxDB 2.8:
InfluxQL is designed for working with time series data and includes features specifically for working with time.
+You can review the following ways to work with time and timestamps in your InfluxQL queries:
Currently, InfluxDB does not support using OR with absolute time in the WHERE
+clause. See the Frequently Asked Questions
+document and the GitHub Issue
+for more information.
+
rfc3339_date_time_string
+
+
+
'YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ'
+
.nnnnnnnnn is optional and is set to .000000000 if not included.
+The RFC3339 date-time string requires single quotes.
+
rfc3339_like_date_time_string
+
+
+
'YYYY-MM-DD HH:MM:SS.nnnnnnnnn'
+
HH:MM:SS.nnnnnnnnn is optional and is set to 00:00:00.000000000 if not included.
The RFC3339-like date-time string requires single quotes.
+
epoch_time
+
Epoch time is the amount of time that has elapsed since 00:00:00
+Coordinated Universal Time (UTC), Thursday, 1 January 1970.
+
By default, InfluxDB assumes that all epoch timestamps are in nanoseconds. Include a duration literal at the end of the epoch timestamp to indicate a precision other than nanoseconds.
+
Basic arithmetic
+
All timestamp formats support basic arithmetic.
+Add (+) or subtract (-) a time from a timestamp with a duration literal.
+Note that InfluxQL requires a whitespace between the + or - and the
+duration literal.
+
Examples
+
+
+
+
+
+
+
+
+
+ Specify a time range with RFC3339 date-time strings
+
The query returns data with timestamps between August 18, 2019 at 00:00:00 and August 18, 2019
+at 00:12:00.
+The first date-time string does not include a time; InfluxDB assumes the time
+is 00:00:00.
+
Note that the single quotes around the RFC3339-like date-time strings are
+required.
The query returns data with timestamps that occur between August 1, 2019
+at 00:00:00 and August 19, 2019 at 00:12:00. By default InfluxDB assumes epoch timestamps are in nanoseconds.
+
+
+
+
+
+
+
+
+
+
+
+ Specify a time range with second-precision epoch timestamps
+
The query returns data with timestamps that occur between August 19, 2019
+at 00:00:00 and August 19, 2019 at 00:12:00.
+The s duration literal at the end of the epoch timestamps indicate that the epoch timestamps are in seconds.
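If you want to double-check the arithmetic behind second-precision bounds like these, plain Python reproduces the epoch values for the dates in the description above:

```python
from datetime import datetime, timezone

def to_epoch_seconds(dt):
    """Second-precision epoch timestamp for a timezone-aware datetime."""
    return int(dt.timestamp())

# August 19, 2019 at 00:00:00 and 00:12:00, both UTC.
start = to_epoch_seconds(datetime(2019, 8, 19, 0, 0, 0, tzinfo=timezone.utc))
end = to_epoch_seconds(datetime(2019, 8, 19, 0, 12, 0, tzinfo=timezone.utc))
print(start, end)
# 1566172800 1566173520 -- written as 1566172800s and 1566173520s in a WHERE clause
```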
+
+
+
+
+
+
+
+
+
+
+
+ Perform basic arithmetic on an RFC3339-like date-time string
+
The query returns data with timestamps that occur at least six minutes after
+September 17, 2019 at 21:24:00.
+Note that the whitespace between the + and 6m is required.
+
+
+
+
+
+
+
+
+
+
+
+ Perform basic arithmetic on an epoch timestamp
+
The query returns data with timestamps that occur at least six minutes before
September 18, 2019 at 21:24:00. Note that the whitespace between the - and 6m is required.
+
+
+
+
+
+
+
+
+
Relative time
+
Use now() to query data with timestamps relative to the server’s current timestamp.
now() is the Unix time of the server at the time the query is executed on that server.
+The whitespace between - or + and the duration literal is required.
The query returns data with timestamps that occur between September 17, 2019 at 21:18:00 and 1000 days from now(). The whitespace between + and 1000d is required.
+
+
+
+
+
+
+
+
+
The Time Zone clause
+
Use the tz() clause to return the UTC offset for the specified timezone.
By default, InfluxDB stores and returns timestamps in UTC.
+The tz() clause includes the UTC offset or, if applicable, the UTC Daylight Savings Time (DST) offset to the query’s returned timestamps. The returned timestamps must be in RFC3339 format for the UTC offset or UTC DST to appear.
+The time_zone parameter follows the TZ syntax in the Internet Assigned Numbers Authority time zone database and it requires single quotes.
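To see which UTC offset a TZ database name implies on a given date (including DST), you can check with Python's zoneinfo module. This illustrates the offset lookup only, not InfluxDB itself; America/Chicago is just an example zone:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# August falls in US daylight saving time, so America/Chicago resolves to
# -05:00 (CDT) rather than the standard -06:00 (CST).
dt = datetime(2019, 8, 18, tzinfo=ZoneInfo("America/Chicago"))
print(dt.utcoffset())  # -1 day, 19:00:00  (i.e., -05:00)
```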
To query data with timestamps that occur after now(), SELECT statements with
+a GROUP BY time() clause must provide an alternative upper bound in the
+WHERE clause.
The INTO clause is optional.
+If the command does not include INTO, you must specify the
+database with USE <database_name> when using the InfluxQL shell
+or with the db query string parameter in the
+InfluxDB 1.x compatibility API request.
Delete all data associated with the measurement h2o_feet:
+
+
+
DELETE FROM "h2o_feet"
+
Delete data in a measurement that has a specific tag value
+
Delete all data associated with the measurement h2o_quality and where the tag randtag equals 3:
+
+
+
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
+
Delete data before or after specified time
+
Delete all data in the database that occur before January 01, 2020:
+
+
+
DELETE WHERE time < '2020-01-01'
+
A successful DELETE query returns an empty result.
+
If you need to delete points in the future, you must specify the future time period because DELETE SERIES runs for time < now() by default.
+
Delete future points:
+
+
+
DELETE FROM device_data WHERE "device" = 'sensor1' AND time > now() AND time < '2024-01-14T01:00:00Z'
+
Delete points in the future within a specified time range:
+
+
+
DELETE FROM device_data WHERE "device" = 'sensor15' AND time >= '2024-01-01T12:00:00Z' AND time <= '2025-06-30T11:59:00Z'
+
Delete measurements with DROP MEASUREMENT
+
The DROP MEASUREMENT statement deletes all data and series from the specified measurement and deletes the measurement from the index.
+
Syntax
+
+
+
DROP MEASUREMENT <measurement_name>
+
Example
+
Delete the measurement h2o_feet:
+
+
+
DROP MEASUREMENT "h2o_feet"
+
A successful DROP MEASUREMENT query returns an empty result.
+
+
+
The DROP MEASUREMENT command is very resource intensive. We do not recommend this command for bulk data deletion. Use the DELETE FROM command instead, which is less resource intensive.
The syntax is specified using Extended Backus-Naur Form (“EBNF”).
+EBNF is the same notation used in the Go programming language specification,
+which can be found here.
+
+
+
Production = production_name "=" [ Expression ] "." .
+Expression = Alternative { "|" Alternative } .
+Alternative = Term { Term } .
+Term = production_name | token [ "…" token ] | Group | Option | Repetition .
+Group = "(" Expression ")" .
+Option = "[" Expression "]" .
+Repetition = "{" Expression "}" .
+
Notation operators in order of increasing precedence:
+
+
+
| alternation
+() grouping
+[] option (0 or 1 times)
+{} repetition (0 to n times)
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
identifier, you must
double quote that identifier in every query.
+
The keyword time is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name,
+subscription name, and
+user name.
+In those cases, time does not require double quotes in queries.
+time cannot be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+See Frequently Asked Questions for more information.
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals are not currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents are not currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (i.e., \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by a duration unit listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
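InfluxQL's duration units are ns, u (or µ), ms, s, m, h, d, and w. As a sketch of how mixed units accumulate, this small parser (an illustration, not InfluxDB's parser) converts a duration literal to nanoseconds:

```python
import re

# Nanoseconds per InfluxQL duration unit.
UNIT_NS = {
    "ns": 1, "u": 10**3, "µ": 10**3, "ms": 10**6,
    "s": 10**9, "m": 60 * 10**9, "h": 3600 * 10**9,
    "d": 24 * 3600 * 10**9, "w": 7 * 24 * 3600 * 10**9,
}

def parse_duration(text):
    """Parse a duration literal like '1h30m' into nanoseconds (sketch)."""
    total = 0
    # Multi-character units (ns, ms) must be tried before single-character ones.
    for number, unit in re.findall(r"(\d+)(ns|ms|µ|u|s|m|h|d|w)", text):
        total += int(number) * UNIT_NS[unit]
    return total

print(parse_duration("1h30m"))  # 5400000000000
```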
The date and time literal format is not specified in EBNF like the rest of this document.
+It is specified using Go’s date / time parsing format, which is a reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
Since InfluxQL does not support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
Note: EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or CSV is not accounted for.
+
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and the memory required.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access (in InfluxDB Enterprise, shards may be on remote nodes).
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance: a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes 3 cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of blocks decoded and their size (in bytes) on disk. The following block types are supported:
Refers to the group of commands used to estimate or count exactly the cardinality of measurements, series, tag keys, tag key values, and field keys.
+
The SHOW CARDINALITY commands are available in two variations: estimated and exact. Estimated values are calculated using sketches and are a safe default for all cardinality sizes. Exact values are counts directly from TSM (Time-Structured Merge Tree) data, but are expensive to run for high cardinality data. Unless required, use the estimated variety.
+
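The distinction between the two variations can be seen in the series cardinality pair; a minimal illustration (the EXACT variant counts directly from TSM data and can be expensive on high-cardinality databases):

```sql
-- estimated series cardinality (sketch-based, safe default)
SHOW SERIES CARDINALITY

-- exact series cardinality (counted from TSM data; expensive for
-- high cardinality)
SHOW SERIES EXACT CARDINALITY
```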
Filtering by time is only supported when Time Series Index (TSI) is enabled on a database.
+
See the specific SHOW CARDINALITY commands for details:
SHOW FIELD KEY CARDINALITY
+
Estimates or counts exactly the cardinality of the field key set for the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when Time Series Index (TSI) is enabled and time is not supported in the WHERE clause.
+
+
+
+
+
show_field_key_cardinality_stmt = "SHOW FIELD KEY CARDINALITY" [on_clause] [from_clause] [where_clause] [group_by_clause] [limit_clause] [offset_clause]
+
+show_field_key_exact_cardinality_stmt = "SHOW FIELD KEY EXACT CARDINALITY" [on_clause] [from_clause] [where_clause] [group_by_clause] [limit_clause] [offset_clause]
+
Examples
+
+
+
-- show estimated cardinality of the field key set of current database
+SHOW FIELD KEY CARDINALITY
+-- show exact cardinality on field key set of specified database
+SHOW FIELD KEY EXACT CARDINALITY ON mydb
+
SHOW FIELD KEYS
+
+
+
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
+SHOW FIELD KEYS
+
+-- show field keys and field value data types from specified measurement
+SHOW FIELD KEYS FROM "cpu"
-- show all measurements
+SHOW MEASUREMENTS
+
+-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
+SHOW MEASUREMENTS WHERE "region" = 'uswest' AND "host" = 'serverA'
+
+-- show measurements that start with 'h2o'
+SHOW MEASUREMENTS WITH MEASUREMENT =~ /h2o.*/
Series cardinality is the major factor that affects RAM requirements. For more information, see:
+
+
+
Don’t have too many series. As the number of unique series grows, so does the memory usage. High series cardinality can force the host operating system to kill the InfluxDB process with an out of memory (OOM) exception.
+
+
+
+
+
NOTE: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is not supported in the WHERE clause.
+
+
+
+
SHOW TAG KEY CARDINALITY
+
Estimates or counts exactly the cardinality of the tag key set on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled and time is not supported in the WHERE clause.
-- show all tag keys
+SHOW TAG KEYS
+
+-- show all tag keys from the cpu measurement
+SHOW TAG KEYS FROM "cpu"
+
+-- show all tag keys from the cpu measurement where the region key = 'uswest'
+SHOW TAG KEYS FROM "cpu" WHERE "region" = 'uswest'
+
+-- show all tag keys where the host key = 'serverA'
+SHOW TAG KEYS WHERE "host" = 'serverA'
-- show all tag values across all measurements for the region tag
+SHOW TAG VALUES WITH KEY = "region"
+
+-- show tag values from the cpu measurement for the region tag
+SHOW TAG VALUES FROM "cpu" WITH KEY = "region"
+
+-- show tag values across all measurements for all tag keys that do not include the letter c
+SHOW TAG VALUES WITH KEY !~ /.*c.*/
+
+-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
+SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region","host") WHERE "service" = 'redis'
+
SHOW TAG VALUES CARDINALITY
+
Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the ON <database> option.
+
+
+
Note: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional.
+When using these query clauses, the query falls back to an exact count.
+Filtering by time is only supported when TSI (Time Series Index) is enabled.
-- show estimated tag key values cardinality for a specified tag key
+SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
+
+-- show exact tag key values cardinality for a specified tag key
+SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
Comments
+
Use comments with InfluxQL statements to describe your queries.
+
+
A single-line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span multiple lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
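Both comment forms can appear in a single statement; a minimal illustration (the measurement and field names here are placeholders):

```sql
-- single-line comment describing the query
SELECT "temperature" FROM "weather" /* multi-line comment
spanning two lines */
```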
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for InfluxDB and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command.
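The example command itself was not preserved in this copy of the page; a representative form, assuming you want to pin InfluxDB v2.7, would be:

```shell
# Pin a specific major.minor tag instead of the floating "latest" tag
docker pull influxdb:2.7
```

Any pinned version tag works the same way; the point is to avoid the floating latest tag in deployment scripts.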
InfluxQL (Influx Query Language) is an SQL-like query language used to interact
+with InfluxDB and work with time series data.
+
+
+
+
+
InfluxQL feature support
+
InfluxQL is being rearchitected to work with the InfluxDB 3 storage engine.
+This process is ongoing and some InfluxQL features are still being implemented.
+For information about the current implementation status of InfluxQL features,
+see InfluxQL feature support.
Examples of valid identifiers:
+
+cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
+identifier,
+double-quote the identifier in every query.
+
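For instance, assuming a hypothetical measurement that uses the keyword duration as a field key, the identifier must be double-quoted:

```sql
-- "duration" is an InfluxQL keyword, so quote it when used as an identifier
SELECT "duration" FROM "deployments"
```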
The time keyword is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name, and
+user name.
+
In those cases, you don’t need to double-quote time in queries.
+
time can’t be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals aren’t currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents aren’t currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (that is, \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by one of the duration units listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
+
+Duration units: ns (nanoseconds), u or µ (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks).
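As an illustration, a duration literal with mixed units in a time-bounded query (the measurement and field names are placeholders):

```sql
-- select the last hour and a half of data; 1h30m mixes hour and minute units
SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h30m
```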
Unlike other notations used in InfluxQL, the date and time literal format isn’t specified by EBNF.
+InfluxQL date and time is specified using Go’s time parsing format and
+reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL doesn’t support using regular expressions to match non-string
+field values in the WHERE clause, databases,
+and retention policies.
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon (;).
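For example, two statements submitted as one query (both are standard InfluxQL SHOW statements):

```sql
-- two statements in a single query, separated by a semicolon
SHOW DATABASES; SHOW MEASUREMENTS
```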
EXPLAIN
+
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL doesn’t support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
+
A query plan generated by EXPLAIN contains the following elements:
EXPLAIN ANALYZE
+
Executes the specified SELECT statement and returns data about the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
+
For example, if you execute the following statement:
EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or
+CSV isn’t accounted for.
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
The default time range is the Unix epoch (1970-01-01T00:00:00Z) to now.
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command–for example:
InfluxQL (Influx Query Language) is an SQL-like query language used to interact
+with InfluxDB and work with times series data.
+
+
+
+
+
InfluxQL feature support
+
InfluxQL is being rearchitected to work with the InfluxDB 3 storage engine.
+This process is ongoing and some InfluxQL features are still being implemented.
+For information about the current implementation status of InfluxQL features,
+see InfluxQL feature support.
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
+identifier,
+double-quote the identifier in every query.
+
The time keyword is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name, and
+user name.
+
In those cases, you don’t need to double-quote time in queries.
+
time can’t be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals aren’t currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents aren’t currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (that is, , \')
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by one of the duration units listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
Unlike other notations used in InfluxQL, the date and time literal format isn’t specified by EBNF.
+InfluxQL date and time is specified using Go’s time parsing format and
+reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL doesn’t support using regular expressions to match non-string
+field values in the WHERE clause, databases,
+and retention polices.
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon (;).
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL doesn’t support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
+
A query plan generated by EXPLAIN contains the following elements:
Executes the specified SELECT statement and returns data about the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
+
For example, if you execute the following statement:
EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or
+CSV isn’t accounted for.
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
+Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and how much memory the planning requires.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access.
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance──a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes 3 cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of
+blocks decoded and their size (in bytes) on disk. The following block types are supported:
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
+SHOWFIELDKEYS
+
+-- show field keys and field value data types from specified measurement
+SHOWFIELDKEYSFROM"cpu"
-- show all measurements
+SHOWMEASUREMENTS
+
+-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
+SHOWMEASUREMENTSWHERE"region"='uswest'AND"host"='serverA'
+
+-- show measurements that start with 'h2o'
+SHOWMEASUREMENTSWITHMEASUREMENT=~/h2o.*/
-- show all tag keys
+SHOWTAGKEYS
+
+-- show all tag keys from the cpu measurement
+SHOWTAGKEYSFROM"cpu"
+
+-- show all tag keys from the cpu measurement where the region key = 'uswest'
+SHOWTAGKEYSFROM"cpu"WHERE"region"='uswest'
+
+-- show all tag keys where the host key = 'serverA'
+SHOWTAGKEYSWHERE"host"='serverA'
-- show all tag values across all measurements for the region tag
+SHOWTAGVALUESWITHKEY="region"
+
+-- show tag values from the cpu measurement for the region tag
+SHOWTAGVALUESFROM"cpu"WITHKEY="region"
+
+-- show tag values across all measurements for all tag keys that do not include the letter c
+SHOWTAGVALUESWITHKEY!~/.*c.*/
+
+-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
+SHOWTAGVALUESFROM"cpu"WITHKEYIN("region","host")WHERE"service"='redis'
The default time range is the Unix epoch (1970-01-01T00:00:00Z) to now.
+
Comments
+
Use comments with InfluxQL statements to describe your queries.
+
+
A single line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span several lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
+ Thank you for being part of our community!
+ We welcome and encourage your feedback and bug reports for and this documentation.
+ To find support, use the following resources:
+
InfluxDB 3.8 is now available for both Core and Enterprise, alongside the
+1.6 release of the InfluxDB 3 Explorer UI. This release is focused on
+operational maturity and making InfluxDB easier to deploy, manage, and run
+reliably in production.
InfluxDB Docker latest tag changing to InfluxDB 3 Core
+
+
+
+ On May 27, 2026, the latest tag for InfluxDB Docker images will
+point to InfluxDB 3 Core. To avoid unexpected upgrades, use specific version
+tags in your Docker deployments.
+
+
+
+
+
If using Docker to install and run InfluxDB, the latest tag will point to
+InfluxDB 3 Core. To avoid unexpected upgrades, use specific version tags in
+your Docker deployments. For example, if using Docker to run InfluxDB v2,
+replace the latest version tag with a specific version tag in your Docker
+pull command–for example:
InfluxQL (Influx Query Language) is an SQL-like query language used to interact
+with InfluxDB and work with times series data.
+
+
+
+
+
InfluxQL feature support
+
InfluxQL is being rearchitected to work with the InfluxDB 3 storage engine.
+This process is ongoing and some InfluxQL features are still being implemented.
+For information about the current implementation status of InfluxQL features,
+see InfluxQL feature support.
cpu
+_cpu_stats
+"1h"
+"anything really"
+"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
+identifier,
+double-quote the identifier in every query.
+
The time keyword is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name, and
+user name.
+
In those cases, you don’t need to double-quote time in queries.
+
time can’t be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals aren’t currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents aren’t currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
+Strings may contain ' characters as long as they are escaped (that is, , \')
+
+
+
string_lit = `'` { unicode_char } `'` .
+
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by one of the duration units listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
Unlike other notations used in InfluxQL, the date and time literal format isn’t specified by EBNF.
+InfluxQL date and time is specified using Go’s time parsing format and
+reference date written in the format required by InfluxQL.
+The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL doesn’t support using regular expressions to match non-string
+field values in the WHERE clause, databases,
+and retention polices.
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon (;).
Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL doesn’t support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
+
A query plan generated by EXPLAIN contains the following elements:
Executes the specified SELECT statement and returns data about the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
+
For example, if you execute the following statement:
EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or
+CSV isn’t accounted for.
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
+Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and how much memory the planning requires.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access.
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance──a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes 3 cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+
cursor_cond: Condition cursor created for fields referenced in a WHERE clause.
EXPLAIN ANALYZE separates storage block types, and reports the total number of
+blocks decoded and their size (in bytes) on disk. The following block types are supported:
show_field_keys_stmt = "SHOW FIELD KEYS" [on_clause] [ from_clause ] .
+
Examples
+
+
+
-- show field keys and field value data types from all measurements
+SHOWFIELDKEYS
+
+-- show field keys and field value data types from specified measurement
+SHOWFIELDKEYSFROM"cpu"
-- show all measurements
+SHOWMEASUREMENTS
+
+-- show measurements where region tag = 'uswest' AND host tag = 'serverA'
+SHOWMEASUREMENTSWHERE"region"='uswest'AND"host"='serverA'
+
+-- show measurements that start with 'h2o'
+SHOWMEASUREMENTSWITHMEASUREMENT=~/h2o.*/
-- show all tag keys
+SHOWTAGKEYS
+
+-- show all tag keys from the cpu measurement
+SHOWTAGKEYSFROM"cpu"
+
+-- show all tag keys from the cpu measurement where the region key = 'uswest'
+SHOWTAGKEYSFROM"cpu"WHERE"region"='uswest'
+
+-- show all tag keys where the host key = 'serverA'
+SHOWTAGKEYSWHERE"host"='serverA'
-- show all tag values across all measurements for the region tag
+SHOWTAGVALUESWITHKEY="region"
+
+-- show tag values from the cpu measurement for the region tag
+SHOWTAGVALUESFROM"cpu"WITHKEY="region"
+
+-- show tag values across all measurements for all tag keys that do not include the letter c
+SHOWTAGVALUESWITHKEY!~/.*c.*/
+
+-- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
+SHOWTAGVALUESFROM"cpu"WITHKEYIN("region","host")WHERE"service"='redis'
The default time range is the Unix epoch (1970-01-01T00:00:00Z) to now.
+
Comments
+
Use comments with InfluxQL statements to describe your queries.
+
+
A single line comment begins with two hyphens (--) and ends where InfluxDB detects a line break.
+This comment type cannot span several lines.
+
A multi-line comment begins with /* and ends with */. This comment type can span several lines.
+Multi-line comments do not support nested multi-line comments.
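
Both comment styles can appear alongside a statement; for example (the measurement and field names here are illustrative):

```sql
-- this single-line comment describes the statement below
SELECT * FROM "h2o_feet" ORDER BY time DESC LIMIT 1

/* this multi-line comment can span
   several lines before the statement */
SELECT COUNT("water_level") FROM "h2o_feet"
```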
InfluxQL (Influx Query Language) is an SQL-like query language used to interact
with InfluxDB and work with time series data.
+
+
+
+
+
InfluxQL feature support
+
InfluxQL is being rearchitected to work with the InfluxDB 3 storage engine.
+This process is ongoing and some InfluxQL features are still being implemented.
+For information about the current implementation status of InfluxQL features,
+see InfluxQL feature support.
The following are examples of valid identifiers:

cpu
_cpu_stats
"1h"
"anything really"
"1_Crazy-1337.identifier>NAME👍"
+
Keywords
+
+
+
ALL ALTER ANY AS ASC BEGIN
+BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
+DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
+DURATION END EVERY EXPLAIN FIELD FOR
+FROM GRANT GRANTS GROUP GROUPS IN
+INF INSERT INTO KEY KEYS KILL
+LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET
+ON ORDER PASSWORD POLICY POLICIES PRIVILEGES
+QUERIES QUERY READ REPLICATION RESAMPLE RETENTION
+REVOKE SELECT SERIES SET SHARD SHARDS
+SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG
+TO USER USERS VALUES WHERE WITH
+WRITE
+
If you use an InfluxQL keyword as an
+identifier,
+double-quote the identifier in every query.
+
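For example, a measurement named after the keyword DURATION must be double-quoted wherever it appears (the measurement itself is hypothetical):

```sql
-- "duration" is an InfluxQL keyword, so the identifier must be double-quoted
SELECT * FROM "duration"
```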
The time keyword is a special case.
+time can be a
+database name,
+measurement name,
+retention policy name, and
+user name.
+
In those cases, you don’t need to double-quote time in queries.
+
time can’t be a field key or
+tag key;
+InfluxDB rejects writes with time as a field key or tag key and returns an error.
+
+
Literals
+
Integers
+
InfluxQL supports decimal integer literals.
+Hexadecimal and octal literals aren’t currently supported.
+
+
+
int_lit = ( "1" … "9" ) { digit } .
+
Floats
+
InfluxQL supports floating-point literals.
+Exponents aren’t currently supported.
+
+
+
float_lit = int_lit "." int_lit .
+
Strings
+
String literals must be surrounded by single quotes.
Strings may contain ' characters as long as they are escaped (that is, \').
+
+
+
string_lit = `'` { unicode_char } `'` .
+
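A minimal sketch of single quoting and escaping (the measurement and tag names are illustrative):

```sql
-- string literals use single quotes; an embedded quote is escaped with \'
SELECT * FROM "weather" WHERE "location" = 'coeur d\'alene'
```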
Durations
+
Duration literals specify a length of time.
+An integer literal followed immediately (with no spaces) by one of the duration units listed below is interpreted as a duration literal.
+Durations can be specified with mixed units.
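
For example, a mixed-unit duration such as 1h30m can bound the time range of a query (the cpu measurement and usage field are illustrative):

```sql
-- 1h30m combines the hour (h) and minute (m) duration units
SELECT "usage" FROM "cpu" WHERE time > now() - 1h30m
```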
Unlike other notations used in InfluxQL, the date and time literal format isn’t specified by EBNF.
Date and time literals in InfluxQL are specified using Go’s time parsing layout: a
reference date written in the format required by InfluxQL.
The reference date time is:
+
InfluxQL reference date time: January 2nd, 2006 at 3:04:05 PM
Currently, InfluxQL doesn’t support using regular expressions to match non-string
field values in the WHERE clause, databases,
and retention policies.
+
+
Queries
+
A query is composed of one or more statements separated by a semicolon (;).
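
For instance, two statements can be submitted in a single query string (the measurement and field names are illustrative):

```sql
-- two statements in one query, separated by a semicolon
SHOW MEASUREMENTS; SELECT "value" FROM "cpu_load" LIMIT 5
```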
EXPLAIN

Parses and plans the query, and then prints a summary of estimated costs.
+
Many SQL engines use the EXPLAIN statement to show join order, join algorithms, and predicate and expression pushdown.
+Since InfluxQL doesn’t support joins, the cost of an InfluxQL query is typically a function of the total series accessed, the number of iterator accesses to a TSM file, and the number of TSM blocks that need to be scanned.
+
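A sketch of how EXPLAIN is invoked; the measurement and field names are illustrative, and the reported costs depend on your data:

```sql
-- prefix a SELECT statement with EXPLAIN to see its estimated cost
EXPLAIN SELECT MEAN("usage") FROM "cpu" WHERE time > now() - 1h
```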
A query plan generated by EXPLAIN contains the following elements:
EXPLAIN ANALYZE

Executes the specified SELECT statement and returns data about the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including execution time and planning time, and the iterator type and cursor type.
+
For example, if you execute the following statement:
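(The statement below is an illustrative stand-in; the measurement, field, and time range are hypothetical.)

```sql
-- analyze the runtime performance of a simple aggregation
EXPLAIN ANALYZE SELECT MEAN("usage_steal") FROM "cpu" WHERE time >= now() - 12h
```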
EXPLAIN ANALYZE ignores query output, so the cost of serialization to JSON or
+CSV isn’t accounted for.
+
+
execution_time
+
Shows the amount of time the query took to execute, including reading the time series data, performing operations as data flows through iterators, and draining processed data from iterators. Execution time doesn’t include the time taken to serialize the output into JSON or other formats.
+
planning_time
+
Shows the amount of time the query took to plan.
+Planning a query in InfluxDB requires a number of steps. Depending on the complexity of the query, planning can require more work and consume more CPU and memory resources than executing the query. For example, the number of series keys required to execute a query affects how quickly the query is planned and how much memory the planning requires.
+
First, InfluxDB determines the effective time range of the query and selects the shards to access.
+Next, for each shard and each measurement, InfluxDB performs the following steps:
+
+
Select matching series keys from the index, filtered by tag predicates in the WHERE clause.
+
Group filtered series keys into tag sets based on the GROUP BY dimensions.
+
Enumerate each tag set and create a cursor and iterator for each series key.
+
Merge iterators and return the merged result to the query executor.
+
+
iterator type
+
EXPLAIN ANALYZE supports the following iterator types:
+
+
create_iterator node represents work done by the local influxd instance: a complex composition of nested iterators combined and merged to produce the final query output.
+
(InfluxDB Enterprise only) remote_iterator node represents work done on remote machines.
EXPLAIN ANALYZE distinguishes three cursor types. While the cursor types have the same data structures and equal CPU and I/O costs, each cursor type is constructed for a different reason and separated in the final output. Consider the following cursor types when tuning a statement:
+
+
cursor_ref: Reference cursor created for SELECT projections that include a function, such as last() or mean().
+
cursor_aux: Auxiliary cursor created for simple expression projections (not selectors or an aggregation). For example, SELECT foo FROM m or SELECT foo+bar FROM m, where foo and bar are fields.
+