From 37a1f7ecb2d3ad487e661c88eb95632b5eddeba3 Mon Sep 17 00:00:00 2001
From: Scott Anderson
Date: Wed, 16 Dec 2020 13:32:37 -0700
Subject: [PATCH] Updated link in OSS optimize doc

---
 content/influxdb/v2.0/query-data/optimize-queries.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/influxdb/v2.0/query-data/optimize-queries.md b/content/influxdb/v2.0/query-data/optimize-queries.md
index e5c57231e..252ec0b56 100644
--- a/content/influxdb/v2.0/query-data/optimize-queries.md
+++ b/content/influxdb/v2.0/query-data/optimize-queries.md
@@ -74,7 +74,7 @@ We're continually optimizing Flux and this list may not represent its current st
 ## Balance time range and data precision
 To ensure queries are performant, balance the time range and the precision of your data.
 For example, if you query data stored every second and request six months worth of data,
-results would include ≈15.5 million points per series. Depending on the number of series returned after `filter()`([cardinality](/influxdb/cloud/reference/glossary/#series-cardinality)), this can quickly become many billions of points.
+results would include ≈15.5 million points per series. Depending on the number of series returned after `filter()`([cardinality](/influxdb/v2.0/reference/glossary/#series-cardinality)), this can quickly become many billions of points.
 Flux must store these points in memory to generate a response.
 Use [pushdown functions](#pushdown-functions) to optimize how many points are stored in memory.
 To query data over large periods of time, create a task to [downsample data](/influxdb/v2.0/process-data/common-tasks/downsample-data/), and then query the downsampled data instead.
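
For context, the "downsample data" task referenced in the hunk above could look roughly like the following Flux sketch. The bucket names `example-bucket` and `example-downsampled`, the org `example-org`, and the measurement `example-measurement` are hypothetical placeholders, not values from the patched doc.

```flux
// Minimal downsampling task sketch (assumed names, illustrative only):
// reads the last hour of raw data and writes 1-minute means to a second bucket.
option task = {name: "downsample-example", every: 1h}

from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "example-measurement")
    |> aggregateWindow(every: 1m, fn: mean)
    |> to(bucket: "example-downsampled", org: "example-org")
```

Querying the downsampled bucket over long time ranges scans roughly 60× fewer points than the per-second data in this 1-minute example, which is the time-range/precision trade-off the patched section describes.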