fix(influxdb3): correct Quix Streams guide for Cloud Dedicated (#6828)
* chore(deps): update yarn dependencies

  Run yarn to update the lockfile with the latest compatible versions.

* fix(influxdb3): correct Quix Streams guide for Cloud Dedicated

  Extract downsample-quix content to a shared file and fix product-specific
  terminology, links, and prerequisites for Cloud Dedicated and Clustered.

  - Use "database" terminology for Cloud Dedicated/Clustered
  - Remove Docker from prerequisites (not used in the guide)
  - Add alt_links for cross-product navigation
  - Fix broken TOC anchor links
  - Add links to admin pages for tokens and databases
  - Remove incorrect /reference/regions link for Cloud Dedicated
  - Add lint rules for deprecated code-placeholders and py fence

  Closes #6825

* Apply suggestions from code review

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(influxdb3): fix broken code blocks in Quix Streams guide

  Code blocks inside show-in shortcodes were missing closing fences, causing
  the markdown to render incorrectly. Added proper fence boundaries and
  placeholder key documentation for each code section.

  Also adds a TODO to the content-editing skill about improving automation for
  the code-placeholder-key workflow.

* Docs v2 docs v2 pr6828 (#6829)

  * fix(influxdb): Rename to match other pages. Remove alt_links
  * chore(deps): bump ESLint to 10.0.0
  * Update content/shared/v3-process-data/downsample/quix.md

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
parent 6c91e8d222
commit c97428b600
@@ -157,6 +157,28 @@ function getSharedSource(filePath) {
- Tests will fail because content hasn't changed
- Published site won't reflect your edits

### Check for Path Differences and Add `alt_links`

When creating or editing shared content, check whether the URL paths differ between products. If they do, add `alt_links` frontmatter to each product file to cross-reference the equivalent pages.

See [DOCS-FRONTMATTER.md](../../../DOCS-FRONTMATTER.md#alternative-links-alt_links) for syntax and examples.

### Check product resource terms are cross-referenced

When creating or editing content, check that product resource terms link to `admin/` or `reference/` pages that help the user understand and set up the resource.
Product resource terms often appear inside `code-placeholder-key` shortcode text and bullet-item text.
Example product resource terms:

- "database token"
- "database name"

**TODOs for CI/config:**

- Add an automated check to validate that `alt_links` are present when shared content paths differ across products
- Add a check for product-specific URL patterns in shared content (for example, Cloud Serverless uses `/reference/regions` for URLs; Cloud Dedicated and Clustered don't have this page because cluster URLs come from account setup)
- Add a check or helper to ensure resource references (tokens, databases, buckets) link to the proper admin pages using the `/influxdb3/version/admin/` pattern
- Rethink the `code-placeholder-key` workflow: `docs placeholders` adds `placeholders` attributes to code blocks but doesn't generate the "Replace the following:" lists with `{{% code-placeholder-key %}}` shortcodes. Either improve the automation to generate these lists, or simplify by removing `code-placeholder-key` if the attribute alone is sufficient.

## Part 2: Testing Workflow

After making content changes, run tests to validate:
@@ -285,6 +307,7 @@ Vale reports three alert levels:
### Fixing Common Vale Issues

**Spelling/vocabulary errors:**

```bash
# If Vale flags a legitimate term, add it to the vocabulary
echo "YourTerm" >> .ci/vale/styles/config/vocabularies/InfluxDataDocs/accept.txt
```
@@ -292,6 +315,7 @@ echo "YourTerm" >> .ci/vale/styles/config/vocabularies/InfluxDataDocs/accept.txt
**Style violations:**

Vale will suggest the correct form. For example:

```
content/file.md:25:1: Use 'InfluxDB 3' instead of 'InfluxDB v3'
```
@@ -300,6 +324,7 @@ Simply make the suggested change.
**False positives:**

If Vale incorrectly flags something:

1. Check if it's a new technical term that should be in the vocabulary
2. See if the rule needs refinement (consult the **vale-rule-config** skill)
3. Add inline comments to disable specific rules if necessary:
@@ -617,6 +642,7 @@ ls content/influxdb3/core/api/
- [ ] Content created/edited using the appropriate method (CLI or direct)
- [ ] If shared content: sourcing files touched (or used `docs edit`)
- [ ] If shared content: checked for path differences and added `alt_links` if paths vary
- [ ] Technical accuracy verified (MCP fact-check if needed)
- [ ] Hugo builds without errors (`hugo --quiet`)
- [ ] Vale style linting passes (`docker compose run -T vale content/**/*.md`)
@@ -0,0 +1,16 @@
---
title: Downsample data stored in InfluxDB using Quix Streams
description: >
  Use [Quix Streams](https://github.com/quixio/quix-streams) to query time series
  data stored in InfluxDB and written to Kafka at regular intervals, continuously
  downsample it, and then write the downsampled data back to InfluxDB.
menu:
  influxdb3_cloud_dedicated:
    name: Use Quix
    parent: Downsample data
    identifier: influxdb-dedicated-downsample-quix
weight: 102
aliases:
  - /influxdb3/cloud-dedicated/process-data/downsample-quix/
source: /content/shared/v3-process-data/downsample/quix.md
---
@@ -12,350 +12,5 @@ menu:
weight: 202
related:
  - /influxdb3/cloud-serverless/query-data/sql/aggregate-select/, Aggregate or apply selector functions to data (SQL)
source: /content/shared/v3-process-data/downsample/quix.md
---

Use [Quix Streams](https://github.com/quixio/quix-streams) to query time series
data stored in InfluxDB and written to Kafka at regular intervals, continuously
downsample it, and then write the downsampled data back to InfluxDB.

Quix Streams is an open source Python library for building containerized stream
processing applications with Apache Kafka. It is designed to run as a service
that continuously processes a stream of data while streaming the results to a
Kafka topic. You can try it locally, with a local Kafka installation, or run it
in [Quix Cloud](https://quix.io/) with a free trial.

This guide uses [Python](https://www.python.org/) and the
[InfluxDB 3 Python client library](https://github.com/InfluxCommunity/influxdb3-python),
but you can use your runtime of choice and any of the available
[InfluxDB 3 client libraries](/influxdb3/cloud-serverless/reference/client-libraries/v3/).
This guide also assumes you have already
[set up your Python project and virtual environment](/influxdb3/cloud-serverless/query-data/execute-queries/client-libraries/python/#create-a-python-virtual-environment).
## Pipeline architecture

The following diagram illustrates how data is passed between processes as it is downsampled:

{{< html-diagram/quix-downsample-pipeline >}}

> [!Note]
> It is usually more efficient to write raw data directly to Kafka rather than
> writing raw data to InfluxDB first (essentially starting the Quix Streams
> pipeline with the "raw-data" topic). However, this guide assumes that you
> already have raw data in InfluxDB that you want to downsample.

---

1. [Set up prerequisites](#set-up-prerequisites)
2. [Install dependencies](#install-dependencies)
3. [Prepare InfluxDB buckets](#prepare-influxdb-buckets)
4. [Create the downsampling logic](#create-the-downsampling-logic)
5. [Create the producer and consumer clients](#create-the-producer-and-consumer-clients)
   1. [Create the producer client](#create-the-producer-client)
   2. [Create the consumer](#create-the-consumer)
6. [Get the full downsampling code files](#get-the-full-downsampling-code-files)
## Set up prerequisites

The process described in this guide requires the following:

- An InfluxDB Cloud Serverless account with data ready for downsampling.
- A [Quix Cloud](https://portal.platform.quix.io/self-sign-up/) account or a
  local Apache Kafka or Red Panda installation.
- Familiarity with basic Python and Docker concepts.
## Install dependencies

Use `pip` to install the following dependencies:

- `influxdb_client_3`
- `quixstreams<2.5`
- `pandas`

```sh
pip install influxdb3-python pandas "quixstreams<2.5"
```
## Prepare InfluxDB buckets

The downsampling process involves two InfluxDB buckets.
Each bucket has a [retention period](/influxdb3/cloud-serverless/reference/glossary/#retention-period)
that specifies how long data persists before it expires and is deleted.
By using two buckets, you can store unmodified, high-resolution data in a bucket
with a shorter retention period and then downsampled, low-resolution data in a
bucket with a longer retention period.

Ensure you have a bucket for each of the following:

- One to query unmodified data from
- The other to write downsampled data to

For information about creating buckets, see
[Create a bucket](/influxdb3/cloud-serverless/admin/buckets/create-bucket/).
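To make the two-retention-period split concrete, here is a back-of-envelope count of points per day for a single series, assuming the 250 ms raw write interval and the 1-minute downsampling window used later in this guide:

```python
# Rough per-day point counts for one series, assuming a 250 ms raw
# write interval and a 1-minute downsampling window.
MS_PER_DAY = 24 * 60 * 60 * 1000

raw_points_per_day = MS_PER_DAY // 250  # one raw point every 250 ms
downsampled_points_per_day = 24 * 60    # one averaged point per minute

print(raw_points_per_day)                                # 345600
print(downsampled_points_per_day)                        # 1440
print(raw_points_per_day // downsampled_points_per_day)  # 240
```

The raw bucket accumulates roughly 240 times more points per day, which is why it gets the shorter retention period.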
## Create the downsampling logic

This process reads the raw data from the input Kafka topic that stores data streamed from InfluxDB,
downsamples it, and then sends it to an output topic that is used to write back to InfluxDB.

1. Use the Quix Streams library's `Application` class to initialize a connection to Apache Kafka.

   ```py
   from quixstreams import Application

   app = Application(consumer_group='downsampling-process', auto_offset_reset='earliest')
   input_topic = app.topic('raw-data')
   output_topic = app.topic('downsampled-data')

   # ...
   ```
2. Configure the Quix Streams built-in windowing function to create a tumbling
   window that continuously downsamples the data into 1-minute buckets.

   ```py
   # ...
   target_field = 'temperature'  # The field that you want to downsample.

   def custom_ts_extractor(value):
       # ...
       # truncated for brevity - custom code that defines the 'time_recorded'
       # field as the timestamp to use for windowing...

   topic = app.topic(input_topic, timestamp_extractor=custom_ts_extractor)

   sdf = (
       sdf.apply(lambda value: value[target_field])  # Extract temperature values
       .tumbling_window(timedelta(minutes=1))        # 1-minute tumbling windows
       .mean()                                       # Calculate average temperature
       .final()                                      # Emit results at window completion
   )

   sdf = sdf.apply(
       lambda value: {
           'time': value['end'],               # End of the window
           'temperature_avg': value['value'],  # Average temperature
       }
   )

   sdf.to_topic(output_topic)  # Output results to the 'downsampled-data' topic
   # ...
   ```

   The results are streamed to the Kafka topic, `downsampled-data`.
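Independent of Quix Streams, the tumbling-window mean reduces to bucketing records by which 1-minute window their timestamp falls into and averaging each bucket. A minimal pure-Python sketch of that logic (the sample timestamps and temperatures are made up for illustration):

```python
from collections import defaultdict

WINDOW_MS = 60_000  # 1-minute tumbling windows, keyed by window start

def tumbling_mean(records):
    """records: iterable of (timestamp_ms, value) pairs."""
    windows = defaultdict(list)
    for ts, value in records:
        # Assign each record to the window containing its timestamp
        windows[(ts // WINDOW_MS) * WINDOW_MS].append(value)
    # Emit one averaged point per window, like .mean().final()
    return {
        start: sum(values) / len(values)
        for start, values in sorted(windows.items())
    }

# Four readings spanning two 1-minute windows
sample = [(0, 20.0), (30_000, 22.0), (60_000, 24.0), (90_000, 26.0)]
print(tumbling_mean(sample))  # {0: 21.0, 60000: 25.0}
```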
> [!Note]
> "sdf" stands for "Streaming Dataframe".

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/Downsampler/main.py).
## Create the producer and consumer clients

Use the `influxdb_client_3` and `quixstreams` modules to instantiate two clients that interact with InfluxDB and Apache Kafka:

- A **producer** client configured to read from your InfluxDB bucket with _unmodified_ data and _produce_ that data to Kafka.
- A **consumer** client configured to _consume_ data from Kafka and write the _downsampled_ data to the corresponding InfluxDB bucket.

### Create the producer client

Provide the following credentials for the producer:

- **host**: [{{< product-name >}} region URL](/influxdb3/cloud-serverless/reference/regions)
  _(without the protocol)_
- **org**: InfluxDB organization name
- **token**: InfluxDB API token with read and write permissions on the buckets you
  want to query and write to.
- **database**: InfluxDB bucket name

The producer queries for fresh data from InfluxDB at specific intervals. It's configured to look for a specific measurement defined in a variable. It writes the raw data to a Kafka topic called `raw-data`.
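The polling query itself is plain InfluxQL assembled with an f-string. A sketch with hypothetical values for `measurement_name` and `interval` (in the full template these come from configuration):

```python
# Hypothetical values; the Quix template reads these from its configuration
measurement_name = "machine_data"
interval = "now() - 5m"  # only fetch rows newer than the last polling window

myquery = f'SELECT * FROM "{measurement_name}" WHERE time >= {interval}'
print(myquery)  # SELECT * FROM "machine_data" WHERE time >= now() - 5m
```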
{{% code-placeholders "(API|(RAW|DOWNSAMPLED)_BUCKET|ORG)_(NAME|TOKEN)" %}}
```py
import os
import uuid

from influxdb_client_3 import InfluxDBClient3
from quixstreams import Application
import pandas

# Instantiate an InfluxDBClient3 client configured for your unmodified bucket
influxdb_raw = InfluxDBClient3(
    host='{{< influxdb/host >}}',
    token='API_TOKEN',
    database='RAW_BUCKET_NAME'
)

# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud

# Create a Quix Streams producer application that connects to a local Kafka installation
app = Application(
    broker_address=os.environ.get('BROKER_ADDRESS', 'localhost:9092'),
    consumer_group=consumer_group_name,
    auto_create_topics=True
)

# Override the app variable if the local development env var is set to false or is not present.
# This causes Quix Streams to use an application configured for Quix Cloud
localdev = os.environ.get('localdev', 'false')

if localdev == 'false':
    # Create a Quix platform-specific application instead (broker address is built in)
    app = Application(consumer_group=consumer_group_name, auto_create_topics=True)

topic = app.topic(name='raw-data')

## ... remaining code truncated for brevity ...

# Query InfluxDB for the raw data and store it in a DataFrame
def get_data():
    # Run in a loop until the main thread is terminated
    while run:
        try:
            myquery = f'SELECT * FROM "{measurement_name}" WHERE time >= {interval}'
            print(f'sending query {myquery}')
            # Query InfluxDB 3 using InfluxQL or SQL
            table = influxdb_raw.query(
                query=myquery,
                mode='pandas',
                language='influxql')

            # ... remaining code truncated for brevity ...

# Send the data to a Kafka topic for the downsampling process to consume
def main():
    """
    Read data from the query and publish it to Kafka
    """
    # ... remaining code truncated for brevity ...

    for index, obj in enumerate(records):
        print(obj)  # obj contains each row in the table, including temperature
        # Generate a unique message_key for each row
        message_key = obj['machineId']
        logger.info(f'Produced message with key:{message_key}, value:{obj}')

        serialized = topic.serialize(
            key=message_key, value=obj, headers={'uuid': str(uuid.uuid4())}
        )

        # Publish each row returned in the query to the topic 'raw-data'
        producer.produce(
            topic=topic.name,
            headers=serialized.headers,
            key=serialized.key,
            value=serialized.value,
        )
```
{{% /code-placeholders %}}
You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Source/main.py).

### Create the consumer

As before, provide the following credentials for the consumer:

- **host**: [{{< product-name >}} region URL](/influxdb3/cloud-serverless/reference/regions)
  _(without the protocol)_
- **org**: InfluxDB organization name
- **token**: InfluxDB API token with read and write permissions on the buckets you
  want to query and write to.
- **database**: InfluxDB bucket name

This process reads messages from the Kafka topic `downsampled-data` and writes each message as a point dictionary back to InfluxDB.
{{% code-placeholders "(API|(RAW|DOWNSAMPLED)_BUCKET|ORG)_(NAME|TOKEN)" %}}
```py
# Instantiate an InfluxDBClient3 client configured for your downsampled bucket.
# When writing, the org= argument is required by the client (but ignored by InfluxDB).
influxdb_downsampled = InfluxDBClient3(
    host='{{< influxdb/host >}}',
    token='API_TOKEN',
    database='DOWNSAMPLED_BUCKET_NAME',
    org=''
)

# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud

# Create a Quix Streams consumer application that connects to a local Kafka installation
app = Application(
    broker_address=os.environ.get('BROKER_ADDRESS', 'localhost:9092'),
    consumer_group=consumer_group_name,
    auto_create_topics=True
)

# Override the app variable if the local development env var is set to false or is not present.
# This causes Quix Streams to use an application configured for Quix Cloud
localdev = os.environ.get('localdev', 'false')

if localdev == 'false':
    # Create a Quix platform-specific application instead (broker address is built in)
    app = Application(consumer_group=consumer_group_name, auto_create_topics=True)

input_topic = app.topic('downsampled-data')

## ... remaining code truncated for brevity ...

def send_data_to_influx(message):
    logger.info(f'Processing message: {message}')
    try:

        ## ... remaining code truncated for brevity ...

        # Construct the points dictionary
        points = {
            'measurement': measurement_name,
            'tags': tags,
            'fields': fields,
            'time': message['time']
        }

        influxdb_downsampled.write(record=points, write_precision='ms')

sdf = app.dataframe(input_topic)
sdf = sdf.update(send_data_to_influx)  # Continuously apply the 'send_data' function to each message in the incoming stream

## ... remaining code truncated for brevity ...
```
{{% /code-placeholders %}}
You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Sink/main.py).
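Stripped of the client and Kafka plumbing, `send_data_to_influx` maps one downsampled message to one point dictionary. A sketch with a made-up message and a hypothetical measurement name:

```python
# Made-up message, shaped like the Downsampler's output records
message = {"time": 1717000000000, "temperature_avg": 21.5, "machineId": "machine-1"}

measurement_name = "downsampled_machine_data"  # hypothetical
tags = {"machineId": message["machineId"]}
fields = {"temperature_avg": message["temperature_avg"]}

points = {
    "measurement": measurement_name,
    "tags": tags,
    "fields": fields,
    "time": message["time"],  # written with millisecond precision
}
print(points["fields"])  # {'temperature_avg': 21.5}
```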
## Get the full downsampling code files

To get the complete set of files referenced in this tutorial, clone the Quix "downsampling template" repository or use an interactive version of this tutorial saved as a Jupyter Notebook.

### Clone the downsampling template repository

To clone the downsampling template, enter the following command in the command line:

```sh
git clone https://github.com/quixio/template-influxdbv3-downsampling.git
```

This repository contains the following folders, which store different parts of the whole pipeline:

- **Machine Data to InfluxDB**: A script that generates synthetic machine data
  and writes it to InfluxDB. This is useful if you don't have your own data yet,
  or just want to work with test data first.

  - It produces a reading every 250 milliseconds.
  - This script originally comes from the
    [InfluxCommunity repository](https://github.com/InfluxCommunity/Arrow-Task-Engine/blob/master/machine_simulator/src/machine_generator.py)
    but has been adapted to write directly to InfluxDB rather than using an MQTT broker.

- **InfluxDB V3 Data Source**: A service that queries for fresh data from
  InfluxDB at specific intervals. It's configured to look for the measurement
  produced by the previously mentioned synthetic machine data generator.
  It writes the raw data to a Kafka topic called "raw-data".
- **Downsampler**: A service that performs a 1-minute tumbling window operation
  on the data from InfluxDB and emits the mean of the "temperature" reading
  every minute. It writes the output to a "downsampled-data" Kafka topic.
- **InfluxDB V3 Data Sink**: A service that reads from the "downsampled-data"
  topic and writes the downsampled records as points back into InfluxDB.

### Use the downsampling Jupyter Notebook

You can use the interactive notebook ["Continuously downsample data using InfluxDB and Quix Streams"](https://github.com/quixio/tutorial-code/blob/main/notebooks/Downsampling_viaKafka_Using_Quix_Influx.ipynb) to try the downsampling code yourself. It is configured to install Apache Kafka within the runtime environment (such as Google Colab).

Each process is also set up to run in the background so that a running cell does not block the rest of the tutorial.

<a target="_blank" href="https://colab.research.google.com/github/quixio/tutorial-code/blob/main/notebooks/Downsampling_viaKafka_Using_Quix_Influx.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
@@ -12,350 +12,5 @@ menu:
weight: 202
related:
  - /influxdb3/clustered/query-data/sql/aggregate-select/, Aggregate or apply selector functions to data (SQL)
source: /content/shared/v3-process-data/downsample/quix.md
---

Use [Quix Streams](https://github.com/quixio/quix-streams) to query time series
data stored in InfluxDB and written to Kafka at regular intervals, continuously
downsample it, and then write the downsampled data back to InfluxDB.

Quix Streams is an open source Python library for building containerized stream
processing applications with Apache Kafka. It is designed to run as a service
that continuously processes a stream of data while streaming the results to a
Kafka topic. You can try it locally, with a local Kafka installation, or run it
in [Quix Cloud](https://quix.io/) with a free trial.

This guide uses [Python](https://www.python.org/) and the
[InfluxDB 3 Python client library](https://github.com/InfluxCommunity/influxdb3-python),
but you can use your runtime of choice and any of the available
[InfluxDB 3 client libraries](/influxdb3/clustered/reference/client-libraries/v3/).
This guide also assumes you have already
[set up your Python project and virtual environment](/influxdb3/clustered/query-data/execute-queries/client-libraries/python/#create-a-python-virtual-environment).
## Pipeline architecture

The following diagram illustrates how data is passed between processes as it is downsampled:

{{< html-diagram/quix-downsample-pipeline >}}

> [!Note]
> It is usually more efficient to write raw data directly to Kafka rather than
> writing raw data to InfluxDB first (essentially starting the Quix Streams
> pipeline with the "raw-data" topic). However, this guide assumes that you
> already have raw data in InfluxDB that you want to downsample.

---

1. [Set up prerequisites](#set-up-prerequisites)
2. [Install dependencies](#install-dependencies)
3. [Prepare InfluxDB buckets](#prepare-influxdb-buckets)
4. [Create the downsampling logic](#create-the-downsampling-logic)
5. [Create the producer and consumer clients](#create-the-producer-and-consumer-clients)
   1. [Create the producer client](#create-the-producer-client)
   2. [Create the consumer](#create-the-consumer)
6. [Get the full downsampling code files](#get-the-full-downsampling-code-files)
## Set up prerequisites

The process described in this guide requires the following:

- An InfluxDB cluster with data ready for downsampling.
- A [Quix Cloud](https://portal.platform.quix.io/self-sign-up/) account or a
  local Apache Kafka or Red Panda installation.
- Familiarity with basic Python and Docker concepts.
## Install dependencies

Use `pip` to install the following dependencies:

- `influxdb_client_3`
- `quixstreams<2.5`
- `pandas`

```sh
pip install influxdb3-python pandas "quixstreams<2.5"
```
## Prepare InfluxDB buckets

The downsampling process involves two InfluxDB databases.
Each database has a [retention period](/influxdb3/clustered/reference/glossary/#retention-period)
that specifies how long data persists before it expires and is deleted.
By using two databases, you can store unmodified, high-resolution data in a database
with a shorter retention period and then downsampled, low-resolution data in a
database with a longer retention period.

Ensure you have a database for each of the following:

- One to query unmodified data from
- The other to write downsampled data to

For information about creating databases, see
[Create a database](/influxdb3/clustered/admin/databases/create/).
## Create the downsampling logic

This process reads the raw data from the input Kafka topic that stores data streamed from InfluxDB,
downsamples it, and then sends it to an output topic that is used to write back to InfluxDB.

1. Use the Quix Streams library's `Application` class to initialize a connection to Apache Kafka.

   ```py
   from quixstreams import Application

   app = Application(consumer_group='downsampling-process', auto_offset_reset='earliest')
   input_topic = app.topic('raw-data')
   output_topic = app.topic('downsampled-data')

   # ...
   ```
2. Configure the Quix Streams built-in windowing function to create a tumbling
   window that continuously downsamples the data into 1-minute buckets.

   ```py
   # ...
   target_field = 'temperature'  # The field that you want to downsample.

   def custom_ts_extractor(value):
       # ...
       # truncated for brevity - custom code that defines the 'time_recorded'
       # field as the timestamp to use for windowing...

   topic = app.topic(input_topic, timestamp_extractor=custom_ts_extractor)

   sdf = (
       sdf.apply(lambda value: value[target_field])  # Extract temperature values
       .tumbling_window(timedelta(minutes=1))        # 1-minute tumbling windows
       .mean()                                       # Calculate average temperature
       .final()                                      # Emit results at window completion
   )

   sdf = sdf.apply(
       lambda value: {
           'time': value['end'],               # End of the window
           'temperature_avg': value['value'],  # Average temperature
       }
   )

   sdf.to_topic(output_topic)  # Output results to the 'downsampled-data' topic
   # ...
   ```

   The results are streamed to the Kafka topic, `downsampled-data`.
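The second `sdf.apply` reshapes each completed window result, which Quix Streams emits as a dict with `start`, `end`, and `value` keys, into the output record. The same mapping in plain Python (sample values made up for illustration):

```python
# Shape of one completed tumbling-window result emitted by .final()
window_result = {"start": 0, "end": 60_000, "value": 21.0}

record = {
    "time": window_result["end"],               # end of the window
    "temperature_avg": window_result["value"],  # average temperature
}
print(record)  # {'time': 60000, 'temperature_avg': 21.0}
```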
> [!Note]
> "sdf" stands for "Streaming Dataframe".

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/Downsampler/main.py).
## Create the producer and consumer clients

Use the `influxdb_client_3` and `quixstreams` modules to instantiate two clients that interact with InfluxDB and Apache Kafka:

- A **producer** client configured to read from your InfluxDB database with _unmodified_ data and _produce_ that data to Kafka.
- A **consumer** client configured to _consume_ data from Kafka and write the _downsampled_ data to the corresponding InfluxDB database.

### Create the producer client

Provide the following credentials for the producer:

- **host**: {{< product-name omit=" Clustered">}} cluster URL
  _(without the protocol)_
- **org**: An arbitrary string. {{< product-name >}} ignores the organization.
- **token**: InfluxDB database token with read and write permissions on the databases you
  want to query and write to.
- **database**: InfluxDB database name

The producer queries for fresh data from InfluxDB at specific intervals. It's configured to look for a specific measurement defined in a variable. It writes the raw data to a Kafka topic called `raw-data`.
{{% code-placeholders "(RAW|DOWNSAMPLED)_DATABASE_(NAME|TOKEN)" %}}
|
||||
```py
|
||||
from influxdb_client_3 import InfluxDBClient3
|
||||
from quixstreams import Application
|
||||
import pandas
|
||||
|
||||
# Instantiate an InfluxDBClient3 client configured for your unmodified database
|
||||
influxdb_raw = InfluxDBClient3(
|
||||
host='{{< influxdb/host >}}',
|
||||
token='DATABASE_TOKEN',
|
||||
database='RAW_DATABASE_NAME'
|
||||
)
|
||||
|
||||
# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud
|
||||
|
||||
# Create a Quix Streams producer application that connects to a local Kafka installation
app = Application(
    broker_address=os.environ.get('BROKER_ADDRESS', 'localhost:9092'),
    consumer_group=consumer_group_name,
    auto_create_topics=True
)

# Override the app variable if the local development env var is set to false or is not present.
# This causes Quix Streams to use an application configured for Quix Cloud
localdev = os.environ.get('localdev', 'false')

if localdev == 'false':
    # Create a Quix platform-specific application instead (broker address is in-built)
    app = Application(consumer_group=consumer_group_name, auto_create_topics=True)

topic = app.topic(name='raw-data')

## ... remaining code truncated for brevity ...

# Query InfluxDB for the raw data and store it in a DataFrame
def get_data():
    # Run in a loop until the main thread is terminated
    while run:
        try:
            myquery = f'SELECT * FROM "{measurement_name}" WHERE time >= {interval}'
            print(f'sending query {myquery}')
            # Query InfluxDB 3 using InfluxQL or SQL
            table = influxdb_raw.query(
                query=myquery,
                mode='pandas',
                language='influxql')

#... remaining code truncated for brevity ...

# Send the data to a Kafka topic for the downsampling process to consume
def main():
    """
    Read data from the query and publish it to Kafka
    """
    #... remaining code truncated for brevity ...

    for index, obj in enumerate(records):
        print(obj)  # obj contains each row in the table, including temperature
        # Generate a unique message_key for each row
        message_key = obj['machineId']
        logger.info(f'Produced message with key:{message_key}, value:{obj}')

        serialized = topic.serialize(
            key=message_key, value=obj, headers={'uuid': str(uuid.uuid4())}
        )

        # Publish each row returned in the query to the topic 'raw-data'
        producer.produce(
            topic=topic.name,
            headers=serialized.headers,
            key=serialized.key,
            value=serialized.value,
        )
```
{{% /code-placeholders %}}

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Source/main.py).

### Create the consumer

As before, provide the following credentials for the consumer:

- **host**: {{< product-name omit=" Clustered" >}} cluster URL
  _(without the protocol)_
- **org**: An arbitrary string. {{< product-name >}} ignores the organization.
- **token**: InfluxDB database token with read and write permissions on the databases you
  want to query and write to.
- **database**: InfluxDB database name

This process reads messages from the Kafka topic `downsampled-data` and writes each message as a point dictionary back to InfluxDB.

{{% code-placeholders "(DOWNSAMPLED_DATABASE_NAME|DATABASE_TOKEN)" %}}
```python
# Instantiate an InfluxDBClient3 client configured for your downsampled database.
# When writing, the org= argument is required by the client (but ignored by InfluxDB).
influxdb_downsampled = InfluxDBClient3(
    host='{{< influxdb/host >}}',
    token='DATABASE_TOKEN',
    database='DOWNSAMPLED_DATABASE_NAME',
    org=''
)

# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud

# Create a Quix Streams consumer application that connects to a local Kafka installation
app = Application(
    broker_address=os.environ.get('BROKER_ADDRESS', 'localhost:9092'),
    consumer_group=consumer_group_name,
    auto_create_topics=True
)

# Override the app variable if the local development env var is set to false or is not present.
# This causes Quix Streams to use an application configured for Quix Cloud
localdev = os.environ.get('localdev', 'false')

if localdev == 'false':
    # Create a Quix platform-specific application instead (broker address is in-built)
    app = Application(consumer_group=consumer_group_name, auto_create_topics=True)

input_topic = app.topic('downsampled-data')

## ... remaining code truncated for brevity ...

def send_data_to_influx(message):
    logger.info(f'Processing message: {message}')
    try:
        ## ... remaining code truncated for brevity ...

        # Construct the points dictionary
        points = {
            'measurement': measurement_name,
            'tags': tags,
            'fields': fields,
            'time': message['time']
        }

        influxdb_downsampled.write(record=points, write_precision='ms')

sdf = app.dataframe(input_topic)
sdf = sdf.update(send_data_to_influx)  # Continuously apply the 'send_data' function to each message in the incoming stream

## ... remaining code truncated for brevity ...
```
{{% /code-placeholders %}}
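For reference, each point dictionary the consumer builds corresponds to one line of InfluxDB line protocol. The helper below is a hypothetical sketch (it is not part of `influxdb_client_3` — the client's `write()` performs the real serialization, including tag escaping, string-field quoting, and precision handling):

```python
def to_line_protocol(point):
    """Render a point dictionary as simplified InfluxDB line protocol.

    Illustration only: skips escaping and string-field quoting,
    which the real client handles for you.
    """
    tags = ','.join(f'{k}={v}' for k, v in point['tags'].items())
    fields = ','.join(f'{k}={v}' for k, v in point['fields'].items())
    head = point['measurement'] + (',' + tags if tags else '')
    return f"{head} {fields} {point['time']}"

point = {
    'measurement': 'machine',
    'tags': {'machineId': 'machine1'},
    'fields': {'temperature': 21.5},
    'time': 1712000000000,  # millisecond timestamp, matching write_precision='ms'
}
print(to_line_protocol(point))
# machine,machineId=machine1 temperature=21.5 1712000000000
```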

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Sink/main.py).

## Get the full downsampling code files

To get the complete set of files referenced in this tutorial, clone the Quix "downsampling template" repository or use an interactive version of this tutorial saved as a Jupyter Notebook.

### Clone the downsampling template repository

To clone the downsampling template, enter the following command in the command line:

```sh
git clone https://github.com/quixio/template-influxdbv3-downsampling.git
```

This repository contains the following folders, which store different parts of the whole pipeline:

- **Machine Data to InfluxDB**: A script that generates synthetic machine data
  and writes it to InfluxDB. This is useful if you don't have your own data yet,
  or just want to work with test data first.

  - It produces a reading every 250 milliseconds.
  - This script originally comes from the
    [InfluxCommunity repository](https://github.com/InfluxCommunity/Arrow-Task-Engine/blob/master/machine_simulator/src/machine_generator.py)
    but has been adapted to write directly to InfluxDB rather than using an MQTT broker.

- **InfluxDB V3 Data Source**: A service that queries for fresh data from
  InfluxDB at specific intervals. It's configured to look for the measurement
  produced by the previously mentioned synthetic machine data generator.
  It writes the raw data to a Kafka topic called "raw-data".
- **Downsampler**: A service that performs a 1-minute tumbling window operation
  on the data from InfluxDB and emits the mean of the "temperature" reading
  every minute. It writes the output to a "downsampled-data" Kafka topic.
- **InfluxDB V3 Data Sink**: A service that reads from the "downsampled-data"
  topic and writes the downsampled records as points back into InfluxDB.

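The Downsampler's 1-minute tumbling window boils down to bucketing records by fixed time windows and averaging a field per window. The dependency-free sketch below illustrates that idea only — the actual service uses Quix Streams' built-in windowing support rather than this hand-rolled loop:

```python
def tumbling_mean(readings, window_ms=60_000, field='temperature'):
    """Bucket readings into fixed windows by millisecond timestamp and
    return the mean of `field` per window, keyed by window start time."""
    windows = {}
    for r in readings:
        start = (r['time'] // window_ms) * window_ms  # align to window boundary
        windows.setdefault(start, []).append(r[field])
    return {start: sum(vals) / len(vals) for start, vals in windows.items()}

readings = [
    {'time': 0, 'temperature': 20.0},
    {'time': 30_000, 'temperature': 22.0},  # same 1-minute window as above
    {'time': 61_000, 'temperature': 30.0},  # falls into the next window
]
print(tumbling_mean(readings))
# {0: 21.0, 60000: 30.0}
```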
### Use the downsampling Jupyter Notebook

You can use the interactive notebook ["Continuously downsample data using InfluxDB and Quix Streams"](https://github.com/quixio/tutorial-code/edit/main/notebooks/Downsampling_viaKafka_Using_Quix_Influx.ipynb) to try downsampling code yourself. It is configured to install Apache Kafka within the runtime environment (such as Google Colab).

Each process is also set up to run in the background so that a running cell does not block the rest of the tutorial.

<a target="_blank" href="https://colab.research.google.com/github/quixio/tutorial-code/blob/main/notebooks/Downsampling_viaKafka_Using_Quix_Influx.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

@@ -1,17 +1,3 @@
---
title: Downsample data stored in InfluxDB using Quix Streams
description: >
  Use [Quix Streams](https://github.com/quixio/quix-streams) to query time series
  data stored in InfluxDB and written to Kafka at regular intervals, continuously
  downsample it, and then write the downsampled data back to InfluxDB.
menu:
  influxdb3_cloud_dedicated:
    name: Use Quix
    parent: Downsample data
    identifier: influxdb-dedicated-downsample-quix
weight: 102
---

Use [Quix Streams](https://github.com/quixio/quix-streams) to query time series
data stored in InfluxDB and written to Kafka at regular intervals, continuously
downsample it, and then write the downsampled data back to InfluxDB.

@@ -24,9 +10,9 @@ in [Quix Cloud](https://quix.io/) with a free trial.
This guide uses [Python](https://www.python.org/) and the
[InfluxDB 3 Python client library](https://github.com/InfluxCommunity/influxdb3-python),
but you can use your runtime of choice and any of the available
[InfluxDB 3 client libraries](/influxdb3/cloud-dedicated/reference/client-libraries/v3/).
[InfluxDB 3 client libraries](/influxdb3/version/reference/client-libraries/v3/).
This guide also assumes you have already
[setup your Python project and virtual environment](/influxdb3/cloud-dedicated/query-data/execute-queries/client-libraries/python/#create-a-python-virtual-environment).
[setup your Python project and virtual environment](/influxdb3/version/query-data/execute-queries/client-libraries/python/#create-a-python-virtual-environment).

## Pipeline architecture

@@ -44,21 +30,31 @@ The following diagram illustrates how data is passed between processes as it is

1. [Set up prerequisites](#set-up-prerequisites)
2. [Install dependencies](#install-dependencies)
{{% show-in "cloud-serverless" %}}
3. [Prepare InfluxDB buckets](#prepare-influxdb-buckets)
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
3. [Prepare InfluxDB databases](#prepare-influxdb-databases)
{{% /show-in %}}
4. [Create the downsampling logic](#create-the-downsampling-logic)
5. [Create the producer and consumer clients](#create-the-producer-and-consumer-clients)
   1. [Create the producer](#create-the-producer)
   2. [Create the consumer](#create-the-consumer)
   1. [Create the producer client](#create-the-producer-client)
   2. [Create the consumer client](#create-the-consumer-client)
6. [Get the full downsampling code files](#get-the-full-downsampling-code-files)

## Set up prerequisites

The process described in this guide requires the following:

{{% show-in "cloud-serverless" %}}
- An InfluxDB Cloud Serverless account with data ready for downsampling.
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
- An InfluxDB cluster with data ready for downsampling.
{{% /show-in %}}
- A [Quix Cloud](https://portal.platform.quix.io/self-sign-up/) account or a
  local Apache Kafka or Red Panda installation.
- Familiarity with basic Python and Docker concepts.
  local Apache Kafka or Redpanda installation.
- Familiarity with basic Python concepts.

## Install dependencies

@@ -72,22 +68,33 @@ Use `pip` to install the following dependencies:
pip install influxdb3-python pandas quixstreams<2.5
```

{{% show-in "cloud-serverless" %}}
## Prepare InfluxDB buckets
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
## Prepare InfluxDB databases
{{% /show-in %}}

The downsampling process involves two InfluxDB buckets.
Each bucket has a [retention period](/influxdb3/cloud-dedicated/reference/glossary/#retention-period)
The downsampling process involves two InfluxDB {{% show-in "cloud-serverless" %}}buckets{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}databases{{% /show-in %}}.
Each {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} has a [retention period](/influxdb3/version/reference/glossary/#retention-period)
that specifies how long data persists before it expires and is deleted.
By using two buckets, you can store unmodified, high-resolution data in a bucket
By using two {{% show-in "cloud-serverless" %}}buckets{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}databases{{% /show-in %}}, you can store unmodified, high-resolution data in a {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}}
with a shorter retention period and then downsampled, low-resolution data in a
bucket with a longer retention period.
{{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} with a longer retention period.

Ensure you have a bucket for each of the following:
Ensure you have a {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} for each of the following:

- One to query unmodified data from
- The other to write downsampled data to

{{% show-in "cloud-serverless" %}}
For information about creating buckets, see
[Create a bucket](/influxdb3/cloud-dedicated/admin/buckets/create-bucket/).
[Create a bucket](/influxdb3/cloud-serverless/admin/buckets/create-bucket/).
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
For information about creating databases, see
[Create a database](/influxdb3/version/admin/databases/create/).
{{% /show-in %}}

## Create the downsampling logic

@@ -96,7 +103,7 @@ downsamples it, and then sends it to an output topic that is used to write back

1. Use the Quix Streams library's `Application` class to initialize a connection to Apache Kafka.

   ```py
   ```python
   from quixstreams import Application

   app = Application(consumer_group='downsampling-process', auto_offset_reset='earliest')

@@ -107,9 +114,9 @@ downsamples it, and then sends it to an output topic that is used to write back
   ```

2. Configure the Quix Streams built-in windowing function to create a tumbling
   window that continously downsamples the data into 1-minute buckets.
   window that continuously downsamples the data into 1-minute buckets.

   ```py
   ```python
   # ...
   target_field = 'temperature' # The field that you want to downsample.

@@ -148,26 +155,36 @@ You can find the full code for this process in the

## Create the producer and consumer clients

Use the `influxdb_client_3` and `quixstreams` modules to instantiate two clients that interact with InfluxDB and Apache Kafka:

- A **producer** client configured to read from your InfluxDB bucket with _unmodified_ data and _produce_ that data to Kafka.
- A **consumer** client configured to _consume_ data from Kafka and write the _downsampled_ data to the corresponding InfluxDB bucket.
- A **producer** client configured to read from your InfluxDB {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} with _unmodified_ data and _produce_ that data to Kafka.
- A **consumer** client configured to _consume_ data from Kafka and write the _downsampled_ data to the corresponding InfluxDB {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}}.

### Create the producer client

Provide the following credentials for the producer:

- **host**: [{{< product-name >}} region URL](/influxdb3/cloud-dedicated/reference/regions)
{{% show-in "cloud-serverless" %}}
- **host**: Your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions)
  _(without the protocol)_
- **org**: InfluxDB organization name
- **token**: InfluxDB API token with read and write permissions on the buckets you
- **org**: Your InfluxDB organization name
- **token**: Your InfluxDB [API token](/influxdb3/version/admin/tokens/) with read and write permissions on the buckets you
  want to query and write to.
- **database**: InfluxDB bucket name
- **database**: Your InfluxDB [bucket](/influxdb3/version/admin/buckets/) name
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
- **host**: Your {{% product-name omit=" Clustered" %}} cluster URL
  _(without the protocol)_
- **org**: An arbitrary string (InfluxDB ignores this parameter, but the client requires it)
- **token**: Your InfluxDB [database token](/influxdb3/version/admin/tokens/database/) with read and write permissions on the databases you
  want to query and write to.
- **database**: Your InfluxDB [database](/influxdb3/version/admin/databases/) name
{{% /show-in %}}

The producer queries for fresh data from InfluxDB at specific intervals. It's configured to look for a specific measurement defined in a variable. It writes the raw data to a Kafka topic called 'raw-data'

{{% code-placeholders "(API|(RAW|DOWNSAMPLED)_BUCKET|ORG)_(NAME|TOKEN)" %}}
```py
{{% show-in "cloud-serverless" %}}
```python { placeholders="API_TOKEN|RAW_BUCKET_NAME" }
from influxdb_client_3 import InfluxDBClient3
from quixstreams import Application
import pandas

@@ -178,7 +195,24 @@ influxdb_raw = InfluxDBClient3(
    token='API_TOKEN',
    database='RAW_BUCKET_NAME'
)
```
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
```python { placeholders="DATABASE_TOKEN|RAW_DATABASE_NAME" }
from influxdb_client_3 import InfluxDBClient3
from quixstreams import Application
import pandas

# Instantiate an InfluxDBClient3 client configured for your unmodified database
influxdb_raw = InfluxDBClient3(
    host='{{< influxdb/host >}}',
    token='DATABASE_TOKEN',
    database='RAW_DATABASE_NAME'
)
```
{{% /show-in %}}

```python
# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud

# Create a Quix Streams producer application that connects to a local Kafka installation

@@ -215,7 +249,7 @@ def get_data():

#... remaining code trunctated for brevity ...

# Send the data to a Kafka topic for the downsampling process to consumer
def main():
    """
    Read data from the Query and publish it to Kafka

@@ -223,7 +257,7 @@ def main():
    #... remaining code trunctated for brevity ...

    for index, obj in enumerate(records):
        print(obj) # Obj contains each row in the table includimng temperature
        print(obj) # Obj contains each row in the table including temperature
        # Generate a unique message_key for each row
        message_key = obj['machineId']
        logger.info(f'Produced message with key:{message_key}, value:{obj}')

@@ -241,27 +275,47 @@ def main():
        )

```
{{% /code-placeholders %}}

Replace the following:

{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: your InfluxDB [API token](/influxdb3/version/admin/tokens/) with read permission on the bucket
- {{% code-placeholder-key %}}`RAW_BUCKET_NAME`{{% /code-placeholder-key %}}: the name of your InfluxDB [bucket](/influxdb3/version/admin/buckets/) with unmodified data
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}: your InfluxDB [database token](/influxdb3/version/admin/tokens/database/) with read permission on the database
- {{% code-placeholder-key %}}`RAW_DATABASE_NAME`{{% /code-placeholder-key %}}: the name of your InfluxDB [database](/influxdb3/version/admin/databases/) with unmodified data
{{% /show-in %}}

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Source/main.py).

### Create the consumer
### Create the consumer client

As before, provide the following credentials for the consumer:

- **host**: [{{< product-name >}} region URL](/influxdb3/cloud-dedicated/reference/regions)
{{% show-in "cloud-serverless" %}}
- **host**: Your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions)
  _(without the protocol)_
- **org**: InfluxDB organization name
- **token**: InfluxDB API token with read and write permissions on the buckets you
- **org**: Your InfluxDB organization name
- **token**: Your InfluxDB [API token](/influxdb3/version/admin/tokens/) with read and write permissions on the buckets you
  want to query and write to.
- **database**: InfluxDB bucket name
- **database**: Your InfluxDB [bucket](/influxdb3/version/admin/buckets/) name
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
- **host**: Your {{% product-name omit=" Clustered" %}} cluster URL
  _(without the protocol)_
- **org**: An arbitrary string (InfluxDB ignores this parameter, but the client requires it)
- **token**: Your InfluxDB [database token](/influxdb3/version/admin/tokens/database/) with read and write permissions on the databases you
  want to query and write to.
- **database**: Your InfluxDB [database](/influxdb3/version/admin/databases/) name
{{% /show-in %}}

This process reads messages from the Kafka topic `downsampled-data` and writes each message as a point dictionary back to InfluxDB.

{{% code-placeholders "(API|(RAW|DOWNSAMPLED)_BUCKET|ORG)_(NAME|TOKEN)" %}}
```py
# Instantiate an InfluxDBClient3 client configured for your downsampled database.
{{% show-in "cloud-serverless" %}}
```python { placeholders="API_TOKEN|DOWNSAMPLED_BUCKET_NAME" }
# Instantiate an InfluxDBClient3 client configured for your downsampled bucket.
# When writing, the org= argument is required by the client (but ignored by InfluxDB).
influxdb_downsampled = InfluxDBClient3(
    host='{{< influxdb/host >}}',

@@ -269,7 +323,22 @@ influxdb_downsampled = InfluxDBClient3(
    database='DOWNSAMPLED_BUCKET_NAME',
    org=''
)
```
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
```python { placeholders="DATABASE_TOKEN|DOWNSAMPLED_DATABASE_NAME" }
# Instantiate an InfluxDBClient3 client configured for your downsampled database.
# When writing, the org= argument is required by the client (but ignored by InfluxDB).
influxdb_downsampled = InfluxDBClient3(
    host='{{< influxdb/host >}}',
    token='DATABASE_TOKEN',
    database='DOWNSAMPLED_DATABASE_NAME',
    org=''
)
```
{{% /show-in %}}

```python
# os.environ['localdev'] = 'true' # Uncomment if you're using local Kafka rather than Quix Cloud

# Create a Quix Streams consumer application that connects to a local Kafka installation

@@ -310,9 +379,19 @@ def send_data_to_influx(message):
sdf = app.dataframe(input_topic)
sdf = sdf.update(send_data_to_influx) # Continuously apply the 'send_data' function to each message in the incoming stream

## ... remaining code trunctated for brevity ...
## ... remaining code truncated for brevity ...
```
{{% /code-placeholders %}}

Replace the following:

{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: your InfluxDB [API token](/influxdb3/version/admin/tokens/) with write permission on the bucket
- {{% code-placeholder-key %}}`DOWNSAMPLED_BUCKET_NAME`{{% /code-placeholder-key %}}: the name of your InfluxDB [bucket](/influxdb3/version/admin/buckets/) for downsampled data
{{% /show-in %}}
{{% show-in "cloud-dedicated,clustered" %}}
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}: your InfluxDB [database token](/influxdb3/version/admin/tokens/database/) with write permission on the database
- {{% code-placeholder-key %}}`DOWNSAMPLED_DATABASE_NAME`{{% /code-placeholder-key %}}: the name of your InfluxDB [database](/influxdb3/version/admin/databases/) for downsampled data
{{% /show-in %}}

You can find the full code for this process in the
[Quix GitHub repository](https://github.com/quixio/template-influxdbv3-downsampling/blob/dev/InfluxDB%20V3%20Data%20Sink/main.py).

@@ -335,24 +414,24 @@ This repository contains the following folders which store different parts of th
  and writes it to InfluxDB. This is useful if you dont have your own data yet,
  or just want to work with test data first.

  - It produces a reading every 250 milliseconds.
  - This script originally comes from the
    [InfluxCommunity repository](https://github.com/InfluxCommunity/Arrow-Task-Engine/blob/master/machine_simulator/src/machine_generator.py)
    but has been adapted to write directly to InfluxDB rather than using an MQTT broker.

- **InfluxDB V3 Data Source**: A service that queries for fresh data from
- **InfluxDB v3 Data Source**: A service that queries for fresh data from
  InfluxDB at specific intervals. It's configured to look for the measurement
  produced by the previously-mentioned synthetic machine data generator.
  produced by the synthetic machine data generator.
  It writes the raw data to a Kafka topic called "raw-data".
- **Downsampler**: A service that performs a 1-minute tumbling window operation
  on the data from InfluxDB and emits the mean of the "temperature" reading
  every minute. It writes the output to a "downsampled-data" Kafka topic.
- **InfluxDB V3 Data Sink**: A service that reads from the "downsampled-data"
- **InfluxDB v3 Data Sink**: A service that reads from the "downsampled-data"
  topic and writes the downsample records as points back into InfluxDB.

### Use the downsampling Jupyter Notebook

You can use the interactive notebook ["Continuously downsample data using InfluxDB and Quix Streams"](https://github.com/quixio/tutorial-code/edit/main/notebooks/Downsampling_viaKafka_Using_Quix_Influx.ipynb) to try downsampling code yourself. It is configured to install Apache Kafka within the runtime environment (such as Google Colab).

Each process is also set up to run in the background so that a running cell does not block the rest of the tutorial.

lefthook.yml

@@ -5,6 +5,25 @@
pre-commit:
  parallel: true
  commands:
    deprecated-markdown-patterns:
      tags: lint
      glob: "content/**/*.md"
      run: |
        errors=0
        # Check for deprecated code-placeholders shortcode
        if grep -l '{{% code-placeholders' {staged_files} 2>/dev/null; then
          echo "❌ Found deprecated {{% code-placeholders %}} shortcode."
          echo "   Use \`\`\`language { placeholders=\"...\" } instead."
          errors=1
        fi
        # Check for abbreviated 'py' language identifier
        if grep -lE '^\s*```py(\s|$)' {staged_files} 2>/dev/null; then
          echo "❌ Found abbreviated 'py' code fence language."
          echo "   Use 'python' instead of 'py' for code fences."
          errors=1
        fi
        exit $errors
      fail_text: "Deprecated markdown patterns found. See messages above for details."
    eslint-debug-check:
      glob: "assets/js/*.js"
      run: yarn eslint {staged_files}
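To see what the new hook flags, you can run the same grep patterns by hand outside of lefthook (the sample file path here is just an illustration):

```shell
# Create a sample markdown file containing a deprecated 'py' fence
printf '```py\nprint(1)\n```\n' > /tmp/sample.md

# Same pattern the hook uses to reject abbreviated fence languages
if grep -qE '^\s*```py(\s|$)' /tmp/sample.md; then
  echo "deprecated 'py' fence found"
fi
```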

@@ -23,7 +23,7 @@
"@vvago/vale": "^3.12.0",
"autoprefixer": ">=10.2.5",
"cypress": "^14.0.1",
"eslint": "^9.18.0",
"eslint": "^10.0.0",
"eslint-config-prettier": "^10.1.5",
"eslint-plugin-import": "^2.31.0",
"eslint-plugin-jsdoc": "^50.6.17",
|
|
|||
186
yarn.lock
186
yarn.lock
|
|
@ -190,65 +190,50 @@
|
|||
dependencies:
|
||||
eslint-visitor-keys "^3.4.3"
|
||||
|
||||
"@eslint-community/regexpp@^4.12.1", "@eslint-community/regexpp@^4.12.2":
|
||||
"@eslint-community/regexpp@^4.12.2":
|
||||
version "4.12.2"
|
||||
resolved "https://registry.yarnpkg.com/@eslint-community/regexpp/-/regexpp-4.12.2.tgz#bccdf615bcf7b6e8db830ec0b8d21c9a25de597b"
|
||||
integrity sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==
|
||||
|
||||
"@eslint/config-array@^0.21.1":
|
||||
version "0.21.1"
|
||||
resolved "https://registry.yarnpkg.com/@eslint/config-array/-/config-array-0.21.1.tgz#7d1b0060fea407f8301e932492ba8c18aff29713"
|
||||
integrity sha512-aw1gNayWpdI/jSYVgzN5pL0cfzU02GT3NBpeT/DXbx1/1x7ZKxFPd9bwrzygx/qiwIQiJ1sw/zD8qY/kRvlGHA==
|
||||
"@eslint/config-array@^0.23.0":
|
||||
version "0.23.1"
|
||||
resolved "https://registry.yarnpkg.com/@eslint/config-array/-/config-array-0.23.1.tgz#908223da7b9148f1af5bfb3144b77a9387a89446"
|
||||
integrity sha512-uVSdg/V4dfQmTjJzR0szNczjOH/J+FyUMMjYtr07xFRXR7EDf9i1qdxrD0VusZH9knj1/ecxzCQQxyic5NzAiA==
|
||||
dependencies:
|
||||
"@eslint/object-schema" "^2.1.7"
|
||||
"@eslint/object-schema" "^3.0.1"
|
||||
debug "^4.3.1"
|
||||
minimatch "^3.1.2"
|
||||
minimatch "^10.1.1"
|
||||
|
||||
"@eslint/config-helpers@^0.4.2":
|
||||
version "0.4.2"
|
||||
resolved "https://registry.yarnpkg.com/@eslint/config-helpers/-/config-helpers-0.4.2.tgz#1bd006ceeb7e2e55b2b773ab318d300e1a66aeda"
|
||||
integrity sha512-gBrxN88gOIf3R7ja5K9slwNayVcZgK6SOUORm2uBzTeIEfeVaIhOpCtTox3P6R7o2jLFwLFTLnC7kU/RGcYEgw==
|
||||
"@eslint/config-helpers@^0.5.2":
|
||||
version "0.5.2"
|
||||
resolved "https://registry.yarnpkg.com/@eslint/config-helpers/-/config-helpers-0.5.2.tgz#314c7b03d02a371ad8c0a7f6821d5a8a8437ba9d"
|
||||
integrity sha512-a5MxrdDXEvqnIq+LisyCX6tQMPF/dSJpCfBgBauY+pNZ28yCtSsTvyTYrMhaI+LK26bVyCJfJkT0u8KIj2i1dQ==
|
||||
dependencies:
|
||||
"@eslint/core" "^0.17.0"
|
||||
"@eslint/core" "^1.1.0"
|
||||
|
||||
"@eslint/core@^0.17.0":
|
||||
-  version "0.17.0"
-  resolved "https://registry.yarnpkg.com/@eslint/core/-/core-0.17.0.tgz#77225820413d9617509da9342190a2019e78761c"
-  integrity sha512-yL/sLrpmtDaFEiUj1osRP4TI2MDz1AddJL+jZ7KSqvBuliN4xqYY54IfdN8qD8Toa6g1iloph1fxQNkjOxrrpQ==
+"@eslint/core@^1.1.0":
+  version "1.1.0"
+  resolved "https://registry.yarnpkg.com/@eslint/core/-/core-1.1.0.tgz#51f5cd970e216fbdae6721ac84491f57f965836d"
+  integrity sha512-/nr9K9wkr3P1EzFTdFdMoLuo1PmIxjmwvPozwoSodjNBdefGujXQUF93u1DDZpEaTuDvMsIQddsd35BwtrW9Xw==
+  dependencies:
+    "@types/json-schema" "^7.0.15"

-"@eslint/eslintrc@^3.3.1":
-  version "3.3.3"
-  resolved "https://registry.yarnpkg.com/@eslint/eslintrc/-/eslintrc-3.3.3.tgz#26393a0806501b5e2b6a43aa588a4d8df67880ac"
-  integrity sha512-Kr+LPIUVKz2qkx1HAMH8q1q6azbqBAsXJUxBl/ODDuVPX45Z9DfwB8tPjTi6nNZ8BuM3nbJxC5zCAg5elnBUTQ==
-  dependencies:
-    ajv "^6.12.4"
-    debug "^4.3.2"
-    espree "^10.0.1"
-    globals "^14.0.0"
-    ignore "^5.2.0"
-    import-fresh "^3.2.1"
-    js-yaml "^4.1.1"
-    minimatch "^3.1.2"
-    strip-json-comments "^3.1.1"
-
-"@eslint/js@9.39.2", "@eslint/js@^9.18.0":
+"@eslint/js@^9.18.0":
   version "9.39.2"
   resolved "https://registry.yarnpkg.com/@eslint/js/-/js-9.39.2.tgz#2d4b8ec4c3ea13c1b3748e0c97ecd766bdd80599"
   integrity sha512-q1mjIoW1VX4IvSocvM/vbTiveKC4k9eLrajNEuSsmjymSDEbpGddtpfOoN7YGAqBK3NG+uqo8ia4PDTt8buCYA==

-"@eslint/object-schema@^2.1.7":
-  version "2.1.7"
-  resolved "https://registry.yarnpkg.com/@eslint/object-schema/-/object-schema-2.1.7.tgz#6e2126a1347e86a4dedf8706ec67ff8e107ebbad"
-  integrity sha512-VtAOaymWVfZcmZbp6E2mympDIHvyjXs/12LqWYjVw6qjrfF+VK+fyG33kChz3nnK+SU5/NeHOqrTEHS8sXO3OA==
+"@eslint/object-schema@^3.0.1":
+  version "3.0.1"
+  resolved "https://registry.yarnpkg.com/@eslint/object-schema/-/object-schema-3.0.1.tgz#9a1dc9af00d790dc79a9bf57a756e3cb2740ddb9"
+  integrity sha512-P9cq2dpr+LU8j3qbLygLcSZrl2/ds/pUpfnHNNuk5HW7mnngHs+6WSq5C9mO3rqRX8A1poxqLTC9cu0KOyJlBg==

-"@eslint/plugin-kit@^0.4.1":
-  version "0.4.1"
-  resolved "https://registry.yarnpkg.com/@eslint/plugin-kit/-/plugin-kit-0.4.1.tgz#9779e3fd9b7ee33571a57435cf4335a1794a6cb2"
-  integrity sha512-43/qtrDUokr7LJqoF2c3+RInu/t4zfrpYdoSDfYyhg52rwLV6TnOvdG4fXm7IkSB3wErkcmJS9iEhjVtOSEjjA==
+"@eslint/plugin-kit@^0.6.0":
+  version "0.6.0"
+  resolved "https://registry.yarnpkg.com/@eslint/plugin-kit/-/plugin-kit-0.6.0.tgz#e0cb12ec66719cb2211ad36499fb516f2a63899d"
+  integrity sha512-bIZEUzOI1jkhviX2cp5vNyXQc6olzb2ohewQubuYlMXZ2Q/XjBO0x0XhGPvc9fjSIiUN0vw+0hq53BJ4eQSJKQ==
   dependencies:
-    "@eslint/core" "^0.17.0"
+    "@eslint/core" "^1.1.0"
     levn "^0.4.1"

 "@evilmartians/lefthook@^1.7.1":
@@ -352,6 +337,11 @@
     wrap-ansi "^8.1.0"
     wrap-ansi-cjs "npm:wrap-ansi@^7.0.0"

+"@isaacs/cliui@^9.0.0":
+  version "9.0.0"
+  resolved "https://registry.yarnpkg.com/@isaacs/cliui/-/cliui-9.0.0.tgz#4d0a3f127058043bf2e7ee169eaf30ed901302f3"
+  integrity sha512-AokJm4tuBHillT+FpMtxQ60n8ObyXBatq7jD2/JA9dxbDDokKQm8KMht5ibGzLVU9IJDIKK4TPKgMHEYMn3lMg==
+
 "@isaacs/fs-minipass@^4.0.0":
   version "4.0.1"
   resolved "https://registry.yarnpkg.com/@isaacs/fs-minipass/-/fs-minipass-4.0.1.tgz#2d59ae3ab4b38fb4270bfa23d30f8e2e86c7fe32"
@@ -624,7 +614,12 @@
   dependencies:
     "@types/ms" "*"

-"@types/estree@^1.0.6":
+"@types/esrecurse@^4.3.1":
+  version "4.3.1"
+  resolved "https://registry.yarnpkg.com/@types/esrecurse/-/esrecurse-4.3.1.tgz#6f636af962fbe6191b830bd676ba5986926bccec"
+  integrity sha512-xJBAbDifo5hpffDBuHl0Y8ywswbiAp/Wi7Y/GtAgSlZyIABppyurxVueOPE8LUQOxdlgi6Zqce7uoEpqNTeiUw==
+
+"@types/estree@^1.0.6", "@types/estree@^1.0.8":
   version "1.0.8"
   resolved "https://registry.yarnpkg.com/@types/estree/-/estree-1.0.8.tgz#958b91c991b1867ced318bedea0e215ee050726e"
   integrity sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==
@@ -1097,6 +1092,13 @@ balanced-match@^1.0.0:
   resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee"
   integrity sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==

+balanced-match@^4.0.2:
+  version "4.0.2"
+  resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-4.0.2.tgz#241591ea634702bef9c482696f2469406e16d233"
+  integrity sha512-x0K50QvKQ97fdEz2kPehIerj+YTeptKF9hyYkKf6egnwmMWAkADiO0QCzSp0R5xN8FTZgYaBfSaue46Ej62nMg==
+  dependencies:
+    jackspeak "^4.2.3"
+
 bare-events@^2.5.4, bare-events@^2.7.0:
   version "2.8.2"
   resolved "https://registry.yarnpkg.com/bare-events/-/bare-events-2.8.2.tgz#7b3e10bd8e1fc80daf38bb516921678f566ab89f"
@@ -1216,6 +1218,13 @@ brace-expansion@^2.0.1:
   dependencies:
     balanced-match "^1.0.0"

+brace-expansion@^5.0.2:
+  version "5.0.2"
+  resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-5.0.2.tgz#b6c16d0791087af6c2bc463f52a8142046c06b6f"
+  integrity sha512-Pdk8c9poy+YhOgVWw1JNN22/HcivgKWwpxKq04M/jTmHyCZn12WPJebZxdjSa5TmBqISrUSgNYU3eRORljfCCw==
+  dependencies:
+    balanced-match "^4.0.2"
+
 braces@~3.0.2:
   version "3.0.3"
   resolved "https://registry.yarnpkg.com/braces/-/braces-3.0.3.tgz#490332f40919452272d55a8480adc0c441358789"
@@ -1315,7 +1324,7 @@ chainsaw@~0.1.0:
   dependencies:
     traverse ">=0.3.0 <0.4"

-chalk@^4.0.0, chalk@^4.1.0:
+chalk@^4.1.0:
   version "4.1.2"
   resolved "https://registry.yarnpkg.com/chalk/-/chalk-4.1.2.tgz#aac4e2b7734a740867aeb16bf02aad556a1e7a01"
   integrity sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==
@@ -2401,11 +2410,13 @@ eslint-plugin-jsx-a11y@^6.10.2:
     safe-regex-test "^1.0.3"
     string.prototype.includes "^2.0.1"

-eslint-scope@^8.4.0:
-  version "8.4.0"
-  resolved "https://registry.yarnpkg.com/eslint-scope/-/eslint-scope-8.4.0.tgz#88e646a207fad61436ffa39eb505147200655c82"
-  integrity sha512-sNXOfKCn74rt8RICKMvJS7XKV/Xk9kA7DyJr8mJik3S7Cwgy3qlkkmyS2uQB3jiJg6VNdZd/pDBJu0nvG2NlTg==
+eslint-scope@^9.1.0:
+  version "9.1.0"
+  resolved "https://registry.yarnpkg.com/eslint-scope/-/eslint-scope-9.1.0.tgz#dfcb41d6c0d73df6b977a50cf3e91c41ddb4154e"
+  integrity sha512-CkWE42hOJsNj9FJRaoMX9waUFYhqY4jmyLFdAdzZr6VaCg3ynLYx4WnOdkaIifGfH4gsUcBTn4OZbHXkpLD0FQ==
   dependencies:
+    "@types/esrecurse" "^4.3.1"
+    "@types/estree" "^1.0.8"
     esrecurse "^4.3.0"
     estraverse "^5.2.0"

@@ -2419,32 +2430,34 @@ eslint-visitor-keys@^4.2.1:
   resolved "https://registry.yarnpkg.com/eslint-visitor-keys/-/eslint-visitor-keys-4.2.1.tgz#4cfea60fe7dd0ad8e816e1ed026c1d5251b512c1"
   integrity sha512-Uhdk5sfqcee/9H/rCOJikYz67o0a2Tw2hGRPOG2Y1R2dg7brRe1uG0yaNQDHu+TO/uQPF/5eCapvYSmHUjt7JQ==

-eslint@^9.18.0:
-  version "9.39.2"
-  resolved "https://registry.yarnpkg.com/eslint/-/eslint-9.39.2.tgz#cb60e6d16ab234c0f8369a3fe7cc87967faf4b6c"
-  integrity sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==
+eslint-visitor-keys@^5.0.0:
+  version "5.0.0"
+  resolved "https://registry.yarnpkg.com/eslint-visitor-keys/-/eslint-visitor-keys-5.0.0.tgz#b9aa1a74aa48c44b3ae46c1597ce7171246a94a9"
+  integrity sha512-A0XeIi7CXU7nPlfHS9loMYEKxUaONu/hTEzHTGba9Huu94Cq1hPivf+DE5erJozZOky0LfvXAyrV/tcswpLI0Q==
+
+eslint@^10.0.0:
+  version "10.0.0"
+  resolved "https://registry.yarnpkg.com/eslint/-/eslint-10.0.0.tgz#c93c36a96d91621d0fbb680db848ea11af56ab1e"
+  integrity sha512-O0piBKY36YSJhlFSG8p9VUdPV/SxxS4FYDWVpr/9GJuMaepzwlf4J8I4ov1b+ySQfDTPhc3DtLaxcT1fN0yqCg==
   dependencies:
     "@eslint-community/eslint-utils" "^4.8.0"
-    "@eslint-community/regexpp" "^4.12.1"
-    "@eslint/config-array" "^0.21.1"
-    "@eslint/config-helpers" "^0.4.2"
-    "@eslint/core" "^0.17.0"
-    "@eslint/eslintrc" "^3.3.1"
-    "@eslint/js" "9.39.2"
-    "@eslint/plugin-kit" "^0.4.1"
+    "@eslint-community/regexpp" "^4.12.2"
+    "@eslint/config-array" "^0.23.0"
+    "@eslint/config-helpers" "^0.5.2"
+    "@eslint/core" "^1.1.0"
+    "@eslint/plugin-kit" "^0.6.0"
     "@humanfs/node" "^0.16.6"
     "@humanwhocodes/module-importer" "^1.0.1"
     "@humanwhocodes/retry" "^0.4.2"
-    "@types/estree" "^1.0.6"
-    ajv "^6.12.4"
-    chalk "^4.0.0"
     cross-spawn "^7.0.6"
     debug "^4.3.2"
     escape-string-regexp "^4.0.0"
-    eslint-scope "^8.4.0"
-    eslint-visitor-keys "^4.2.1"
-    espree "^10.4.0"
-    esquery "^1.5.0"
+    eslint-scope "^9.1.0"
+    eslint-visitor-keys "^5.0.0"
+    espree "^11.1.0"
+    esquery "^1.7.0"
     esutils "^2.0.2"
     fast-deep-equal "^3.1.3"
     file-entry-cache "^8.0.0"
@@ -2454,12 +2467,11 @@ eslint@^9.18.0:
     imurmurhash "^0.1.4"
     is-glob "^4.0.0"
     json-stable-stringify-without-jsonify "^1.0.1"
-    lodash.merge "^4.6.2"
-    minimatch "^3.1.2"
+    minimatch "^10.1.1"
     natural-compare "^1.4.0"
     optionator "^0.9.3"

-espree@^10.0.1, espree@^10.3.0, espree@^10.4.0:
+espree@^10.3.0:
   version "10.4.0"
   resolved "https://registry.yarnpkg.com/espree/-/espree-10.4.0.tgz#d54f4949d4629005a1fa168d937c3ff1f7e2a837"
   integrity sha512-j6PAQ2uUr79PZhBjP5C5fhl8e39FmRnOjsD5lGnWrFU8i2G776tBK7+nP8KuQUTTyAZUwfQqXAgrVH5MbH9CYQ==
@@ -2468,12 +2480,21 @@ espree@^10.0.1, espree@^10.3.0, espree@^10.4.0:
     acorn-jsx "^5.3.2"
     eslint-visitor-keys "^4.2.1"

+espree@^11.1.0:
+  version "11.1.0"
+  resolved "https://registry.yarnpkg.com/espree/-/espree-11.1.0.tgz#7d0c82a69f8df670728dba256264b383fbf73e8f"
+  integrity sha512-WFWYhO1fV4iYkqOOvq8FbqIhr2pYfoDY0kCotMkDeNtGpiGGkZ1iov2u8ydjtgM8yF8rzK7oaTbw2NAzbAbehw==
+  dependencies:
+    acorn "^8.15.0"
+    acorn-jsx "^5.3.2"
+    eslint-visitor-keys "^5.0.0"
+
 esprima@^4.0.0, esprima@^4.0.1:
   version "4.0.1"
   resolved "https://registry.yarnpkg.com/esprima/-/esprima-4.0.1.tgz#13b04cdb3e6c5d19df91ab6987a8695619b0aa71"
   integrity sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==

-esquery@^1.5.0, esquery@^1.6.0:
+esquery@^1.6.0, esquery@^1.7.0:
   version "1.7.0"
   resolved "https://registry.yarnpkg.com/esquery/-/esquery-1.7.0.tgz#08d048f261f0ddedb5bae95f46809463d9c9496d"
   integrity sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==
@@ -2885,11 +2906,6 @@ global-dirs@^3.0.0:
   dependencies:
     ini "2.0.0"

-globals@^14.0.0:
-  version "14.0.0"
-  resolved "https://registry.yarnpkg.com/globals/-/globals-14.0.0.tgz#898d7413c29babcf6bafe56fcadded858ada724e"
-  integrity sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==
-
 globals@^15.14.0:
   version "15.15.0"
   resolved "https://registry.yarnpkg.com/globals/-/globals-15.15.0.tgz#7c4761299d41c32b075715a4ce1ede7897ff72a8"
@@ -3046,7 +3062,7 @@ ignore@^7.0.5:
   resolved "https://registry.yarnpkg.com/ignore/-/ignore-7.0.5.tgz#4cb5f6cd7d4c7ab0365738c7aea888baa6d7efd9"
   integrity sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg==

-import-fresh@^3.2.1, import-fresh@^3.3.0:
+import-fresh@^3.3.0:
   version "3.3.1"
   resolved "https://registry.yarnpkg.com/import-fresh/-/import-fresh-3.3.1.tgz#9cecb56503c0ada1f2741dbbd6546e4b13b57ccf"
   integrity sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==
@@ -3378,6 +3394,13 @@ jackspeak@^3.1.2:
   optionalDependencies:
     "@pkgjs/parseargs" "^0.11.0"

+jackspeak@^4.2.3:
+  version "4.2.3"
+  resolved "https://registry.yarnpkg.com/jackspeak/-/jackspeak-4.2.3.tgz#27ef80f33b93412037c3bea4f8eddf80e1931483"
+  integrity sha512-ykkVRwrYvFm1nb2AJfKKYPr0emF6IiXDYUaFx4Zn9ZuIH7MrzEZ3sD5RlqGXNRpHtvUHJyOnCEFxOlNDtGo7wg==
+  dependencies:
+    "@isaacs/cliui" "^9.0.0"
+
 jquery@^3.7.1:
   version "3.7.1"
   resolved "https://registry.yarnpkg.com/jquery/-/jquery-3.7.1.tgz#083ef98927c9a6a74d05a6af02806566d16274de"
@@ -3697,11 +3720,6 @@ lodash-es@4.17.21, lodash-es@^4.17.21, lodash-es@^4.17.23:
   resolved "https://registry.yarnpkg.com/lodash-es/-/lodash-es-4.17.23.tgz#58c4360fd1b5d33afc6c0bbd3d1149349b1138e0"
   integrity sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg==

-lodash.merge@^4.6.2:
-  version "4.6.2"
-  resolved "https://registry.yarnpkg.com/lodash.merge/-/lodash.merge-4.6.2.tgz#558aa53b43b661e1925a0afdfa36a9a1085fe57a"
-  integrity sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==
-
 lodash.once@^4.1.1:
   version "4.1.1"
   resolved "https://registry.yarnpkg.com/lodash.once/-/lodash.once-4.1.1.tgz#0dd3971213c7c56df880977d504c88fb471a97ac"
@@ -4253,6 +4271,13 @@ mimic-fn@^2.1.0:
   resolved "https://registry.yarnpkg.com/mimic-fn/-/mimic-fn-2.1.0.tgz#7ed2c2ccccaf84d3ffcb7a69b57711fc2083401b"
   integrity sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==

+minimatch@^10.1.1:
+  version "10.2.1"
+  resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-10.2.1.tgz#9d82835834cdc85d5084dd055e9a4685fa56e5f0"
+  integrity sha512-MClCe8IL5nRRmawL6ib/eT4oLyeKMGCghibcDWK+J0hh0Q8kqSdia6BvbRMVk6mPa6WqUa5uR2oxt6C5jd533A==
+  dependencies:
+    brace-expansion "^5.0.2"
+
 minimatch@^3.1.1, minimatch@^3.1.2:
   version "3.1.2"
   resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.2.tgz#19cd194bfd3e428f049a70817c038d89ab4be35b"
@@ -5483,11 +5508,6 @@ strip-final-newline@^2.0.0:
   resolved "https://registry.yarnpkg.com/strip-final-newline/-/strip-final-newline-2.0.0.tgz#89b852fb2fcbe936f6f4b3187afb0a12c1ab58ad"
   integrity sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==

-strip-json-comments@^3.1.1:
-  version "3.1.1"
-  resolved "https://registry.yarnpkg.com/strip-json-comments/-/strip-json-comments-3.1.1.tgz#31f1281b3832630434831c310c01cccda8cbe006"
-  integrity sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==
-
 stylis@^4.3.6:
   version "4.3.6"
   resolved "https://registry.yarnpkg.com/stylis/-/stylis-4.3.6.tgz#7c7b97191cb4f195f03ecab7d52f7902ed378320"