fix: failing tests, skip tests for unsupported code samples, typos and updates

pull/5531/head
Jason Stirnaman 2024-07-23 15:41:16 -05:00
parent 94348a8739
commit 4408a43de3
8 changed files with 265 additions and 136 deletions


@ -314,15 +314,15 @@ _If your project's virtual environment is already running, skip to step 3._
1. Create a directory for your project and change into it: 1. Create a directory for your project and change into it:
```sh ```bash
mkdir influx3-query-example && cd $_ mkdir -p influx3-query-example && cd influx3-query-example
``` ```
2. To create and activate a Python virtual environment, run the following command: 2. To create and activate a Python virtual environment, run the following command:
<!--pytest-codeblocks:cont--> <!--pytest-codeblocks:cont-->
```sh ```bash
python -m venv envs/virtual-env && . envs/virtual-env/bin/activate python -m venv envs/virtual-env && . envs/virtual-env/bin/activate
``` ```
@ -330,7 +330,7 @@ _If your project's virtual environment is already running, skip to step 3._
<!--pytest-codeblocks:cont--> <!--pytest-codeblocks:cont-->
```sh ```bash
pip install influxdb3-python-cli pip install influxdb3-python-cli
``` ```
@ -342,7 +342,7 @@ _If your project's virtual environment is already running, skip to step 3._
<!--pytest-codeblocks:cont--> <!--pytest-codeblocks:cont-->
```sh ```sh
influx3 config \ influx3 config create \
--name="config-dedicated" \ --name="config-dedicated" \
--database="get-started" \ --database="get-started" \
--host="{{< influxdb/host >}}" \ --host="{{< influxdb/host >}}" \
@ -390,7 +390,7 @@ _If your project's virtual environment is already running, skip to step 3._
<!-- Run for tests and hide from users. <!-- Run for tests and hide from users.
```sh ```sh
mkdir -p influxdb_py_client && cd $_ mkdir -p influxdb_py_client && cd influxdb_py_client
``` ```
--> -->
@ -448,7 +448,8 @@ _If your project's virtual environment is already running, skip to step 3._
''' '''
table = client.query(query=sql) table = client.query(query=sql)
assert table['room'], "Expect table to have room column." assert table.num_rows > 0, "Expect query to return data."
assert table['room'], f"Expect ${table} to have room column."
print(table.to_pandas().to_markdown()) print(table.to_pandas().to_markdown())
``` ```


@ -144,38 +144,24 @@ Provide the following:
- `--write-database` Grants write access to a database - `--write-database` Grants write access to a database
- Token description - Token description
<!--Skip database create and delete tests: namespaces aren't reusable-->
<!--pytest.mark.skip-->
{{% code-placeholders "get-started" %}} {{% code-placeholders "get-started" %}}
```sh ```bash
influxctl token create \ influxctl token create \
--read-database get-started \ --read-database get-started \
--write-database get-started \ --write-database get-started \
"Read/write token for get-started database" "Read/write token for get-started database" > /app/iot-starter/secret.txt
``` ```
{{% /code-placeholders %}} {{% /code-placeholders %}}
<!--actual test <!--test-cleanup
```bash
```sh influxctl token delete --force \
$(influxctl token list \
# Test the preceding command outside of the code block. | grep "Read/write token for get-started database" \
# influxctl authentication requires TTY interaction-- | head -n1 | cut -d' ' -f2)
# output the auth URL to a file that the host can open.
TOKEN_NAME=token_TEST_RUN
script -q /dev/null -c "influxctl token list > /shared/urls.txt \
&& influxctl token create \
--read-database DATABASE_NAME \
--write-database DATABASE_NAME \
\"Read/write token ${TOKEN_NAME} for DATABASE_NAME database\" > /shared/tokens.txt
&& influxctl token revoke $(head /shared/tokens.txt) \
&& rm /shared/tokens.txt"
``` ```
--> -->
The command returns the token ID and the token string. The command returns the token ID and the token string.
@ -240,6 +226,8 @@ $env:INFLUX_TOKEN = "DATABASE_TOKEN"
{{% code-placeholders "DATABASE_TOKEN" %}} {{% code-placeholders "DATABASE_TOKEN" %}}
<!--pytest.mark.skip-->
```sh ```sh
set INFLUX_TOKEN=DATABASE_TOKEN set INFLUX_TOKEN=DATABASE_TOKEN
# Make sure to include a space character at the end of this command. # Make sure to include a space character at the end of this command.


@ -206,12 +206,12 @@ to write the [home sensor sample data](#home-sensor-data-line-protocol) to your
{{% influxdb/custom-timestamps %}} {{% influxdb/custom-timestamps %}}
{{% code-placeholders "get-started" %}} {{% code-placeholders "get-started" %}}
```sh ```bash
influxctl write \ influxctl write \
--database get-started \ --database get-started \
--token $INFLUX_TOKEN \ --token $INFLUX_TOKEN \
--precision s \ --precision s \
'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600 home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600 home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
@ -263,7 +263,7 @@ Use [Telegraf](/telegraf/v1/) to consume line protocol, and then write it to
2. Copy and save the [home sensor data sample](#home-sensor-data-line-protocol) 2. Copy and save the [home sensor data sample](#home-sensor-data-line-protocol)
to a file on your local system--for example, `home.lp`. to a file on your local system--for example, `home.lp`.
```sh ```bash
cat <<- EOF > home.lp cat <<- EOF > home.lp
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
@ -298,7 +298,7 @@ Use [Telegraf](/telegraf/v1/) to consume line protocol, and then write it to
(`./telegraf.conf`) that enables the `inputs.file` and `outputs.influxdb_v2` (`./telegraf.conf`) that enables the `inputs.file` and `outputs.influxdb_v2`
plugins: plugins:
```sh ```bash
telegraf --sample-config \ telegraf --sample-config \
--input-filter file \ --input-filter file \
--output-filter influxdb_v2 \ --output-filter influxdb_v2 \
@ -353,7 +353,7 @@ Use [Telegraf](/telegraf/v1/) to consume line protocol, and then write it to
echo '' >> telegraf.conf echo '' >> telegraf.conf
echo ' organization = ""' >> telegraf.conf echo ' organization = ""' >> telegraf.conf
echo '' >> telegraf.conf echo '' >> telegraf.conf
echo ' bucket = "get-started"' >> telegraf.conf echo ' bucket = "${INFLUX_DATABASE}"' >> telegraf.conf
``` ```
--> -->
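For reference, the `[[outputs.influxdb_v2]]` settings assembled above end up looking roughly like the following sketch, written here as a single heredoc rather than separate `echo` commands. The `urls` value is a placeholder for your cluster URL, and Telegraf substitutes `${INFLUX_TOKEN}` and `${INFLUX_DATABASE}` from the environment when it starts:

```bash
# A minimal sketch of the outputs section (placeholder values, not a verbatim copy of the generated config).
# 'EOF' is quoted so the ${...} variables stay literal for Telegraf to expand at startup.
cat <<'EOF' >> telegraf.conf
[[outputs.influxdb_v2]]
  urls = ["https://{{< influxdb/host >}}"]
  token = "${INFLUX_TOKEN}"
  organization = ""
  bucket = "${INFLUX_DATABASE}"
EOF
```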
@ -373,10 +373,18 @@ Use [Telegraf](/telegraf/v1/) to consume line protocol, and then write it to
Enter the following command in your terminal: Enter the following command in your terminal:
```sh <!--pytest.mark.skip-->
```bash
telegraf --once --config ./telegraf.conf telegraf --once --config ./telegraf.conf
``` ```
<!--test
```bash
telegraf --quiet --once --config ./telegraf.conf
```
-->
If the write is successful, the output is similar to the following: If the write is successful, the output is similar to the following:
```plaintext ```plaintext
@ -446,12 +454,13 @@ to InfluxDB:
{{% code-placeholders "DATABASE_TOKEN" %}} {{% code-placeholders "DATABASE_TOKEN" %}}
```sh ```bash
response=$(curl --silent --write-out "%{response_code}:-%{errormsg}" \ response=$(curl --silent \
"https://{{< influxdb/host >}}/write?db=get-started&precision=s" \ "https://{{< influxdb/host >}}/write?db=get-started&precision=s" \
--header "Authorization: Bearer DATABASE_TOKEN" \ --header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-type: text/plain; charset=utf-8" \ --header "Content-type: text/plain; charset=utf-8" \
--header "Accept: application/json" \ --header "Accept: application/json" \
--write-out "\n%{response_code}" \
--data-binary " --data-binary "
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000 home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000 home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
@ -481,16 +490,15 @@ home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200 home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
") ")
# Format the response code and error message output. # Extract the response body (all but the last line)
response_code=${response%%:-*} response_body=$(echo "$response" | head -n -1)
errormsg=${response#*:-}
# Remove leading and trailing whitespace from errormsg # Extract the HTTP status code (the last line)
errormsg=$(echo "${errormsg}" | tr -d '[:space:]') response_code=$(echo "$response" | tail -n 1)
echo "$response_code" echo "$response_code"
if [[ $errormsg ]]; then if [[ $response_body ]]; then
echo "$errormsg" echo "$response_body"
fi fi
``` ```
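Because the trailing `--write-out "\n%{response_code}"` appends the HTTP status as the last line of `$response`, a successful write (InfluxDB returns `204 No Content` with an empty body) prints only `204`. A minimal explicit check using the `response_code` and `response_body` variables set above:

```bash
# 204 means the write succeeded; any other status prints the error body returned by InfluxDB.
if [[ $response_code == "204" ]]; then
  echo "Write succeeded."
else
  echo "Write failed ($response_code): $response_body"
fi
```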
@ -558,7 +566,7 @@ to InfluxDB:
{{% code-placeholders "DATABASE_TOKEN"%}} {{% code-placeholders "DATABASE_TOKEN"%}}
```sh ```bash
response=$(curl --silent --write-out "%{response_code}:-%{errormsg}" \ response=$(curl --silent --write-out "%{response_code}:-%{errormsg}" \
"https://{{< influxdb/host >}}/api/v2/write?bucket=get-started&precision=s" \ "https://{{< influxdb/host >}}/api/v2/write?bucket=get-started&precision=s" \
--header "Authorization: Bearer DATABASE_TOKEN" \ --header "Authorization: Bearer DATABASE_TOKEN" \
@ -801,7 +809,7 @@ To write data to {{% product-name %}} using Go, use the InfluxDB v3
2. Initialize a new Go module in the directory. 2. Initialize a new Go module in the directory.
<!--pytest-codeblocks:cont--> <!--pytest.mark.skip-->
```bash ```bash
go mod init influxdb_go_client go mod init influxdb_go_client
@ -810,7 +818,7 @@ To write data to {{% product-name %}} using Go, use the InfluxDB v3
3. In your terminal or editor, create a new file for your code--for example: 3. In your terminal or editor, create a new file for your code--for example:
`write.go`. `write.go`.
<!--pytest-codeblocks:cont--> <!--pytest.mark.skip-->
```bash ```bash
touch write.go touch write.go
@ -955,7 +963,7 @@ To write data to {{% product-name %}} using Go, use the InfluxDB v3
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```sh ```bash
go mod tidy && go run influxdb_go_client go mod tidy && go run influxdb_go_client
``` ```
@ -979,15 +987,13 @@ the failure message.
`influxdb_js_client` directory for your project: `influxdb_js_client` directory for your project:
```bash ```bash
mkdir influxdb_js_client && cd influxdb_js_client mkdir -p influxdb_js_client && cd influxdb_js_client
``` ```
3. Inside of `influxdb_js_client`, enter the following command to initialize a 3. Inside of `influxdb_js_client`, enter the following command to initialize a
package. This example configures the package to use package. This example configures the package to use
[ECMAScript modules (ESM)](https://nodejs.org/api/packages.html#modules-loaders). [ECMAScript modules (ESM)](https://nodejs.org/api/packages.html#modules-loaders).
<!--pytest-codeblocks:cont-->
```bash ```bash
npm init -y; npm pkg set type="module" npm init -y; npm pkg set type="module"
``` ```
@ -995,8 +1001,6 @@ the failure message.
4. Install the `@influxdata/influxdb3-client` JavaScript client library as a 4. Install the `@influxdata/influxdb3-client` JavaScript client library as a
dependency to your project. dependency to your project.
<!--pytest-codeblocks:cont-->
```bash ```bash
npm install --save @influxdata/influxdb3-client npm install --save @influxdata/influxdb3-client
``` ```
@ -1004,7 +1008,6 @@ the failure message.
5. In your terminal or editor, create a `write.js` file. 5. In your terminal or editor, create a `write.js` file.
<!--pytest-codeblocks:cont--> <!--pytest-codeblocks:cont-->
```bash ```bash
touch write.js touch write.js
``` ```
@ -1148,9 +1151,9 @@ the failure message.
9. In your terminal, execute `index.js` to write to {{% product-name %}}: 9. In your terminal, execute `index.js` to write to {{% product-name %}}:
<!--pytest-codeblocks:cont--> <!--pytest.mark.skip-->
```sh ```bash
node index.js node index.js
``` ```
@ -1176,7 +1179,7 @@ the failure message.
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```sh ```bash
dotnet new console --name influxdb_csharp_client dotnet new console --name influxdb_csharp_client
``` ```
@ -1184,7 +1187,7 @@ the failure message.
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```sh ```bash
cd influxdb_csharp_client cd influxdb_csharp_client
``` ```
@ -1193,7 +1196,7 @@ the failure message.
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```sh ```bash
dotnet add package InfluxDB3.Client dotnet add package InfluxDB3.Client
``` ```
@ -1339,7 +1342,7 @@ the failure message.
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```sh ```bash
dotnet run dotnet run
``` ```
@ -1362,6 +1365,7 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
[Maven](https://maven.apache.org/download.cgi) for your system. [Maven](https://maven.apache.org/download.cgi) for your system.
2. In your terminal or editor, use Maven to generate a project--for example: 2. In your terminal or editor, use Maven to generate a project--for example:
<!--pytest.mark.skip-->
```bash ```bash
mvn org.apache.maven.plugins:maven-archetype-plugin:3.1.2:generate \ mvn org.apache.maven.plugins:maven-archetype-plugin:3.1.2:generate \
-DarchetypeArtifactId="maven-archetype-quickstart" \ -DarchetypeArtifactId="maven-archetype-quickstart" \
@ -1403,7 +1407,6 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
enter the following in your terminal: enter the following in your terminal:
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```bash ```bash
mvn validate mvn validate
``` ```
@ -1564,7 +1567,6 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
the project code--for example: the project code--for example:
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```bash ```bash
mvn compile mvn compile
``` ```
@ -1573,8 +1575,7 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
example, using Maven: example, using Maven:
<!--pytest.mark.skip--> <!--pytest.mark.skip-->
```bash
```sh
mvn exec:java -Dexec.mainClass="com.influxdbv3.App" mvn exec:java -Dexec.mainClass="com.influxdbv3.App"
``` ```


@ -36,6 +36,7 @@ list_code_example: |
FlightInfo flightInfo = sqlClient.execute(query, auth); FlightInfo flightInfo = sqlClient.execute(query, auth);
} }
} }
```
--- ---
[Apache Arrow Flight SQL for Java](https://arrow.apache.org/docs/java/reference/org/apache/arrow/flight/sql/package-summary.html) integrates with Java applications to query and retrieve data from Flight database servers using RPC and SQL. [Apache Arrow Flight SQL for Java](https://arrow.apache.org/docs/java/reference/org/apache/arrow/flight/sql/package-summary.html) integrates with Java applications to query and retrieve data from Flight database servers using RPC and SQL.
@ -483,6 +484,8 @@ Follow these steps to build and run the application using Docker:
- **`HOST`**: your {{% product-name %}} hostname (URL without the "https://") - **`HOST`**: your {{% product-name %}} hostname (URL without the "https://")
- **`TOKEN`**: your [{{% product-name %}} database token](/influxdb/cloud-dedicated/get-started/setup/) with _read_ permission to the database - **`TOKEN`**: your [{{% product-name %}} database token](/influxdb/cloud-dedicated/get-started/setup/) with _read_ permission to the database
<!--pytest.mark.skip-->
```sh ```sh
docker build \ docker build \
--build-arg DATABASE_NAME=INFLUX_DATABASE \ --build-arg DATABASE_NAME=INFLUX_DATABASE \
@ -495,6 +498,8 @@ Follow these steps to build and run the application using Docker:
4. To run the application in a new Docker container, enter the following command: 4. To run the application in a new Docker container, enter the following command:
<!--pytest.mark.skip-->
```sh ```sh
docker run javaflight docker run javaflight
``` ```


@ -9,68 +9,145 @@ menu:
parent: No-code solutions parent: No-code solutions
--- ---
Write data to InfluxDB by configuring third-party technologies; no coding required.
A number of third-party technologies can be configured to send line protocol directly to InfluxDB. ## Prerequisites
If you're using any of the following technologies, check out the handy links below to configure these technologies to write data to InfluxDB (**no additional software to download or install**): - Authentication credentials for your InfluxDB instance: your InfluxDB host URL,
[organization](/influxdb/cloud/admin/organizations/),
[bucket](/influxdb/cloud/admin/buckets/), and an [API token](/influxdb/cloud/admin/tokens/)
with write permission on the bucket.
- (Write metrics and log events only) [Configure Vector 0.9 or later](#configure-vector) To set up InfluxDB and create credentials, follow the
- [Configure Apache NiFi 1.8 or later](#configure-apache-nifi) [Get started](/influxdb/cloud/get-started/) guide.
- [Configure OpenHAB 3.0 or later](#configure-openhab)
- [Configure Apache JMeter 5.2 or later](#configure-apache-jmeter)
- [Configure FluentD 1.x or later](#configure-fluentd)
#### Configure Vector - Access to one of the third-party tools listed in this guide.
1. View the **Vector documentation**: You can configure the following third-party tools to send line protocol data
- For write metrics, [InfluxDB Metrics Sink](https://vector.dev/docs/reference/sinks/influxdb_metrics/) directly to InfluxDB without writing code:
- For log events, [InfluxDB Logs Sink](https://vector.dev/docs/reference/sinks/influxdb_logs/)
2. Under **Configuration**, click **v2** to view configuration settings.
3. Scroll down to **How It Works** for more detail:
- [InfluxDB Metrics Sink How It Works ](https://vector.dev/docs/reference/sinks/influxdb_metrics/#how-it-works)
- [InfluxDB Logs Sink How It Works](https://vector.dev/docs/reference/sinks/influxdb_logs/#how-it-works)
#### Configure Apache NiFi {{% note %}}
Many third-party integrations are community contributions.
If there's an integration missing from the list below, please [open a docs issue](https://github.com/influxdata/docs-v2/issues/new/choose) to let us know.
{{% /note %}}
See the _[InfluxDB Processors for Apache NiFi Readme](https://github.com/influxdata/nifi-influxdb-bundle#influxdb-processors-for-apache-nifi)_ for details. - [Vector 0.9 or later](#configure-vector)
#### Configure OpenHAB - [Apache NiFi 1.8 or later](#configure-apache-nifi)
See the _[InfluxDB Persistence Readme](https://github.com/openhab/openhab-addons/tree/master/bundles/org.openhab.persistence.influxdb)_ for details. - [OpenHAB 3.0 or later](#configure-openhab)
#### Configure Apache JMeter - [Apache JMeter 5.2 or later](#configure-apache-jmeter)
<!-- after doc updates are made, we can simplify to: See the _[Apache JMeter User's Manual - JMeter configuration](https://jmeter.apache.org/usermanual/realtime-results.html#jmeter-configuration)_ for details. --> - [Apache Pulsar](#configure-apache-pulsar)
To configure Apache JMeter, complete the following steps in InfluxDB and JMeter. - [FluentD 1.x or later](#configure-fluentd)
##### In InfluxDB
1. [Find the name of your organization](/influxdb/cloud/admin/organizations/view-orgs/) (needed to create a bucket and token). ## Configure Vector
2. [Create a bucket using the influx CLI](/influxdb/cloud/admin/buckets/create-bucket/#create-a-bucket-using-the-influx-cli) and name it `jmeter`.
3. [Create a token](/influxdb/cloud/admin/tokens/create-token/).
##### In JMeter > Vector is a lightweight and ultra-fast tool for building observability pipelines.
>
> {{% cite %}}-- [Vector documentation](https://vector.dev/docs/){{% /cite %}}
Configure Vector to write metrics and log events to an InfluxDB instance.
1. Configure your [InfluxDB authentication credentials](#prerequisites) for Vector to write to your bucket.
- View example configurations:
- [InfluxDB metrics sink configuration](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#configuration)
- [InfluxDB logs sink configuration](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#example-configurations)
- Use the following Vector configuration fields for InfluxDB v2 credentials:
- [`endpoint`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#endpoint):
the URL (including scheme, host, and port) for your InfluxDB instance
- [`org`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#org):
the name of your InfluxDB organization
- [`bucket`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#bucket):
the name of the bucket to write data to
- [`token`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#token):
an API token with write permission on the specified bucket
3. Configure the data that you want Vector to write to InfluxDB.
- View [examples of metrics events and configurations](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#examples).
- View [Telemetry log metrics](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#telemetry).
4. For more detail, see the **How it works** sections:
- [InfluxDB metrics sink-How it works](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#how-it-works)
- [InfluxDB logs sink-How it works](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#how-it-works)
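Putting the fields from step 1 together, a minimal `vector.toml` metrics sink might look like the following sketch; the sink and source names, the endpoint, and the credential values are placeholders for your own setup:

```bash
# A minimal sketch of an InfluxDB v2 metrics sink (placeholder names and credentials).
cat <<'EOF' >> vector.toml
[sinks.influxdb]
  type = "influxdb_metrics"
  inputs = ["my_metrics_source"]
  endpoint = "https://cloud2.influxdata.com"
  org = "ORG_NAME"
  bucket = "BUCKET_NAME"
  token = "API_TOKEN"
EOF
```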
## Configure Apache NiFi
> [Apache NiFi](https://nifi.apache.org/documentation/v1/) is a software project from the Apache Software Foundation designed to automate the flow of data between software systems.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Apache_NiFi){{% /cite %}}
The InfluxDB processors for Apache NiFi let you write NiFi Record structured
data into InfluxDB v2.
See
_[InfluxDB Processors for Apache NiFi](https://github.com/influxdata/nifi-influxdb-bundle#influxdb-processors-for-apache-nifi)_
on GitHub for details.
## Configure OpenHAB
> The open Home Automation Bus (openHAB, pronounced ˈəʊpənˈhæb) is an open source, technology agnostic home automation platform
>
> {{% cite %}}-- [openHAB documentation](https://www.openhab.org/docs/){{% /cite %}}
> [The InfluxDB Persistence add-on] service allows you to persist and query states using the [InfluxDB] time series database.
>
> {{% cite %}}-- [openHAB InfluxDB persistence add-on](https://github.com/openhab/openhab-addons/tree/main/bundles/org.openhab.persistence.influxdb){{% /cite %}}
See
_[InfluxDB Persistence add-on](https://github.com/openhab/openhab-addons/tree/master/bundles/org.openhab.persistence.influxdb)_
on GitHub for details.
## Configure Apache JMeter
> [Apache JMeter](https://jmeter.apache.org/) is an Apache project that can be used as a load testing tool for
> analyzing and measuring the performance of a variety of services, with a focus
> on web applications.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Apache_JMeter){{% /cite %}}
1. Create a [Backend Listener](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) using the _**InfluxDBBackendListenerClient**_ implementation. 1. Create a [Backend Listener](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) using the _**InfluxDBBackendListenerClient**_ implementation.
2. In the **Backend Listener implementation** field, enter: 2. In the **Backend Listener implementation** field, enter:
``` ```text
org.apache.jmeter.visualizers.backend.influxdb.influxdbBackendListenerClient org.apache.jmeter.visualizers.backend.influxdb.influxdbBackendListenerClient
``` ```
3. Under **Parameters**, specify the following: 3. Under **Parameters**, specify the following:
- **influxdbMetricsSender**: - **influxdbMetricsSender**:
``` ```text
org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
``` ```
- **influxdbUrl**: _(include the bucket and org you created in InfluxDB)_ - **influxdbUrl**: _(include the bucket and org you created in InfluxDB)_
``` ```text
https://cloud2.influxdata.com/api/v2/write?org=my-org&bucket=jmeter https://cloud2.influxdata.com/api/v2/write?org=my-org&bucket=jmeter
``` ```
- **application**: `InfluxDB2` - **application**: `InfluxDB2`
- **influxdbToken**: _your InfluxDB API token_ - **influxdbToken**: _your InfluxDB API token with write permission on the
specified bucket_
- Include additional parameters as needed. - Include additional parameters as needed.
4. Click **Add** to add the _**InfluxDBBackendListenerClient**_ implementation. 1. Click **Add** to add the _**InfluxDBBackendListenerClient**_ implementation.
#### Configure FluentD ## Configure Apache Pulsar
See the _[influxdb-plugin-fluent Readme](https://github.com/influxdata/influxdb-plugin-fluent)_ for details. > Apache Pulsar is an open source, distributed messaging and streaming platform
> built for the cloud.
>
> The InfluxDB sink connector pulls messages from Pulsar topics and persists the
messages to InfluxDB.
>
> {{% cite %}}-- [Apache Pulsar](https://pulsar.apache.org/){{% /cite %}}
See _[InfluxDB sink connector](https://pulsar.apache.org/docs/en/io-influxdb-sink/)_
for details.
## Configure FluentD
> [Fluentd](https://www.fluentd.org/) is a cross-platform open-source data
> collection software project.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Fluentd){{% /cite %}}
See _[influxdb-plugin-fluent](https://github.com/influxdata/influxdb-plugin-fluent)_
on GitHub for details.


@ -30,16 +30,12 @@ Use [Grafana](https://grafana.com/) to query and visualize data stored in
> >
> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}} > {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}}
<!-- TOC -->
- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud) - [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud)
- [InfluxDB data source](#influxdb-data-source) - [InfluxDB data source](#influxdb-data-source)
- [Create an InfluxDB data source](#create-an-influxdb-data-source) - [Create an InfluxDB data source](#create-an-influxdb-data-source)
- [Query InfluxDB with Grafana](#query-influxdb-with-grafana) - [Query InfluxDB with Grafana](#query-influxdb-with-grafana)
- [Build visualizations with Grafana](#build-visualizations-with-grafana) - [Build visualizations with Grafana](#build-visualizations-with-grafana)
<!-- /TOC -->
## Install Grafana or login to Grafana Cloud ## Install Grafana or login to Grafana Cloud
If using the open source version of **Grafana**, follow the If using the open source version of **Grafana**, follow the


@ -48,7 +48,7 @@ endpoint and write them to InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}},
[metric parsing version](/influxdb/v2/reference/prometheus-metrics/) to use [metric parsing version](/influxdb/v2/reference/prometheus-metrics/) to use
_(version `2` is recommended)_. _(version `2` is recommended)_.
2. Add the [InfluxDB v2 output plugin](/telegraf/v1/plugins/#output-influxdb_v2) 2. Add the [InfluxDB v2 output plugin](/telegraf/v1/plugins/#output-influxdb_v2)
to your Telegraf configuration file and configure it to to write to to your Telegraf configuration file and configure it to write to
InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}}. InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}}.
##### Example telegraf.conf ##### Example telegraf.conf


@ -9,17 +9,29 @@ menu:
parent: No-code solutions parent: No-code solutions
--- ---
Write data to InfluxDB by configuring third-party technologies; no coding required.
A number of third-party technologies can be configured to send line protocol directly to InfluxDB. ## Prerequisites
- Authentication credentials for your InfluxDB instance: your InfluxDB host URL,
[organization](/influxdb/v2/admin/organizations/),
[bucket](/influxdb/v2/admin/buckets/), and an [API token](/influxdb/v2/admin/tokens/)
with write permission on the bucket.
If you're using any of the following technologies, check out the handy links below to configure these technologies to write data to InfluxDB (**no additional software to download or install**). To set up InfluxDB and create credentials, follow the
[Get started](/influxdb/v2/get-started/) guide.
- Access to one of the third-party tools listed in this guide.
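Before configuring a tool, you can confirm the credentials work by writing a test point with `curl`; this is a minimal sketch, and the URL, org, bucket, and token values are placeholders for your own setup:

```bash
# Expect HTTP 204 (No Content) if the token has write permission on the bucket.
curl --silent --include \
  "http://localhost:8086/api/v2/write?org=ORG_NAME&bucket=BUCKET_NAME&precision=s" \
  --header "Authorization: Token API_TOKEN" \
  --data-binary 'credential_check,source=no-code-guide value=1i'
```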
You can configure the following third-party tools to send line protocol data
directly to InfluxDB without writing code:
{{% note %}} {{% note %}}
Many third-party integrations are community contributions. If there's an integration missing from the list below, please [open a docs issue](https://github.com/influxdata/docs-v2/issues/new/choose) to let us know. Many third-party integrations are community contributions.
If there's an integration missing from the list below, please [open a docs issue](https://github.com/influxdata/docs-v2/issues/new/choose) to let us know.
{{% /note %}} {{% /note %}}
- (Write metrics and log events only) [Vector 0.9 or later](#configure-vector) - [Vector 0.9 or later](#configure-vector)
- [Apache NiFi 1.8 or later](#configure-apache-nifi) - [Apache NiFi 1.8 or later](#configure-apache-nifi)
@ -32,61 +44,110 @@ Many third-party integrations are community contributions. If there's an integra
- [FluentD 1.x or later](#configure-fluentd) - [FluentD 1.x or later](#configure-fluentd)
#### Configure Vector ## Configure Vector
1. View the **Vector documentation**: > Vector is a lightweight and ultra-fast tool for building observability pipelines.
- For write metrics, [InfluxDB Metrics Sink](https://vector.dev/docs/reference/sinks/influxdb_metrics/) >
- For log events, [InfluxDB Logs Sink](https://vector.dev/docs/reference/sinks/influxdb_logs/) > {{% cite %}}-- [Vector documentation](https://vector.dev/docs/){{% /cite %}}
2. Under **Configuration**, click **v2** to view configuration settings.
3. Scroll down to **How It Works** for more detail:
- [InfluxDB Metrics Sink How It Works ](https://vector.dev/docs/reference/sinks/influxdb_metrics/#how-it-works)
- [InfluxDB Logs Sink How It Works](https://vector.dev/docs/reference/sinks/influxdb_logs/#how-it-works)
#### Configure Apache NiFi Configure Vector to write metrics and log events to an InfluxDB instance.
See the _[InfluxDB Processors for Apache NiFi Readme](https://github.com/influxdata/nifi-influxdb-bundle#influxdb-processors-for-apache-nifi)_ for details. 1. Configure your [InfluxDB authentication credentials](#prerequisites) for Vector to write to your bucket.
- View example configurations:
- [InfluxDB metrics sink configuration](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#configuration)
- [InfluxDB logs sink configuration](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#example-configurations)
- Use the following Vector configuration fields for InfluxDB v2 credentials:
- [`endpoint`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#endpoint):
the URL (including scheme, host, and port) for your InfluxDB instance
- [`org`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#org):
the name of your InfluxDB organization
- [`bucket`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#bucket):
the name of the bucket to write data to
- [`token`](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#token):
an API token with write permission on the specified bucket
#### Configure OpenHAB 3. Configure the data that you want Vector to write to InfluxDB.
- View [examples of metrics events and configurations](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#examples).
- View [Telemetry log metrics](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#telemetry).
See the _[InfluxDB Persistence Readme](https://github.com/openhab/openhab-addons/tree/master/bundles/org.openhab.persistence.influxdb)_ for details. 4. For more detail, see the **How it works** sections:
- [InfluxDB metrics sink-How it works](https://vector.dev/docs/reference/configuration/sinks/influxdb_metrics/#how-it-works)
- [InfluxDB logs sink-How it works](https://vector.dev/docs/reference/configuration/sinks/influxdb_logs/#how-it-works)
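Combining these fields, a minimal `vector.toml` metrics sink for a local InfluxDB v2 instance might look like the following sketch; the sink and source names and the credential values are placeholders:

```bash
# A minimal sketch of an InfluxDB v2 metrics sink (placeholder names and credentials).
cat <<'EOF' >> vector.toml
[sinks.influxdb]
  type = "influxdb_metrics"
  inputs = ["my_metrics_source"]
  endpoint = "http://localhost:8086"
  org = "ORG_NAME"
  bucket = "BUCKET_NAME"
  token = "API_TOKEN"
EOF
```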
#### Configure Apache JMeter ## Configure Apache NiFi
<!-- after doc updates are made, we can simplify to: See the _[Apache JMeter User's Manual - JMeter configuration](https://jmeter.apache.org/usermanual/realtime-results.html#jmeter-configuration)_ for details. --> > [Apache NiFi](https://nifi.apache.org/documentation/v1/) is a software project from the Apache Software Foundation designed to automate the flow of data between software systems.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Apache_NiFi){{% /cite %}}
To configure Apache JMeter, complete the following steps in InfluxDB and JMeter. The InfluxDB processors for Apache NiFi let you write NiFi Record structured
data into InfluxDB v2.
##### In InfluxDB See
_[InfluxDB Processors for Apache NiFi](https://github.com/influxdata/nifi-influxdb-bundle#influxdb-processors-for-apache-nifi)_
on GitHub for details.
1. [Find the name of your organization](/influxdb/v2/admin/organizations/view-orgs/) (needed to create a bucket and token). ## Configure OpenHAB
2. [Create a bucket using the influx CLI](/influxdb/v2/admin/buckets/create-bucket/#create-a-bucket-using-the-influx-cli) and name it `jmeter`.
3. [Create a token](/influxdb/v2/admin/tokens/create-token/).
##### In JMeter > The open Home Automation Bus (openHAB, pronounced ˈəʊpənˈhæb) is an open source, technology agnostic home automation platform
>
> {{% cite %}}-- [openHAB documentation](https://www.openhab.org/docs/){{% /cite %}}
> [The InfluxDB Persistence add-on] service allows you to persist and query states using the [InfluxDB] time series database.
>
> {{% cite %}}-- [openHAB InfluxDB persistence add-on](https://github.com/openhab/openhab-addons/tree/main/bundles/org.openhab.persistence.influxdb){{% /cite %}}
See
_[InfluxDB Persistence add-on](https://github.com/openhab/openhab-addons/tree/master/bundles/org.openhab.persistence.influxdb)_
on GitHub for details.
## Configure Apache JMeter
> [Apache JMeter](https://jmeter.apache.org/) is an Apache project that can be used as a load testing tool for
> analyzing and measuring the performance of a variety of services, with a focus
> on web applications.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Apache_JMeter){{% /cite %}}
1. Create a [Backend Listener](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) using the _**InfluxDBBackendListenerClient**_ implementation. 1. Create a [Backend Listener](https://jmeter.apache.org/usermanual/component_reference.html#Backend_Listener) using the _**InfluxDBBackendListenerClient**_ implementation.
2. In the **Backend Listener implementation** field, enter: 2. In the **Backend Listener implementation** field, enter:
``` ```text
org.apache.jmeter.visualizers.backend.influxdb.influxdbBackendListenerClient org.apache.jmeter.visualizers.backend.influxdb.influxdbBackendListenerClient
``` ```
3. Under **Parameters**, specify the following: 3. Under **Parameters**, specify the following:
- **influxdbMetricsSender**: - **influxdbMetricsSender**:
``` ```text
org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
``` ```
- **influxdbUrl**: _(include the bucket and org you created in InfluxDB)_ - **influxdbUrl**: _(include the bucket and org you created in InfluxDB)_
``` ```text
http://localhost:8086/api/v2/write?org=my-org&bucket=jmeter http://localhost:8086/api/v2/write?org=my-org&bucket=jmeter
``` ```
- **application**: `InfluxDB2` - **application**: `InfluxDB2`
- **influxdbToken**: _your InfluxDB API token_ - **influxdbToken**: _your InfluxDB API token with write permission on the
specified bucket_
- Include additional parameters as needed. - Include additional parameters as needed.
4. Click **Add** to add the _**InfluxDBBackendListenerClient**_ implementation. 1. Click **Add** to add the _**InfluxDBBackendListenerClient**_ implementation.
#### Configure Apache Pulsar ## Configure Apache Pulsar
See _[InfluxDB sink connector](https://pulsar.apache.org/docs/en/io-influxdb-sink/)_ for details. > Apache Pulsar is an open source, distributed messaging and streaming platform
> built for the cloud.
>
> The InfluxDB sink connector pulls messages from Pulsar topics and persists the
messages to InfluxDB.
>
> {{% cite %}}-- [Apache Pulsar](https://pulsar.apache.org/){{% /cite %}}
#### Configure FluentD See _[InfluxDB sink connector](https://pulsar.apache.org/docs/en/io-influxdb-sink/)_
for details.
See the _[influxdb-plugin-fluent Readme](https://github.com/influxdata/influxdb-plugin-fluent)_ for details. ## Configure FluentD
> [Fluentd](https://www.fluentd.org/) is a cross-platform open-source data
> collection software project.
>
> {{% cite %}}-- [Wikipedia](https://en.wikipedia.org/wiki/Fluentd){{% /cite %}}
See _[influxdb-plugin-fluent](https://github.com/influxdata/influxdb-plugin-fluent)_
on GitHub for details.