# Testing
This document covers details that are only relevant if you are developing IOx and running the tests.
## "End to End" tests
The purpose of the "end to end tests" is to provide the highest level "integration"
test that can be run entirely within the `influxdb_iox` repository
with minimal dependencies, ensuring all the plumbing is connected
correctly.

They are NOT meant to cover corner cases in the implementation, which are
better tested with targeted tests in the various sub-modules that make
up IOx.
Each of these tests starts up IOx as a subprocess (i.e. it runs the
`influxdb_iox` binary) and manipulates it either via a client or the
CLI. These tests should *not* manipulate or use the contents of any
subsystem crate.
### Prerequisites
The end to end tests currently require a connection to a postgres
database specified by a DSN, such as
`postgresql://localhost:5432/alamb`. Note that the required schema is
created automatically.
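If you don't already have a suitable database, a minimal sketch of setting one up locally (assuming a running postgres server whose `createdb` tool is on your path; the database name simply matches the example DSN above):

```shell
# Minimal sketch: create a local database matching the example DSN above.
# Assumes a local postgres server and that your OS user may create databases.
createdb alamb
export TEST_INFLUXDB_IOX_CATALOG_DSN=postgresql://localhost:5432/alamb
```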
### Running
The end to end tests are run using the `cargo test --test end_to_end` command, after setting the
`TEST_INTEGRATION` and `TEST_INFLUXDB_IOX_CATALOG_DSN` environment variables. NOTE: if you don't set
these variables the tests will "pass" locally (they are actually skipped).
For example, to run the end to end tests assuming the example postgres DSN:
```shell
TEST_INTEGRATION=1 TEST_INFLUXDB_IOX_CATALOG_DSN=postgresql://localhost:5432/alamb cargo test --test end_to_end
```
You can also see more logging using the `LOG_FILTER` variable. For example:
```shell
LOG_FILTER=debug,sqlx=warn,h2=warn TEST_INTEGRATION=1 TEST_INFLUXDB_IOX_CATALOG_DSN=postgresql://localhost:5432/alamb cargo test --test end_to_end
```
## Running the IOx server from source
### Starting the server
You can run IOx locally with a command like this (replacing `--data-dir` with your preferred location):
```shell
cargo run -- run -v --object-store=file --data-dir=$HOME/.influxdb_iox --server-id=42
```
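To check that the server came up, you can hit its HTTP health endpoint. The port and path below assume the defaults; check the startup log output if your configuration differs.

```shell
# Assumes the default HTTP API bind address and the /health endpoint;
# adjust if your startup logs show something different.
curl http://localhost:8080/health
```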
### Loading data
In another terminal window, try loading some data. These commands will create a database called `parquet_db` and load the contents of `tests/fixtures/lineproto/metrics.lp` into it:
```shell
cd influxdb_iox
./target/debug/influxdb_iox database create parquet_db
./target/debug/influxdb_iox database write parquet_db tests/fixtures/lineproto/metrics.lp
```
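If your build includes a `database query` subcommand (an assumption here; check `./target/debug/influxdb_iox database --help`), you can then query the data back, for example:

```shell
# Assumed subcommand and SQL; verify with `influxdb_iox database --help`
./target/debug/influxdb_iox database query parquet_db 'select count(*) from disk'
```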
### Editing configuration
You can interactively edit the configuration of the IOx instance with a command like this:
```shell
./scripts/edit_db_rules localhost:8082 parquet_db
```
This will bring up your editor with a file that looks like the one below. Any changes you make to the file will be sent to IOx as its new config.
In this case, these settings will cause data to be persisted to parquet almost immediately:
```json
{
  "rules": {
    "name": "parquet_db",
    "partitionTemplate": {
      "parts": [
        {
          "time": "%Y-%m-%d %H:00:00"
        }
      ]
    },
    "lifecycleRules": {
      "bufferSizeSoft": "52428800",
      "bufferSizeHard": "104857600",
      "dropNonPersisted": true,
      "immutable": false,
      "persist": true,
      "workerBackoffMillis": "1000",
      "catalogTransactionsUntilCheckpoint": "100",
      "lateArriveWindowSeconds": 1,
      "persistRowThreshold": "1",
      "persistAgeThresholdSeconds": 1,
      "mubRowThreshold": "1",
      "parquetCacheLimit": "0",
      "maxActiveCompactionsCpuFraction": 1
    },
    "workerCleanupAvgSleep": "500s"
  }
}
```
### Examining Parquet Files
You can use tools such as `parquet-tools` to examine the parquet files created by IOx. For example, the following command will show the contents of the `disk` table when persisted as parquet (note the actual filename will be different):
```shell
parquet-tools meta /Users/alamb/.influxdb_iox/42/parquet_db/data/disk/2020-06-11\ 16\:00\:00/1.4b1a7805-d6de-495e-844b-32fa452147c7.parquet
```
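Depending on your `parquet-tools` version, you can also print just the schema of the same file (again, the actual filename will differ):

```shell
# Print only the schema of the persisted file (filename will differ on your machine)
parquet-tools schema /Users/alamb/.influxdb_iox/42/parquet_db/data/disk/2020-06-11\ 16\:00\:00/1.4b1a7805-d6de-495e-844b-32fa452147c7.parquet
```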
## Object storage
### To run the tests or not run the tests
If you are testing integration with some or all of the object storage options, you'll have more
setup to do.
By default, `cargo test -p object_store` does not run any tests that actually contact
any cloud services: tests that do contact the services will silently pass.
To run integration tests, use `TEST_INTEGRATION=1 cargo test -p object_store`, which will run the
tests that contact the cloud services and fail them if the required environment variables aren't
set.
### Configuration differences when running the tests
When running `influxdb_iox run database`, you can pick one object store to use. When running the tests,
you can run them against all the possible object stores. There's still only one
`INFLUXDB_IOX_BUCKET` variable, though, so that will set the bucket name for all configured object
stores. Use the same bucket name when setting up the different services.
Other than possibly configuring multiple object stores, configuring the tests to use the object
store services is the same as configuring the server to use an object store service. See the output
of `influxdb_iox run database --help` for instructions.
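As a sketch, running the object store tests against AWS S3 might look like this; the `AWS_*` variable names follow the standard AWS conventions and are an assumption here, so check the `--help` output above for the exact set your build expects:

```shell
# Sketch only: the AWS_* names follow the usual AWS conventions and are an assumption;
# `influxdb_iox run database --help` lists the variables IOx actually reads.
export INFLUXDB_IOX_BUCKET=my-iox-test-bucket   # same bucket name for all configured stores
export AWS_ACCESS_KEY_ID=<your access key>
export AWS_SECRET_ACCESS_KEY=<your secret key>
export AWS_DEFAULT_REGION=us-east-1
TEST_INTEGRATION=1 cargo test -p object_store
```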
## InfluxDB 2 Client
The `influxdb2_client` crate may be used by people using InfluxDB 2.0 OSS, and should be compatible
with both that and IOx. If you want to run the integration tests for the client against InfluxDB
2.0 OSS, you will need to set `TEST_INTEGRATION=1`.

If you have `docker` in your path, the integration tests for the `influxdb2_client` crate will run
against `influxd` running in a Docker container.
If you do not want to use Docker locally, but you do have `influxd` for InfluxDB
2.0 installed locally, you can use that instead by running the tests with the environment variable
`INFLUXDB_IOX_INTEGRATION_LOCAL=1`.
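For example, the two invocations might look like this (the `-p influxdb2_client` package selection is an assumption; any way of selecting that crate's tests works):

```shell
# Run the influxdb2_client integration tests against influxd in Docker
TEST_INTEGRATION=1 cargo test -p influxdb2_client

# Use a locally installed influxd instead of Docker
TEST_INTEGRATION=1 INFLUXDB_IOX_INTEGRATION_LOCAL=1 cargo test -p influxdb2_client
```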
## Kafka Write Buffer
By default, the integration tests for the Kafka-based write buffer are not run.
In order to run them you must set two environment variables:
* `TEST_INTEGRATION=1`
* `KAFKA_CONNECT` to a host and port where the tests can connect to a running Kafka broker
### Running Kafka Locally
[Redpanda](https://vectorized.io/redpanda/) is a Kafka-compatible broker that can be used to run the tests, and is used
by the CI to test IOx.
Either follow the instructions on the website to install Redpanda directly onto your system, or
run it in a Docker container with:
```shell
docker run -d --pull=always --name=redpanda-1 --rm \
-p 9092:9092 \
-p 9644:9644 \
docker.vectorized.io/vectorized/redpanda:latest \
redpanda start \
--overprovisioned \
--smp 1 \
--memory 1G \
--reserve-memory 0M \
--node-id 0 \
--check=false
```
It is then just a case of setting the environment variables and running the tests as normal:
```shell
TEST_INTEGRATION=1 KAFKA_CONNECT=localhost:9092 cargo test
```
Or to just run the Kafka tests:
```shell
TEST_INTEGRATION=1 KAFKA_CONNECT=localhost:9092 cargo test -p write_buffer kafka -- --nocapture
```