Commit 595d13956d by Edd Robinson (2020-12-17 11:10:26 +00:00)
test: benchmarks for general read_group case
This commit adds some initial benchmarks for the general read_group
approach using a hashing strategy. Benchmarks are as follows:

segment_read_group_all_time_vary_cardinality/cardinality_20_columns_2_rows_500000
                        time:   [23.335 ms 23.363 ms 23.397 ms]
                        thrpt:  [854.82  elem/s 856.07  elem/s 857.07  elem/s]
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe
segment_read_group_all_time_vary_cardinality/cardinality_200_columns_2_rows_500000
                        time:   [34.266 ms 34.301 ms 34.346 ms]
                        thrpt:  [5.8231 Kelem/s 5.8307 Kelem/s 5.8367 Kelem/s]
Found 13 outliers among 100 measurements (13.00%)
  5 (5.00%) high mild
  8 (8.00%) high severe
segment_read_group_all_time_vary_cardinality/cardinality_2000_columns_2_rows_500000
                        time:   [48.788 ms 48.996 ms 49.238 ms]
                        thrpt:  [40.619 Kelem/s 40.820 Kelem/s 40.993 Kelem/s]
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) high mild
  8 (8.00%) high severe
Benchmarking segment_read_group_all_time_vary_cardinality/cardinality_20000_columns_3_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.2s, or reduce sample count to 60.
segment_read_group_all_time_vary_cardinality/cardinality_20000_columns_3_rows_500000
                        time:   [80.133 ms 80.201 ms 80.287 ms]
                        thrpt:  [249.11 Kelem/s 249.37 Kelem/s 249.58 Kelem/s]
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe

Benchmarking segment_read_group_all_time_vary_columns/cardinality_20000_columns_2_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.4s, or reduce sample count to 60.
segment_read_group_all_time_vary_columns/cardinality_20000_columns_2_rows_500000
                        time:   [73.692 ms 73.951 ms 74.245 ms]
                        thrpt:  [269.38 Kelem/s 270.45 Kelem/s 271.40 Kelem/s]
Found 13 outliers among 100 measurements (13.00%)
  13 (13.00%) high severe
Benchmarking segment_read_group_all_time_vary_columns/cardinality_20000_columns_3_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.1s, or reduce sample count to 60.
segment_read_group_all_time_vary_columns/cardinality_20000_columns_3_rows_500000
                        time:   [79.837 ms 79.934 ms 80.079 ms]
                        thrpt:  [249.75 Kelem/s 250.21 Kelem/s 250.51 Kelem/s]
Found 7 outliers among 100 measurements (7.00%)
  5 (5.00%) high mild
  2 (2.00%) high severe
Benchmarking segment_read_group_all_time_vary_columns/cardinality_20000_columns_4_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 9.7s, or reduce sample count to 50.
segment_read_group_all_time_vary_columns/cardinality_20000_columns_4_rows_500000
                        time:   [95.415 ms 95.549 ms 95.707 ms]
                        thrpt:  [208.97 Kelem/s 209.32 Kelem/s 209.61 Kelem/s]
Found 15 outliers among 100 measurements (15.00%)
  7 (7.00%) high mild
  8 (8.00%) high severe

segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_250000
                        time:   [38.897 ms 39.045 ms 39.227 ms]
                        thrpt:  [509.86 Kelem/s 512.22 Kelem/s 514.18 Kelem/s]
Found 13 outliers among 100 measurements (13.00%)
  4 (4.00%) high mild
  9 (9.00%) high severe
Benchmarking segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.2s, or reduce sample count to 60.
segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_500000
                        time:   [71.965 ms 72.190 ms 72.445 ms]
                        thrpt:  [276.07 Kelem/s 277.04 Kelem/s 277.91 Kelem/s]
Found 21 outliers among 100 measurements (21.00%)
  4 (4.00%) low mild
  3 (3.00%) high mild
  14 (14.00%) high severe
Benchmarking segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_750000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 10.7s, or reduce sample count to 40.
segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_750000
                        time:   [106.48 ms 106.58 ms 106.70 ms]
                        thrpt:  [187.43 Kelem/s 187.65 Kelem/s 187.82 Kelem/s]
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe
Benchmarking segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_1000000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 14.0s, or reduce sample count to 30.
segment_read_group_all_time_vary_rows/cardinality_20000_columns_2_rows_1000000
                        time:   [140.02 ms 140.14 ms 140.29 ms]
                        thrpt:  [142.57 Kelem/s 142.71 Kelem/s 142.84 Kelem/s]
Found 4 outliers among 100 measurements (4.00%)
  4 (4.00%) high severe

segment_read_group_pre_computed_groups_vary_cardinality/cardinality_2_columns_1_rows_500000
                        time:   [51.734 us 52.123 us 52.560 us]
                        thrpt:  [38.051 Kelem/s 38.371 Kelem/s 38.659 Kelem/s]
Found 18 outliers among 100 measurements (18.00%)
  3 (3.00%) high mild
  15 (15.00%) high severe
segment_read_group_pre_computed_groups_vary_cardinality/cardinality_20_columns_2_rows_500000
                        time:   [50.546 us 50.642 us 50.785 us]
                        thrpt:  [393.82 Kelem/s 394.93 Kelem/s 395.68 Kelem/s]
Found 8 outliers among 100 measurements (8.00%)
  3 (3.00%) low mild
  2 (2.00%) high mild
  3 (3.00%) high severe
segment_read_group_pre_computed_groups_vary_cardinality/cardinality_200_columns_2_rows_500000
                        time:   [267.47 us 270.23 us 273.10 us]
                        thrpt:  [732.33 Kelem/s 740.12 Kelem/s 747.75 Kelem/s]
segment_read_group_pre_computed_groups_vary_cardinality/cardinality_2000_columns_2_rows_500000
                        time:   [14.961 ms 15.033 ms 15.113 ms]
                        thrpt:  [132.33 Kelem/s 133.04 Kelem/s 133.68 Kelem/s]
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) high mild
  8 (8.00%) high severe

segment_read_group_pre_computed_groups_vary_columns/cardinality_200_columns_1_rows_500000
                        time:   [84.825 us 84.938 us 85.083 us]
                        thrpt:  [2.3506 Melem/s 2.3546 Melem/s 2.3578 Melem/s]
Found 14 outliers among 100 measurements (14.00%)
  7 (7.00%) high mild
  7 (7.00%) high severe
segment_read_group_pre_computed_groups_vary_columns/cardinality_200_columns_2_rows_500000
                        time:   [258.81 us 259.33 us 260.05 us]
                        thrpt:  [769.08 Kelem/s 771.22 Kelem/s 772.77 Kelem/s]
Found 14 outliers among 100 measurements (14.00%)
  2 (2.00%) high mild
  12 (12.00%) high severe
Benchmarking segment_read_group_pre_computed_groups_vary_columns/cardinality_200_columns_3_rows_500000:
Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.1s, enable flat sampling, or reduce sample count to 60.
segment_read_group_pre_computed_groups_vary_columns/cardinality_200_columns_3_rows_500000
                        time:   [1.1971 ms 1.2020 ms 1.2079 ms]
                        thrpt:  [165.58 Kelem/s 166.39 Kelem/s 167.07 Kelem/s]
Found 13 outliers among 100 measurements (13.00%)
  3 (3.00%) high mild
  10 (10.00%) high severe
segment_read_group_pre_computed_groups_vary_rows/cardinality_200_columns_2_rows_250000
                        time:   [252.42 us 252.58 us 252.75 us]
                        thrpt:  [791.31 Kelem/s 791.84 Kelem/s 792.32 Kelem/s]
Found 10 outliers among 100 measurements (10.00%)
  2 (2.00%) high mild
  8 (8.00%) high severe
segment_read_group_pre_computed_groups_vary_rows/cardinality_200_columns_2_rows_500000
                        time:   [271.68 us 272.46 us 273.59 us]
                        thrpt:  [731.01 Kelem/s 734.04 Kelem/s 736.15 Kelem/s]
Found 8 outliers among 100 measurements (8.00%)
  8 (8.00%) high severe
segment_read_group_pre_computed_groups_vary_rows/cardinality_200_columns_2_rows_750000
                        time:   [293.17 us 293.42 us 293.65 us]
                        thrpt:  [681.09 Kelem/s 681.63 Kelem/s 682.20 Kelem/s]
Found 9 outliers among 100 measurements (9.00%)
  1 (1.00%) low mild
  4 (4.00%) high mild
  4 (4.00%) high severe
segment_read_group_pre_computed_groups_vary_rows/cardinality_200_columns_2_rows_1000000
                        time:   [306.48 us 307.11 us 307.95 us]
                        thrpt:  [649.45 Kelem/s 651.22 Kelem/s 652.57 Kelem/s]
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) high mild
  2 (2.00%) high severe

InfluxDB IOx

InfluxDB IOx (short for Iron Oxide, pronounced InfluxDB "eye-ox") is the future core of InfluxDB, an open source time series database. The name is in homage to Rust, the language this project is written in. It is built using Apache Arrow and DataFusion among other things. InfluxDB IOx aims to be:

  • The future core of InfluxDB; supporting industry standard SQL, InfluxQL, and Flux
  • An in-memory columnar store using object storage for persistence
  • A fast analytic database for structured and semi-structured events (like logs and tracing data)
  • A system for defining replication (synchronous, asynchronous, push and pull) and partitioning rules for InfluxDB time series data and tabular analytics data
  • A system supporting real-time subscriptions
  • A processor that can transform and do arbitrary computation on time series and event data as it arrives
  • An analytic database built for data science, supporting Apache Arrow Flight for fast data transfer

Persistence is through Parquet files in object storage. It is a design goal to support integration with other big data systems through object storage and Parquet specifically.

For more details on the motivation behind the project and some of our goals, read through the InfluxDB IOx announcement blog post. If you prefer a video that covers a little bit of InfluxDB history and high level goals for InfluxDB IOx you can watch Paul Dix's announcement talk from InfluxDays NA 2020. For more details on the motivation behind the selection of Apache Arrow, Flight and Parquet, read this.

Project Status

This project is very early and in active development. It isn't yet ready for testing, which is why we're not producing builds or documentation yet. If you're interested in following along with the project, drop into our community Slack channel #influxdb_iox. You can find links to join here.

We're also hosting monthly tech talks and community office hours on the project on the 2nd Wednesday of the month at 8:30 AM Pacific Time. The first InfluxDB IOx Tech Talk is on December 9th and you can find details here.

Quick Start

To compile and run InfluxDB IOx from source, you'll need a Rust compiler, the flatc FlatBuffers compiler, and clang (needed to build the croaring dependency).

Cloning the Repository

Using git, check out the code by cloning this repository. If you use the git command line, this looks like:

git clone git@github.com:influxdata/influxdb_iox.git
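
If you don't have SSH keys configured with GitHub, you can clone over HTTPS instead:

git clone https://github.com/influxdata/influxdb_iox.git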

Then change into the directory containing the code:

cd influxdb_iox

The rest of the instructions assume you are in this directory.

Installing Rust

The easiest way to install Rust is by using rustup, a Rust version manager. Follow the instructions on the rustup site for your operating system.
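
For example, on most Linux and macOS systems you can typically install rustup with the one-line command published on the rustup site:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh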

By default, rustup will install the latest stable version of Rust. InfluxDB IOx is currently using a nightly version of Rust to get performance benefits from the unstable simd feature. The exact nightly version is specified in the rust-toolchain file. When you're in the directory containing this repository's code, rustup will look in the rust-toolchain file and automatically install and use the correct Rust version for you. Test this out with:

rustc --version

and you should see a nightly version of Rust!
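
The output should look roughly like the following, with the actual nightly version, commit, and date coming from the pinned toolchain (placeholders shown here):

rustc 1.xx.0-nightly (<commit hash> <date>)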

Installing flatc

InfluxDB IOx uses the FlatBuffer serialization format for its write-ahead log. The flatc compiler reads the schema in generated_types/wal.fbs and generates the corresponding Rust code.

Install flatc >= 1.12.0 using whichever method is appropriate to your operating system.
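
For example (these are illustrative package-manager commands; package names and available versions vary by platform):

brew install flatbuffers
conda install -c conda-forge flatbuffers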

Once you have installed the packages, you should be able to run:

flatc --version

and see the version displayed.

You won't have to run flatc directly; once it's available, Rust's Cargo build tool manages the compilation process by calling flatc for you.

Installing clang

An installation of clang is required to build the croaring dependency; if it is not already present, it can typically be installed with the system package manager. Check that it is available by running:

clang --version
Apple clang version 12.0.0 (clang-1200.0.32.27)
Target: x86_64-apple-darwin20.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
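
If the command is not found, you can typically install clang with your platform's package manager; for example (illustrative commands, adjust for your system):

# Debian/Ubuntu
sudo apt-get install clang
# macOS (installs the Xcode command line tools, which include clang)
xcode-select --install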

Specifying Configuration

OPTIONAL: You can customize a number of configuration settings by specifying values for environment variables in a .env file. To get an example file to start from, run:

cp docs/env.example .env

then edit the newly-created .env file.

For development purposes, the most relevant environment variables are the INFLUXDB_IOX_DB_DIR and TEST_INFLUXDB_IOX_DB_DIR variables that configure where files are stored on disk. The default values are shown in the comments in the example file; to change them, uncomment the relevant lines and change the values to the directories in which you'd like to store the files instead:

INFLUXDB_IOX_DB_DIR=/some/place/else
TEST_INFLUXDB_IOX_DB_DIR=/another/place

Compiling and Starting the Server

InfluxDB IOx is built using Cargo, Rust's package manager and build tool.

To compile for development, run:

cargo build

which will create a binary in target/debug that you can run with:

./target/debug/influxdb_iox

You can compile and run with one command by using:

cargo run

When compiling for performance testing, build in release mode by using:

cargo build --release

which will create the corresponding binary in target/release:

./target/release/influxdb_iox

Similarly, you can do this in one step with:

cargo run --release

The server will, by default, start an HTTP API server on port 8080 and a gRPC server on port 8082.

Writing and Reading Data

Data can be stored in InfluxDB IOx by sending it in line protocol format to the /api/v2/write endpoint. Data is stored by organization and bucket names. Here's an example using curl with the organization name company and the bucket name sensors that will send the data in the tests/fixtures/lineproto/metrics.lp file in this repository, assuming that you're running the server on the default port:

curl -v "http://127.0.0.1:8080/api/v2/write?org=company&bucket=sensors" --data-binary @tests/fixtures/lineproto/metrics.lp

To query stored data, use the /api/v2/read endpoint with a SQL query. This example will return all data in the company organization's sensors bucket for the processes measurement:

curl -v -G -d 'org=company' -d 'bucket=sensors' --data-urlencode 'sql_query=select * from processes' "http://127.0.0.1:8080/api/v2/read"

Contributing

We welcome community contributions from anyone!

Read our Contributing Guide for instructions on how to make your first contribution.