InfluxDB IOx

InfluxDB IOx (short for Iron Oxide, pronounced InfluxDB "eye-ox") is the future core of InfluxDB, an open source time series database. The name is in homage to Rust, the language this project is written in. It is built using Apache Arrow and DataFusion among other things. InfluxDB IOx aims to be:

  • The future core of InfluxDB; supporting industry standard SQL, InfluxQL, and Flux
  • An in-memory columnar store using object storage for persistence
  • A fast analytic database for structured and semi-structured events (like logs and tracing data)
  • A system for defining replication (synchronous, asynchronous, push and pull) and partitioning rules for InfluxDB time series data and tabular analytics data
  • A system supporting real-time subscriptions
  • A processor that can transform and do arbitrary computation on time series and event data as it arrives
  • An analytic database built for data science, supporting Apache Arrow Flight for fast data transfer

Persistence is through Parquet files in object storage. It is a design goal to support integration with other big data systems through object storage and Parquet specifically.

For more details on the motivation behind the project and some of our goals, read through the InfluxDB IOx announcement blog post. If you prefer a video that covers a little bit of InfluxDB history and high level goals for InfluxDB IOx you can watch Paul Dix's announcement talk from InfluxDays NA 2020. For more details on the motivation behind the selection of Apache Arrow, Flight and Parquet, read this.

Supported Platforms

As we commit to supporting platforms, they will be added here. Our current goal is that the following platforms will be able to run InfluxDB IOx:

  • Linux x86 (x86_64-unknown-linux-gnu)
  • Darwin x86 (x86_64-apple-darwin)
  • Darwin arm (aarch64-apple-darwin)

This list is very unlikely to be complete; we will add more platforms based on our ability to support them effectively.

Project Status

This project is very early and in active development. It isn't yet ready for testing, which is why we're not producing builds or documentation yet.

If you would like to contact the InfluxDB IOx developers, join the InfluxData Community Slack and look for the #influxdb_iox channel.

We're also hosting monthly tech talks and community office hours on the project on the 2nd Wednesday of the month at 8:30 AM Pacific Time.

Get started

  1. Install dependencies
  2. Clone the repository
  3. Configure the server
  4. Compiling and Running (You can also build a Docker image to run InfluxDB IOx.)
  5. Write and read data
  6. Use the CLI
  7. Use InfluxDB 2.0 API compatibility
  8. Run health checks
  9. Manually call the gRPC API

Install dependencies

To compile and run InfluxDB IOx from source, you'll need the following:

Rust

The easiest way to install Rust is to use rustup, a Rust version manager. Follow the instructions for your operating system on the rustup site.

rustup will check the rust-toolchain file and automatically install and use the correct Rust version for you.
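
The pinned version lives in rust-toolchain.toml at the repository root. As a rough sketch of that file (the channel shown here is an assumption based on the current pin; check the file itself for the authoritative value and any extra components):

[toolchain]
channel = "1.61"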

Clang

Building InfluxDB IOx requires clang (for the croaring dependency). Check for clang by running clang --version.

clang --version
Apple clang version 12.0.0 (clang-1200.0.32.27)
Target: x86_64-apple-darwin20.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

If clang is not already present, it can typically be installed with the system package manager.
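
For example, on Debian or Ubuntu a typical install is:

sudo apt-get install clang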

lld

If you are building InfluxDB IOx on Linux, you will need to ensure the lld LLVM linker is installed. Check whether it is already installed by running lld -version.

lld -version
lld is a generic driver.
Invoke ld.lld (Unix), ld64.lld (macOS), lld-link (Windows), wasm-ld (WebAssembly) instead

If lld is not already present, it can typically be installed with the system package manager.
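
On Debian or Ubuntu, for instance:

sudo apt-get install lld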

protoc

If you are building InfluxDB IOx on Apple Silicon, you may find that the build fails with an error containing:

failed to invoke protoc (hint: https://docs.rs/prost-build/#sourcing-protoc): Bad CPU type in executable (os error 86)

Prost bundles a protoc binary, which it uses if it cannot find a system alternative. In versions prior to 0.9, the bundled binary is an x86 one, which fails with the error above if Rosetta is not installed on your system.

You can install Rosetta by running:

softwareupdate --install-rosetta

An alternative to installing Rosetta is to point Prost at an arm build of protoc. First, install protoc, e.g., via Homebrew:

brew update && brew install protobuf

Then set the following environment variables to point Prost at your system install:

PROTOC=/opt/homebrew/bin/protoc
PROTOC_INCLUDE=/opt/homebrew/include
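
For example, assuming a Homebrew install on Apple Silicon (adjust the paths if your protoc lives elsewhere), you can export them in your shell before building:

export PROTOC=/opt/homebrew/bin/protoc
export PROTOC_INCLUDE=/opt/homebrew/include
cargo build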

IOx should then build correctly.

Postgres

The catalog is stored in Postgres (unless you're running in ephemeral mode). Postgres can be installed via Homebrew:

brew install postgres

then follow the instructions for starting Postgres either at system startup or on-demand.
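
For example, with a Homebrew install you can start Postgres as a background service and create the database used by the local persistence example later in this guide:

brew services start postgresql
createdb iox_shared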

Clone the repository

Clone this repository using git. If you use the git command line, this looks like:

git clone git@github.com:influxdata/influxdb_iox.git

Then change into the directory containing the code:

cd influxdb_iox

The rest of these instructions assume you are in this directory.

Configure the server

InfluxDB IOx can be configured using either environment variables or a configuration file, making it suitable for deployment in containerized environments.

For a list of configuration options, run influxdb_iox --help. For configuration options for specific subcommands, run influxdb_iox <subcommand> --help.

To use a configuration file, create a .env file in the working directory. An example configuration file is provided at docs/env.example; to use it, run:

cp docs/env.example .env
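
As a sketch of what such a file can contain, the entries below mirror the flags used in the local persistence example later in this guide; the variable names are assumptions, so confirm the exact names in docs/env.example or via influxdb_iox run all-in-one --help:

# Assumed variable names -- check docs/env.example for the authoritative list.
INFLUXDB_IOX_CATALOG_DSN=postgres:///iox_shared
INFLUXDB_IOX_DB_DIR=~/iox_data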

Compiling and Running

InfluxDB IOx is built using Cargo, Rust's package manager and build tool.

To compile for development, run:

cargo build

This creates a binary at target/debug/influxdb_iox.
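
You can check the result by asking the freshly built binary for its help output:

./target/debug/influxdb_iox --help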

Build a Docker image (optional)

Building the Docker image requires:

  • Docker 18.09+
  • BuildKit

To enable BuildKit by default, set { "features": { "buildkit": true } } in the Docker engine configuration, or run docker build with DOCKER_BUILDKIT=1 set in the environment.

To build the Docker image:

DOCKER_BUILDKIT=1 docker build .

Ephemeral mode

To start InfluxDB IOx and store data in memory, after you've compiled for development, run:

./target/debug/influxdb_iox run all-in-one

By default this runs an "all-in-one" server with an HTTP server on port 8080, a router gRPC server on port 8081, and a querier gRPC server on port 8082. When the server is stopped, all data is lost.
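
Once the server is up, a quick way to verify the HTTP server is the health endpoint described later in this guide:

curl http://localhost:8080/health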

Local persistence mode

To persist data across restarts, InfluxDB IOx can store its catalog in Postgres and its data in the local filesystem. After you've compiled for development, run:

./target/debug/influxdb_iox run all-in-one --catalog-dsn postgres:///iox_shared --data-dir=~/iox_data

where --catalog-dsn is a connection URL to the Postgres database you wish to use, and --data-dir is the directory you wish to use.
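
The DSN uses standard Postgres connection-URL syntax, so it can also point at a non-default user, host, or port; the credentials and host below are placeholders:

postgres://user:password@localhost:5432/iox_shared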

Note that when the server is stopped, any data that has not yet been written to Parquet files will be lost.

Compile and run

Rather than building and running the binary in target, you can also compile and run with one command:

cargo run -- run all-in-one

Release mode for performance testing

To compile for performance testing, build in release mode then use the binary in target/release:

cargo build --release
./target/release/influxdb_iox run all-in-one

You can also compile and run in release mode with one step:

cargo run --release -- run all-in-one

Running tests

You can run tests using:

cargo test --all
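
To iterate on a single workspace crate, you can scope the run with Cargo's -p flag, for example:

cargo test -p influxdb_iox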

See docs/testing.md for more information.

Write and read data

Data can be written to InfluxDB IOx by sending line protocol format to the /api/v2/write endpoint or using the CLI.

For example, assuming you are running a local all-in-one server as described above, this command will send the data in the test_fixtures/lineproto/metrics.lp file to the company_sensors database.

./target/debug/influxdb_iox -vv write company_sensors test_fixtures/lineproto/metrics.lp --host http://localhost:8081

Note that --host http://localhost:8081 is required because the router and query services run on different gRPC ports and the CLI defaults to the querier's port, 8082.

To query the data stored in the company_sensors database:

./target/debug/influxdb_iox query company_sensors "SELECT * FROM cpu LIMIT 10"

Use the CLI

InfluxDB IOx is packaged as a binary with commands to start the IOx server, as well as a CLI interface for interacting with and configuring such servers.

The CLI itself is documented via built-in help, which you can access by running influxdb_iox --help.

Use InfluxDB 2.0 API compatibility

InfluxDB IOx allows seamless interoperability with InfluxDB 2.0.

Where InfluxDB 2.0 stores data in organizations and buckets, InfluxDB IOx stores data in namespaces. IOx maps organization and bucket pairs to namespaces with the two parts separated by an underscore (_): organization_bucket.

Here's an example that uses curl to send data into the company_sensors namespace via the InfluxDB 2.0 /api/v2/write API:

curl -v "http://127.0.0.1:8080/api/v2/write?org=company&bucket=sensors" --data-binary @test_fixtures/lineproto/metrics.lp

Run health checks

The HTTP API exposes a healthcheck endpoint at /health:

$ curl http://127.0.0.1:8080/health
OK

The gRPC API implements the gRPC Health Checking Protocol. This can be tested with grpc-health-probe:

$ grpc_health_probe -addr 127.0.0.1:8082 -service influxdata.platform.storage.Storage
status: SERVING

Manually call the gRPC API

To manually invoke one of the gRPC APIs, use a gRPC CLI client such as grpcurl.

Tonic (the gRPC server library we're using) currently doesn't have support for gRPC reflection, hence you must pass all .proto files to your client. You can find a convenient grpcurl wrapper that does that in the scripts directory:

$ ./scripts/grpcurl -plaintext 127.0.0.1:8082 list
grpc.health.v1.Health
influxdata.iox.management.v1.ManagementService
influxdata.platform.storage.IOxTesting
influxdata.platform.storage.Storage
$ ./scripts/grpcurl -plaintext 127.0.0.1:8082 influxdata.iox.management.v1.ManagementService.ListDatabases
{
  "names": [
    "foobar_weather"
  ]
}

Contributing

We welcome community contributions from anyone!

Read our Contributing Guide for instructions on how to run tests and how to make your first contribution.

Architecture and Technical Documentation

There are a variety of technical documents describing various parts of IOx in the docs directory.