Dom Dwyer 43300878bc fix(pb): encoding entirely NULL columns (#4272)
This commit changes the protobuf record batch encoding to skip entirely
NULL columns when serialising. This prevents the deserialisation from
erroring due to a column type inference failure.

Prior to this commit, when the system was presented with a record batch
such as this:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-01-01 | 1    | NULL |
            | 1970-07-05 | NULL | 1    |

This batch would be partitioned by YMD into two separate partitions:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-01-01 | 1    | NULL |

and:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-07-05 | NULL | 1    |

Both partitions would contain an entirely NULL column.

Both of these partitioned record batches would be successfully encoded,
but decoding a partition would fail: the entirely NULL column contains no
values in its serialised form, so no column type can be inferred for it.
On the wire, such a column looks like:

            Column {
                column_name: "B",
                semantic_type: Field,
                values: Some(
                    Values {
                        i64_values: [],
                        f64_values: [],
                        u64_values: [],
                        string_values: [],
                        bool_values: [],
                        bytes_values: [],
                        packed_string_values: None,
                        interned_string_values: None,
                    },
                ),
                null_mask: [
                    1,
                ],
            },

In a column that is not entirely NULL, one of the "Values" fields would
be non-empty, and the decoder would use this to infer the type of the
column.

Because we have chosen to not differentiate between "NULL" and "empty"
in our proto encoding, the decoder cannot infer which field within the
"Values" struct the column belongs to - all are valid, but empty.

This commit prevents this type inference failure by skipping any columns
that are entirely NULL during serialisation, preventing the deserialiser
from having to process columns with ambiguous types.
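The mechanics above can be sketched in a self-contained way. This is a hedged illustration, not the actual mutable_batch_pb code: the struct and the encode/infer_type helpers are simplified stand-ins, but they show why an all-NULL column is undecodable and how skipping it avoids the problem.

```rust
// Simplified stand-ins for the wire types; the real encoder works on the
// protobuf-generated structs in mutable_batch_pb.
#[derive(Default)]
struct Values {
    i64_values: Vec<i64>,
    f64_values: Vec<f64>,
    string_values: Vec<String>,
}

#[derive(Debug, PartialEq)]
enum ColumnType {
    I64,
    F64,
    String,
}

// Decoder side: the only clue to a column's type is which Values vector is
// populated. When every vector is empty the type is ambiguous.
fn infer_type(v: &Values) -> Option<ColumnType> {
    if !v.i64_values.is_empty() {
        Some(ColumnType::I64)
    } else if !v.f64_values.is_empty() {
        Some(ColumnType::F64)
    } else if !v.string_values.is_empty() {
        Some(ColumnType::String)
    } else {
        None // entirely NULL column: nothing to infer from
    }
}

// Encoder side, in the spirit of this commit: skip any column whose null
// mask marks every row NULL, so the ambiguous case never hits the wire.
fn encode<'a>(columns: &'a [(&'a str, Values, Vec<bool>)]) -> Vec<&'a str> {
    columns
        .iter()
        .filter(|(_, _, null_mask)| !null_mask.iter().all(|&is_null| is_null))
        .map(|(name, _, _)| *name)
        .collect()
}

fn main() {
    // The first partition from the example: column B is entirely NULL.
    let columns = vec![
        ("time", Values { i64_values: vec![0], ..Default::default() }, vec![false]),
        ("A", Values { i64_values: vec![1], ..Default::default() }, vec![false]),
        ("B", Values::default(), vec![true]),
    ];
    assert_eq!(infer_type(&columns[2].1), None); // decoding B would fail
    assert_eq!(encode(&columns), vec!["time", "A"]); // so B is skipped
}
```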
2022-05-18 13:33:26 +01:00

README.md

InfluxDB IOx

InfluxDB IOx (short for Iron Oxide, pronounced InfluxDB "eye-ox") is the future core of InfluxDB, an open source time series database. The name is in homage to Rust, the language this project is written in. It is built using Apache Arrow and DataFusion among other things. InfluxDB IOx aims to be:

  • The future core of InfluxDB; supporting industry standard SQL, InfluxQL, and Flux
  • An in-memory columnar store using object storage for persistence
  • A fast analytic database for structured and semi-structured events (like logs and tracing data)
  • A system for defining replication (synchronous, asynchronous, push and pull) and partitioning rules for InfluxDB time series data and tabular analytics data
  • A system supporting real-time subscriptions
  • A processor that can transform and do arbitrary computation on time series and event data as it arrives
  • An analytic database built for data science, supporting Apache Arrow Flight for fast data transfer

Persistence is through Parquet files in object storage. It is a design goal to support integration with other big data systems through object storage and Parquet specifically.

For more details on the motivation behind the project and some of our goals, read through the InfluxDB IOx announcement blog post. If you prefer a video that covers a little bit of InfluxDB history and high level goals for InfluxDB IOx you can watch Paul Dix's announcement talk from InfluxDays NA 2020. For more details on the motivation behind the selection of Apache Arrow, Flight and Parquet, read this.

Supported Platforms

As we commit to support platforms they will be added here. Our current goal is that the following platforms will be able to run InfluxDB IOx.

  • Linux x86 (x86_64-unknown-linux-gnu)
  • Darwin x86 (x86_64-apple-darwin)
  • Darwin arm (aarch64-apple-darwin)

This list is very unlikely to be complete; we will add more platforms based on our ability to support them effectively.

Project Status

This project is very early and in active development. It isn't yet ready for testing, which is why we're not producing builds or documentation yet.

If you would like to contact the InfluxDB IOx developers, join the InfluxData Community Slack and look for the #influxdb_iox channel.

We're also hosting monthly tech talks and community office hours on the project on the 2nd Wednesday of the month at 8:30 AM Pacific Time.

Get started

  1. Install dependencies
  2. Clone the repository
  3. Configure the server
  4. Compiling and Running (You can also build a Docker image to run InfluxDB IOx.)
  5. Write and read data
  6. Use the CLI
  7. Use InfluxDB 2.0 API compatibility
  8. Run health checks
  9. Manually call the gRPC API

Install dependencies

To compile and run InfluxDB IOx from source, you'll need the following:

Rust

The easiest way to install Rust is to use rustup, a Rust version manager. Follow the instructions for your operating system on the rustup site.

rustup will check the rust-toolchain file and automatically install and use the correct Rust version for you.

Clang

Building InfluxDB IOx requires clang (for the croaring dependency). Check for clang by running clang --version.

clang --version
Apple clang version 12.0.0 (clang-1200.0.32.27)
Target: x86_64-apple-darwin20.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

If clang is not already present, it can typically be installed with the system package manager.

lld

If you are building InfluxDB IOx on Linux then you will need to ensure you have installed the lld LLVM linker. Check if you have already installed it by running lld -version.

lld -version
lld is a generic driver.
Invoke ld.lld (Unix), ld64.lld (macOS), lld-link (Windows), wasm-ld (WebAssembly) instead

If lld is not already present, it can typically be installed with the system package manager.

protoc

If you are building InfluxDB IOx on Apple Silicon you may find that the build fails with an error containing:

failed to invoke protoc (hint: https://docs.rs/prost-build/#sourcing-protoc): Bad CPU type in executable (os error 86)

Prost bundles a protoc binary, which it uses if it cannot find a system alternative. In versions prior to 0.9, the bundled binary is an x86 one, which fails with the above error if Rosetta is not installed on your system.

You can install Rosetta by running:

softwareupdate --install-rosetta

An alternative to installing Rosetta is to point Prost at an arm build of protoc. First, install protoc, e.g., via Homebrew:

brew update && brew install protobuf

Then set the following environment variables to point Prost at your system install:

PROTOC=/opt/homebrew/bin/protoc
PROTOC_INCLUDE=/opt/homebrew/include

IOx should then build correctly.
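The precedence those variables establish can be pictured with a short sketch. This is an assumption-laden illustration, not prost-build's real code: resolve_protoc is a hypothetical helper showing that an explicit PROTOC path wins over whatever protoc the build would otherwise use.

```rust
use std::path::PathBuf;

// Hypothetical helper mirroring the precedence PROTOC establishes: an
// explicit override wins; otherwise fall back to the default protoc
// (on PATH, or the binary bundled with prost-build).
fn resolve_protoc(protoc_env: Option<&str>) -> PathBuf {
    match protoc_env {
        Some(path) => PathBuf::from(path),
        None => PathBuf::from("protoc"),
    }
}

fn main() {
    // In a real build script the argument would come from
    // std::env::var("PROTOC").ok().
    assert_eq!(
        resolve_protoc(Some("/opt/homebrew/bin/protoc")),
        PathBuf::from("/opt/homebrew/bin/protoc")
    );
    assert_eq!(resolve_protoc(None), PathBuf::from("protoc"));
}
```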

Postgres

The catalog is stored in Postgres (unless you're running in ephemeral mode). Postgres can be installed via Homebrew:

brew install postgres

then follow the instructions for starting Postgres either at system startup or on-demand.

Clone the repository

Clone this repository using git. If you use the git command line, this looks like:

git clone git@github.com:influxdata/influxdb_iox.git

Then change into the directory containing the code:

cd influxdb_iox

The rest of these instructions assume you are in this directory.

Configure the server

InfluxDB IOx can be configured using either environment variables or a configuration file, making it suitable for deployment in containerized environments.

For a list of configuration options, run influxdb_iox --help. For configuration options for specific subcommands, run influxdb_iox <subcommand> --help.

To use a configuration file, use a .env file in the working directory. See the provided example configuration file. To use the example configuration file, run:

cp docs/env.example .env

Compiling and Running

InfluxDB IOx is built using Cargo, Rust's package manager and build tool.

To compile for development, run:

cargo build

This will create a binary at target/debug/influxdb_iox.

Ephemeral mode

To start InfluxDB IOx and store data in memory, after you've compiled for development, run:

./target/debug/influxdb_iox run all-in-one

By default the server will start an HTTP server on port 8080 and a gRPC server on port 8082.

Local persistence mode

To start InfluxDB IOx with the catalog stored in Postgres and data stored in the local filesystem (so data persists across restarts), after you've compiled for development, run:

./target/debug/influxdb_iox run all-in-one --catalog-dsn postgres:///iox_shared --data-dir=~/iox_data

where --catalog-dsn is a connection URL to the Postgres database you wish to use, and --data-dir is the directory you wish to use.

Loading data in local mode

Because the services run on different gRPC ports, and because the CLI uses the gRPC write API, influxdb_iox database commands require --host to be set to the correct gRPC address:

influxdb_iox -vv database write my_db test_fixtures/lineproto/metrics.lp --host http://localhost:8081

Compile and run

Rather than building and running the binary in target, you can also compile and run with one command:

cargo run -- run all-in-one

Release mode for performance testing

To compile for performance testing, build in release mode then use the binary in target/release:

cargo build --release
./target/release/influxdb_iox run all-in-one

You can also compile and run in release mode with one step:

cargo run --release -- run all-in-one

Running tests

To run all available tests in debug mode, you may want to raise the minimum stack size to avoid a currently known stack overflow issue:

RUST_MIN_STACK=10485760 cargo test --all
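For context, RUST_MIN_STACK=10485760 requests a 10 MiB minimum stack for threads the Rust runtime spawns. Purely as an illustration of what the variable does, the per-thread equivalent in code is std::thread::Builder::stack_size:

```rust
use std::thread;

// Deep recursion of the kind that can overflow a small default stack.
fn depth(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + depth(n - 1) }
}

fn main() {
    // 10_485_760 bytes = 10 MiB, the same value RUST_MIN_STACK requests.
    let result = thread::Builder::new()
        .stack_size(10_485_760)
        .spawn(|| depth(50_000))
        .expect("spawn failed")
        .join()
        .expect("thread panicked");
    assert_eq!(result, 50_000);
}
```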

Build a Docker image (optional)

Building the Docker image requires:

  • Docker 18.09+
  • BuildKit

To enable BuildKit by default, set { "features": { "buildkit": true } } in the Docker engine configuration, or run docker build with DOCKER_BUILDKIT=1 set in the environment.

To build the Docker image:

DOCKER_BUILDKIT=1 docker build .

Write and read data

Data can be stored in InfluxDB IOx by sending it in line protocol format to the /api/v2/write endpoint or by using the CLI. For example, assuming the server is running on the default port, this command sends the data in the test_fixtures/lineproto/metrics.lp file in this repository into the company_sensors database:

influxdb_iox database write company_sensors test_fixtures/lineproto/metrics.lp

To query data stored in the company_sensors database:

influxdb_iox database query company_sensors "SELECT * FROM cpu LIMIT 10"

Use the CLI

InfluxDB IOx is packaged as a binary with commands to start the IOx server, as well as a CLI interface for interacting with and configuring such servers.

The CLI itself is documented via built-in help, which you can access by running influxdb_iox --help.

Use InfluxDB 2.0 API compatibility

InfluxDB IOx allows seamless interoperability with InfluxDB 2.0.

Where InfluxDB 2.0 stores data in organizations and buckets, InfluxDB IOx stores data in named databases. IOx maps organization and bucket pairs to databases named with the two parts separated by an underscore (_): organization_bucket.

Here's an example using curl to send data into the company_sensors database using the InfluxDB 2.0 /api/v2/write API:

curl -v "http://127.0.0.1:8080/api/v2/write?org=company&bucket=sensors" --data-binary @test_fixtures/lineproto/metrics.lp
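The name mapping is mechanical enough to express directly. Here is a hedged one-liner (a hypothetical helper, not the actual IOx function) matching the org=company, bucket=sensors example above:

```rust
// org + '_' + bucket gives the IOx database name, per the mapping above.
fn database_name(org: &str, bucket: &str) -> String {
    format!("{}_{}", org, bucket)
}

fn main() {
    // org=company, bucket=sensors from the curl example.
    assert_eq!(database_name("company", "sensors"), "company_sensors");
}
```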

Run health checks

The HTTP API exposes a healthcheck endpoint at /health:

$ curl http://127.0.0.1:8080/health
OK

The gRPC API implements the gRPC Health Checking Protocol. This can be tested with grpc-health-probe:

$ grpc_health_probe -addr 127.0.0.1:8082 -service influxdata.platform.storage.Storage
status: SERVING

Manually call the gRPC API

To manually invoke one of the gRPC APIs, use a gRPC CLI client such as grpcurl.

Tonic (the gRPC server library we're using) currently doesn't have support for gRPC reflection, hence you must pass all .proto files to your client. You can find a convenient grpcurl wrapper that does that in the scripts directory:

$ ./scripts/grpcurl -plaintext 127.0.0.1:8082 list
grpc.health.v1.Health
influxdata.iox.management.v1.ManagementService
influxdata.platform.storage.IOxTesting
influxdata.platform.storage.Storage
$ ./scripts/grpcurl -plaintext 127.0.0.1:8082 influxdata.iox.management.v1.ManagementService.ListDatabases
{
  "names": [
    "foobar_weather"
  ]
}

Contributing

We welcome community contributions from anyone!

Read our Contributing Guide for instructions on how to run tests and how to make your first contribution.

Architecture and Technical Documentation

There are a variety of technical documents describing various parts of IOx in the docs directory.