Commit Graph

293 Commits (07772e8d2254fb734e7f826298559658a4964015)

Author SHA1 Message Date
Carol (Nichols || Goulding) 068096e7e1
fix: Rename data_types2 to data_types 2022-05-06 14:45:39 -04:00
Carol (Nichols || Goulding) 0541c6e40f
fix: Remove data_types crate where it's no longer used 2022-05-06 14:45:39 -04:00
Carol (Nichols || Goulding) 44209faa8e
fix: Move write buffer data types to write_buffer crate 2022-05-06 14:45:38 -04:00
Carol (Nichols || Goulding) 236edb9181
fix: Move Sequence type to data_types2 2022-05-06 14:45:38 -04:00
Carol (Nichols || Goulding) afdff2b1db
fix: Move DatabaseName to data_types2 2022-05-06 14:45:37 -04:00
Carol (Nichols || Goulding) 1ea4a40b1f
fix: Move NonEmptyString to data_types2 2022-05-06 14:45:37 -04:00
Carol (Nichols || Goulding) 3ab0788a94
fix: Move DeletePredicate types to data_types2 2022-05-06 14:45:37 -04:00
dependabot[bot] 420c306caa
chore(deps): Bump tokio from 1.17.0 to 1.18.0 (#4453)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.17.0...tokio-1.18.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-28 08:21:17 +00:00
二手掉包工程师 4b47d723b1
refactor: Rename time to iox_time (#4416)
Signed-off-by: hi-rustin <rustin.liu@gmail.com>

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-04-26 00:19:59 +00:00
Andrew Lamb 73bed810da
chore: Update arrow, arrow-flight, parquet, tonic, prost, etc (#4357)
* chore: Update datafusion

* chore: Update arrow/arrow-flight/parquet to 12

* chore: update datafusion correctly

* chore: Update prost, tonic, and dependents

* fix: Fixup some api changes

* fix: Update test output in db

* fix: Update test output in parquet_file

* fix: remove old pbjson types

* fix: Add "--experimental_allow_proto3_optional" flag

* chore: Run cargo hakari tasks

* fix: compile error

* chore: Update heappy

Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-04-20 11:12:17 +00:00
dependabot[bot] 694ffd2238
chore(deps): Bump httparse from 1.6.0 to 1.7.0 (#4277)
Bumps [httparse](https://github.com/seanmonstar/httparse) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/seanmonstar/httparse/releases)
- [Commits](https://github.com/seanmonstar/httparse/compare/v1.6.0...v1.7.0)

---
updated-dependencies:
- dependency-name: httparse
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-12 09:37:08 +00:00
Dom Dwyer 506cdebf38 refactor: remove manual Debug impl
Derive the debug impl so it prints all the fields (specifically the
"number of sequencers configured" is pretty helpful in a test).

Manual impls drift over time and are more effort than the derive!
2022-04-05 12:02:07 +01:00
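A minimal sketch of the derive-vs-manual `Debug` point above; the struct and field names are hypothetical, not the actual IOx config type:

```rust
// A manual `Debug` impl can silently omit fields as the struct evolves, while
// `#[derive(Debug)]` always prints every field, including the sequencer count
// that the commit above calls out as useful in tests.
#[derive(Debug)]
struct WriteBufferConfig {
    n_sequencers: u32,
    topic: String,
}

fn main() {
    let cfg = WriteBufferConfig {
        n_sequencers: 4,
        topic: "iox-shared".to_string(),
    };
    // Prints e.g. `WriteBufferConfig { n_sequencers: 4, topic: "iox-shared" }`
    println!("{:?}", cfg);
}
```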
Andrew Lamb a384448b92
refactor: rename Sequence::id and Sequence::number field names (#4190)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-03-31 15:17:58 +00:00
dependabot[bot] 17af5fcbd1
chore(deps): Bump tokio-util from 0.7.0 to 0.7.1 (#4154)
* chore(deps): Bump tokio-util from 0.7.0 to 0.7.1

Bumps [tokio-util](https://github.com/tokio-rs/tokio) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-util-0.7.0...tokio-util-0.7.1)

---
updated-dependencies:
- dependency-name: tokio-util
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* chore: Run cargo hakari tasks

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-03-29 08:39:02 +00:00
Dom Dwyer ce08292b89 fix: maybe_auto_create_directories infinite loop
Fixes symlinking in maybe_auto_create_directories() - previously it
would create a symlink specifying the target path relative to the
working dir, and not relative to the symlink.

If the working dir != the path in the WriteBufferCreationConfig,
subsequent calls would get stuck in an infinite loop attempting to
resolve the bad symlink.
2022-03-25 10:34:16 +00:00
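A hedged sketch of the bug class described above (not the actual IOx code; paths are illustrative): a symlink target is resolved relative to the directory containing the link, not the process working directory.

```rust
use std::os::unix::fs::symlink;
use std::path::Path;

fn link_active_segment(root: &Path) -> std::io::Result<()> {
    std::fs::create_dir_all(root.join("segment_0"))?;
    let link = root.join("active");
    let _ = std::fs::remove_file(&link); // ignore error if the link doesn't exist yet

    // Buggy variant: `root.join("segment_0")` is relative to the *working dir*;
    // resolved relative to the link's own directory it points at
    // `<root>/<root>/segment_0`, which doesn't exist, so later lookups fail.
    //
    // symlink(root.join("segment_0"), &link)?;

    // Fixed variant: express the target relative to the symlink's location.
    symlink("segment_0", &link)
}

fn main() -> std::io::Result<()> {
    link_active_segment(Path::new("wb_demo"))
}
```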
Andrew Lamb 9b3f946c10
feat: all in 1 IOx NG mode (#3965)
* feat: Add all_in_one mode

* fix: doc

* docs: fix truncated docs

* refactor: correctly identify PG connections

* refactor: resolve failed merge

Co-authored-by: Dom Dwyer <dom@itsallbroken.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-03-15 16:28:37 +00:00
Raphael Taylor-Davies 7b3767628f
chore: reduce router log verbosity (#3805) (#3960)
* chore: reduce router log verbosity (#3805)

* fix: lint

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-03-07 11:38:51 +00:00
Raphael Taylor-Davies 02a2cfde54
feat: update rskafka (#3805) (#3910) 2022-03-03 10:53:47 +00:00
Raphael Taylor-Davies 44cfdc7aca
feat: update to latest rskafka logging (#3805) (#3896) 2022-03-02 11:05:03 +00:00
Raphael Taylor-Davies 143311f63f
feat: additional write buffer logging (#3805) (#3887)
* feat: additional write buffer logging (#3805)

* fix: assertion direction

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-03-01 17:08:57 +00:00
Raphael Taylor-Davies 792241c89d
feat: harden write buffer aggregator (#3805) (#3877)
* feat: harden write buffer aggregator (#3805)

* chore: more logs

* fix: build

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-28 18:31:24 +00:00
Marco Neumann 33851be3a5
chore: upgrade Rust to 1.59 (#3875)
Mostly a few new clippy lints around `flat_map`, `and_then`, and
"underscore locks" (!!!):
https://rust-lang.github.io/rust-clippy/master/index.html#let_underscore_lock

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-28 15:14:19 +00:00
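A minimal illustration of the `let_underscore_lock` lint mentioned above: binding a lock guard to `_` drops it immediately, so the following statements run without the lock held.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0_i32);

    // What the lint catches: `let _ = ...` drops the guard right away, so the
    // lock is NOT held afterwards.
    // let _ = counter.lock().unwrap();

    // Correct: keep the guard bound so the lock is held until it is dropped.
    let mut guard = counter.lock().unwrap();
    *guard += 1;
    drop(guard);
}
```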
dependabot[bot] ad3868ed7c
chore(deps): Bump tokio from 1.16.1 to 1.17.0 (#3814)
* chore(deps): Bump tokio from 1.16.1 to 1.17.0

Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.16.1 to 1.17.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.16.1...tokio-1.17.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* build: update workspace-hack

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Dom Dwyer <dom@itsallbroken.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-22 16:27:43 +00:00
Marco Neumann b1215805a8
fix: upgrade rskafka for bugfixes and race-free start offsets (#3775)
This fixes the concerns that were brought up during the review of #3748.
2022-02-17 11:22:25 +00:00
Marco Neumann 44ee0166a0
fix: start Kafka write buffer stream at "earliest" offset, not at "0" (#3748) 2022-02-15 13:36:59 +00:00
dependabot[bot] 89105ccfab
chore(deps): Bump tokio-util from 0.6.9 to 0.7.0 (#3743)
Bumps [tokio-util](https://github.com/tokio-rs/tokio) from 0.6.9 to 0.7.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/commits)

---
updated-dependencies:
- dependency-name: tokio-util
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-15 11:33:41 +00:00
Marco Neumann 4db27eec68
fix: defined behaviour when seeking to an unknown sequence number (#3745)
* chore: upgrade rskafka

* refactor: less cloning

* fix: defined behaviour when seeking to an unknown sequence number

The new, defined behavior is: "return an error once and then end the
stream".

Co-authored-by: Edd Robinson <me@edd.io>

Co-authored-by: Edd Robinson <me@edd.io>
2022-02-15 11:08:01 +00:00
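A hedged sketch of the "return an error once and then end the stream" behaviour described above, built with the `futures` crate; the error type and offset are hypothetical, not the IOx implementation.

```rust
use futures::stream::{self, Stream, StreamExt};

#[derive(Debug)]
struct WriteBufferError(String); // hypothetical error type

/// Stream returned after seeking to a sequence number the sequencer doesn't know.
fn unknown_offset_stream(offset: u64) -> impl Stream<Item = Result<u64, WriteBufferError>> {
    stream::once(async move {
        Err::<u64, _>(WriteBufferError(format!("unknown sequence number: {offset}")))
    })
}

#[tokio::main]
async fn main() {
    let mut stream = Box::pin(unknown_offset_stream(42));
    assert!(matches!(stream.next().await, Some(Err(_)))); // the error is reported once
    assert!(stream.next().await.is_none()); // then the stream ends
}
```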
Marco Neumann cf5a5b77cb
fix: use encoded data size estimation instead of mutable batch (#3734)
For sparse data the PB-encoded data (our Kafka wire format) is way
smaller than the MutableBatch (up to a factor of 20), so let's use the
encoded size to estimate the batch size during batching.
2022-02-14 16:58:38 +00:00
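A hypothetical sketch of the estimation choice above: size batches by the encoded wire-format length rather than by the (much larger, for sparse data) in-memory representation.

```rust
struct Op {
    in_memory_size: usize, // e.g. the MutableBatch allocation size
    encoded: Vec<u8>,      // e.g. the protobuf payload that actually goes to Kafka
}

/// Would adding `op` to the current batch exceed the producer's max message size?
fn would_overflow(current_encoded_size: usize, op: &Op, max_batch_size: usize) -> bool {
    // Use the encoded size, not `op.in_memory_size`, as the estimate.
    current_encoded_size + op.encoded.len() > max_batch_size
}

fn main() {
    let op = Op { in_memory_size: 20_000, encoded: vec![0; 1_000] };
    assert!(!would_overflow(512_000, &op, 1_000_000));
    println!("in-memory: {} B, encoded: {} B", op.in_memory_size, op.encoded.len());
}
```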
Marco Neumann 5aada6beb8
fix: hard-code prod kafka config (#3724)
Prod has a larger max msg. size for Kafka (10MB instead of 1MB), but
currently we're unable to wire all the write buffer configs through. As
a quick fix, let's hard-code the config. This however breaks the write
buffer when running under default Kafka (1MB), so we should revert this
(tracked under #3723).
2022-02-11 11:39:01 +00:00
dependabot[bot] ad60dc6949
chore(deps): bump httparse from 1.5.1 to 1.6.0 (#3708)
Bumps [httparse](https://github.com/seanmonstar/httparse) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/seanmonstar/httparse/releases)
- [Commits](https://github.com/seanmonstar/httparse/compare/v1.5.1...v1.6.0)

---
updated-dependencies:
- dependency-name: httparse
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-10 09:54:42 +00:00
Marco Neumann 70881270c6
fix: increase default `producer_max_batch_size` (#3686)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-09 12:09:58 +00:00
kodiakhq[bot] 35f60945e1
Merge branch 'main' into crepererum/wb_tracing_fixes 2022-02-08 16:44:30 +00:00
Raphael Taylor-Davies ca331503a5
feat: add WriteBufferErrorKind (#3664)
* feat: add WriteBufferErrorKind

* fix: test_offset_after_broken_message

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-08 15:34:05 +00:00
Marco Neumann fb5cfcdf23 fix: do not create aggregation span w/ invalid parent
When creating a new aggregation span, you MUST NOT just create a new
random span context and put its child span into a span recorder, because
then only the child will be reported to the trace collector. Instead,
create a new root span w/o any parent directly.

This makes jaeger slightly more happy and it won't complain about broken
spans anymore.
2022-02-08 15:56:59 +01:00
Marco Neumann d8cc4c9b1e fix: do not emit unnecessary spans during write aggregation 2022-02-08 15:56:59 +01:00
Marco Neumann d9cc9f5a2a
feat: expose write buffer connection config via CLI (#3651)
* feat: improve rskafka config error messages

* feat: expose write buffer connection config via CLI
2022-02-07 16:24:28 +00:00
Marco Neumann e2db1df11f
refactor: improve write buffer consumer interface (#3631)
* refactor: improve write buffer consumer interface

The change looks huge but is actually rather simple. To
understand the interface change, let me first explain what we want:

- be able to fetch watermarks for any sequencer
- have streams:
  - each streams tracks a sequencer and has an offset state (no read
    multiplexing)
  - we can seek a stream
  - seeking and streaming cannot be done at the same time (that would be
    weird and would likely lead to many bugs both in the write buffer and
    in the user code)
- ideally we don't need to create streams of all sequencers but can
  choose a subset

Before this change we had one mutable consumer struct where you can get
all streams and watermark functions (this mutable-borrows the consumer)
or you can seek a single stream (this also mutable-borrows the
consumer). This is a bit weird for multiple reasons:

- you cannot seek a single stream without dropping all of them
- the mutable-borrow construct makes it really difficult to pass the
  streams into separate threads
- the consumer is boxed (because it's mutable) which makes it more
  difficult to handle in a large-scale application

What this change does is the following (a rough sketch of the resulting
interface follows this commit entry):

- you have an immutable consumer (similar to the producer)
- the consumer offers the following methods:
  - get the set of sequencer IDs
  - get watermark for any sequencer
  - get a stream handler (see next point) for any sequencer
- the stream handler captures the stream state (offset) and provides you
  a standard `Stream<_>` interface as well as a seek function.
  Mutable-borrows ensure that you cannot use both at the same time.

The stream handler provides you the stream via `handler.stream()`. It
doesn't implement `Stream<_>` itself because of the way boxing, dynamic
dispatch, and pinning interact (i.e. I couldn't get it to work
without the indirection).

As a bonus point (which we don't use however) you can now create
multiple streams for the same sequencer and they all have their own
offset.

* fix: review comments

Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>

Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-07 12:24:17 +00:00
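A rough sketch of the interface shape this commit describes; names and signatures are approximations (using `async_trait` and `futures`), not the exact IOx traits.

```rust
use std::collections::BTreeSet;

use async_trait::async_trait;
use futures::stream::BoxStream;

type SequencerId = u32;
type WriteBufferError = Box<dyn std::error::Error + Send + Sync>;

/// Placeholder for the decoded operation type carried by the buffer.
struct DmlOperation;

#[async_trait]
trait WriteBufferReading: Send + Sync {
    /// All sequencers (e.g. Kafka partitions) served by this consumer.
    fn sequencer_ids(&self) -> BTreeSet<SequencerId>;

    /// Latest offset for a single sequencer.
    async fn fetch_high_watermark(
        &self,
        sequencer_id: SequencerId,
    ) -> Result<u64, WriteBufferError>;

    /// Create an independent, offset-tracking handler for one sequencer.
    async fn stream_handler(
        &self,
        sequencer_id: SequencerId,
    ) -> Result<Box<dyn WriteBufferStreamHandler>, WriteBufferError>;
}

#[async_trait]
trait WriteBufferStreamHandler: Send {
    /// Borrow the handler as a standard `Stream` of operations.
    fn stream(&mut self) -> BoxStream<'_, Result<DmlOperation, WriteBufferError>>;

    /// Reposition the handler; cannot be called while `stream()` is borrowed.
    async fn seek(&mut self, sequence_number: u64) -> Result<(), WriteBufferError>;
}

fn main() {}
```

Because the consumer itself is immutable, multiple handlers (each with its own offset) can be created and moved into separate tasks, which is exactly the pain point the old mutable-borrowing design had.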
Marco Neumann 50cff27b01
chore: remove rdkafka dependency (#3625)
All features are now covered by rskafka. This also removes the need to
specify a server ID for write buffer consumers. This was only used for
rdkafka since there we needed to specify a consumer group, even though
we did not use any transactions.
2022-02-03 13:33:56 +00:00
Marco Neumann bc4b7f8a5b
test: ensure that rskafka and rdkafka work together (#3624)
* chore: upgrade rskafka + enable snappy support

* test: ensure that rskafka and rdkafka work together

Before removing rdkafka ensure that:

- rskafka can consume existing messages produced by rdkafka so we do not
  need to drain existing topics
- rdkafka can consume new messages produced by rskafka so we can roll
  back

I ran the whole `write_buffer` test suite (including the newly added
tests) using Apache Kafka as well as Redpanda.

* test: ensure we handle consumer offset in error case correctly

* docs: explain test setup

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-03 12:52:42 +00:00
Marco Neumann 9567acd621
feat: expose all relevant configs for rskafka write buffers (#3599)
* feat: expose all relevant configs for rskafka write buffers

* refactor: `CreationConfig` => `TopicCreationConfig`
2022-02-02 09:35:54 +00:00
Marco Neumann 36a7d9b8f3
feat: flush interface for write buffer producers (#3595)
Closes #3504.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-02-01 15:16:23 +00:00
Marco Neumann 22778a3a80
chore: upgrade rskafka and parking_lot (#3592) 2022-02-01 11:50:42 +00:00
Marco Neumann b326b62b44
feat: buffer writes when writing to RSKafka (#3520) 2022-02-01 10:07:52 +00:00
Raphael Taylor-Davies 4101d16f71
chore: feature flag consistency (#3574)
* chore: feature flag consistency

* chore: add aarch64-apple-darwin to hakari

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-01-28 16:38:59 +00:00
Paul Dix 16d584b2ff
feat: Add db_name/namespace to DmlWrite and DmlDelete (#3531)
* feat: Add db_name/namespace to DmlWrite and DmlDelete

This is required for the new ingester to be able to work with the write buffer. The protobuf that gets serialized over Kafka already includes the database name; it just wasn't getting carried through to the marshaled Dml operation.

* fix: database != namespace, propagation through write buffer

Co-authored-by: Marco Neumann <marco@crepererum.net>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-01-27 14:12:20 +00:00
Marco Neumann 2928254c0f
fix: test logging (#3536)
- Use a more standard way to set up the tracing subsystem (as described
  in tracing-subscriber docs)
- Also capture content from `log` crate
- Play nice w/ Rust's libtest message capture

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-01-26 10:28:51 +00:00
Dom b846ead320
feat(router2): shard writes/deletes into write buffer (#3499)
* feat: Sequencer wrapper

This type wraps an underlying WriteBufferWriter implementation, tagging
it with a sequencer ID it should use when enqueuing operations to the
buffer.

* feat: mock sharder

Implements a mock Sharder impl that returns pre-configured responses to
shard(), and captures the input to the call.

* feat: sharded write buffer

Implements sharding of ops into an underlying WriteBuffer.

Writes are sharded by some abstract Sharder impl, collated per shard to
maximise the size of each op (and therefore compression efficiency),
converted into a DML operation and then enqueued in parallel to the
underlying WriteBuffer implementation.

Deletes are modelled as being mapped to a single write buffer shard,
which is the case while we support sharding based on the table &
namespace only. Deletes will be extended to support (potentially)
multiple shards when column overrides are implemented.

* refactor: runtime write buffers

Switch from using static dispatch, to using a runtime specified
WriteBufferWriting implementation.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-01-25 15:19:48 +00:00
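A hedged sketch of the sharding flow described above (hypothetical types, not the router2 code): writes are split per table, mapped to a shard by an abstract sharder, and collated per shard before being enqueued.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::{BTreeMap, HashMap};
use std::hash::{Hash, Hasher};

type ShardId = u32;

trait Sharder {
    fn shard(&self, namespace: &str, table: &str) -> ShardId;
}

/// Simple hash-modulo sharder over (namespace, table).
struct HashSharder {
    n_shards: u32,
}

impl Sharder for HashSharder {
    fn shard(&self, namespace: &str, table: &str) -> ShardId {
        let mut h = DefaultHasher::new();
        (namespace, table).hash(&mut h);
        (h.finish() % self.n_shards as u64) as ShardId
    }
}

/// Collate per-table payloads into one batch per shard so each enqueued op is
/// as large as possible (better compression, fewer messages).
fn collate<'a>(
    sharder: &dyn Sharder,
    namespace: &str,
    tables: impl IntoIterator<Item = (&'a str, Vec<u8>)>,
) -> HashMap<ShardId, BTreeMap<&'a str, Vec<u8>>> {
    let mut out: HashMap<ShardId, BTreeMap<&'a str, Vec<u8>>> = HashMap::new();
    for (table, payload) in tables {
        let shard = sharder.shard(namespace, table);
        out.entry(shard).or_default().insert(table, payload);
    }
    out
}

fn main() {
    let sharder = HashSharder { n_shards: 4 };
    let shards = collate(&sharder, "my_org_my_bucket", [("cpu", vec![1]), ("mem", vec![2])]);
    println!("{} shard(s) receive writes", shards.len());
}
```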
Marco Neumann 76dd62a6c2 feat: RSKafka-driven write buffer 2022-01-20 12:36:10 +01:00
Andrew Lamb dd23056efd
chore: update datafusion, arrow, prost, tonic, pbjson, etc (#3455)
* chore: update datafusion, arrow, prost, tonic, etc

* fix: update pprof as well

* chore: update hakari

* fix: update pbjson

* chore: update heappy

* fix: hakari

* fix: workaround https://github.com/influxdata/influxdb_iox/issues/3458

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-01-13 17:07:15 +00:00
Raphael Taylor-Davies 4731996cf4
revert: #3243 (#3370)
* revert: "Merge pull request #3243 from influxdata/crepererum/improve_kafka_client_usage"

This reverts commit 6ebec4ff71, reversing
changes made to 7684794e5c.

* fix: merge conflicts
2021-12-13 20:33:39 +00:00
Carol (Nichols || Goulding) 87d8f4a85f
fix: Return error instead of panicking if Kafka support is requested but not included
Also add some tests around this behavior.
2021-12-09 10:04:27 -05:00
Carol (Nichols || Goulding) 8c7b3966de
fix: Organize imports 2021-12-09 08:49:34 -05:00
Carol (Nichols || Goulding) 403dcae93c
feat: Put kafka write_buffer code behind a feature flag
Which is off by default. This makes rdkafka optional to minimize
build-time dependencies for users that don't plan on using a Kafka write
buffer.
2021-12-09 08:49:34 -05:00
Marco Neumann e558f4c708 fix: address review comments 2021-12-07 09:43:47 +01:00
Marco Neumann 8f098d3ca1 fix: improve Kafka error handling during topic discovery
`MetadataTopic` has an `error` attached which we should check (I'm
wondering why this isn't a proper `Result` though).
2021-12-07 09:23:03 +01:00
Carol (Nichols || Goulding) 7499eac067
fix: Disable uuid serde feature; we're not actually serializing any UUIDs
Connects to #3117.
2021-12-06 09:37:31 -05:00
Carol (Nichols || Goulding) 16d8ae5e04
fix: Match tokio features to what's actually in use in each crate
Some crates listed features they don't use; other crates were relying on
feature flags enabled by something else. I tested these changes by
disabling the workspace hack crate and testing each crate.
2021-12-06 09:37:16 -05:00
Carol (Nichols || Goulding) 02c297e850
fix: Always specify the parking_lot feature of tokio to get potential perf boost 2021-12-06 09:37:15 -05:00
Carol (Nichols || Goulding) 1a899b939e
fix: Remove redundant closures identified by clippy
https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure
2021-12-02 11:52:02 -05:00
Marco Neumann 332485d2c9 fix: use correct `bytes_read` in `DmlMeta`
- for file-based write buffers: Use headers + payload
- for Kafka-based write buffers: Use the estimation that we also use for
  other metrics
- as a side effect we can now just use `PartialEq` for more types

Fixes #3186.
2021-12-01 15:57:21 +01:00
Marco Neumann 459c14035c test: ensure that Kafka producers also generate sane metrics 2021-11-29 10:55:08 +01:00
Marco Neumann b7952c15a6 refactor: improve Kafka client usage
With the new rust-rdkafka release (merged in #3234) managing multiple
consumer streams becomes a bit easier. Also we can just reuse consumer
clients for multiple metadata requests. In total that provides:

- use only a single client connection for consumers (we had multiple
  connection attempts during startup and one client per stream)
- use only two clients for producers (sadly we need a consumer client to
  probe the partitions during startup)
- consumers no longer need to poll the stream to receive statistics
2021-11-29 10:39:09 +01:00
dependabot[bot] c729fc6a25
chore(deps): bump rdkafka from 0.27.0 to 0.28.0
Bumps [rdkafka](https://github.com/fede1024/rust-rdkafka) from 0.27.0 to 0.28.0.
- [Release notes](https://github.com/fede1024/rust-rdkafka/releases)
- [Changelog](https://github.com/fede1024/rust-rdkafka/blob/master/changelog.md)
- [Commits](https://github.com/fede1024/rust-rdkafka/compare/v0.27.0...v0.28.0)

---
updated-dependencies:
- dependency-name: rdkafka
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-29 08:18:02 +00:00
Marco Neumann 7f2e4f4342 refactor: remove write buffer direction
The direction was required when a database could read or write from/to a
write buffer. Now it is clear from the usage context of a write buffer
which of the two applications is meant (databases read, routers
write), so the direction flag is no longer required.
2021-11-26 12:38:40 +01:00
Marco Neumann edf7becd20 fix: address review comments 2021-11-24 12:09:52 +01:00
Marco Neumann f75c12351d fix: do not forget outputs of file-based write buffer
The existing channel construction could lead to cases where streams
would consume messages and put them into the channel, but then when the
stream gets dropped the messages would be gone forever. So let's move from
a channel-based implementation to directly invoking the generator future,
so this buffering doesn't occur.

Fixes #3179.
2021-11-24 11:38:41 +01:00
kodiakhq[bot] a6a0eda142
Merge branch 'main' into crepererum/issue3030 2021-11-23 08:08:34 +00:00
Carol (Nichols || Goulding) 9fd4a560f5
feat: Results of running cargo hakari manage-deps 2021-11-19 09:21:57 -05:00
Raphael Taylor-Davies e32d367e85
feat: flush delete mailbox on persist (#3126) (#3147)
* feat: flush delete mailbox on persist (#3126)

* chore: review feedback

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-11-19 09:45:29 +00:00
Marco Neumann 7c72a993a3 fix: don't retry "forever" sending Kafka messages
When a Kafka broker pod is recreated (for whatever reason) and gets a
new IP while doing so, the following happened:

1. Old broker pod gets terminated, but is still reachable via DNS and
   TCP.
2. rdkafka loses its connection, re-creates it using the old IP. The
   TCP connection can be established (this heavily depends on the K8s
   network setup), but won't be able to send any messages because the
   old broker is already shutting down / dead.
3. New broker gets created w/ new IP (but same DNS name).
4. Somewhat in parallel to step 3: rdkafka gets informed by other
   brokers that the topic lost its leader and then that the topic has
   the new leader (which has the same identity as the old one). Since
   leader changes in Kafka can also happen when brokers are totally
   healthy, it doesn't conclude that its TCP connection might be broken
   and tries to send messages to the new broker via the old TCP
   connection.
5. It takes very long (~130s on my test setup) for the old
   rdkafka->broker TCP connection to break. Since
   `message.send.max.retries` has a default of `2147483647` rdkafka will
   not give up on the application level.
6. rdkafka re-connects; while doing so it resolves the new broker IP via
   DNS and is happy.

An alternative fix that was tried: Use the `connect` rdkafka callback to
hook into the place where it would issue the UNIX `connect` call. There
we can manipulate the socket. Setting `TCP_USER_TIMEOUT` to 5000ms also
solves the issue somewhat, but might have different implications (also
it then takes around 5s to kill the connection). Since this is a more
hackish implementation and somewhat an unofficial way to configure
rdkafka, I decided against it.

Test Setup
==========

```rust
#[tokio::test]
async fn write_forever() {
    maybe_start_logging();
    let conn = maybe_skip_kafka_integration!();
    let adapter = KafkaTestAdapter::new(conn);
    let ctx = adapter.new_context(NonZeroU32::new(1).unwrap()).await;

    let writer = ctx.writing(true).await.unwrap();
    let lp = "upc user=1 100";
    let sequencer_id = set_pop_first(&mut writer.sequencer_ids()).unwrap();

    for i in 1.. {
        println!("{}", i);

        let tables = mutable_batch_lp::lines_to_batches(lp, 0).unwrap();
        let write = DmlWrite::new(tables, DmlMeta::unsequenced(None));
        let operation = DmlOperation::Write(write);
        let res = writer.store_operation(sequencer_id, &operation).await;
        dbg!(res);

        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```

Make sure to set the rdkafka `log` config to `all`. Then use KinD,
set up a 3-node Strimzi cluster, and start the test binary within the K8s
cluster. You need to start a debug container that is close enough to
your developer system (e.g. an old Debian DOES NOT work if you run
bleeding edge Arch):

```console
$(host) kubectl run -i --tty --rm debug --image=archlinux --restart=Never -n kafka -- bash
```

Then you copy the test binary over to the container using [cargo-with](https://github.com/cbourjau/cargo-with):

```console
$(host) cargo with 'kubectl cp {bin} kafka/debug:/foo' -- test -p write_buffe
```

Within the container shell that you've just created, start the
forever-running test (make sure to set `KAFKA_CONNECT` according to your
Strimzi setup!):

```console
$(container) TEST_INTEGRATION=1 KAFKA_CONNECT=my-cluster-kafka-bootstrap:9092 RUST_BACKTRACE=1 RUST_LOG=debug ./foo write_forever --nocapture
```

The test should run and tell you that it is delivering messages. It also
tells you within the debug logs which broker it sends the messages to.
Now you need to kill the broker (in my example it was `my-cluster-kafka-1`):

```console
$(host) kubectl -n kafka delete pod my-cluster-kafka-1
```

The test should now stop delivering messages and should error. Without
this patch it might take over 100s for it to recover even after the
deleted pod was re-created. With this patch it is quickly able to
deliver data again after the broker comes back online.

Fixes #3030.
2021-11-19 09:53:57 +01:00
Carol (Nichols || Goulding) a2454b542d
fix: Small cleanups in Cargo.tomls (#3160)
* fix: Add tokio rt-multi-thread feature so cargo test -p client_util compiles

* fix: Alphabetize dependencies

* fix: Add the data_types_conversions feature to get tests passing

* fix: Remove dev dependencies already listed under normal dependencies

* fix: Make sure the workspace is using the new resolver
2021-11-18 22:26:33 +00:00
Raphael Taylor-Davies 8155747735
feat: add write buffer delete encoding (#2731) (#3127)
* feat: add write buffer delete encoding (#2731)

* chore: fix doc

* chore: review feedback

* chore: review feedback

* chore: fmt

* chore: review feedback
2021-11-17 16:12:19 +00:00
Andrew Lamb b5a7bf03da
feat: Add kafka write buffer consumer metrics (#3129)
* feat: Add kafka write buffer consumer metrics

* refactor: use unwrap_or_else

* fix: Update bucket boundaries
2021-11-17 14:35:40 +00:00
Andrew Lamb d6c6e9a6c7
fix: Default kafka timeout to be shorter than gRPC timeout (60 sec --> 10 sec) (#3131)
* fix: Default kafka timeout to be shorter than gRPC timeout

* docs: fix link style
2021-11-17 12:19:53 +00:00
Marco Neumann 79929c8cf4 feat: add more Kafka metrics 2021-11-16 17:18:41 +01:00
Marco Neumann 9ee004946e fix: do not overload rdkafka w/ statistics 2021-11-16 17:18:41 +01:00
Marco Neumann e6fdd79a0f feat: emit Kafka stats as metrics instead of logs
This maps a subset of Kafka stats to metrics. The set can -- of course
-- be changed in the future depending on our needs.

Fixes #3100.
2021-11-16 17:18:41 +01:00
Raphael Taylor-Davies 553e412226
refactor: DMLOperation write path (#2731) (#3121)
* refactor: DMLOperation write path (#2731)

* chore: fmt

* chore: review feedback
2021-11-16 12:42:19 +00:00
Raphael Taylor-Davies a6d83a3026
feat: WriteBufferReader use DmlOperation (#2731) (#3096)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-11-15 10:19:54 +00:00
Raphael Taylor-Davies 6f268f8260
refactor: extract DML types (#2731) (#3084)
* refactor: extract DML types (#2731)

* chore: fmt
2021-11-11 12:34:07 +00:00
Marco Neumann 11f4a1dee8 feat: add connection management for router 2021-11-08 11:11:52 +01:00
Raphael Taylor-Davies 60f0deaf1e
feat: remove flatbuffer entry (#3045) 2021-11-05 20:19:24 +00:00
Raphael Taylor-Davies 898567e221
feat: migrate server to DbWrite (#2724) (#3035)
* feat: migrate server to DbWrite (#2724)

* chore: print perf log output

* fix: don't suppress CI status code

* chore: review feedback

* fix: don't error on empty line protocol write payloads

* fix: test

* fix: test
2021-11-05 11:09:33 +00:00
Raphael Taylor-Davies 07ba629e2b
feat: migrate write buffer producer to MutableBatch and pbdata (#2743) (#3021)
* feat: migrate write buffer producer to MutableBatch and pbdata (#2743)

* fix: Kafka message content type header

* chore: fix doc
2021-11-04 10:20:40 +00:00
Raphael Taylor-Davies 08fcd87337
feat: use protobuf encoding in write buffer (#2724) (#3018) 2021-11-03 15:19:05 +00:00
Raphael Taylor-Davies 51c6348e54
refactor: extract codec module for write buffer (#2724) (#3017) 2021-11-03 14:07:33 +00:00
Marco Neumann 0d0c0cb42b refactor: move write buffer configs to new home
Write buffer configs will partially be shared by database and router
nodes, so lets move them into a shared home.
2021-11-02 10:17:01 +01:00
Raphael Taylor-Davies f1a6468e7b
feat: migrate write buffer consumer to use DbWrite (#2724) (#3003)
* feat: migrate write buffer consumer to use DbWrite (#2724)

* fix: doc

* chore: fmt

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-11-01 16:38:48 +00:00
dependabot[bot] c540b40f05
chore(deps): bump tokio from 1.12.0 to 1.13.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.12.0 to 1.13.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.12.0...tokio-1.13.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-01 11:21:59 +00:00
Raphael Taylor-Davies 6ceab054ab
refactor: move DbWrite to mutable_batch (#2986)
* refactor: move DbWrite to mutable_batch

* chore: fix doc

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-10-29 15:13:05 +00:00
Raphael Taylor-Davies 8a2410e161
feat: mutable batch write entry (#2724) (#2973)
* feat: mutable batch write entry (#2724)

* chore: lint

* chore: review feedback

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-10-28 20:15:28 +00:00
Marco Neumann 3af1504ed2 fix: race condition in file-based write buffer 2021-10-26 10:09:34 +02:00
Marco Neumann 93f6519c34 fix: address review comments 2021-10-26 10:09:34 +02:00
Marco Neumann d527775aec feat: allow gaps in file-based WB + improve error handling 2021-10-26 10:09:34 +02:00
Marco Neumann bc7244c48e chore: use Rust edition 2021 2021-10-25 10:58:20 +02:00
Marco Neumann a5f15e6e76 refactor: improve directory names 2021-10-19 16:15:22 +02:00
Marco Neumann 6ec0bd5bab feat: file-based write buffer
Closes #2849.
2021-10-19 15:26:43 +02:00
Marco Neumann b2698cca44 feat: add `format_jaeger_trace_context` 2021-10-19 15:26:05 +02:00
dependabot[bot] 32e18b6436
chore(deps): bump rdkafka from 0.26.0 to 0.27.0
Bumps [rdkafka](https://github.com/fede1024/rust-rdkafka) from 0.26.0 to 0.27.0.
- [Release notes](https://github.com/fede1024/rust-rdkafka/releases)
- [Changelog](https://github.com/fede1024/rust-rdkafka/blob/master/changelog.md)
- [Commits](https://github.com/fede1024/rust-rdkafka/compare/v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: rdkafka
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-10-19 11:49:34 +00:00
kodiakhq[bot] 45c2c26168
Merge branch 'main' into crepererum/write_buffer_optional_span_ctx 2021-10-15 07:25:21 +00:00
Marco Neumann 85be39de40 test: make `headers_case_handling` test easier to understand 2021-10-15 09:20:41 +02:00
Marco Neumann 2850487877 feat: make trace collector in Kafka consumer optional
The whole application might not have a trace collector configured in
which case we don't wanna produce any spans.
2021-10-15 09:20:40 +02:00
Marco Neumann d6da68b762 fix: do not panic when writing to an unknown sequencer 2021-10-14 17:27:19 +02:00
kodiakhq[bot] 61ec559eee
Merge branch 'main' into crepererum/write_buffer_span_ctx 2021-10-14 11:50:07 +00:00
Raphael Taylor-Davies e911cf9ac1
refactor: make WriteBufferConfigFactory interior mutable (#2829)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-10-14 10:30:59 +00:00
Marco Neumann 5e06519afb feat: propagate trace information through write buffer 2021-10-14 11:07:41 +02:00
Raphael Taylor-Davies 8a82f92c5d
refactor: add TimeProvider abstraction (#2722) (#2815)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-10-12 21:19:03 +00:00
Marco Neumann 90d2f1d60d refactor: use Kafka headers similar to HTTP 2021-10-12 16:31:06 +02:00
Marco Neumann b955ecc6a7 feat: add format header to Kafka messages
This allows us to transition to new formats in the future.
2021-10-12 16:31:06 +02:00
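A hedged sketch of the idea above; the header name and format strings are illustrative, not necessarily the exact values IOx uses.

```rust
use std::collections::HashMap;

const HEADER_CONTENT_TYPE: &str = "content-type";

/// Dispatch on the format header, HTTP-style, so new encodings can be added
/// later without breaking existing consumers.
fn decode(headers: &HashMap<String, String>, payload: &[u8]) -> Result<String, String> {
    match headers.get(HEADER_CONTENT_TYPE).map(String::as_str) {
        Some("application/x-protobuf") => Ok(format!("decode {} protobuf bytes", payload.len())),
        Some(other) => Err(format!("unknown format: {other}")),
        None => Err("missing content-type header".to_string()),
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert(HEADER_CONTENT_TYPE.to_string(), "application/x-protobuf".to_string());
    println!("{:?}", decode(&headers, b"\x0a\x03foo"));
}
```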
Raphael Taylor-Davies 0554173684
feat: migrate write buffer to TimeProvider (#2722) (#2804)
* feat: migrate write buffer to TimeProvider (#2722)

* chore: review feedback

Co-authored-by: Marco Neumann <marco@crepererum.net>

Co-authored-by: Marco Neumann <marco@crepererum.net>
2021-10-12 10:32:34 +00:00
Marco Neumann 896ce03415 refactor: use sets and maps for write buffer sequencers
Use the type to reflect that entries are unique and sorted.
2021-10-12 11:13:09 +02:00
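A minimal illustration of the point above: a `BTreeSet` encodes "unique and sorted" in the type, where a plain `Vec` would leave both properties implicit.

```rust
use std::collections::BTreeSet;

fn main() {
    // Duplicates collapse and iteration order is ascending by construction.
    let sequencer_ids: BTreeSet<u32> = [3, 1, 2, 3].into_iter().collect();
    assert_eq!(sequencer_ids.into_iter().collect::<Vec<_>>(), vec![1, 2, 3]);
}
```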
Marco Neumann 2e8af77e41 feat: allow write buffer producers to read possible sequencer IDs 2021-10-12 10:37:32 +02:00
Raphael Taylor-Davies f35a49edd0
refactor: move Sequence to data_types (#2780)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-10-11 09:23:00 +00:00
Marco Neumann 07a72a1daa chore: tell cargo-udeps that `dotenv` is needed for `write_buffer` 2021-09-22 11:09:39 +02:00
Raphael Taylor-Davies c33e5c22e6
feat: pull WriteBuffer consumer out of Db and onto Database (#2243) (#2525)
* feat: pull WriteBuffer consumer out of Db and onto Database (#2243)

* chore: restore WritingOnlyAllowedThroughWriteBuffer error

* refactor: remove WriteBufferConfig

* chore: fix docs

* chore: move WriteBufferConsumer tests out of db.rs

* chore: document WriteBufferFactory member functions

* chore: fmt

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-09-14 16:04:58 +00:00
Raphael Taylor-Davies 7e434d16d2
fix: MockBufferForReading waker registration (#2520)
* fix: MockBufferForReading waker registration

* chore: remove unnecessary sleep

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-09-13 21:21:21 +00:00
Marco Neumann bbb8898d36 refactor: make write buffer auto-creation types nicer to read 2021-09-08 11:13:48 +02:00
Marco Neumann 2cc1297c96 fix: typos
Co-authored-by: Raphael Taylor-Davies <1781103+tustvold@users.noreply.github.com>
2021-09-07 18:24:59 +02:00
Marco Neumann c0cc239781 fix: improve Kafka error handling 2021-09-07 18:24:59 +02:00
Marco Neumann 801cf08be7 feat: auto-creation of sequencers by write buffer
For Kafka, that basically means that we create a topic if it doesn't
exist yet.

Closed #2455.
Fixes #2189.
2021-09-07 18:24:57 +02:00
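A hedged, library-agnostic sketch of the auto-creation behaviour described above; the trait and function names are hypothetical, not the IOx or Kafka client API.

```rust
use std::num::NonZeroU32;

trait TopicAdmin {
    fn topic_exists(&self, name: &str) -> Result<bool, String>;
    fn create_topic(&self, name: &str, n_partitions: NonZeroU32) -> Result<(), String>;
}

/// Ensure the backing topic exists before handing out producers or consumers.
fn maybe_auto_create(
    admin: &dyn TopicAdmin,
    topic: &str,
    creation_config: Option<NonZeroU32>,
) -> Result<(), String> {
    if admin.topic_exists(topic)? {
        return Ok(());
    }
    match creation_config {
        // Auto-creation configured: create the topic with `n_sequencers` partitions.
        Some(n_sequencers) => admin.create_topic(topic, n_sequencers),
        // No creation config: fail instead of silently writing to a missing topic.
        None => Err(format!("topic {topic} does not exist and auto-creation is disabled")),
    }
}

fn main() {}
```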
Marco Neumann 924e460bf7 feat: sequencer auto-creation for mocked write buffer 2021-09-07 18:18:20 +02:00
Marco Neumann ecbd0165fc feat: prepare auto-creation of sequencers for mocked write buffer 2021-09-07 18:18:20 +02:00
Marco Neumann 82f9750ba7 test: prepare write buffer test suite for failable reader and writer creation
This is required to work w/ sequencer auto-creation.
2021-09-07 18:18:20 +02:00
Marco Neumann f1864813f4 docs: document write buffer test suite 2021-09-07 18:18:20 +02:00
Marco Neumann 2ea3b600d0 test: harden `assert_reader_content` a bit
- entries should be sorted by the stream, so there is no need to sort the
  results
- ensure that there are no leftover entries in the stream by asserting
  that it is "pending"
2021-09-07 18:18:20 +02:00
Marco Neumann d5662328b0 refactor: `n_sequencers` should be non-zero 2021-09-07 18:18:20 +02:00
dependabot[bot] b67610d9b9
chore(deps): bump tokio from 1.10.1 to 1.11.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.10.1 to 1.11.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.10.1...tokio-1.11.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 09:11:38 +00:00
dependabot[bot] b1bb390893
chore(deps): bump parking_lot from 0.11.1 to 0.11.2
Bumps [parking_lot](https://github.com/Amanieu/parking_lot) from 0.11.1 to 0.11.2.
- [Release notes](https://github.com/Amanieu/parking_lot/releases)
- [Changelog](https://github.com/Amanieu/parking_lot/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Amanieu/parking_lot/compare/0.11.1...0.11.2)

---
updated-dependencies:
- dependency-name: parking_lot
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-09-06 01:18:24 +00:00
Marco Neumann a63eb53ac5 feat: forward connection config to Kafka write buffer 2021-09-02 16:53:31 +02:00
Marco Neumann ecf1f99ddb refactor: more flexible write buffer config
This allows:

- different types (instead of guessing through the connection URL)
- sequencer counts (not used yet but will be by #2455)
- extensible configs (e.g. to configure Kafka in a more granular way,
  not wired up yet)
- future extensions (since we use a message now instead of a single
  string)

**BREAKING: This requires changes for deployed systems / existing DBs!**
2021-09-02 16:41:35 +02:00
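A sketch of the more flexible config shape this commit describes; field names are illustrative approximations, not the exact IOx types.

```rust
use std::collections::BTreeMap;
use std::num::NonZeroU32;

struct WriteBufferConnection {
    type_: String,                               // e.g. "kafka", "file", "mock"
    connection: String,                          // backend-specific address or path
    connection_config: BTreeMap<String, String>, // extensible, backend-specific options
    creation_config: Option<WriteBufferCreationConfig>,
}

struct WriteBufferCreationConfig {
    n_sequencers: NonZeroU32,
    options: BTreeMap<String, String>,
}

fn main() {
    let cfg = WriteBufferConnection {
        type_: "kafka".to_string(),
        connection: "my-cluster-kafka-bootstrap:9092".to_string(),
        connection_config: BTreeMap::new(),
        creation_config: Some(WriteBufferCreationConfig {
            n_sequencers: NonZeroU32::new(1).unwrap(),
            options: BTreeMap::new(),
        }),
    };
    println!("write buffer type: {}", cfg.type_);
}
```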
Marco Neumann f81885c172 fix: limit kafka consumer queue to 10MB
I think that while this value limits the maximum memory consumption, it
is too optimistic.
2021-08-20 09:55:49 +02:00
Marco Neumann 6c0cefded3 fix: limit kafka consumer queue to 100MB
By default the queue will be filled up to 1GB, which makes finding
memory leaks quite hard.
2021-08-19 11:36:15 +02:00
Raphael Taylor-Davies 817ed4b46b
feat: enable rdkafka statistics (#2312) (#2314)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2021-08-17 09:28:02 +00:00
Marco Neumann 4a3fe01743 test: don't overdo it 2021-08-16 18:31:45 +02:00
Marco Neumann 5caa2ad8ec fix: typo 2021-08-16 18:31:45 +02:00
Marco Neumann 1a7293015b test: allow write buffer mocks that always fail 2021-08-16 18:27:09 +02:00
Marco Neumann e3c263a006 test: add tooling to purge kafka topic data 2021-08-16 13:47:07 +02:00
Marco Neumann 21ebdee5a1 feat: make kafka topic creation code reusable 2021-08-16 13:47:07 +02:00
Marco Neumann a72bacae67 test: use proper ignore instead of commenting out 2021-08-12 11:38:02 +02:00
Marco Neumann a5c74f2798 feat: ability to inject mocked write buffers into server/database 2021-08-12 10:46:16 +02:00
Marco Neumann 55f68f0401 feat: add `type_name` to write buffer interface 2021-08-12 10:46:16 +02:00
Marco Neumann 32a5c8ca3a feat: add ability to clear messages from mocked write buffers 2021-08-12 10:46:16 +02:00
Dom 3de6b44e23
build: use new rustdoc lint name (#2261)
* fix: nocache feature code rot

The MBChunk::snapshot code when using the "nocache" option no longer
compiles - this commit updates it to match the not(nocache) code.

* build: use updated broken_intra_doc_links name

The broken_intra_doc_links lint was renamed
rustdoc::broken_intra_doc_links

https://doc.rust-lang.org/rustdoc/lints.html
2021-08-11 19:48:51 +00:00
Marko Mikulicic 99b358d481
feat(iox): Enable snappy compression in the producer 2021-07-29 16:37:11 +02:00
Marco Neumann eb310ecada fix: ensure that the Kafka producer TS we report is also truncated 2021-07-28 15:48:02 +02:00
Marco Neumann 0fcec6b742 refactor: move ingest timestamp from sequence to sequenced entry 2021-07-28 15:40:35 +02:00
Marco Neumann d940d4f6d3 docs: explain how test timestamps are measured 2021-07-28 14:46:33 +02:00
Marco Neumann e736bc6953 feat: add ingest timestamp to `Sequence`
This allows us to track wall-clock ingest time for entries that we
receive via write buffer (e.g. Kafka).
2021-07-28 14:41:18 +02:00
Marco Neumann 55490c279a fix: Kafka watermark error for new partitions 2021-07-21 15:21:52 +02:00
Marco Neumann 5df88c70aa feat: add ability to fetch watermarks from write buffer 2021-07-21 11:59:52 +02:00