* feat: use a dedicated tokio threadpool for running queries
* feat: plumb the number of executor threads through to the command line
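As a rough sketch of the approach (not the actual IOx code; the `QueryExecutor` type and the flag plumbing are illustrative), a dedicated multi-threaded tokio runtime with a configurable worker count might look like:

```rust
// Sketch: a dedicated tokio runtime reserved for query execution,
// separate from the server's main runtime, so heavy queries don't
// starve I/O tasks. `QueryExecutor` and `num_threads` are illustrative.
use std::sync::Arc;
use tokio::runtime::Runtime;

struct QueryExecutor {
    runtime: Arc<Runtime>,
}

impl QueryExecutor {
    /// `num_threads` would be plumbed through from a command-line flag.
    fn new(num_threads: usize) -> std::io::Result<Self> {
        let runtime = tokio::runtime::Builder::new_multi_thread()
            .worker_threads(num_threads)
            .thread_name("query-executor")
            .enable_all()
            .build()?;
        Ok(Self {
            runtime: Arc::new(runtime),
        })
    }

    /// Run a query future on the dedicated pool.
    fn spawn_query<F>(&self, query: F) -> tokio::task::JoinHandle<F::Output>
    where
        F: std::future::Future + Send + 'static,
        F::Output: Send + 'static,
    {
        self.runtime.spawn(query)
    }
}
```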
* fix: Logical merge conflict
* fix: Another logical merge conflict
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
The initial benchmarks look like this on my i9 MBP:
```
Data in one open chunk and one closed chunk of mutable buffer/tag0/no_pred 1.00 91.0±2.55ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag0/with_pred 1.00 11.5±0.72ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag1/no_pred 1.00 120.3±5.10ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag1/with_pred 1.00 11.2±0.22ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag2/no_pred 1.00 203.2±8.45ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag2/with_pred 1.00 11.2±0.21ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag0/no_pred 1.00 100.3±3.73ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag0/with_pred 1.00 31.2±1.80ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag1/no_pred 1.00 126.7±2.29ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag1/with_pred 1.00 33.0±1.70ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag2/no_pred 1.00 212.0±6.86ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag2/with_pred 1.00 18.1±0.99ms ? ?/sec
Data in single open chunk of mutable buffer/tag0/no_pred 1.00 98.7±6.08ms ? ?/sec
Data in single open chunk of mutable buffer/tag0/with_pred 1.00 11.2±0.37ms ? ?/sec
Data in single open chunk of mutable buffer/tag1/no_pred 1.00 118.9±3.97ms ? ?/sec
Data in single open chunk of mutable buffer/tag1/with_pred 1.00 11.7±0.64ms ? ?/sec
Data in single open chunk of mutable buffer/tag2/no_pred 1.00 202.1±8.49ms ? ?/sec
Data in single open chunk of mutable buffer/tag2/with_pred 1.00 11.1±0.27ms ? ?/sec
Data in two read buffer chunks/tag0/no_pred 1.00 109.2±5.20ms ? ?/sec
Data in two read buffer chunks/tag0/with_pred 1.00 44.2±1.83ms ? ?/sec
Data in two read buffer chunks/tag1/no_pred 1.00 132.9±3.79ms ? ?/sec
Data in two read buffer chunks/tag1/with_pred 1.00 41.7±2.43ms ? ?/sec
Data in two read buffer chunks/tag2/no_pred 1.00 222.4±7.00ms ? ?/sec
Data in two read buffer chunks/tag2/with_pred 1.00 27.9±0.92ms ? ?/sec
```
* refactor: inline catalog crate into server
* refactor: Add fine-grained (object-level) catalog locking
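A minimal sketch of what object-level locking can look like, assuming a catalog keyed by chunk id (the types and fields here are illustrative, not the actual catalog):

```rust
// Sketch of object-level locking: instead of one coarse lock over the
// whole catalog, each chunk is wrapped in its own RwLock so independent
// chunks can be read and mutated concurrently.
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

#[derive(Debug)]
struct Chunk {
    id: u32,
    // ... chunk state (open/closing/moved), data, etc.
}

#[derive(Default)]
struct Catalog {
    // The map is guarded only for structural changes (add/remove);
    // each chunk carries its own lock for state changes.
    chunks: RwLock<BTreeMap<u32, Arc<RwLock<Chunk>>>>,
}

impl Catalog {
    fn chunk(&self, id: u32) -> Option<Arc<RwLock<Chunk>>> {
        self.chunks.read().unwrap().get(&id).cloned()
    }
}
```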
* fix: Move `mod` definition and `use` statements to top of file
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: Rework Db to use Catalog for chunk state
* docs: Update server/src/db.rs
* fix: fmt
* fix: fmt
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* fix: test_helpers crate should only be a dev-dep
* fix: object_store no longer has a build script, so no longer needs a build dep
* chore: Alphabetize all Cargo.tomls
* chore: Update arrow + tokio deps
* chore: Use bleeding-edge azure
* chore: Update aws + other deps
* fix: fmt
* fix: Switch to in-house version of routerify
* fix: Upgrade to hyper 0.14
The hyper::error module is now private; hyper::Error is the public
re-export
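The import change is essentially:

```rust
// Before (hyper 0.13): reached through the now-private module.
// use hyper::error::Error;

// After (hyper 0.14): use the public re-export instead.
use hyper::Error;
```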
* fix: Upgrade cloud-storage to pick up the tokio upgrade
* fix: Upgrade open_telemetry
* fix: Do not call `panic::set_hook` during another panic
Doing so leads to a double panic, which aborts the process.
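One way to guard against this is the standard library's `std::thread::panicking()`; this is a sketch of the idea, not necessarily the fix as implemented:

```rust
// Sketch: avoid installing a panic hook while a panic is already
// unwinding; `std::panic::set_hook` itself panics if called from a
// panicking thread, which turns into a process-aborting double panic.
fn install_panic_hook() {
    if std::thread::panicking() {
        // Already panicking; calling set_hook here would abort.
        return;
    }
    std::panic::set_hook(Box::new(|info| {
        eprintln!("panic: {}", info);
    }));
}
```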
* fix: new h2 error who dis
Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@integer32.com>
Co-authored-by: Jake Goulding <jake.goulding@integer32.com>
* feat: Chunk Migration APIs and query data in the read buffer via SQL
* fix: Make code more consistent
* fix: fmt / clippy
* chore: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Remove unnecessary Result and make chunks() infallible
* chore: Apply more suggestions from code review
Co-authored-by: Edd Robinson <me@edd.io>
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Adds serialization with compression and checksum for WAL buffer segments.
This required a weird structure where the flatbuffer bytes of each ReplicatedWrite are kept as a raw payload; otherwise each of the replicated writes would have had to be rebuilt in the segment.
The other thing that isn't ideal is that deserializing a segment marshals it into a Rust struct rather than keeping the entire thing as raw flatbuffers. We could update this later to have the concept of an open segment (a regular Rust struct) and closed segments that are just the flatbuffers.
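A sketch of that serialization shape, assuming `snap` for compression and `crc32fast` for the checksum (both crate choices and the exact header layout are assumptions, not the actual format):

```rust
// Sketch of the segment layout described above: each ReplicatedWrite
// is kept as raw flatbuffer bytes, the concatenated payload is
// compressed, and a checksum is recorded so the segment can be
// validated on read.
use crc32fast::Hasher;

struct Segment {
    id: u64,
    /// Raw flatbuffer bytes of each ReplicatedWrite, kept as-is so
    /// the writes don't have to be rebuilt on serialization.
    writes: Vec<Vec<u8>>,
}

fn serialize_segment(segment: &Segment) -> Vec<u8> {
    // Length-prefix each write so the payload can be split on read.
    let mut payload = Vec::new();
    for write in &segment.writes {
        payload.extend_from_slice(&(write.len() as u32).to_le_bytes());
        payload.extend_from_slice(write);
    }

    // Compress the payload and checksum the compressed bytes.
    let mut encoder = snap::raw::Encoder::new();
    let compressed = encoder.compress_vec(&payload).expect("compression failed");
    let mut hasher = Hasher::new();
    hasher.update(&compressed);
    let checksum = hasher.finalize();

    // Header: segment id, checksum, compressed length, then the data.
    let mut out = Vec::with_capacity(compressed.len() + 16);
    out.extend_from_slice(&segment.id.to_le_bytes());
    out.extend_from_slice(&checksum.to_le_bytes());
    out.extend_from_slice(&(compressed.len() as u32).to_le_bytes());
    out.extend_from_slice(&compressed);
    out
}
```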
* feat: Implement write buffer to Parquet snapshotting
This introduces snapshotting to the server package. It also introduces a new trait for representing a Partition. A very crude API is wired up in http_routes for testing purposes; follow-on work will bring the server package into http_routes and rework the snapshot API.
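A hypothetical shape for such a Partition trait (the method names are illustrative, not the actual API):

```rust
// Hypothetical trait that snapshotting could be built on: a partition
// exposes its tables as Arrow record batches, which a snapshot task
// can then write out as Parquet.
use arrow::record_batch::RecordBatch;

trait Partition {
    /// Identifier used to name the snapshot in object storage.
    fn key(&self) -> &str;

    /// Names of the tables held by this partition.
    fn table_names(&self) -> Vec<String>;

    /// A table's data as Arrow record batches, ready to be written
    /// out as one Parquet file per table.
    fn table_batches(&self, table: &str) -> Vec<RecordBatch>;
}
```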
* chore(server): add logs for dropped WAL segments
Adds logging for dropped writes and old segments in rollover scenarios.
Also adds a dependency on tracing and a dev-dependency on test_helpers.
Refs: #466
* chore(server): Add more context to logs
Minor cleanup around remove_oldest_segment usage, based on suggestions from @alamb's review; a sketch of the rollover logging follows.
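As a rough illustration of the kind of rollover logging these two commits describe, using the `tracing` crate (the function and field names are made up for the example):

```rust
// Illustration: log when the WAL buffer rolls over and drops its
// oldest segment to stay under its size limit.
use tracing::warn;

fn maybe_drop_oldest(current_size: usize, max_size: usize, oldest_segment_id: u64) {
    if current_size > max_size {
        warn!(
            segment_id = oldest_segment_id,
            current_size,
            max_size,
            "dropping oldest WAL buffer segment to stay under size limit"
        );
        // ... remove_oldest_segment() would be called here
    }
}
```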