influxdb/influxdb3_write/Cargo.toml

[package]
name = "influxdb3_write"
version.workspace = true
authors.workspace = true
edition.workspace = true
license.workspace = true

[dependencies]
data_types = { path = "../data_types" }
influxdb-line-protocol = { path = "../influxdb_line_protocol" }
iox_query = { path = "../iox_query" }
object_store.workspace = true
observability_deps = { path = "../observability_deps" }
schema = { path = "../schema" }
workspace-hack = { version = "0.1", path = "../workspace-hack" }
arrow = { workspace = true }
async-trait = "0.1"
byteorder = "1.3.4"
chrono = "0.4"
crc32fast = "1.2.0"
crossbeam-channel = "0.5.11"
datafusion = { workspace = true }
datafusion_util = { path = "../datafusion_util" }
parking_lot = "0.11.1"
parquet = { workspace = true }
thiserror = "1.0"
tokio = { version = "1.35", features = ["macros", "fs", "io-util", "parking_lot", "rt-multi-thread", "sync", "time"] }
serde = { version = "1.0.197", features = ["derive"] }
serde_json = "1.0.114"
snap = "1.0.0"
bytes = "1.5.0"
futures-util = "0.3.30"

[dev-dependencies]
arrow_util = { path = "../arrow_util" }
test_helpers = { path = "../test_helpers" }