This commit changes the protobuf record batch encoding to skip columns
that are entirely NULL when serialising. This prevents deserialisation
from failing due to a column type inference error.
Prior to this commit, when the system was presented with a record batch
such as this:
| time | A | B |
| ---------- | ---- | ---- |
| 1970-01-01 | 1 | NULL |
| 1970-07-05 | NULL | 1 |
This batch would be partitioned by YMD into two separate partitions:
| time | A | B |
| ---------- | ---- | ---- |
| 1970-01-01 | 1 | NULL |
and:
| time | A | B |
| ---------- | ---- | ---- |
| 1970-07-05 | NULL | 1 |
Both partitions would contain an entirely NULL column.
Both of these partitioned record batches would be encoded successfully,
but decoding a partition would fail because a column type cannot be
inferred from a serialised column that contains no values. On the wire,
such a column looks like:
    Column {
        column_name: "B",
        semantic_type: Field,
        values: Some(
            Values {
                i64_values: [],
                f64_values: [],
                u64_values: [],
                string_values: [],
                bool_values: [],
                bytes_values: [],
                packed_string_values: None,
                interned_string_values: None,
            },
        ),
        null_mask: [
            1,
        ],
    },
In a column that is not entirely NULL, one of the "Values" fields would
be non-empty, and the decoder would use this to infer the type of the
column.
Because we have chosen not to differentiate between "NULL" and "empty"
in our proto encoding, the decoder cannot infer which field within the
"Values" struct the column's data belongs to: all of them are valid, but
empty.
This commit avoids the type inference failure by skipping any columns
that are entirely NULL during serialisation, so the deserialiser never
has to process a column with an ambiguous type.
* fix: Add tokio rt-multi-thread feature so cargo test -p client_util compiles
* fix: Alphabetize dependencies
* fix: Add the data_types_conversions feature to get tests passing
* fix: Remove dev dependencies already listed under normal dependencies
* fix: Make sure the workspace is using the new resolver
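On the last item: opting the workspace into the new feature resolver
typically means setting resolver = "2" in the workspace root Cargo.toml.
A minimal sketch (the member list is a placeholder, not the repository's
actual layout):

    [workspace]
    members = ["client_util"]   # placeholder member list
    resolver = "2"              # opt in to the v2 feature resolver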