Commit Graph

5 Commits (a38995ca0fe61edf8bcd7d8b25b60737810405df)

Author SHA1 Message Date
Dom Dwyer ddd6ab0ba4 refactor(write_buffer): pass IDs in wire format
This commit is the first half of a two-part change that adds the table &
namespace IDs to the write buffer wire format: it changes the producer to
send the IDs.

In this commit the new ID values are never read on the consumer side,
so there is no consumer dependency on them. This keeps consumers
operational during a rollout, where a consumer may be updated to the
latest code (which depends on the IDs) before the producer is updated
to send them. It also provides a window of time in which updated
consumers can be rolled back and still handle replaying messages in
Kafka.
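
As a rough sketch of the rollout-safety idea (the struct and function
names below are hypothetical, not the actual wire format): the producer
populates the new ID fields, while the consumer's decode path
deliberately never reads them, so either side can be rolled forward or
back independently.

    // Hypothetical wire-format struct; the real proto definition differs.
    struct WritePayload {
        data: Vec<u8>,
        // New fields: written by the producer, never read by the consumer
        // in this half of the change, so rolling out (or rolling back)
        // either side cannot break the other.
        namespace_id: Option<i64>,
        table_id: Option<i64>,
    }

    fn encode_for_produce(data: Vec<u8>, namespace_id: i64, table_id: i64) -> WritePayload {
        WritePayload {
            data,
            namespace_id: Some(namespace_id),
            table_id: Some(table_id),
        }
    }

    fn decode_on_consume(payload: WritePayload) -> Vec<u8> {
        // Deliberately ignore namespace_id / table_id: messages written
        // before this change (without IDs) and after it (with IDs) are
        // handled identically.
        payload.data
    }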
2022-11-02 13:28:56 +01:00
Dom Dwyer 61182f506b refactor: emit PartitionKey from partitioner
Changes the partitioning code to emit a PartitionKey instead of a bare
String.
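
A minimal sketch of the idea, assuming PartitionKey is a simple newtype
over String (the real type in the codebase may differ): a dedicated type
stops arbitrary strings from being passed where a partition key is
expected.

    use std::fmt;

    /// Hypothetical newtype sketch: wrapping the key in its own type
    /// means the compiler rejects a bare String where a PartitionKey
    /// is expected.
    #[derive(Debug, Clone, PartialEq, Eq, Hash)]
    pub struct PartitionKey(String);

    impl fmt::Display for PartitionKey {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            self.0.fmt(f)
        }
    }

    /// The partitioner now returns the typed key instead of a bare
    /// String. The YMD key format here is illustrative only.
    fn partition_key_for(ymd: &str) -> PartitionKey {
        PartitionKey(ymd.to_string())
    }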
2022-06-15 15:38:02 +01:00
Andrew Lamb dde3c3922c refactor: use consistent spelling of serialize (#4717)
2022-05-27 14:42:59 +00:00
Dom Dwyer 43300878bc fix(pb): encoding entirely NULL columns (#4272)
This commit changes the protobuf record batch encoding to skip entirely
NULL columns when serialising. This prevents the deserialisation from
erroring due to a column type inference failure.

Prior to this commit, the system could be presented with a record batch
such as this:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-01-01 | 1    | NULL |
            | 1970-07-05 | NULL | 1    |

This batch would be partitioned by YMD into two separate partitions:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-01-01 | 1    | NULL |

and:

            | time       | A    | B    |
            | ---------- | ---- | ---- |
            | 1970-07-05 | NULL | 1    |

Both partitions would contain an entirely NULL column.

Both of these partitioned record batches would be encoded successfully,
but decoding a partition would fail: no column type can be inferred from
a serialised column that contains no values, which on the wire looks
like:

            Column {
                column_name: "B",
                semantic_type: Field,
                values: Some(
                    Values {
                        i64_values: [],
                        f64_values: [],
                        u64_values: [],
                        string_values: [],
                        bool_values: [],
                        bytes_values: [],
                        packed_string_values: None,
                        interned_string_values: None,
                    },
                ),
                null_mask: [
                    1,
                ],
            },

In a column that is not entirely NULL, one of the "Values" fields would
be non-empty, and the decoder would use this to infer the type of the
column.

Because we have chosen not to differentiate between "NULL" and "empty"
in our proto encoding, the decoder cannot infer which field within the
"Values" struct the column belongs to - all are valid, but empty.

This commit avoids the type inference failure by skipping any columns
that are entirely NULL during serialisation, so the deserialiser never
has to process a column with an ambiguous type.
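
A sketch of the shape of the fix, using hypothetical stand-in types
(the real encoder operates on its own column representation): columns
whose null count equals their row count are simply never emitted.

    // Sketch only: Column and its counts are hypothetical stand-ins
    // for the encoder's real types.
    struct Column {
        name: String,
        null_count: usize,
        row_count: usize,
    }

    /// Skip entirely-NULL columns when building the wire representation,
    /// so the decoder never sees a column with no values from which to
    /// infer a type.
    fn encodable_columns(columns: &[Column]) -> impl Iterator<Item = &Column> {
        columns.iter().filter(|c| c.null_count < c.row_count)
    }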
2022-05-18 13:33:26 +01:00
Raphael Taylor-Davies e444fa4cb2 feat: pbdata encode (#2724) (#3009)
2021-11-02 18:31:53 +00:00