* feat: Change table_names to return either Some(set) or None, rather than a plan
* docs: improve comments
* docs: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: merge conflict
* fix: don't clone a string unless needed
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
This commit adds some benchmarks for `table_names` against the read
buffer's `Database` implementation. On my laptop these look like:
database_table_names_all_tables
    time: [2.2104 us 2.2242 us 2.2381 us]
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

database_table_names_meta_pred_no_match
    time: [1.8389 us 1.8488 us 1.8593 us]
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe

database_table_names_single_pred_match
    time: [5.5457 us 5.5694 us 5.5919 us]
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) high mild
  2 (2.00%) high severe

database_table_names_multi_pred_match
    time: [478.85 us 480.32 us 481.83 us]
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

database_table_names_multi_pred_match_multi_tables
    time: [476.47 us 478.93 us 482.25 us]
Found 11 outliers among 100 measurements (11.00%)
  4 (4.00%) high mild
  7 (7.00%) high severe
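For reference, benchmarks like these are wired up with criterion roughly as in the sketch below. This is only a sketch: the `Database` and `Predicate` types here are simplified stand-ins (the real benchmarks reach the read buffer's internals through the `benchmarks` module described further down), the setup and predicate construction of the real benchmarks are elided, and only the benchmark name comes from the output above.

```rust
// benches/table_names.rs (illustrative file name)
use std::collections::BTreeSet;

use criterion::{black_box, criterion_group, criterion_main, Criterion};

/// Stand-in for the real predicate type.
#[derive(Default)]
struct Predicate;

/// Stand-in for the read buffer's `Database`.
struct Database {
    tables: BTreeSet<String>,
}

impl Database {
    /// Returns matching table names directly as `Option<BTreeSet<String>>`,
    /// per the first commit in the list above (trivially resolved here).
    fn table_names(&self, _predicate: &Predicate) -> Option<BTreeSet<String>> {
        Some(self.tables.clone())
    }
}

fn table_names_benchmarks(c: &mut Criterion) {
    // Seed a small database; the real benchmark builds one with test data.
    let database = Database {
        tables: ["cpu", "disk", "http"].iter().map(|s| s.to_string()).collect(),
    };

    // Mirrors the shape of the `database_table_names_all_tables` case:
    // no predicate constraints, so every table name matches.
    c.bench_function("database_table_names_all_tables", |b| {
        b.iter(|| black_box(database.table_names(&Predicate::default())))
    });
}

criterion_group!(benches, table_names_benchmarks);
criterion_main!(benches);
```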
This commit is a bit of a hack; it's the first approach I could think of.
The problem is that I want to be able to benchmark various modules in the
read buffer without exposing those internals via the external API.
Because criterion only lets you exercise a crate's exported API, I needed
to expose some internals. I did this by creating a documented `benchmarks`
module in the `read_buffer` crate, which re-exports identifiers so that the
criterion benchmarks can use them.
The idea is to make it clear that this module is not part of the public
API.
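A minimal sketch of the pattern, with placeholder internals (the module and type names below are illustrative, not the crate's real ones):

```rust
// In the read_buffer crate's lib.rs.

mod row_group {
    /// Placeholder for an internal type the benchmarks want to exercise.
    pub struct RowGroup;
}

/// Re-exports for use by this crate's criterion benchmarks.
///
/// This module is not part of the read buffer's public API; it exists only
/// so the benchmark harness, which can only see exported items, can reach
/// selected internals. Do not depend on it.
pub mod benchmarks {
    pub use crate::row_group::RowGroup;
}
```

Putting the warning in the module's doc comment means it also shows up in the generated docs, which is how the "not part of the public API" intent stays visible to anyone browsing the crate.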