Instead of a naive "remove + insert", use a proper insertion routine that touches the hash map only once (see the sketch below). Note that in the override case (i.e. an entry with this key already existed) we need to touch the heap twice, because the sort order has likely changed (we don't optimize the "identical order" case here because it is pretty unlikely to happen in practice).

**Perf results:**

```text
insert_n_elements/0     time:   [16.489 ns 16.497 ns 16.506 ns]
                        change: [-8.1154% -7.9967% -7.8990%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  2 (2.00%) high severe
insert_n_elements/1     time:   [59.806 ns 59.839 ns 59.875 ns]
                        change: [-14.241% -14.160% -14.086%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  7 (7.00%) high mild
  1 (1.00%) high severe
insert_n_elements/10    time:   [601.58 ns 602.26 ns 603.09 ns]
                        change: [-20.870% -20.714% -20.565%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
  4 (4.00%) high mild
  5 (5.00%) high severe
insert_n_elements/100   time:   [6.9096 µs 6.9161 µs 6.9246 µs]
                        change: [-18.759% -18.667% -18.553%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  4 (4.00%) high mild
  3 (3.00%) high severe
insert_n_elements/1000  time:   [107.71 µs 107.76 µs 107.82 µs]
                        change: [-14.564% -14.427% -14.295%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe
insert_n_elements/10000 time:   [2.8642 ms 2.8700 ms 2.8765 ms]
                        change: [-11.079% -10.860% -10.605%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
  15 (15.00%) high severe
```

Note that the results are even better for keys with more expensive hash functions (we have few in the querier).
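For illustration, here is a minimal sketch of the insert pattern. The type and field names (`KeyedHeap`, `map`, `sorted`) are hypothetical and not taken from this crate; the point is that the hash map's entry API resolves the slot with a single lookup, and only the override branch pays for a second heap operation.

```rust
use std::collections::{BTreeSet, HashMap};

/// Hypothetical keyed heap: entries are addressable by key via the hash map
/// and kept sorted by `(order, key)` in the tree set.
struct KeyedHeap<K, O>
where
    K: Clone + Ord + std::hash::Hash,
    O: Clone + Ord,
{
    map: HashMap<K, O>,
    sorted: BTreeSet<(O, K)>,
}

impl<K, O> KeyedHeap<K, O>
where
    K: Clone + Ord + std::hash::Hash,
    O: Clone + Ord,
{
    /// Insert `key` with sort order `order`, touching the hash map only once.
    /// Returns the previous order if the key already existed.
    fn insert(&mut self, key: K, order: O) -> Option<O> {
        use std::collections::hash_map::Entry;

        match self.map.entry(key.clone()) {
            Entry::Occupied(mut o) => {
                // Override: one map touch, two heap touches, because the
                // stale `(old_order, key)` entry must be removed before the
                // new one is inserted (the sort order has likely changed).
                let old_order = o.insert(order.clone());
                self.sorted.remove(&(old_order.clone(), key.clone()));
                self.sorted.insert((order, key));
                Some(old_order)
            }
            Entry::Vacant(v) => {
                // Fresh insert: one map touch, one heap touch.
                v.insert(order.clone());
                self.sorted.insert((order, key));
                None
            }
        }
    }
}
```

Compared to a naive "remove, then insert", the entry API avoids hashing and probing the key a second time, which is where the benchmark gains above come from; keys with expensive hash functions benefit proportionally more.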
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>