From 6af7741c5f2b9a42238addf6e64a2515ccf9f6ef Mon Sep 17 00:00:00 2001
From: Xieql
Date: Wed, 22 Dec 2021 14:23:23 +0800
Subject: [PATCH] [skip e2e] improve annotation (#13948)

Signed-off-by: Xieql
---
 .../index/thirdparty/faiss/benchs/distributed_ondisk/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/internal/core/src/index/thirdparty/faiss/benchs/distributed_ondisk/README.md b/internal/core/src/index/thirdparty/faiss/benchs/distributed_ondisk/README.md
index c2c792992b..25ba422173 100644
--- a/internal/core/src/index/thirdparty/faiss/benchs/distributed_ondisk/README.md
+++ b/internal/core/src/index/thirdparty/faiss/benchs/distributed_ondisk/README.md
@@ -137,7 +137,7 @@ bash run_on_cluster.bash make_index_vslices
 
 For a real dataset, the data would be read from a DBMS. In that case, reading the data and indexing it in parallel is worthwhile because reading is very slow.
 
-## Splitting accross inverted lists
+## Splitting across inverted lists
 
 The 200 slices need to be merged together. This is done with the script [merge_to_ondisk.py](merge_to_ondisk.py), that memory maps the 200 vertical slice indexes, extracts a subset of the inverted lists and writes them to a contiguous horizontal slice.