Ref https://github.com/milvus-io/milvus/issues/42053
This PR enables the ngram index to support more kinds of matches, such as
prefix and postfix matching.
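A minimal usage sketch (the collection and field names below are illustrative, assuming an ngram-indexed VARCHAR field as in the original `LIKE` optimization PR):
```
# Illustrative sketch: prefix and postfix LIKE filters that the extended
# ngram index can now serve. All names below are hypothetical.
from pymilvus import MilvusClient

COLLECTION_NAME = "my_collection"
milvus_client = MilvusClient("http://localhost:19530")
# Prefix match:
res = milvus_client.query(COLLECTION_NAME, filter='content_ngram LIKE "milvus%"')
# Postfix match:
res = milvus_client.query(COLLECTION_NAME, filter='content_ngram LIKE "%milvus"')
```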
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
issue: https://github.com/milvus-io/milvus/issues/42900
@sunby Unfortunately, it is not as easy to fix as was thought in
#43177
Update: this also handles `Inf` and `NaN` values, and the division-by-zero
case for `fp32` and `fp64`
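For reference, a small numpy sketch (illustrative only, not the patch itself) of the IEEE-754 corner cases a fix like this must consider:
```
# Illustrative numpy sketch (not the patched code) of the IEEE-754 corner
# cases for fp32/fp64 division.
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float32(1.0) / np.float32(0.0))   # inf
    print(np.float64(0.0) / np.float64(0.0))   # nan
    vals = np.array([1.0, np.inf, np.nan], dtype=np.float32)
    # One common guard: map non-finite results to a finite fallback value.
    print(np.where(np.isfinite(vals), vals, np.float32(0.0)))
```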
Signed-off-by: Alexandr Guzhva <alexanderguzhva@gmail.com>
Use metadata to get the chunked column memory size, to avoid actually
fetching cells from storage.
related: #39173
---------
Signed-off-by: shaoting-huang <shaoting.huang@zilliz.com>
issue: #43040
This patch introduces a disk file writer that supports Direct IO.
Currently, it is exclusively utilized during the QueryNode load process.
Its parameters are described below:
1. `common.diskWriteMode`
This parameter controls the write mode of the local disk, which is used
to write temporary data downloaded from remote storage.
Currently, only QueryNode uses 'common.diskWrite*' parameters. Support
for other components will be added in the future.
The options include 'direct' and 'buffered'. The default value is
'buffered'.
2. `common.diskWriteBufferSizeKb`
The disk write buffer size in KB, used only when the disk write mode is
'direct'. The default is 64 KB.
The current valid range is [4, 65536]. If the value is not aligned to 4 KB,
it is rounded up to the nearest multiple of 4 KB (see the sketch after this list).
3. `common.diskWriteNumThreads`
This parameter controls the number of writer threads used for disk write
operations. The valid range is [0, hardware_concurrency].
It is designed to limit the maximum concurrency of disk write operations
to reduce the impact on disk read performance.
For example, if you want to limit the maximum concurrency of disk write
operations to 1, you can set this parameter to 1.
The default value is 0, which means the caller will perform write
operations directly without using an additional writer thread pool.
In this case, the maximum concurrency of disk write operations is
determined by the caller's thread pool size.
These parameters can be updated during runtime.
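A quick sketch of the documented 4 KB round-up for `common.diskWriteBufferSizeKb` (illustrative Python, not the actual implementation; the clamp to the valid range is an assumption):
```
def normalize_buffer_size_kb(value_kb: int) -> int:
    # Valid range per the description is [4, 65536]; clamping is an assumption.
    value_kb = min(max(value_kb, 4), 65536)
    # Round up to the nearest multiple of 4 KB, as documented.
    return (value_kb + 3) // 4 * 4

assert normalize_buffer_size_kb(64) == 64   # default stays 64 KB
assert normalize_buffer_size_kb(10) == 12   # 10 KB -> 12 KB
```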
---------
Signed-off-by: Shawn Wang <shawn.wang@zilliz.com>
Ref #42053
This is the first PR for optimizing `LIKE` with ngram inverted index.
For now, only the VARCHAR data type is supported, and only the InnerMatch
LIKE (%xxx%) query is supported.
How to use it:
```
from pymilvus import MilvusClient, DataType

milvus_client = MilvusClient("http://localhost:19530")
schema = milvus_client.create_schema()
...
schema.add_field("content_ngram", DataType.VARCHAR, max_length=10000)
...
index_params = milvus_client.prepare_index_params()
index_params.add_index(field_name="content_ngram", index_type="NGRAM", index_name="ngram_index", min_gram=2, max_gram=3)
milvus_client.create_collection(COLLECTION_NAME, ...)
```
`min_gram` and `max_gram` control how we tokenize the documents. For
example, with min_gram=2 and max_gram=4, we tokenize each document
into 2-grams, 3-grams, and 4-grams. A query sketch follows below.
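Once the collection is loaded, an InnerMatch query can be issued like this (a sketch using the names from the snippet above):
```
# Illustrative InnerMatch LIKE query accelerated by the ngram index above.
res = milvus_client.query(
    COLLECTION_NAME,
    filter='content_ngram LIKE "%hello%"',
    output_fields=["content_ngram"],
)
```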
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
Signed-off-by: SpadeA-Tang <tangchenjie1210@gmail.com>
Fix issues in end-to-end tests:
1. **Split column groups based on schema**, rather than estimating by
average chunk row size. **Ensure column group consistency within a
segment**, to avoid errors caused by loading multiple column group
chunks simultaneously.
2. **Use sorted segmentId** when generating the stats binlog path, to
ensure consistent and correct file path resolution.
3. **Determine field IDs as follows**:
   - For multi-column column groups, retrieve the field ID list from metadata.
   - For single-column column groups, use the column group ID directly as the field ID.
related: #39173
fix: #42862
---------
Signed-off-by: shaoting-huang <shaoting.huang@zilliz.com>
Ref #42626
This patch tidies up the paths for scalar indexes, including the path for
loading an index from remote storage and the temporary path for building an index.
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
Related to #39173
`null_bitmap_data()` returns the raw pointer to the null bitmap of an Array.
After slicing, however, this bitmap is not rewritten due to the zero-copy
implementation, so the current start position may be non-zero when
FillFieldData generates the column's `valid_data` array.
This PR adds an `offset` parameter to the `FillFieldData` method and forces
all invocations to pass the correct offset into the `null_bitmap_data` pointer.
It also updates the milvus-storage commit, fixing a problem where the reader
failed to return data when the buffer size was smaller than the row group size.
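A small pyarrow sketch (Python rather than the PR's C++, purely to illustrate the zero-copy slicing behavior described above):
```
# pyarrow sketch: slicing shares the parent's validity bitmap unchanged,
# so readers must honor the slice offset when interpreting validity bits.
import pyarrow as pa

arr = pa.array([1, None, 3, None, 5])
sliced = arr.slice(2)                    # zero-copy slice
print(sliced.offset)                     # 2: start pos is non-zero
# Same underlying validity buffer, not a rewritten one:
print(arr.buffers()[0].address == sliced.buffers()[0].address)  # True
```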
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #42640
The search/query plan held a reference to the schema, which could be
destructed after a schema change. This PR makes the plan hold a shared
pointer to the schema, fixing the dangling-reference problem under
concurrent reads and schema changes.
This PR also removes the field binlog check when loading an index, since an
old segment with an old schema may lack some binlogs.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Ref https://github.com/milvus-io/milvus/issues/42148
This PR mainly enables segcore to support array of vector (read and
write, but not indexing). Currently, only float vector is supported as the
element type.
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
Signed-off-by: SpadeA-Tang <tangchenjie1210@gmail.com>
Related to #39173 #39718
In storage v2, `lack_bin_rows` cannot be used, since the field id is not the
column group id and will never match.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #39173 #41534
This PR fixes an issue where building a mem index may report a "datatype not
match" error when the collection splits fields into multiple column groups
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/41435
this PR also makes HasRawData of ChunkedSegmentSealedImpl answer based on
metadata, without needing to load the cache just to answer this simple
question.
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
issue: https://github.com/milvus-io/milvus/issues/41435
this PR also:
1. fixed the skip index for VARCHAR. Before this PR, the VARCHAR skip index
used the min/max of the entire column as the min/max of chunk 0, and
provided no min/max for the other chunks (see the sketch after this list).
2. refactored some skip index loading related code
3. partly fixed a bug in test_expr.cpp
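A minimal sketch (illustrative Python, not the segcore code) of why correct per-chunk min/max matters for a skip index:
```
# Skip-index pruning sketch: with a correct per-chunk min/max, chunks whose
# range cannot contain the predicate value are skipped without being read.
chunks = [["apple", "banana"], ["kiwi", "mango"], ["pear", "zebra"]]
chunk_minmax = [(min(c), max(c)) for c in chunks]   # per-chunk, as fixed

def chunks_to_scan(value):
    # Keep only chunks whose [min, max] range may contain `value`.
    return [i for i, (lo, hi) in enumerate(chunk_minmax) if lo <= value <= hi]

print(chunks_to_scan("mango"))   # [1]: chunks 0 and 2 are skipped
# With the old behavior (whole-column min/max attached to chunk 0 only),
# no chunk after chunk 0 could ever be pruned.
```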
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
issue: #41348
related to, and optimized by, #41347
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
Co-authored-by: Sangho Park <hoyaspark@gmail.com>
issue: https://github.com/milvus-io/milvus/issues/41435
this PR is based on https://github.com/milvus-io/milvus/pull/41436.
Improvements include:
- Lazy Load support for Storage v1
- Use low/high watermarks to control eviction (see the sketch after this list)
- Caching Layer related config changes
- Removed ChunkCache related configs and code in golang
- Added a `PinAllCells` helper method to the CacheSlot class
- Changed ValueAt, RawAt, PrimitiveRawAt to Bulk versions, to reduce
caching layer overhead
- Removed some unclear templated bulk_subscript methods
- Made CachedSearchIterator store a PinWrapper when searching on
ChunkedColumn, and removed an unused constructor.
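A minimal sketch (assumed semantics, not the actual implementation) of low/high watermark eviction: once usage exceeds the high watermark, evict entries until usage drops below the low watermark.
```
# Watermark eviction sketch; the eviction order (oldest first) is an assumption.
from collections import OrderedDict

class WatermarkCache:
    def __init__(self, low_bytes: int, high_bytes: int):
        self.low, self.high = low_bytes, high_bytes
        self.cells = OrderedDict()   # cell_id -> size, in insertion order
        self.used = 0

    def put(self, cell_id, size: int):
        self.cells[cell_id] = size
        self.used += size
        if self.used > self.high:             # crossed the high watermark
            while self.used > self.low:       # evict down to the low watermark
                _, evicted = self.cells.popitem(last=False)  # oldest entry
                self.used -= evicted

cache = WatermarkCache(low_bytes=100, high_bytes=150)
for i in range(4):
    cache.put(i, 50)                          # crosses 150 bytes on the 4th put
print(cache.used)                             # 100: back at the low watermark
```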
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
Storage v2 chunked sealed segment loading is based on the caching layer. A
cell unit in storage v2 is a Parquet row group in remote object storage,
containing all fields. Therefore, each field needs a proxy to perform that
field's operations (see the sketch below the figure).
<img width="965" alt="Screenshot 2025-04-28 at 10 59 30"
src="https://github.com/user-attachments/assets/83e93a10-3b1d-4066-ac17-b996d5650416"
/>
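A hypothetical pyarrow sketch of this per-field proxy idea (not the actual C++ code; the file name and field name are made up):
```
# A "cell" is one parquet row group holding ALL fields, so per-field reads
# go through a lightweight proxy that loads the cell and projects one column.
import pyarrow.parquet as pq

class FieldProxy:
    def __init__(self, reader: pq.ParquetFile, field_name: str):
        self.reader = reader          # shared row-group ("cell") reader
        self.field_name = field_name  # the single field this proxy serves

    def read_chunk(self, row_group: int):
        table = self.reader.read_row_group(row_group)  # whole cell
        return table.column(self.field_name)           # project one field

reader = pq.ParquetFile("segment.parquet")             # hypothetical file
chunk0 = FieldProxy(reader, "content").read_chunk(0)
```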
related: #39173
---------
Signed-off-by: shaoting-huang <shaoting.huang@zilliz.com>
Related to #39718
This PR:
- Add reopen logic for growing & sealed segments
- Lazy reopen when schema version increases
- Add FinishLoad API for loading progress
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
fix: #39755
The following is a simple benchmark inserting 1M docs where all rows are
"hello"; the latency is at the segcore level, and the CPU is a 9900K:
master: 2.62ms
this PR: 2.11ms
benchmark code:
```
TEST(TextMatch, TestPerf) {
    auto schema = GenTestSchema({}, true);
    auto seg = CreateSealedSegment(schema, empty_index_meta);
    int64_t N = 1000000;
    uint64_t seed = 19190504;
    auto raw_data = DataGen(schema, N, seed);
    // Overwrite the string column so (almost) all rows are "hello".
    auto str_col = raw_data.raw_->mutable_fields_data()
                       ->at(1)
                       .mutable_scalars()
                       ->mutable_string_data()
                       ->mutable_data();
    for (int64_t i = 0; i < N - 1; i++) {
        str_col->at(i) = "hello";
    }
    SealedLoadFieldData(raw_data, *seg);
    seg->CreateTextIndex(FieldId(101));

    auto now = std::chrono::high_resolution_clock::now();
    auto expr = GetMatchExpr(schema, "hello", OpType::TextMatch);
    auto final = ExecuteQueryExpr(expr, seg.get(), N, MAX_TIMESTAMP);
    auto end = std::chrono::high_resolution_clock::now();
    // Cast to microseconds, then report in milliseconds for precision.
    auto duration =
        std::chrono::duration_cast<std::chrono::microseconds>(end - now);
    std::cout << "TextMatch query time: " << duration.count() / 1000.0
              << "ms" << std::endl;
}
```
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>