issue: https://github.com/milvus-io/milvus/issues/30687
We store all the varchar data in one contiguous buffer and use
string_view to locate each value quickly. Because of this, using string_view.data()
directly yields a pointer to all of the remaining varchar data instead of just the intended value.
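A conceptual Python sketch of the pitfall (not the segcore C++ code; the buffer and offsets are illustrative):
```python
# All varchar values live in one contiguous buffer; each value is addressed
# by an (offset, length) pair, analogous to a string_view into the buffer.
buffer = b"applebananacherry"
views = [(0, 5), (5, 6), (11, 6)]

offset, length = views[0]
print(buffer[offset:])                 # b'applebananacherry' -> the "rest of the varchar data" bug
print(buffer[offset:offset + length])  # b'apple'             -> bounded by the view's size
```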
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
The old version of Knowhere copies the index data while loading; we
need to account for this to avoid OOM.
Knowhere provides a util function that indicates whether it will load the
index with disk. If not, we need to double the memory usage prediction
for the index data.
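A rough sketch of the sizing rule described above (the function and parameter names are hypothetical, not the Knowhere/Milvus API):
```python
def predict_index_memory(index_data_size: int, load_with_disk: bool) -> int:
    # If Knowhere loads the index with disk, there is no extra in-memory copy
    # during load; otherwise the old code path copies the index data once,
    # so we reserve roughly twice its size.
    return index_data_size if load_with_disk else 2 * index_data_size
```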
Signed-off-by: yah01 <yang.cen@zilliz.com>
See also #30651
The append operator of `std::filesystem::path` replaces the whole path when
the right-hand operand of the "/" operation is an absolute path.
In "All-in-one" mode, this caused ChunkCache to remove the original
vector data file while building the chunk cache during/after the load procedure.
This PR moves the ChunkCache path generation logic into a separate
function that checks whether the file path is absolute. If it is,
the function strips the root path prefix and returns the concatenated file path.
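Python's `pathlib` has the same join semantics, so a simplified sketch of the problem and the fix (paths are illustrative):
```python
from pathlib import PurePosixPath

cache_root = PurePosixPath("/var/lib/milvus/chunk_cache")
remote_path = "/data/segment_1/vector.bin"   # absolute in "All-in-one" mode

# Joining with an absolute path discards the left operand, just like
# std::filesystem::path::operator/ described above:
print(cache_root / remote_path)              # /data/segment_1/vector.bin

# The fix: strip the leading root so the result stays under the cache root.
print(cache_root / remote_path.lstrip("/"))  # /var/lib/milvus/chunk_cache/data/segment_1/vector.bin
```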
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #29988
This PR adds full end-to-end support for wildcard pattern matching.
Before this PR, users could only use prefix matching in their expressions,
for example `like 'prefix%'`. With this PR, more flexible patterns can be
combined (see the sketch below).
To do so, this PR makes these changes:
- 1. support regex queries both on the index and on raw data;
- 2. translate pattern matching into a regex query, so that it can be
handled by the regex query logic;
- 3. loosen the expression parsing restrictions, allowing general
pattern matching syntax.
With regex query support in the segcore backend, we can also easily add
MySQL-like `REGEXP` syntax later.
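A minimal sketch of the wildcard-to-regex translation this PR describes, assuming SQL-LIKE semantics where `%` matches any sequence and `_` a single character (the helper below is illustrative Python, not the segcore implementation):
```python
import re

def wildcard_to_regex(pattern: str) -> str:
    # '%' matches any sequence of characters, '_' matches a single character;
    # everything else is matched literally.
    parts = (".*" if c == "%" else "." if c == "_" else re.escape(c) for c in pattern)
    return "^" + "".join(parts) + "$"

# With this in place, expressions beyond prefix match become possible, e.g.
#   varchar_field like "prefix%"   ->  ^prefix.*$
#   varchar_field like "%suffix"   ->  ^.*suffix$
#   varchar_field like "%infix%"   ->  ^.*infix.*$
print(wildcard_to_regex("%infix%"))  # ^.*infix.*$
```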
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
issue: #28521, #29732
Includes:
1. list a collection's import jobs
2. create a new import job
3. get the progress of an import job
Fixes:
1. mixed-up order of dbName & collectionName #29728
2. keep trace logs the same as v1
3. support traceID
4. Azure precheck: a blob name cannot end with / #29703
---------
Signed-off-by: PowderLi <min.li@zilliz.com>
According to our benchmark, a concurrency level of 16 is enough to fully
utilize the object storage network bandwidth.
Signed-off-by: yah01 <yang.cen@zilliz.com>
Allows proactive warming up of the chunk cache. Original vector data will be
asynchronously loaded into the chunk cache during the load process. This can
significantly reduce query/search latency for a period of time after the
load, albeit with a corresponding increase in disk usage.
issue: https://github.com/milvus-io/milvus/issues/30181
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Before this, every write of index chunk data to disk involved 4 I/O operations:
- open the file
- seek to the offset
- write the data
- close the file
This PR optimizes the write path to open the file only once and write all data
continuously. It also makes loading the files from object storage concurrent.
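A conceptual before/after of the write path, in Python rather than the actual C++ code (file handling is simplified):
```python
def write_chunks_before(path, chunks):
    # Old behavior: open / seek / write / close for every chunk.
    open(path, "wb").close()          # make sure the file exists
    offset = 0
    for chunk in chunks:
        with open(path, "r+b") as f:  # open
            f.seek(offset)            # seek to the chunk's offset
            f.write(chunk)            # write
        offset += len(chunk)          # close happens when the with-block exits

def write_chunks_after(path, chunks):
    # New behavior: open once, write everything continuously.
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
```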
Signed-off-by: yah01 <yang.cen@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29020
Json cannot safely pass a value above max_int32 into an int32_t, so let Knowhere
check the value range by itself.
After this fix, pymilvus will report:
pymilvus.exceptions.MilvusException: <MilvusException: (code=65535,
message=fail to search on QueryNode 6: worker(6) query failed: => failed
to search: arithmetic overflow: param search_list_size should be at most
2147483647)>
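For illustration, a rough pymilvus-style snippet; the parameter key is taken from the error message above, and the actual client-side key and the surrounding collection setup are placeholders:
```python
# DiskANN-style search params; any value above INT32_MAX (2147483647) for
# search_list_size is now rejected by Knowhere's own range check instead of
# silently overflowing during the JSON -> int32_t conversion.
search_params = {"metric_type": "L2", "params": {"search_list_size": 2_147_483_648}}
# collection.search(data=[query_vector], anns_field="embeddings",
#                   param=search_params, limit=10)
# -> MilvusException: ... param search_list_size should be at most 2147483647
```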
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
issue: #29793
Use `DocSetCollector` instead of `TopDocsCollector`, which will avoid
scoring and sorting.
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
See also #29803
This PR:
- Add trace span for `LoadIndex` & `LoadFieldData` in segment loader
- Add `TraceCtx` parameter for `Index.Load` in segcore
- Add span for ReadFiles & Engine Load for Memory/Disk Vector index
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
When the TimeTravel functionality was previously removed, it
inadvertently affected the MVCC functionality within the system. This PR
aims to reintroduce the internal MVCC functionality as follows:
1. Add MvccTimestamp to the requests of Search/Query and the results of
Search internally.
2. When the delegator receives a Query/Search request and there is no
MVCC timestamp set in the request, set the delegator's current tsafe as
the MVCC timestamp of the request. If the request already has an MVCC
timestamp, do not modify it.
3. When the Proxy handles Search and triggers the second phase ReQuery,
divide the ReQuery into different shards and pass the MVCC timestamp to
the corresponding Query requests.
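A minimal sketch of the delegator-side rule in point 2 (plain Python pseudocode; the field and function names are illustrative, not the Go implementation):
```python
def ensure_mvcc_timestamp(request, current_tsafe: int):
    # Only fill the MVCC timestamp when the request does not carry one, so a
    # retried or re-queried request keeps seeing the same consistent snapshot.
    if getattr(request, "mvcc_timestamp", 0) == 0:
        request.mvcc_timestamp = current_tsafe
    return request
```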
issue: #29656
Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
related to: #29417
Cardinal indexes upload index files in the `Serialize` interface and throw an
exception when `Serialize` fails.
Signed-off-by: xianliang <xianliang.li@zilliz.com>
related: #25324
Search GroupBy function, used to aggregate result entities based on a
specific scalar column (see the usage sketch after this list).
Several points to mention:
1. Temporarily, for this first phase, the whole GroupBy is implemented
separately from the iterative expr framework.
2. In the long term, the GroupBy operation will be incorporated into the
iterative expr framework: https://github.com/milvus-io/milvus/pull/28166
3. This PR includes some unrelated mocked interfaces regarding alterIndex
for reasons not worth detailing. All this unassociated content will be
removed before the final PR is merged; this version of the PR is
only for review.
4. All other related details are commented in the file comparison.
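A hedged usage sketch, assuming the pymilvus client exposes a `group_by_field` search parameter for this feature; `collection`, `dim`, and the field names are placeholders:
```python
# Results are aggregated by the distinct values of the scalar field "varchar",
# so each group contributes its best-scoring entity to the result set.
results = collection.search(
    data=[[0.0] * dim],
    anns_field="embeddings",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=10,
    group_by_field="varchar",
    output_fields=["varchar"],
)
```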
Signed-off-by: MrPresent-Han <chun.han@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29230
This PR does two things:
1. adds GPU brute force;
2. limits GPU indexes to support only L2 / IP.
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
issue: #29672
the storage account needs at least the privileges of the actions
`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/*`
Signed-off-by: PowderLi <min.li@zilliz.com>
The tests need to call a private method. Milvus used `#define` to
replace `private` with `public`; the hack works, but it would break if
the include order changed.
This PR uses `friend` instead so that everything works properly.
Signed-off-by: yah01 <yang.cen@zilliz.com>
Signed-off-by: yah01 <yah2er0ne@outlook.com>
issue: #29494
1. link against the install path's libblob-chunk-manager
2. `ShouldBindWith` performs better than `ShouldBindBodyWith`
3. the middleware shouldn't repeatedly read the unrefreshed parameter
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/27704
Add an inverted index for some data types in Milvus. This index type can
save a lot of memory compared to loading all data into RAM, and it speeds up
term queries and range queries.
Supported: `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT`, `DOUBLE`, `BOOL`
and `VARCHAR`.
Not supported: `ARRAY` and `JSON`.
Note:
- The inverted index for `VARCHAR` is not designed to serve full-text
search for now. Every row is treated as a whole keyword instead of being
tokenized into multiple terms.
- The inverted index doesn't support retrieval well, so if you create an
inverted index on a field, operations that depend on the raw data
will fall back to chunk storage, which brings some performance
loss; for example, comparisons between two columns and retrieval of
output fields.
The inverted index is very easy to use.
Taking the collection below as an example:
```python
fields = [
    FieldSchema(name="pk", dtype=DataType.VARCHAR, is_primary=True, auto_id=False, max_length=100),
    FieldSchema(name="int8", dtype=DataType.INT8),
    FieldSchema(name="int16", dtype=DataType.INT16),
    FieldSchema(name="int32", dtype=DataType.INT32),
    FieldSchema(name="int64", dtype=DataType.INT64),
    FieldSchema(name="float", dtype=DataType.FLOAT),
    FieldSchema(name="double", dtype=DataType.DOUBLE),
    FieldSchema(name="bool", dtype=DataType.BOOL),
    FieldSchema(name="varchar", dtype=DataType.VARCHAR, max_length=1000),
    FieldSchema(name="random", dtype=DataType.DOUBLE),
    FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=dim),
]
schema = CollectionSchema(fields)
collection = Collection("demo", schema)
```
Then we can simply create an inverted index on a field via:
```python
index_type = "INVERTED"
collection.create_index("int8", {"index_type": index_type})
collection.create_index("int16", {"index_type": index_type})
collection.create_index("int32", {"index_type": index_type})
collection.create_index("int64", {"index_type": index_type})
collection.create_index("float", {"index_type": index_type})
collection.create_index("double", {"index_type": index_type})
collection.create_index("bool", {"index_type": index_type})
collection.create_index("varchar", {"index_type": index_type})
```
Then, term queries and range queries on the field are automatically sped up
by the inverted index:
```python
result = collection.query(expr='int64 in [1, 2, 3]', output_fields=["pk"])
result = collection.query(expr='int64 < 5', output_fields=["pk"])
result = collection.query(expr='int64 > 2997', output_fields=["pk"])
result = collection.query(expr='1 < int64 < 5', output_fields=["pk"])
```
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29230
This PR does two things for the CAGRA index:
a. the Milvus YAML config supports GPU memory settings
b. adds CAGRA parameter checks
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
Co-authored-by: yusheng.ma <yusheng.ma@zilliz.com>
Do not fill invalid ids for empty results; doing so incurs useless
memory overhead, and skipping it reduces overhead when nq and topk are large.
---------
Signed-off-by: chasingegg <chao.gao@zilliz.com>
See also #28795
The original `C.NewSegment` may panic if some condition is not met. This PR
changes the response struct to `CNewSegmentResult`, which contains
`C.CStatus` and can return the caught exception.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>