When MvccTimestamp is set, it can be used directly as the guarantee
timestamp instead of a new ts allocated by the scheduler, reducing the
waiting time when the delegator has tsafe lag.
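A minimal Go sketch of that selection logic follows; the request fields and the `allocateTS` helper are hypothetical stand-ins, not the actual Milvus code.

```go
// Hypothetical sketch of the guarantee-timestamp selection described above;
// the types and allocateTS are illustrative, not the real Milvus code.
package main

import "fmt"

type queryRequest struct {
	MvccTimestamp      uint64
	GuaranteeTimestamp uint64
}

// allocateTS stands in for the scheduler's timestamp allocator.
func allocateTS() uint64 { return 1000 }

// resolveGuaranteeTS reuses MvccTimestamp when present, so the delegator does
// not have to wait for a newly allocated ts to be covered by its tsafe.
func resolveGuaranteeTS(req *queryRequest) uint64 {
	if req.MvccTimestamp > 0 {
		return req.MvccTimestamp
	}
	return allocateTS()
}

func main() {
	req := &queryRequest{MvccTimestamp: 500}
	req.GuaranteeTimestamp = resolveGuaranteeTS(req)
	fmt.Println(req.GuaranteeTimestamp) // 500, no new allocation needed
}
```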
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/27576
# Main Goals
1. Create and describe collections with geospatial fields, enabling both
client and server to recognize and process geo fields.
2. Insert geospatial data as payload values in the insert binlog, and
print the values for verification.
3. Load segments containing geospatial data into memory.
4. Ensure query outputs can display geospatial data.
5. Support filtering on GIS functions for geospatial columns.
# Solution
1. **Add Type**: Modify the Milvus core by adding a Geospatial type in
both the C++ and Go code layers, defining the Geospatial data structure
and the corresponding interfaces.
2. **Dependency Libraries**: Introduce necessary geospatial data
processing libraries. In the C++ source code, use Conan package
management to include the GDAL library. In the Go source code, add the
go-geom library to the go.mod file.
3. **Protocol Interface**: Revise the Milvus protocol to provide
mechanisms for Geospatial message serialization and deserialization.
4. **Data Pipeline**: Facilitate interaction between the client and
proxy using the WKT format for geospatial data. The proxy will convert
all data into WKB format for downstream processing, providing column
data interfaces, segment encapsulation, segment loading, payload
writing, and cache block management (a conversion sketch follows this list).
5. **Query Operators**: Implement simple display and support for filter
queries. Initially, focus on filtering based on spatial relationships
for a single column of geospatial literal values, providing parsing and
execution for query expressions.
6. **Client Modification**: Enable the client to handle user input for
geospatial data and facilitate end-to-end testing. See the corresponding
modifications in pymilvus.
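A minimal sketch of the proxy-side conversion in step 4, assuming go-geom's `encoding/wkt` and `encoding/wkb` packages expose `Unmarshal`/`Marshal` as used here; the actual Milvus integration may differ.

```go
// Sketch of converting client-facing WKT into WKB for downstream processing.
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/twpayne/go-geom/encoding/wkb"
	"github.com/twpayne/go-geom/encoding/wkt"
)

// wktToWKB parses the WKT string received from the client and re-encodes it
// as WKB for storage and query execution.
func wktToWKB(s string) ([]byte, error) {
	g, err := wkt.Unmarshal(s)
	if err != nil {
		return nil, fmt.Errorf("invalid WKT %q: %w", s, err)
	}
	return wkb.Marshal(g, binary.LittleEndian)
}

func main() {
	b, err := wktToWKB("POINT (30 10)")
	if err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", b)
}
```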
---------
Signed-off-by: tasty-gumi <1021989072@qq.com>
issue: #36672
Expressions support filling elements through templates, which helps
reduce the overhead of parsing the elements.
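A purely illustrative sketch of the idea: the expression text keeps a placeholder while the element values travel separately, so a long literal list does not have to be re-parsed for each request. The types below are hypothetical, not the Milvus API.

```go
// Hypothetical shape of a templated expression request.
package main

import "fmt"

type templatedExpr struct {
	Expr   string                 // e.g. "id in {ids}"
	Values map[string]interface{} // placeholder name -> concrete elements
}

func main() {
	req := templatedExpr{
		Expr:   "id in {ids}",
		Values: map[string]interface{}{"ids": []int64{1, 2, 3}},
	}
	// The server can parse "id in {ids}" once and bind the element list at
	// execution time, instead of re-parsing a long literal list per request.
	fmt.Printf("%+v\n", req)
}
```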
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue: #37158
Return the GuaranteeTS so that subsequent requests follow the correct TS.
BeginTS is the current timestamp when the task is created.
The GuaranteeTS is parsed from both the consistency level and the
BeginTS, in PreExecute of the task on the Proxy.
The delegator will wait until GuaranteeTS is met.
In PostExecute of the task on the Proxy, the TS of the first iterator
request will be returned to the SDK and added to the subsequent
requests.
Hence, if the default consistency level is Eventually or Bounded, the
order of TS will be
> Guarantee TS < BeginTS
If the BeginTS were returned instead, the second request would need to
catch up, adding up to an extra 200ms of latency, resulting in something like
| Call | Latency |
| --- | --- |
| first call on `Next()` | 30ms |
| second call on `Next()` | 210ms |
| third call on `Next()` | 10ms |
| fourth call on `Next()` | 11ms |
| ... | ... |
where we expect
| Call | Latency |
| --- | --- |
| first call on `Next()` | 30ms |
| second call on `Next()` | 10ms |
| third call on `Next()` | 10ms |
| fourth call on `Next()` | 11ms |
| ... | ... |
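A hedged sketch of how such a GuaranteeTS could be derived from the consistency level, which is why it sits below BeginTS for Bounded/Eventually; the constants and the graceful-lag value are assumptions, not the exact Milvus implementation.

```go
// Illustrative derivation of GuaranteeTS from the consistency level.
package main

import "fmt"

type consistencyLevel int

const (
	strong consistencyLevel = iota
	bounded
	eventually
)

// deriveGuaranteeTS mirrors the idea in PreExecute: the weaker the consistency
// level, the smaller the guarantee timestamp the delegator has to wait for.
func deriveGuaranteeTS(level consistencyLevel, beginTS uint64) uint64 {
	const gracefulLag = 200 // illustrative lag, in the same units as beginTS
	switch level {
	case strong:
		return beginTS
	case bounded:
		if beginTS > gracefulLag {
			return beginTS - gracefulLag
		}
		return 1
	default: // eventually
		return 1
	}
}

func main() {
	beginTS := uint64(1_000_000)
	fmt.Println(deriveGuaranteeTS(bounded, beginTS) < beginTS) // true
}
```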
Signed-off-by: Patrick Weizhi Xu <weizhi.xu@zilliz.com>
issue: #32252
This PR pre-allocates FieldData for Reduce operations in the Query
chain using typeutil.PrepareResultFieldData to avoid the overhead of
dynamically growing the slice during the appendFieldData process.
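A generic sketch of the pre-allocation idea (not `typeutil.PrepareResultFieldData` itself): reserving capacity for the expected number of reduced rows avoids repeated slice growth while appending.

```go
// Illustrative pre-sizing of result columns before reduce appends into them.
package main

import "fmt"

type fieldData struct {
	FieldID int64
	Longs   []int64
}

// prepareFieldData pre-sizes the destination columns for up to `size` rows.
func prepareFieldData(src []*fieldData, size int64) []*fieldData {
	dst := make([]*fieldData, 0, len(src))
	for _, f := range src {
		dst = append(dst, &fieldData{
			FieldID: f.FieldID,
			Longs:   make([]int64, 0, size), // capacity reserved up front
		})
	}
	return dst
}

func main() {
	src := []*fieldData{{FieldID: 101}}
	dst := prepareFieldData(src, 4096)
	fmt.Println(cap(dst[0].Longs)) // 4096, no re-growth while appending
}
```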
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
related: #33137
Add has_more_result_tag for reduce at various levels to rectify
reduce_stop_for_best.
Signed-off-by: MrPresent-Han <chun.han@zilliz.com>
fix https://github.com/milvus-io/milvus/issues/32059
This PR fixes two issues:
1. offset is not handled correctly when no limit is specified.
2. reduceStopForBest does not guarantee returning `limit` results, even
when more results exist, if there is a small segment.
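A hedged sketch of the offset handling described in the first issue: even with no limit set, the reduce path still needs to drop `offset` rows. The function and constants are illustrative, not the actual reduce code.

```go
// Illustrative offset/limit trimming on reduced result ids.
package main

import "fmt"

const unlimited = -1

// applyOffsetLimit drops `offset` rows first, then trims to `limit` if set.
func applyOffsetLimit(ids []int64, offset, limit int64) []int64 {
	if offset >= int64(len(ids)) {
		return nil
	}
	ids = ids[offset:]
	if limit != unlimited && limit < int64(len(ids)) {
		ids = ids[:limit]
	}
	return ids
}

func main() {
	ids := []int64{1, 2, 3, 4, 5}
	// offset is honored even though no limit was specified
	fmt.Println(applyOffsetLimit(ids, 2, unlimited)) // [3 4 5]
}
```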
Signed-off-by: xiaofanluan <xiaofan.luan@zilliz.com>
See also #30250
This PR adds a reQuery flag to the query task. When the reQuery flag is
true, the query task skips partition name conversion and uses the
pre-calculated partitionIDs passed from the search task.
TODO: hybrid search does not have partition ID information; we shall
apply the same logic to hybrid search later.
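An illustrative sketch of the reQuery short-circuit; the task struct and helper below are hypothetical stand-ins for the proxy's query task, not the real types.

```go
// Hypothetical sketch: skip partition-name conversion when reQuery is set.
package main

import "fmt"

type queryTask struct {
	reQuery        bool
	partitionNames []string
	partitionIDs   []int64 // pre-calculated by the search task when reQuery is true
}

// convertPartitionNames stands in for the name->ID lookup that would normally
// hit the meta cache.
func convertPartitionNames(names []string) []int64 {
	ids := make([]int64, 0, len(names))
	for range names {
		ids = append(ids, 0) // placeholder lookup
	}
	return ids
}

func (t *queryTask) resolvePartitions() []int64 {
	if t.reQuery {
		// reuse the IDs handed over by the search task
		return t.partitionIDs
	}
	return convertPartitionNames(t.partitionNames)
}

func main() {
	t := &queryTask{reQuery: true, partitionIDs: []int64{100, 101}}
	fmt.Println(t.resolvePartitions()) // [100 101]
}
```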
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
When the TimeTravel functionality was previously removed, it
inadvertently affected the MVCC functionality within the system. This PR
aims to reintroduce the internal MVCC functionality as follows:
1. Add MvccTimestamp to the requests of Search/Query and the results of
Search internally.
2. When the delegator receives a Query/Search request and no MVCC
timestamp is set in the request, set the delegator's current tsafe as
the MVCC timestamp of the request. If the request already has an MVCC
timestamp, do not modify it (see the sketch after this list).
3. When the Proxy handles Search and triggers the second phase ReQuery,
divide the ReQuery into different shards and pass the MVCC timestamp to
the corresponding Query requests.
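A hedged sketch of step 2: if the incoming request carries no MVCC timestamp, the delegator stamps its current tsafe; otherwise the value is left untouched. Field and method names are illustrative, not the real delegator types.

```go
// Illustrative delegator-side defaulting of the MVCC timestamp.
package main

import "fmt"

type searchRequest struct {
	MvccTimestamp uint64
}

type delegator struct {
	tsafe uint64
}

// ensureMvccTimestamp fills in tsafe only when the request has no MVCC ts.
func (d *delegator) ensureMvccTimestamp(req *searchRequest) {
	if req.MvccTimestamp == 0 {
		req.MvccTimestamp = d.tsafe
	}
}

func main() {
	d := &delegator{tsafe: 42}

	req := &searchRequest{}
	d.ensureMvccTimestamp(req)
	fmt.Println(req.MvccTimestamp) // 42, stamped with the current tsafe

	preset := &searchRequest{MvccTimestamp: 7}
	d.ensureMvccTimestamp(preset)
	fmt.Println(preset.MvccTimestamp) // 7, left untouched
}
```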
issue: #29656
Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
See also #29113
The collection schema is crucial when performing search/query, but some
of the information is recalculated for every request.
This PR changes the schema field of the cached collection info into a
utility `schemaInfo` type that stores stable results, such as the pk
field, partitionKeyEnabled, etc., and provides a field name to ID map
for the search/query services.
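A rough sketch of the cached `schemaInfo` idea: compute the per-collection facts once when the collection info is cached and reuse them for every request. The struct fields here are illustrative, not the exact shape of the Milvus type.

```go
// Illustrative schemaInfo computed once per cached collection.
package main

import "fmt"

type fieldSchema struct {
	FieldID        int64
	Name           string
	IsPrimaryKey   bool
	IsPartitionKey bool
}

type schemaInfo struct {
	fields              []*fieldSchema
	pkField             *fieldSchema
	partitionKeyEnabled bool
	fieldNameToID       map[string]int64
}

// newSchemaInfo does the per-request work exactly once, at cache time.
func newSchemaInfo(fields []*fieldSchema) *schemaInfo {
	info := &schemaInfo{
		fields:        fields,
		fieldNameToID: make(map[string]int64, len(fields)),
	}
	for _, f := range fields {
		info.fieldNameToID[f.Name] = f.FieldID
		if f.IsPrimaryKey {
			info.pkField = f
		}
		if f.IsPartitionKey {
			info.partitionKeyEnabled = true
		}
	}
	return info
}

func main() {
	info := newSchemaInfo([]*fieldSchema{
		{FieldID: 100, Name: "id", IsPrimaryKey: true},
		{FieldID: 101, Name: "vec"},
	})
	fmt.Println(info.fieldNameToID["vec"], info.pkField.Name) // 101 id
}
```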
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Return the last error instead of combining all errors, to improve
readability and error handling.
resolve: #28572
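A minimal sketch of the change in spirit: keep only the last error from a batch of sub-operations rather than merging them all. The helper is illustrative, not the actual Milvus error utilities.

```go
// Illustrative helper returning only the most recent non-nil error.
package main

import (
	"errors"
	"fmt"
)

// lastError returns the last non-nil error, or nil if all calls succeeded.
func lastError(errs ...error) error {
	var last error
	for _, err := range errs {
		if err != nil {
			last = err
		}
	}
	return last
}

func main() {
	err := lastError(nil,
		errors.New("segment 1 not loaded"),
		errors.New("segment 2 not loaded"))
	fmt.Println(err) // segment 2 not loaded
}
```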
---------
Signed-off-by: yah01 <yah2er0ne@outlook.com>