issue: #36672
Expressions now support filling in elements through templates, which helps reduce the overhead of parsing the elements.
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue: #37156
1. The current stats version still needs to be recorded.
2. Set it to 0 when the current stats version is not found.
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
Related to #36887
The logic that removes delete records for non-hit pks does not work, because `insert_record_.contain` is broken by a logic problem.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #37205
Previously releasing growing segments could be triggered by two
conditions:
- Sealed Segment with same id is loaded
- Segment start position is before target checkpoint ts
This has a worst case: the corresponding sealed segment is compacted away while the checkpoint stays pinned by a growing L0 segment.
This PR introduces a new rule: a growing segment may be released if its segment id appears in the current target's dropped segment id list, as sketched below.
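A minimal Go sketch of the combined release rules, assuming the target exposes the sealed-segment set, checkpoint ts, and dropped id list; all types and field names here are illustrative, not the actual querynode API:

```go
package main

import "fmt"

// Illustrative types only; not the actual Milvus definitions.
type Target struct {
	SealedSegments    map[int64]bool // ids of loaded sealed segments
	CheckpointTs      uint64
	DroppedSegmentIDs map[int64]bool // new: dropped ids in the current target
}

type GrowingSegment struct {
	ID      int64
	StartTs uint64
}

// canRelease combines the two existing rules with the new dropped-list rule.
func canRelease(seg GrowingSegment, t Target) bool {
	return t.SealedSegments[seg.ID] || // rule 1: sealed twin loaded
		seg.StartTs < t.CheckpointTs || // rule 2: started before checkpoint
		t.DroppedSegmentIDs[seg.ID] // new rule: compacted away
}

func main() {
	t := Target{
		SealedSegments:    map[int64]bool{},
		CheckpointTs:      100,
		DroppedSegmentIDs: map[int64]bool{7: true},
	}
	// Worst case described above: no sealed twin, checkpoint pinned,
	// yet the segment was compacted; only the new rule releases it.
	fmt.Println(canRelease(GrowingSegment{ID: 7, StartTs: 150}, t)) // true
}
```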
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #35303, #30404
This PR changes the return type of `DeleteCodec.Deserialize` from `storage.DeleteData` to `DeltaData`, which reduces the memory usage of interface headers.
It also refines the `storage.DeltaData` methods to make them easier to use.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #36102
Previous PR #36107 added a gRPC interceptor to observe RPC stats. Using the same strategy, this PR adds a Gin middleware to observe RESTful v2 RPC stats, as sketched below.
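As a rough illustration of the middleware pattern (not the actual Milvus code; the metric sink `recordRESTful` is hypothetical), a Gin middleware can time the handler chain and record the route, status, and latency:

```go
package main

import (
	"strconv"
	"time"

	"github.com/gin-gonic/gin"
)

// statsMiddleware observes every RESTful v2 call; the real PR feeds
// Prometheus metrics, which recordRESTful stands in for here.
func statsMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		c.Next() // run the rest of the handler chain first
		recordRESTful(c.FullPath(), strconv.Itoa(c.Writer.Status()), time.Since(start))
	}
}

func recordRESTful(route, status string, elapsed time.Duration) {
	// Hypothetical sink: update counters/histograms keyed by route+status.
	_, _, _ = route, status, elapsed
}

func main() {
	r := gin.New()
	r.Use(statsMiddleware())
	r.GET("/v2/vectordb/collections/list", func(c *gin.Context) {
		c.JSON(200, gin.H{"code": 0})
	})
	_ = r.Run(":8080")
}
```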
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #36887
The `LoadDeltaLogs` API did not check memory usage. When the system is under high delete load pressure, this could result in an OOM quit.
This PR adds a resource check to `LoadDeltaLogs` actions and separates the internal deltalog loading function from the public one; a sketch follows.
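A hedged sketch of the resource-check idea, assuming the caller knows an availableMemory figure; the function and error names are illustrative:

```go
package segments

import (
	"errors"
	"fmt"
)

var errInsufficientMemory = errors.New("insufficient memory to load delta logs")

// loadDeltaLogsWithCheck estimates the deltalog footprint up front and
// rejects the load early instead of letting the process OOM-quit.
func loadDeltaLogsWithCheck(logSizes []int64, availableMemory int64) error {
	var need int64
	for _, s := range logSizes {
		need += s
	}
	if need > availableMemory {
		return fmt.Errorf("%w: need %d bytes, available %d bytes",
			errInsufficientMemory, need, availableMemory)
	}
	// ...delegate to the internal, check-free loading function...
	return nil
}
```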
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #36887
Previously, creating a new pool per request could cause goroutine leakage. This PR changes the behavior to use a singleton delete pool, sketched below.
This change also provides better concurrency control over delete memory usage.
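A minimal sketch of the singleton pattern using only the standard library (the real PR uses Milvus's internal conc.Pool; this hypothetical type just shows why the goroutine count stays bounded):

```go
package deletepool

import (
	"runtime"
	"sync"
)

// DeletePool is an illustrative fixed-worker pool.
type DeletePool struct {
	tasks chan func()
}

var (
	once     sync.Once
	instance *DeletePool
)

// GetDeletePool always returns the same pool, so no matter how many
// requests arrive, the number of worker goroutines never grows.
func GetDeletePool() *DeletePool {
	once.Do(func() {
		instance = &DeletePool{tasks: make(chan func(), 1024)}
		for i := 0; i < runtime.GOMAXPROCS(0); i++ {
			go func() {
				for task := range instance.tasks {
					task()
				}
			}()
		}
	})
	return instance
}

// Submit enqueues a delete task; the bounded queue also throttles
// delete memory usage.
func (p *DeletePool) Submit(task func()) { p.tasks <- task }
```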
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Also removes the conflict check when executing L0 compaction; exclusivity is already guaranteed by the scheduler.
See also: #37140
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Related to #37183
Utilize the proxy metacache for `HasCollection` requests: if the collection exists in the metacache, it can be deduced that the collection must exist in the system. A sketch follows.
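A sketch of the fast path under illustrative interfaces (not the actual proxy types): a cache hit proves existence, while a miss proves nothing (the entry may simply be uncached), so only the hit short-circuits the RPC:

```go
package proxy

import "context"

// Illustrative interfaces, not the actual Milvus definitions.
type MetaCache interface {
	HasCollection(name string) bool
}

type RootCoordClient interface {
	HasCollection(ctx context.Context, name string) (bool, error)
}

func hasCollection(ctx context.Context, cache MetaCache, root RootCoordClient, name string) (bool, error) {
	if cache.HasCollection(name) {
		return true, nil // cached => the collection must exist
	}
	return root.HasCollection(ctx, name) // miss => fall back to the RPC
}
```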
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #37158
Return the GuaranteeTS so that subsequent requests follow the correct TS.
BeginTS is the timestamp at which the task is created.
GuaranteeTS is derived from both the consistency level and BeginTS, in PreExecute of the task on the Proxy.
The delegator waits until GuaranteeTS is met.
In PostExecute of the task on the Proxy, the TS of the first iterator request is returned to the SDK, which adds it to subsequent requests.
Hence, if the default consistency level is Eventually or Bounded, the
order of TS will be
> Guarantee TS < BeginTS
If BeginTS were returned instead, the second request would need to catch up, adding up to 200ms of extra latency, which results in something like
| Call | Latency |
| --- | --- |
| first call on `Next()` | 30ms |
| second call on `Next()` | 210ms |
| third call on `Next()` | 10ms |
| fourth call on `Next()` | 11ms |
| ... | ... |
where we expect
| Call | Latency |
| --- | --- |
| first call on `Next()` | 30ms |
| second call on `Next()` | 10ms |
| third call on `Next()` | 10ms |
| fourth call on `Next()` | 11ms |
| ... | ... |
Signed-off-by: Patrick Weizhi Xu <weizhi.xu@zilliz.com>
Related to #37177
Follow-up to previous PR #37160.
Collection meta is not ref-counted when loading L0 segments under the `RemoteLoad` policy, which causes the collection meta to be released when many L0 segments are released.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Timeout is a bad design for long-running tasks, especially with a static timeout config. We should monitor execution progress and fail the task only when progress has been stale for a long time.
This PR is a small patch to stop DC from marking compaction tasks as timed out while still waiting for DN to finish; that design is self-contradictory. After this PR, mix and L0 compaction are no longer controlled by the DC timeout, but clustering compaction is still under timeout control.
The compaction queue capacity grew larger for the priority calculation, so timed-out compactions appear more often; and once a task times out, the queued tasks behind it time out too, and no compaction succeeds afterwards.
See also: #37108, #37015
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Related to #35303
A slice of `storage.PrimaryKey` carries extra interface overhead for each element, which may cause notable memory usage when the delta row count is large.
This PR replaces the PrimaryKey slice with a PrimaryKeys interface, saving the per-element interface cost; see the sketch below.
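To see where the overhead comes from, a small Go demonstration (with an illustrative PrimaryKey type, not the real storage package): every element of an interface slice pays a 16-byte interface header on 64-bit platforms, on top of the separately allocated value it points to, while typed storage pays 8 bytes per int64 key:

```go
package main

import (
	"fmt"
	"unsafe"
)

// Illustrative stand-ins for the storage types.
type PrimaryKey interface{ EQ(PrimaryKey) bool }

type Int64PrimaryKey struct{ Value int64 }

func (k *Int64PrimaryKey) EQ(o PrimaryKey) bool {
	other, ok := o.(*Int64PrimaryKey)
	return ok && k.Value == other.Value
}

func main() {
	var key PrimaryKey = &Int64PrimaryKey{Value: 1}
	// Each []PrimaryKey element is an interface header (type ptr + data ptr).
	fmt.Println(unsafe.Sizeof(key)) // 16 on 64-bit platforms
	// Typed storage behind a single PrimaryKeys interface costs 8 bytes
	// per key, paying the interface cost once for the whole collection.
	var raw int64
	fmt.Println(unsafe.Sizeof(raw)) // 8
}
```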
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #36621
1. Add API to access task runtime metrics, including:
- build index task
- compaction task
- import task
- balance (including load/release of segments/channels and some leader
tasks on querycoord)
- sync task
2. Add a debug mode to the webpage; set debug=true or debug=false in the URL query parameters to enable or disable it.
Signed-off-by: jaime <yun.zhang@zilliz.com>
Related to #37112
The skip-load logic used to work only when there were multiple segment load info entries in the load request. In the continuous delete case, the delegator still loads the L0 segment, which occupies a lot of memory.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
OSPP 2024 project:
https://summer-ospp.ac.cn/org/prodetail/247410235?list=org&navpage=org
Solutions:
- parser (planparserv2)
- add CallExpr in planparserv2/Plan.g4
- update parser_visitor and show_visitor
- grpc protobuf
- add CallExpr in plan.proto
- execution (`core/src/exec`)
- add `CallExpr`, `ValueExpr` and `ColumnExpr` (both logical and
physical) for function calls and function parameters
- function factory (`core/src/exec/expression/function`)
- create a global hashmap when starting milvus (see server.go)
- the global hashmap stores function signatures and their function
pointers, so the CallExpr in the execution engine can look up a function
pointer by its signature (see the sketch after this list)
- custom functions
- empty(string)
- starts_with(string, string)
- add cpp/go unittests and E2E tests
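For the registry idea, a hedged Go sketch (the real factory is C++ under `core/src/exec/expression/function`; the names below are illustrative):

```go
package function

import (
	"fmt"
	"strings"
)

// FilterFunc is an illustrative stand-in for a registered function pointer.
type FilterFunc func(args ...any) (bool, error)

// registry maps a function signature to its implementation; it is built
// once at startup and only read afterwards.
var registry = map[string]FilterFunc{}

func Register(signature string, fn FilterFunc) { registry[signature] = fn }

// Lookup is what a CallExpr would use to resolve its function pointer.
func Lookup(signature string) (FilterFunc, error) {
	fn, ok := registry[signature]
	if !ok {
		return nil, fmt.Errorf("unknown function signature: %s", signature)
	}
	return fn, nil
}

func init() {
	// The two custom functions from this PR, keyed by signature.
	Register("empty(varchar)", func(args ...any) (bool, error) {
		s, _ := args[0].(string)
		return s == "", nil
	})
	Register("starts_with(varchar,varchar)", func(args ...any) (bool, error) {
		s, _ := args[0].(string)
		prefix, _ := args[1].(string)
		return strings.HasPrefix(s, prefix), nil
	})
}
```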
closes: #36559
Signed-off-by: Yinzuo Jiang <jiangyinzuo@foxmail.com>
issue: https://github.com/milvus-io/milvus/issues/37083
We used a vector of string_view to hold data temporarily, but the real string data is released once the record batch is destructed.
Change it to a vector of string to avoid memory corruption.
---------
Signed-off-by: sunby <sunbingyi1992@gmail.com>
Related to #35303
Delta data is not needed when using the `RemoteLoad` L0 forward policy. By skipping the delta data load, memory pressure can be eased when the L0 segment size/number is large.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #37054
After querycoord restarts, segment_checker may release segments by mistake because the next target isn't ready yet.
This PR requires that segment release happen only after the next target is ready.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Related to #35303
This PR adds metrics for the querynode delegator's delete buffer, which is related to the DML quota logic.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #36887
DirectForward streaming delete causes memory usage to explode when the segment number is large. This PR adds a batching delete API and uses it for the direct forward implementation; a sketch follows.
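A sketch of the batching shape, with an illustrative forward callback and batch size (not the actual delegator API):

```go
package delegator

// forwardDeleteInBatches splits the segment list into fixed-size batches
// and forwards them sequentially, so at most one batch's worth of delete
// state is held in memory at a time.
func forwardDeleteInBatches(segmentIDs []int64, batchSize int,
	forward func(batch []int64) error) error {
	for start := 0; start < len(segmentIDs); start += batchSize {
		end := start + batchSize
		if end > len(segmentIDs) {
			end = len(segmentIDs)
		}
		if err := forward(segmentIDs[start:end]); err != nil {
			return err
		}
	}
	return nil
}
```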
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #33550
Due to a wrong implementation of UpdateCollectionNextTarget, if ReleaseCollection and UpdateCollectionNextTarget happen at the same time, the released partition's segment list may be added to the target again, and the delegator will be marked as unserviceable because of the missing segments.
This PR fixes the implementation of UpdateCollectionNextTarget.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #35576
This PR covers the cases where queryHook optimizes the search params and makes the result size insufficient: it adds a retry-search mechanism (sketched below) and related metrics for alerting.
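A sketch of the retry shape, with illustrative types (the actual queryHook and search plumbing differ): run the optimized search first, and if it returns fewer hits than topk, retry once with the optimization disabled:

```go
package querynode

import "context"

type SearchParams struct {
	TopK      int64
	Optimized bool // whether queryHook tuning is applied
}

func searchWithRetry(ctx context.Context, params SearchParams,
	search func(context.Context, SearchParams) ([]int64, error)) ([]int64, error) {
	ids, err := search(ctx, params)
	if err != nil {
		return nil, err
	}
	if int64(len(ids)) >= params.TopK || !params.Optimized {
		return ids, nil // enough results, or nothing left to relax
	}
	// Insufficient result size: retry without the hook optimization.
	// The real PR also bumps a metric here so the fallback can be alerted on.
	params.Optimized = false
	return search(ctx, params)
}
```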
---------
Signed-off-by: chasingegg <chao.gao@zilliz.com>
issue: #36686
This PR removes pre-marking segments as L2 during clustering compaction in version 2.5 and ensures compatibility with version 2.4.
The core of this change is to **ensure that the many-to-many lineage
derivation logic is correct, making sure that both the parent and child
cannot simultaneously exist in the target segment view.**
feature:
- Clustering compaction no longer marks the input segments as L2.
- Add a new field `is_invisible` to `segmentInfo`, and mark segments
that have completed clustering but have not yet built indexes as
`is_invisible`, to prevent them from being loaded prematurely.
- Do not mark the input segment as `Dropped` before the clustering
compaction is completed.
- After compaction fails, only the result segment needs to be marked as
Dropped.
compatibility:
- If the upgraded task has not failed, there are no compatibility
issues.
- If the status after the upgrade is `MetaSaved`, skip the stats task
based on whether TmpSegments is empty.
- If the failure occurs before `MetaSaved`:
  - there are no ResultSegments, and InputSegments have not been marked as
dropped yet;
  - the level of the input segments needs to be reverted to LastLevel.
- If the failure occurs after `MetaSaved`:
  - ResultSegments have already been generated and InputSegments have been
marked as Dropped; at this point, simply make the ResultSegments visible;
  - the level of the ResultSegments needs to be set to L1 (in order to
participate in mixCompaction).
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue #34117
* Refactoring
* Added a capability to perform multiple bitwise `and` and `or`
operations in a single op
* AVX2, AVX512, ARM NEON, ARM SVE backed bitwise `and`, `or`, `xor` and
`sub` ops
* more unit tests for bitset
* fixed a bug in `or_with_count` for certain bitset sizes
* fixed a bug for certain offset values in inplace operations that take
two bitsets
Signed-off-by: Alexandr Guzhva <alexanderguzhva@gmail.com>
issue: #36686
bug reason:
- The clustering compaction tasks on the datanode were never cleaned up.
- A clustering compaction task holds a mapping from clustering key to
buffer, which caused a large memory leak.
fix:
- Have datacoord clean up the tasks on the datanode when clustering
compaction finishes.
- Reset the clustering-key-to-buffer mapping on the datanode when
clustering finishes.
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue: #34553
When rootcoord triggers the graceful-stop process, it blocks until all
RPCs finish. For a create collection request, rootcoord needs to block
until datacoord finishes watching all channels, but datacoord needs to call
`rootcoord.Alloc` while watching the channels, and rootcoord no longer
responds to new requests. This causes create collection to get stuck, and
the graceful-stop process gets stuck as well.
This PR removes the `rootcoord.Alloc` call to resolve the logical deadlock
during the graceful-stop process.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #36868
If datacoord is syncing segments to a datanode when datacoord is stopped,
datacoord's stop process will be stuck until the segment sync finishes.
This PR passes a ctx to the segment sync so that it fails fast when
datacoord is stopping; a sketch follows.
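A sketch of the ctx-racing shape (syncSegments stands in for the real RPC; this is not the actual datacoord code):

```go
package datacoord

import (
	"context"
	"fmt"
)

// syncWithStopCtx races the sync against the server's stop context, so
// Stop() no longer has to wait for a slow or wedged datanode.
func syncWithStopCtx(stopCtx context.Context, syncSegments func() error) error {
	done := make(chan error, 1)
	go func() { done <- syncSegments() }()
	select {
	case err := <-done:
		return err
	case <-stopCtx.Done():
		return fmt.Errorf("abort segment sync: datacoord stopping: %w", stopCtx.Err())
	}
}
```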
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/36835
Currently, searching a BM25 output field using IP ends up with an error in
segcore that is hard to understand. Now the query node delegator returns
the error with a more useful message.
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
Related to #36887
Forwarding deletes to an L0 segment returns an error and marks the L0
segment offline, making the delegator unserviceable.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Start position and level info are missing for growing segments loaded in
the watch DML channel operation.
Level is important for metrics, and start position is crucial for the
growing-exclude logic.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
In the quota center, ignore the "DB not found" error to prevent it from
affecting the rate limiting of other databases.
/kind improvement
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
1. Support isClusteringKey in the RESTful API.
2. Throw an error if invalid 'enableDynamicField' params are passed.
3. Fix parameters in indexParams not being processed properly (related to
#36365).
Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/35853
* The BM25 Function now takes no params; k1 and b should be passed via
index params
* support BM25 full text search when the metric type is not present in the
search request
* add stricter validation of functions at collection creation time
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
This PR splits sealed segments into chunked data to avoid unnecessary
memory copies and reduce memory usage when loading segments, so that
loading can be accelerated.
To support rolling back to the previous version, we add an option
`multipleChunkedEnable`, which is false by default.
Signed-off-by: sunby <sunbingyi1992@gmail.com>
Support the new RESTful URL for retrieving/describing import progress:
`/v2/vectordb/jobs/import/describe`.
Deprecate the old URL: `/v2/vectordb/jobs/import/get_progress`.
issue: https://github.com/milvus-io/milvus/issues/36752
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #35922
Add an enable_tokenizer param to varchar fields: it must be set to true so
that a varchar field can enable match or be used as input to a BM25 function.
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
See #36550
This PR makes 2 changes:
1. Introduce a prioritization mechanism: if
`dataCoord.compaction.taskPrioritizer` is set to `level`, compaction
tasks are always executed in priority order L0 > Mix > Clustering
(sketched below).
2. `dataCoord.compaction.maxParallelTaskNum` now controls only the
parallelism of executing tasks, not the number of queued plus executing
tasks.
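A sketch of the `level` prioritizer (Task and the type labels are illustrative, not the datacoord definitions): lower value means higher priority, so a stable sort runs L0 before Mix before Clustering while preserving submission order within a level:

```go
package datacoord

import "sort"

type Task struct {
	ID   int64
	Type string // "L0", "Mix", or "Clustering"
}

var levelPriority = map[string]int{"L0": 0, "Mix": 1, "Clustering": 2}

// prioritize orders the queue as L0 > Mix > Clustering; the stable sort
// keeps FIFO order among tasks of the same level.
func prioritize(queue []Task) {
	sort.SliceStable(queue, func(i, j int) bool {
		return levelPriority[queue[i].Type] < levelPriority[queue[j].Type]
	})
}
```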
---------
Signed-off-by: Ted Xu <ted.xu@zilliz.com>
1. Optimize the import scheduling strategy:
a. Revise slot weights, calculating them based on the number of files
and segments for both import and pre-import tasks.
b. Ensure that the DN executes tasks in ascending order of task ID.
2. Add time cost metric and log.
issue: https://github.com/milvus-io/milvus/issues/36600,
https://github.com/milvus-io/milvus/issues/36518
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Directly add import segments from the meta, eliminating the dependency
on the segment manager.
issue: https://github.com/milvus-io/milvus/issues/34648
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #36464
This PR enables balancing across querynodes with different memory
capacities: a querynode with more memory capacity is assigned more
records, and the querynode with the largest difference between
assignedScore and currentScore has the highest priority to carry the new
segment. A sketch follows.
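A sketch of the headroom-based pick (Node fields are illustrative): each node's assignedScore reflects its capacity-weighted target workload, and the next segment goes to the node with the largest assignedScore minus currentScore:

```go
package balancer

type Node struct {
	ID            int64
	AssignedScore float64 // capacity-weighted target workload
	CurrentScore  float64 // workload currently carried
}

// pickNode returns the node with the most remaining headroom, so nodes
// with larger memory capacity naturally absorb more new segments.
func pickNode(nodes []Node) *Node {
	var best *Node
	for i := range nodes {
		n := &nodes[i]
		if best == nil ||
			n.AssignedScore-n.CurrentScore > best.AssignedScore-best.CurrentScore {
			best = n
		}
	}
	return best
}
```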
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Previous label casing was broken by #36107; this PR makes all inbound
labels use the label constants from the metrics package.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Native support for Google Cloud Storage using the Google Cloud Storage
libraries. Authentication is performed using GCS service account
credentials JSON.
Currently, Milvus supports Google Cloud Storage using S3-compatible APIs
via the AWS SDK. This approach has the following limitations:
1. Overhead: Translating requests between S3-compatible APIs and GCS can
introduce additional overhead.
2. Compatibility Limitations: Some features of the original S3 API may
not fully translate or work as expected with GCS.
To address these limitations, this enhancement is needed.
Related Issue: #36212
issue: #36490
After a query node changes from a delegator to a worker, the proxy should
skip this querynode's health check.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #36488
When TransferChannel/TransferSegment is called, querycoord generates and
submits a balance task to the scheduler; if a task for the segment/channel
already exists in the scheduler, the submission will fail.
To make TransferChannel/TransferSegment idempotent, we skip the submission
when the task already exists in the scheduler, as sketched below.
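A sketch of the idempotent submit shape (Scheduler is an illustrative interface keyed by segment id; the real scheduler keys on more than that):

```go
package querycoord

import "fmt"

type Scheduler interface {
	Exists(segmentID int64) bool
	Add(segmentID int64) error
}

// submitTransfer treats an already-scheduled task as success, so repeated
// TransferSegment calls converge on the same state instead of failing.
func submitTransfer(s Scheduler, segmentID int64) error {
	if s.Exists(segmentID) {
		return nil // already scheduled: skip submission
	}
	if err := s.Add(segmentID); err != nil {
		return fmt.Errorf("submit balance task for segment %d: %w", segmentID, err)
	}
	return nil
}
```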
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #36536
querycoord uses `segmentTaskDelta/channelTaskDelta` to measure a
querynode's executing workload in the scheduler, and we maintained
`segmentTaskDelta/channelTaskDelta` through `scheduler.Add(task)` and
`scheduler.remove(task)`. However, `scheduler.remove(task)` was called in
unexpected ways, which produced wrong `segmentTaskDelta/channelTaskDelta`
values, affected the segment assignment logic, and caused segment
imbalance.
This PR computes `segmentTaskDelta/channelTaskDelta` on access instead, so
the wrong values no longer have any effect.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Related to #36482
This PR reuses the `SealedSegmentIDsRetrieved` field in the
`RetrieveResults` struct to store the segment id hint.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/36182
* improved `Column.h` to make the code much more readable and
maintainable, and added detailed comments.
* fixed an issue where `ArrayColumn::NumRows()` always returns 0 when
the mmap backing storage is a file.
* removed unused `ColumnBase` constructors and unnecessary members so we
don't get confused.
* Updated `test_chunk_cache.cpp` to make the tests parameterized, testing
both mmap enabled and disabled. Added a sparse field in the test to add
coverage.
* re-enabled test `Sealed::GetSparseVectorFromChunkCache`.
* But 2 other disabled tests, `Sealed::WarmupChunkCache` and
`Sealed::GetVectorFromChunkCache`, remain disabled; there seem to be
errors. @bigsheeper PTAL.
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
issue: #35821
After a collection is loaded, increasing or decreasing the collection's
replica number used to require releasing and loading it again.
This PR supports changing the replica number dynamically without release:
after the replica number changes, Milvus loads or releases replicas
asynchronously, and the replica load status can be checked via the
getReplicas API.
Note that if more replicas are requested than the querynodes can afford,
the new replicas won't be loaded successfully until enough querynodes join.
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #36426
The old constraint required that only segments in the current target can
be balanced, which is wrong: it prevented a segment from being moved out
of a stopping node when the segment exists only in the next target.
By design, stopping balance needs to move all segments off the node via
balance tasks, so the unfair old constraint is removed.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #35859
This PR introduces two new params, toleranceFactor and checkRequestNum:
after every checkRequestNum requests have been assigned, the policy
recomputes the querynodes' workload scores.
If the score difference is less than toleranceFactor, the replica
selection policy falls back to round_robin, which reduces the average cost
to about 500ns.
If the difference is larger than toleranceFactor, the policy computes each
querynode's score and selects the target node with the smallest score for
each assignment. A sketch follows.
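A sketch of the two-tier selection (all fields are illustrative): scores are recomputed every checkRequestNum assignments, and when the observed score spread stays within toleranceFactor, the cheap round-robin path is used:

```go
package proxy

type replicaSelector struct {
	nodes           []int64
	scores          []float64 // refreshed every checkRequestNum picks
	toleranceFactor float64
	checkRequestNum int64
	counter         int64
	rrIndex         int
}

func (s *replicaSelector) pick() int64 {
	s.counter++
	if s.counter%s.checkRequestNum != 0 && s.spread() < s.toleranceFactor {
		// Balanced enough: ~500ns round-robin fast path.
		s.rrIndex = (s.rrIndex + 1) % len(s.nodes)
		return s.nodes[s.rrIndex]
	}
	// Recompute-and-pick path: choose the smallest workload score.
	best := 0
	for i, sc := range s.scores {
		if sc < s.scores[best] {
			best = i
		}
	}
	return s.nodes[best]
}

func (s *replicaSelector) spread() float64 {
	min, max := s.scores[0], s.scores[0]
	for _, sc := range s.scores {
		if sc < min {
			min = sc
		}
		if sc > max {
			max = sc
		}
	}
	return max - min
}
```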
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>