issue: #30723
This PR skips generating balance tasks when the collection's target isn't ready.
It also refines the stale-check logic in query coord's scheduler: if a channel
exists in the current or next target, the task won't be canceled.
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #30715
- Bug: a nil struct pointer was assigned to an interface, so the interface itself
was not nil. Calling a method on this nil struct pointer panicked with a
segmentation violation.
Signed-off-by: chyezh <chyezh@outlook.com>
issue: https://github.com/milvus-io/milvus/issues/30687
We store all the varchar data at contiguous addresses and use
string_view to find entries quickly. In this case, using string_view.data()
directly yields a pointer to all the remaining varchar data, not just the current value.
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
See also #30191
It turned out that in auto-id and batch-delete scenarios, the actual in-memory
size of a deltalog may be far larger than the deltalog file size. This PR adds a
configurable expansion rate for deltalog memory usage to prevent
out-of-memory panics while loading deltalogs.
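A minimal sketch of the estimate, assuming a hypothetical helper and expansion value (the actual config key and code in the PR differ):

```go
package main

import "fmt"

// estimateDeltalogMemory scales the on-disk deltalog size by a configurable
// expansion rate to approximate its in-memory footprint.
func estimateDeltalogMemory(fileSizeBytes int64, expansionRate float64) int64 {
	return int64(float64(fileSizeBytes) * expansionRate)
}

func main() {
	// e.g. a 64 MB deltalog file with a 4x expansion rate
	fmt.Println(estimateDeltalogMemory(64<<20, 4.0))
}
```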
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30150
For leader view distribution with offline nodes, a release task can
never be sent to the querynode due to the targetNode online check logic. Even
if the request is dispatched, a normal release task does not set the "force"
flag when calling `delegator.ReleaseSegment`.
This PR adds a new type of querycoord task, LeaderTask, whose
responsibility is to rectify leader view distribution.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Old versions of Knowhere copy the index data while loading; we
need to account for this to avoid OOM.
Knowhere provides a util function indicating whether it will load the
index from disk; if not, we need to double the memory usage prediction
for the index data.
Signed-off-by: yah01 <yang.cen@zilliz.com>
Related to #30191
When loading a segment, the segment loader shall check memory usage for
the current loading task. Previously L0 segments were ignored, but level-zero
segments may actually cost lots of memory.
This PR adds back the memory resource check for level-zero segment loading.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30651
The append operator of `std::filesystem::path` replaces the whole path when
the right-hand operand of the "/" operation is an absolute path.
In "all-in-one" mode, this caused ChunkCache to remove the original
vector data file when building the chunk cache during/after the load procedure.
This PR moves the ChunkCache path generation logic into a separate
function which checks whether the file path is absolute.
If the file path is absolute, it removes the root path prefix and returns the
concatenated file path.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Refine the compaction interfaces in datacoord to support compaction results
with more than one segment, in preparation for major compaction.
related: #30633
Signed-off-by: wayblink <anyang.wang@zilliz.com>
1. Increase the maxCount of L0 compaction tasks to 30.
This could reduce the number of L0 compaction tasks by 30% for
frequently generated small L0 segments, with the 64MB maximum size
unchanged, so that L0 segments accumulate more slowly and the memory
pressure they place on QueryNode decreases.
2. Add a force trigger for later manual, timely L0 compaction triggers.
See also: #30191, #30556
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
issue: #30553
when datacoord 2.2 and querycoord 2.3 coexist
during a rolling upgrade, `DescribeIndex/GetIndexInfo` returns an
`unimplemented` error.
This PR adds retry on `DescribeIndex/GetIndexInfo` to prevent load
collection failures during a rolling upgrade from Milvus 2.2 to 2.3.
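A hedged sketch of the retry idea as a plain loop; the actual PR uses the internal retry utilities, and `call` stands in for the real `DescribeIndex`/`GetIndexInfo` invocation:

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// callWithRetry retries while the remote side still runs 2.2 and answers
// codes.Unimplemented; any other error is returned immediately.
func callWithRetry(ctx context.Context, attempts int, interval time.Duration,
	call func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(ctx); err == nil {
			return nil
		}
		if status.Code(err) != codes.Unimplemented {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
	return err
}
```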
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Add flush rate control at the collection level to avoid generating too many
segments.
The default limit is 0.1 qps.
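A sketch of per-collection flush rate limiting using golang.org/x/time/rate; the map keying and wiring here are assumptions, not the PR's actual code:

```go
package main

import (
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu            sync.Mutex
	flushLimiters = map[int64]*rate.Limiter{} // collectionID -> limiter
)

// allowFlush enforces the per-collection flush rate: 0.1 qps by default,
// i.e. at most one flush every 10 seconds, with a burst of 1.
func allowFlush(collectionID int64) bool {
	mu.Lock()
	lim, ok := flushLimiters[collectionID]
	if !ok {
		lim = rate.NewLimiter(rate.Limit(0.1), 1)
		flushLimiters[collectionID] = lim
	}
	mu.Unlock()
	return lim.Allow()
}
```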
issue: #29477
Signed-off-by: chyezh <ye.zhen@zilliz.com>
See also #30571
When `compactionExecutor` stops a compaction task, the `stop` method
causes `injectDone` to be called.
However, in `executeTask`, when the `compact` method returns an error, it
also invokes `injectDone`. That is why the `Unlock of unlocked
RWMutex` panic happened.
This PR adds sync.Once to make sure that `injectDone` is called only
once. We did not remove any of the `injectDone` invocations, since removing
any of them may cause logic problems.
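A minimal sketch of the sync.Once guard; the task fields and the body of `injectDone` are illustrative, not the actual compaction task code:

```go
package main

import "sync"

type compactionTask struct {
	injectOnce sync.Once
}

// injectDone may be reached from both stop() and the executeTask error path;
// sync.Once guarantees the underlying unlock logic runs exactly once, so the
// RWMutex can never be unlocked twice.
func (t *compactionTask) injectDone() {
	t.injectOnce.Do(func() {
		// release the segment lock exactly once here
	})
}
```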
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Data writes through the rawkv API may pollute TiKV data and should be
disallowed.
We will add this check to all repos that involve metadata access.
In the longer term, we should have a metadata service that implements
access control.
relate: #30029
Signed-off-by: yiwangdr <yiwangdr@gmail.com>
This PR mainly improves two items:
1. The target observer should refresh loading status during init time. An
uninitialized loading status blocks search/query. Currently, the target
observer refreshes every 10 seconds, i.e. we'd need to wait 10s for
no reason. That's also why we constantly see the false log
"collection unloaded" upon mixcoord restarts.
2. Delete the session when the service is stopped, so that the new service
doesn't need to wait for the previous session to expire (~10s).
Item 1 is the major improvement of this PR, which should speed up init
time by 10s.
Item 2 is not a big concern in most cases, as coordinators usually shut
down after stop(). In those cases, a coordinator restart triggers a serverID
change, which in turn triggers the existing logic that deletes the expired
session. This PR only fixes the rare cases where the serverID doesn't change.
integration test:
`go test -tags dynamic -v -coverprofile=profile.out -covermode=atomic
tests/integration/coordrecovery/coord_recovery_test.go -timeout=20m`
Performance after the change:
Average init time of coordinators: 10s
Hardware: M2 Pro
Test setup: 1000 collections with 1000 rows (dim=128) per collection.
issue: #29409
Signed-off-by: yiwangdr <yiwangdr@gmail.com>
See also #27675, #30469
The segment of a sync task could be compacted while the task is running. In the
previous implementation, the sync task held only the old segment
id as a KeyLock, in which case compaction on the compacted-to segment may run
in parallel with the delta sync of this sync task.
This PR introduces sync target segment verification logic. It
checks the target segment lock it is holding before the actual syncing logic.
If this check fails, the sync task returns an `errTargetSegementNotMatch`
error and makes the manager re-fetch the current target segment id.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR changes the following to speed up L0 compaction and
prevent OOM:
1. Lower the deltabuf limit to 16MB by default, so that each L0 segment
is 4x smaller than before.
2. Add BatchProcess and use it when memory is sufficient.
3. The iterator deserializes on HasNext to avoid a massive memory peak.
4. Add tracing in spiltDelta.
See also: #30191
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
This PR decouples importing segments from the flush process by:
1. Excluding importing segments from the flush policy; this approach
avoids notifying the datanode to flush an importing segment, which may
not exist.
2. When RootCoord calls Flush, DataCoord directly sets the importing
segment state to `Flushed`.
issue: https://github.com/milvus-io/milvus/issues/30359
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
syncMgr.Block() locks the segment when executing compaction.
The previous implementation was unable to Unblock those segments when
compaction failed. If the next compaction of the same segments arrived,
it would get stuck forever and block all later compaction tasks.
This PR makes sure the compaction executor Unblocks these segments
after a failed compaction.
Apart from that, this PR also refines some logs and cleans up some
compaction/compactor code:
1. Log the segment count instead of segmentIDs to avoid logging too many
segments
2. Flush RPC returns L1 segments only, skipping L0 and L2
3. CompactionType is checked in `Compaction`; no need to check it again
inside the compactor
4. Use a lighter method to replace `getSegmentMeta`
5. Log information for L0 compaction when it encounters an error
See also: #30213
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
issue: #29988
This PR adds full end-to-end support for wildcard pattern matching.
Before this PR, users could only use prefix matching in their expressions,
for example "like 'prefix%'". With this PR, more flexible patterns can be
expressed.
To do so, this PR makes these changes:
1. support regex queries both on index and raw data;
2. translate pattern matching into a regex query, so that it can be
handled by the regex query logic;
3. loosen the limits of expression parsing, allowing general
pattern matching syntax.
With the support of regex queries in the segcore backend, we can also add
MySQL-like `REGEXP` syntax later easily.
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
See also #27606
Previously, L0 linear compaction scanned all target segment ids from
the metacache for each delta entry, which is unnecessary since
compaction target segments shall all be immutable.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30404
`PrimaryKey` is used to hold pk values for both the int64 and varchar data
types. Since it is an interface, it may occupy more memory than plain
slices when holding a group of pks.
This PR adds a `PrimaryKeys` interface for modules that need to hold
lots of primary keys.
Using this interface roughly halves the memory of a pk slice
for the Int64 pk data type and removes the per-row interface cost for
varchar as well.
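An illustrative sketch of why a typed-slice implementation saves memory; the names are modeled on the description above, not copied from the PR:

```go
package main

// PrimaryKeys holds many pks of a single type without boxing each value.
type PrimaryKeys interface {
	Len() int
}

// Int64PrimaryKeys costs 8 bytes per pk; a []PrimaryKey of boxed values costs
// two interface words per element plus a heap allocation for each pk.
type Int64PrimaryKeys struct {
	values []int64
}

func (pks *Int64PrimaryKeys) Len() int { return len(pks.values) }

func (pks *Int64PrimaryKeys) Append(vs ...int64) {
	pks.values = append(pks.values, vs...)
}
```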
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR introduces the new importv2 roles for datanode:
1. Executor: executes tasks; an import task is divided into the
following steps: read data -> hash data -> sync data;
2. Manager: manages all the tasks;
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #27675
This PR adds back the MemoryHighSyncPolicy implementation. It also changes
MinSegmentSize & CheckInterval into configurable param items.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
BloomFilterSet.current shall be reset after RollStats; otherwise it keeps
tracking the whole segment's data, causing a higher false-positive ratio
than expected.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #28521, #29732
include:
1. list a collection's import jobs
2. create a new import job
3. get the progress of an import job
fix:
1. the mixed-up order of dbName & collectionName (#29728)
2. keep trace logs the same as v1
3. support traceID
4. Azure precheck: a blob name cannot end with / (#29703)
---------
Signed-off-by: PowderLi <min.li@zilliz.com>
The proxy mistakenly returned nil when it failed to listen on the port, so the
server kept running but the service could not be connected to.
Resolves #30034
Signed-off-by: yah01 <yah2er0ne@outlook.com>
According to our benchmark, a concurrency level of 16 is enough to fully
utilize the object storage network bandwidth.
Signed-off-by: yah01 <yang.cen@zilliz.com>
issue: #29772
1. `DropPartition` only invalidates the cache related to the partition.
2. `CreateAlias` does not invalidate the cache.
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
See also #30167
After supporting OpenTelemetry tracing, we want to have traceIDs as well.
This PR adds util functions to set the traceID with a span & propagate the
traceID between different contexts.
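A sketch of such utilities using the OpenTelemetry Go API; the function names here are illustrative:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/trace"
)

// traceIDFromContext extracts the hex traceID of the current span, if any.
func traceIDFromContext(ctx context.Context) (string, bool) {
	sc := trace.SpanContextFromContext(ctx)
	if !sc.HasTraceID() {
		return "", false
	}
	return sc.TraceID().String(), true
}

// contextWithTraceID propagates the span context of src into dst, e.g. when
// crossing goroutine or task boundaries.
func contextWithTraceID(dst, src context.Context) context.Context {
	return trace.ContextWithSpanContext(dst, trace.SpanContextFromContext(src))
}
```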
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27606
`MultiRead` actually downloads files sequentially, which may consume a lot of
time during the L0 compaction download phase.
This PR makes the L0 compactor download deltalogs in parallel utilizing the
conc package & io pool.
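A sketch of the parallel download pattern using errgroup; the real PR uses the internal conc package & io pool, and `download` is a stand-in for the storage client call:

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// downloadDeltalogs fetches all paths concurrently with bounded parallelism,
// preserving result order.
func downloadDeltalogs(ctx context.Context, paths []string,
	download func(context.Context, string) ([]byte, error)) ([][]byte, error) {
	results := make([][]byte, len(paths))
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(8) // bound concurrency like an io pool would
	for i, p := range paths {
		i, p := i, p
		g.Go(func() error {
			data, err := download(gctx, p)
			if err != nil {
				return err
			}
			results[i] = data
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}
```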
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #25639
/kind improvement
When the number of vector columns increases, the number of rows per
segment will decrease. In order to reduce the impact on vector indexing
performance, it is necessary to increase the segment max limit.
If a collection has multiple vector fields with memory and disk indices
on different vector fields, the size limit after segment compaction is
the minimum of segment.maxSize and segment.diskSegmentMaxSize.
---------
Signed-off-by: xige-16 <xi.ge@zilliz.com>
issue: #30102, #30225
We should read MetricType from SearchResult,
because the query node never:
1. reads metricType from LoadMeta
2. stores it to the collection
3. sets SearchRequest.MetricType
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: #29772
The shardLeaders cache does not actively expire, so update the cache when
search/query fails.
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
After #28873, PartitionID and CollectionID should be filled in
CompactionSegmentBinlog so that DataNode can compose
the correct logPath. However, some places were left without this information
filled in, causing DataNode to download `xxx/0/0/xxxx/xxxx`
binlogs during compaction.
See also: #30213
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
See also #30273
This PR:
- Rename confusing `LoadIndexInfo` to `UpdateIndexInfo` for LocalSegment
- Use `DynamicPool` instead of `LoadPool` for `UpdateSealedSegmentIndex`
- Fix cgo call missing pool control
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Allows proactive warming up of chunk cache. Original vector data will be
asynchronously loaded into the chunk cache during the load process. It
has the potential to significantly reduce query/search latency for a
certain duration after the load, albeit with a concurrent increase in
disk usage.
issue: https://github.com/milvus-io/milvus/issues/30181
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Before this, every write of index chunk data to disk involved
4 I/O operations:
- open the file
- seek to the offset
- write the data
- close the file
This PR optimizes it to open the file only once and continuously write all data.
It also makes loading the files from object storage concurrent.
Signed-off-by: yah01 <yang.cen@zilliz.com>
See also #30250
This PR adds a requery flag to the query task. When the reQuery flag is true,
the query task skips partition name conversion and uses the pre-calculated
partitionIDs passed from the search task.
TODO: hybrid search does not have partition id information; we shall
apply the same logic to hybrid search later.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #30111
By design, segments can be marked "Flushed" only by the `FlushSegments` grpc
call from datacoord. There are two possible reasons a segment
got flushed multiple times:
- the segment is in flushing state across multiple epochs in the flowgraph
- the segment is flushed by both flushTs & FlushSegments
So this PR fixes:
- remove the state-change logic from the FlushTs policy
- change segment flushing into a three-stage process (Sealed->Flushing->Flushed),
preventing multiple Flushed=true operations
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29020
JSON cannot pass a max_int32 value into an int32_t, so let Knowhere check the
value range by itself.
After this fix, pymilvus reports:
pymilvus.exceptions.MilvusException: <MilvusException: (code=65535,
message=fail to search on QueryNode 6: worker(6) query failed: => failed
to search: arithmetic overflow: param search_list_size should be at most
2147483647)>
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
Resolves #30167
This PR adds tracing for all compactions, from task start in datacoord
through the execution procedures in datanode.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR discontinues the subscription to the MQ and instead employs
the channel checkpoint as the DML and start position for the import
segments.
issue: https://github.com/milvus-io/milvus/issues/30106
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Add a counter metric for rate-limited rpc requests, with
labels: proxy nodeID, rpc request type, and state.
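A sketch of such a counter using the Prometheus Go client; the metric and label names are illustrative:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

var rateLimitedRPCCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "milvus",
		Subsystem: "proxy",
		Name:      "rate_limited_rpc_total",
		Help:      "Total number of rate-limited RPC requests.",
	},
	[]string{"node_id", "request_type", "state"},
)

func init() {
	prometheus.MustRegister(rateLimitedRPCCounter)
}

// Usage, e.g. on a denied insert:
// rateLimitedRPCCounter.WithLabelValues("1", "Insert", "denied").Inc()
```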
issue: https://github.com/milvus-io/milvus/issues/30052
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #30150
This PR fixes the following problems:
1. the leader checker used the wrong node id when generating release tasks,
which caused the release tasks to finish immediately
2. the release request generated by the leader checker didn't set the
`force` flag, so the operation to clean the leader view on the delegator would fail
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also: #30121, #27675
This PR changes the delete buffering logic:
- the write buffer buffers inserts first
- then the delete messages are evaluated against:
  - whether the PK matches the previous Bloom filter (whose ts is always smaller)
  - whether the PK matches insert data with a smaller timestamp
- then the segment bloom filter is updated with the newly buffered pk rows
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also: #28873
When a datanode returns an error or goes offline during the GetCompactionResult
call, the compress-binlog logic panics because it uses a nil result.
This PR moves it after the CheckRPCCall error check to prevent this case.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR also fixes a bug in the L0 compactor where
L0 results would never be removed from the datanode.
See also: #30099
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Don't store logPath in meta, to reduce memory; when the service gets
segment info, generate the logPath from the logID.
See #28885
Signed-off-by: lixinguo <xinguo.li@zilliz.com>
Co-authored-by: lixinguo <xinguo.li@zilliz.com>
issue: #30000
related to: [milvus-proto
#202](https://github.com/milvus-io/milvus-proto/pull/202)
1. replace collSchema.AutoID with primaryField.AutoID
2. show `enableDynamic` & `enableDynamicField` at the same time
3. avoid a data race on access to the metacache
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: #29841
This PR fixes the leader checker using the wrong check interval, which caused
the leader checker to trigger too frequently.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #29841
If a segment is already loaded, submitting a load-segment task for it isn't
permitted, to avoid loading the segment twice. But this logic blocks the
leader checker from correcting the leader view via `LoadSegment`.
This PR removes the segment-loaded check, fixing the leader checker
being unable to submit load tasks.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
This avoids a corner case: after loading an index fails, the index can never
be loaded again because it has already been added into the segment's index map.
Signed-off-by: yah01 <yang.cen@zilliz.com>
issue: #29677, #29838
During get shard leaders, if a querynode doesn't ack the heartbeat within
10s, querycoord treats it as unavailable and won't return shard
leaders on it. But when a querynode is at full CPU usage, it can easily be
stuck for more than 10s without acking the heartbeat, leaving no shard
leader to search/query.
This PR removes the heartbeat-lag logic from get shard leaders.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Recent changes moved the level-0 segments list to a new proto field,
which meant QueryCoord couldn't see the level-0 segments; handle the
new changes.
Fixes #29907
Signed-off-by: yah01 <yang.cen@zilliz.com>
When applying dynamic config changes, we should format the value into the
proper unit.
This PR fixes updating the rate limit config with a wrong value.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also: #29650
Either the segment dml position or the channel checkpoint could be the newer
one, depending on the case. This PR makes PackLoadSegments use the newer of
the two, improving load performance in cases with lots of upserts.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also: #29657
The datanode compactor uses the estimated row number from the schema to decide
when to sync a batch of data while executing compaction. This estimate
can stray far from the actual size when the schema contains variable-length
fields (say VarChar, JSON, etc.).
This PR makes the compactor check the actual buffered data size, making it
possible to sync when the buffer is actually beyond the max binlog
size.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #29814
If the channel is not subscribed yet, the generated load-segment task will
be removed from the task scheduler, because a load-segment task needs to be
transferred to a worker node by the shard leader.
This PR skips generating load-segment tasks when the channel is not subscribed
yet.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
The delete detail log is large and hard to read when the log level is
debug. This PR changes the log to a stringer that prints only the pk range
and count.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
If a segment has more than 128 log files, dropping the segment exceeds the
etcd txn ops limit, which fails the drop-segment request.
This PR drops segment meta info by prefix, to avoid failing to drop segment
meta.
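A sketch of prefix deletion with the etcd clientv3 API; the key layout is a hypothetical example of per-segment meta keys:

```go
package main

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// dropSegmentMeta removes every key under the segment's meta prefix in a
// single Delete call, instead of one txn op per binlog (txns are capped at
// 128 ops).
func dropSegmentMeta(ctx context.Context, cli *clientv3.Client, segmentPrefix string) error {
	_, err := cli.Delete(ctx, segmentPrefix, clientv3.WithPrefix())
	return err
}
```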
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #29793
Use `DocSetCollector` instead of `TopDocsCollector`, which avoids
scoring and sorting.
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
See also #29803
This PR:
- Add trace span for `LoadIndex` & `LoadFieldData` in segment loader
- Add `TraceCtx` parameter for `Index.Load` in segcore
- Add span for ReadFiles & Engine Load for Memory/Disk Vector index
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also: #27675
The bloom filter set initialized each new BF with the fixed configured `n`.
This value is always larger than the actual batch size and causes the
generated BF to use more memory.
This PR makes the write buffer initialize the BF with a batch size estimated
from the schema & configuration values.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Growing segments contribute to this metric both while inserting and when
being put into the manager, but the current implementation inserts data before
putting the segments into the manager, which leads to double counting.
fix: #29766
Signed-off-by: yah01 <yah2er0ne@outlook.com>
See also #29803
This PR:
- Add trace span for collection/partition load
- Use TraceSpan to generate Segment/ChannelTasks when loading
- Refine BaseTask trace tag usage
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also: #29113
Add a new utility function in `pkg/util/typeutil` to pre-allocate field
data slice capacity according to the search limit. This avoids copying
the data during `AppendFieldData` when the previous slice is out of space,
and also saves CPU time under high payload.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
fix: #29757
In the previous code, `ColumnBasedInsertMsgToInsertData` added an empty field
if the insertMsg parameter did not have the column schema defined. This
may lead to unexpected behavior in caller functions.
This PR:
- Add column missing check
- Add column length check
- Generate BlobInfo for ColumnBasedInsertMsgToInsertData result
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
When the TimeTravel functionality was previously removed, it
inadvertently affected the MVCC functionality within the system. This PR
aims to reintroduce the internal MVCC functionality as follows:
1. Add MvccTimestamp to the requests of Search/Query and the results of
Search internally.
2. When the delegator receives a Query/Search request and there is no
MVCC timestamp set in the request, set the delegator's current tsafe as
the MVCC timestamp of the request. If the request already has an MVCC
timestamp, do not modify it.
3. When the Proxy handles Search and triggers the second phase ReQuery,
divide the ReQuery into different shards and pass the MVCC timestamp to
the corresponding Query requests.
issue: #29656
Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
Once a role is granted to a user, the user should automatically possess
the privilege information associated with that role.
issue: #29710
Signed-off-by: zhenshan.cao <zhenshan.cao@zilliz.com>
See also #27349
The segment level label in querynode used `Legacy` from before the segment
level was correctly passed in the Load request. This attribute is still using
legacy, so the metrics do not look right.
This PR adds a parameter to `NewSegment` and passes the correct value for each
invocation.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
This PR makes Milvus load delta logs concurrently, which should
decrease the latency of loading a segment.
/kind improvement
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
related to: #29417
Cardinal indexes upload index files in the `Serialize` interface and throw an
exception when `Serialize` fails.
Signed-off-by: xianliang <xianliang.li@zilliz.com>
issue: #29453
Syncing distribution by rpc also calls loadSegment/releaseSegment,
which may cause all kinds of concurrency cases on the same segment, such as
a concurrent load and release of one segment.
This PR adds a leader_checker that generates load/release tasks to correct
the leader view, instead of calling sync distribution by rpc.
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
related: #25324
Search GroupBy function, used to aggregate result entities based on a
specific scalar column.
Several points to mention:
1. Temporarily, the whole groupby is implemented separately from the
iterative expr framework **for the first period**.
2. In the long term, the groupBy operation will be incorporated into the
iterative expr framework: https://github.com/milvus-io/milvus/pull/28166
3. This PR includes some unrelated mocked interfaces regarding alterIndex,
for reasons not worth mentioning; all of this unrelated content will be
removed before the final PR is merged. This version of the PR is only for
review.
4. All other related details are commented in the files comparison.
Signed-off-by: MrPresent-Han <chun.han@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29230
This PR does these things:
1. add GPU brute force;
2. limit GPU indexes to support only L2 / IP;
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
issue: #29672
The storage account needs at least the privileges for the actions
`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/*`.
Signed-off-by: PowderLi <min.li@zilliz.com>
This PR defines the new import reader interfaces and implements a binlog
reader for import.
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #29699
Querycoord panicked when it tried to pop from an empty heap. We assume the
heap shall not be empty, but on some branches the candidate is never
pushed back.
This PR puts pop & push in a closure and adds a defer call to push the item
back.
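A minimal sketch of the pop-then-push-back pattern with container/heap; the candidate handling is illustrative:

```go
package main

import "container/heap"

// withTopCandidate inspects the top item and always restores it, no matter
// which branch returns inside use.
func withTopCandidate(h heap.Interface, use func(item any)) {
	if h.Len() == 0 {
		return // guard: never pop from an empty heap
	}
	item := heap.Pop(h)
	// defer guarantees the candidate is pushed back on every return path.
	defer heap.Push(h, item)
	use(item)
}
```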
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
`Allocator.Alloc` and `Allocator.AllocOne` might be invoked multiple
times if multiple blobs are set in one sync task.
This PR adds pre-fetch logic for all blobs and caches the logIDs in the sync
task, so that at most one allocator call is needed.
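A sketch of the pre-fetch idea; the allocator shape and helper are illustrative, not the PR's actual types:

```go
package main

type allocator interface {
	// Alloc reserves n consecutive IDs, returning the first one.
	Alloc(n uint32) (start int64, err error)
}

// preFetchLogIDs makes a single allocator call for all blobs in a sync task
// and hands out the cached IDs afterwards.
func preFetchLogIDs(alloc allocator, blobCount int) ([]int64, error) {
	start, err := alloc.Alloc(uint32(blobCount))
	if err != nil {
		return nil, err
	}
	ids := make([]int64, blobCount)
	for i := range ids {
		ids[i] = start + int64(i)
	}
	return ids, nil
}
```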
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #29113
The collection schema is crucial when performing search/query, but some
of the information is recalculated for every request.
This PR changes the schema field of the cached collection info into a utility
`schemaInfo` type that stores some stable results, say the pk field,
partitionKeyEnabled, etc., and provides a field-name-to-id map for the
search/query services.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #29625
This PR:
- Add a new implementation of `DeleteBuffer`: listDeleteBuffer
  - holds a slice of cacheBlocks
  - the `Put` method appends new delete data into the last block
  - when a block is full, a new block is appended to the list
- Add a `TryDiscard` method to the `DeleteBuffer` interface
  - for doubleCacheBuffer, it does nothing
  - for listDeleteBuffer, it tries to evict "old" blocks, i.e. the blocks
before the first block whose start ts is behind the provided ts (see the
sketch below)
- Add a checkpoint field to the `UpdateVersion` sync action, which shall be
used to discard old cached delete blocks
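A hedged sketch of the list-based buffer and its `TryDiscard` eviction; the field and method shapes are modeled on the description above, not the actual implementation:

```go
package main

type cacheBlock struct {
	startTs uint64
	// buffered delete entries would live here
}

type listDeleteBuffer struct {
	list []*cacheBlock
}

// TryDiscard drops leading blocks that are entirely older than ts: a head
// block may go once the following block also starts at or before ts.
func (b *listDeleteBuffer) TryDiscard(ts uint64) {
	for len(b.list) > 1 && b.list[1].startTs <= ts {
		b.list = b.list[1:]
	}
}
```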
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
Fix a logic problem introduced by #29413: the serializer tries to
merge the statslog list while level-zero segments do not have statslogs,
which results in an error being returned. `writeBufferBase` ignores this error,
but it shall only ignore `ErrSegmentNotFound`.
This PR adds a segment level check before merging the
statslog list, and adds an error type check for getSyncTask failures.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27675
For `GetRecoveryInfo` & `GetRecoveryInfoV2`, level-zero segment ids
shall be specified in the vchan info so that querycoord can re-fetch
current segment info during the watch procedure without holding all segment
info.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The tests need to call a private method. Milvus used `#define` to
replace private with public; the hack works, but it breaks if
the include order changes.
This PR uses friend declarations instead to make everything work well.
Signed-off-by: yah01 <yang.cen@zilliz.com>
Signed-off-by: yah01 <yah2er0ne@outlook.com>
issue: #29494
1. link with the install path's libblob-chunk-manager
2. the performance of `ShouldBindWith` is better than `ShouldBindBodyWith`
3. the middleware shouldn't read the unrefreshed parameter repeatedly
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/27704
Add an inverted index for some data types in Milvus. This index type can
save a lot of memory compared to loading all data into RAM, and speeds up
term queries and range queries.
Supported: `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT`, `DOUBLE`, `BOOL`
and `VARCHAR`.
Not supported: `ARRAY` and `JSON`.
Note:
- The inverted index for `VARCHAR` is not designed to serve full-text
search for now. We treat every row as a whole keyword instead of
tokenizing it into multiple terms.
- The inverted index doesn't support retrieval well, so if you create an
inverted index on a field, operations that depend on the raw data
will fall back to chunk storage, which brings some performance
loss, for example comparisons between two columns and retrieval of
output fields.
The inverted index is very easy to use.
Take the collection below as an example:
```python
fields = [
FieldSchema(name="pk", dtype=DataType.VARCHAR, is_primary=True, auto_id=False, max_length=100),
FieldSchema(name="int8", dtype=DataType.INT8),
FieldSchema(name="int16", dtype=DataType.INT16),
FieldSchema(name="int32", dtype=DataType.INT32),
FieldSchema(name="int64", dtype=DataType.INT64),
FieldSchema(name="float", dtype=DataType.FLOAT),
FieldSchema(name="double", dtype=DataType.DOUBLE),
FieldSchema(name="bool", dtype=DataType.BOOL),
FieldSchema(name="varchar", dtype=DataType.VARCHAR, max_length=1000),
FieldSchema(name="random", dtype=DataType.DOUBLE),
FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=dim),
]
schema = CollectionSchema(fields)
collection = Collection("demo", schema)
```
Then we can simply create an inverted index on a field via:
```python
index_type = "INVERTED"
collection.create_index("int8", {"index_type": index_type})
collection.create_index("int16", {"index_type": index_type})
collection.create_index("int32", {"index_type": index_type})
collection.create_index("int64", {"index_type": index_type})
collection.create_index("float", {"index_type": index_type})
collection.create_index("double", {"index_type": index_type})
collection.create_index("bool", {"index_type": index_type})
collection.create_index("varchar", {"index_type": index_type})
```
Then, term queries and range queries on the field are automatically sped up
by the inverted index:
```python
result = collection.query(expr='int64 in [1, 2, 3]', output_fields=["pk"])
result = collection.query(expr='int64 < 5', output_fields=["pk"])
result = collection.query(expr='int64 > 2997', output_fields=["pk"])
result = collection.query(expr='1 < int64 < 5', output_fields=["pk"])
```
---------
Signed-off-by: longjiquan <jiquan.long@zilliz.com>
See also #27675
For now, level-zero segments shall always be synced as `Flushed` ones.
This PR fixes the case where level-zero segments selected by policies other
than the flush-ts policy were synced in growing state.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #29575
Add a `getCollectionTarget` method, which is atomic when the scope is
`CurrentTargetFirst` or `NextTargetFirst`.
Also return an error when the executor finds no channel in the target manager.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The array type couldn't be compacted; the system could continue with the
inserted segments, but those segments could never be compacted.
Fixes #29503
---------
Signed-off-by: yah01 <yah2er0ne@outlook.com>
issue: #29523
The readable shard leader should remain the old one during channel
balance if the new shard leader is not ready.
This PR fixes query coord choosing the wrong shard leader during channel
balance.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
See also #29526
A previous PR removed flushed segment info from the request, which caused the
pipeline to fail to exclude flushed segments.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The results don't meet our requirements, and the code hasn't been
maintained for a long time.
See also: #29447
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
`WatchDmChannel` only needs growing segment info; this PR removes fetching
segmentInfos when filling the watch DML channel request.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
See also #27515
When the delegator processes delete data, it forwards the data with only the
segment id specified. When two segments have the same segment id but one is
growing and the other is sealed, the delete is applied to both
segments, which causes delete data to go out of order when a concurrent
segment load occurs.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The `AssignSegment` method defines how to assign segments to nodes, but
score_based_balance implements separate assignment logic in
`genStoppingSegmentPlan`.
This PR rewrites stopping-segment plan generation based on `AssignSegment`.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Milvus branch 2.3 added `loadType` to CollectionLoadInfo, so for
collection meta upgraded from 2.2, we should add `loadType` to
CollectionLoadInfo. This PR updates CollectionLoadInfo with `loadType`
when it meets an old-version CollectionLoadInfo.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/29230
This PR does two things for the CAGRA index:
a. the Milvus YAML config supports GPU memory settings;
b. add CAGRA-params checks.
Signed-off-by: cqy123456 <qianya.cheng@zilliz.com>
Co-authored-by: yusheng.ma <yusheng.ma@zilliz.com>
See also #27675
Since serializing the segment buffer is not related to the sync manager, it
can be done before submitting into the sync manager. This way, the pk
statistics file can be more accurate, and complex logic inside the sync
manager is reduced.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The executor always fetches the latest segment info, so we can consume
from the latest checkpoint, which can save much time when many entities have
been deleted.
Signed-off-by: yah01 <yang.cen@zilliz.com>
See also #29092
`FlushSegments` transfers only `Growing` segments to flushing; if a
segment is already in `Sealed` state before the datanode watches the channel,
the state condition for selecting the segment to be flushed is never satisfied.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
The mmap mode isn't controlled by the config anymore, so we don't
require users to set it when they enable mmap mode.
Signed-off-by: yah01 <yah2er0ne@outlook.com>
Support enabling/disabling mmap for an index; the user can alter the index's
mode via the `AlterIndex` method.
related: https://github.com/milvus-io/milvus/issues/21866
---------
Signed-off-by: yah01 <yah2er0ne@outlook.com>
Signed-off-by: yah01 <yang.cen@zilliz.com>
This pr:
1. Handles the time tick delay error when converting old error codes to
milvus errors.
2. Enhances quota error messages by eliminating "force deny" and
substituting it with "quota exceeded."
issue: https://github.com/milvus-io/milvus/issues/29288
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
See also #29332
The segment may be released before or during the request when the delegator
tries to forward the delete request. Currently, these two situations
return different error codes.
In this particular case, `ErrSegmentNotLoaded` and `ErrSegmentNotFound`
shall both be ignored, preventing the search service from being reported
unavailable by mistake.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #28898
This PR moves the `ProxyClientManager` to the util package so that its
implementation can be reused in querycoord.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
related to #28427
Add jitter in the syncStatleBuffer policy so that all segments won't flush at
the same time.
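A sketch of interval jitter; the policy wiring is assumed, not the PR's actual code:

```go
package main

import (
	"math/rand"
	"time"
)

// jitteredInterval spreads sync deadlines so the stale buffers of many
// segments do not all hit at the same instant.
func jitteredInterval(base time.Duration) time.Duration {
	span := int64(base / 2)
	if span <= 0 {
		return base
	}
	// up to +50% random jitter
	return base + time.Duration(rand.Int63n(span))
}
```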
Signed-off-by: xiaofanluan <xiaofan.luan@zilliz.com>
Do not fill invalid ids for empty results; filling them incurs useless
memory overhead. This reduces overhead when nq and topk are large.
---------
Signed-off-by: chasingegg <chao.gao@zilliz.com>
See also #29327
Change channel checkpoint metrics to unix seconds instead of checkpoint
timestamp lag value
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Many growing segments may be created in a short time, and there is no
restriction on the process; the CGO calls leave many threads behind.
related: #29282
Signed-off-by: yah01 <yah2er0ne@outlook.com>
See also #29113
- Unify the partition info refresh logic
- Avoid parsing partition names for each partition-key search request
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #28960 [milvus-proto
#212](https://github.com/milvus-io/milvus-proto/issues/212)
Add a new configuration: builtinRoles.
Users can define roles in the config file `milvus.yaml`.
For example:
1. db_ro: read privileges only, including load
2. db_rw: read and write privileges, including create/drop/rename
collection
3. db_admin: read and write privileges plus user administration
Signed-off-by: PowderLi <min.li@zilliz.com>
issue: #29004
Add a new parameter `params`, which is a map[string]float64;
for now only 2 items are valid: radius and range_filter.
Signed-off-by: PowderLi <min.li@zilliz.com>
See also #29223
This PR makes `conc.Pool` resizable by adding a `Resize` method.
It also makes the newly added datanode `MaxParallelSyncMgrTasks` config
refreshable.
---------
Signed-off-by: Congqi.Xia <congqi.xia@zilliz.com>
Since the sync manager is now global in datanode, the old
`maxParallelSyncTaskNum` does not fit the current implementation
anymore.
This PR adds a new param item for sync manager parallelism control and
enlarges the default value.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
We found that load occasionally got stuck and reviewed the logs.
The target observer seemed to stop working: the taskDispatcher
removes the task in a goroutine and modifies the task status after
committing the task into the goroutine pool, but the status update may happen
after the task has been removed, which leads to the task never being removed.
related #29086
Signed-off-by: yah01 <yang.cen@zilliz.com>
issue: #28622
After we supported balancing segments by growing segment count (#28623), if
we balance segments and channels at the same time, some segments need to be
rebalanced after channel balancing finishes.
This PR skips segment balancing when a channel needs to be balanced.
Signed-off-by: Wei Liu <wei.liu@zilliz.com>