fix some typos (#27851)

1. fix some typos in md, yaml #22893

Signed-off-by: Sheldon <chuanfeng.liu@zilliz.com>
pull/27861/head
Sheldon 2023-10-24 09:30:10 +08:00 committed by GitHub
parent 6e6de17a8c
commit 351c64b606
15 changed files with 26 additions and 26 deletions


@@ -288,7 +288,7 @@ start the cluster on your host machine
 ```shell
 $ ./build/builder.sh make install // build milvus
-$ ./build/build_image.sh // build milvus lastest docker image
+$ ./build/build_image.sh // build milvus latest docker image
 $ docker images // check if milvus latest image is ready
 REPOSITORY TAG IMAGE ID CREATED SIZE
 milvusdb/milvus latest 63c62ff7c1b7 52 minutes ago 570MB


@@ -27,7 +27,7 @@ pipeline {
     }
     stages {
-        stage('Generat Image Tag') {
+        stage('Generate Image Tag') {
            steps {
                script {
                    def date = sh(returnStdout: true, script: 'date +%Y%m%d').trim()


@@ -15,7 +15,7 @@
 # limitations under the License.
 # This is the configuration file for the etcd server.
-# Only standalone users with embeded etcd should change this file, others could just keep this file As Is.
+# Only standalone users with embedded etcd should change this file, others could just keep this file As Is.
 # All the etcd client should be added to milvus.yaml if necessary
 # Human-readable name for this member.


@@ -111,7 +111,7 @@ mq:
   pulsar:
     address: localhost # Address of pulsar
     port: 6650 # Port of Pulsar
-    webport: 80 # Web port of pulsar, if you connect direcly without proxy, should use 8080
+    webport: 80 # Web port of pulsar, if you connect directly without proxy, should use 8080
     maxMessageSize: 5242880 # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
     tenant: public
     namespace: default
@@ -346,7 +346,7 @@ dataCoord:
   balanceInterval: 360 #The interval for the channelBalancer on datacoord to check balance status
   segment:
     maxSize: 512 # Maximum size of a segment in MB
-    diskSegmentMaxSize: 2048 # Maximun size of a segment in MB for collection which has Disk index
+    diskSegmentMaxSize: 2048 # Maximum size of a segment in MB for collection which has Disk index
     sealProportion: 0.23
     # The time of the assignment expiration in ms
     # Warning! this parameter is an expert variable and closely related to data integrity. Without specific


@@ -74,7 +74,7 @@ Supposing we have segments `s1, s2, s3`, corresponding positions `p1, p2, p3`
 const filter_threshold = recovery_time
 // mp means msgPack
 for mp := seeking(p1) {
-    if mp.position.endtime < filter_threshod {
+    if mp.position.endtime < filter_threshold {
         if mp.position < p3 {
             filter s3
         }


@@ -86,7 +86,7 @@ type createCollectionTask struct {
 }
 ```
-- `PostExecute`, `CreateCollectonTask` does nothing at this phase, and return directly.
+- `PostExecute`, `CreateCollectionTask` does nothing at this phase, and return directly.
 4. `RootCoord` would wrap the `CreateCollection` request into `CreateCollectionReqTask`, and then call function `executeTask`. `executeTask` would return until the `context` is done or `CreateCollectionReqTask.Execute` is returned.
@@ -104,7 +104,7 @@ type CreateCollectionReqTask struct {
 }
 ```
-5. `CreateCollectionReqTask.Execute` would alloc `CollecitonID` and default `PartitionID`, and set `Virtual Channel` and `Physical Channel`, which are used by `MsgStream`, then write the `Collection`'s meta into `metaTable`
+5. `CreateCollectionReqTask.Execute` would alloc `CollectionID` and default `PartitionID`, and set `Virtual Channel` and `Physical Channel`, which are used by `MsgStream`, then write the `Collection`'s meta into `metaTable`
 6. After `Collection`'s meta written into `metaTable`, `Milvus` would consider this collection has been created successfully.


@@ -127,7 +127,7 @@ future work.
 For DqRequest, request and result data are written to the stream. The request data will be written to DqRequestChannel,
 and the result data will be written to DqResultChannel. Proxy will write the request of the collection into the
-DqRequestChannel, and the DqReqeustChannel will be jointly subscribed by a group of query nodes. When all query nodes
+DqRequestChannel, and the DqRequestChannel will be jointly subscribed by a group of query nodes. When all query nodes
 receive the DqRequest, they will write the query results into the DqResultChannel corresponding to the collection. As
 the consumer of the DqResultChannel, Proxy is responsible for collecting the query results and aggregating them,
 The result is then returned to the client.


@@ -31,7 +31,7 @@ ConstantExpr :=
     | UnaryArithOp ConstantExpr
 Constant :=
-    INTERGER
+    INTEGER
     | FLOAT_NUMBER
 UnaryArithOp :=
@@ -64,7 +64,7 @@ CmpOp :=
     | "=="
     | "!="
-INTERGER := 整数
+INTEGER := 整数
 FLOAT_NUM := 浮点数
 IDENTIFIER := 列名
 ```


@@ -61,7 +61,7 @@ The rules system shall follow is:
 {% note %}
-**Note:** Segments meta shall be updated *BEFORE* changing the channel checkpoint in case of datanode crashing during the prodedure. Under this premise, reconsuming from the old checkpoint shall recover all the data and duplidated entires will be discarded by segment checkpoints.
+**Note:** Segments meta shall be updated *BEFORE* changing the channel checkpoint in case of datanode crashing during the prodedure. Under this premise, reconsuming from the old checkpoint shall recover all the data and duplidated entries will be discarded by segment checkpoints.
 {% endnote %}
@@ -78,7 +78,7 @@ The winning option is to:
 **Note:** `Datacoord` reloads from metastore periodically.
 Optimization 1: reload channel checkpoint first, then reload segment meta if newly read revision is greater than in-memory one.
-Optimization 2: After `L0 segemnt` is implemented, datacoord shall refresh growing segments only.
+Optimization 2: After `L0 segment` is implemented, datacoord shall refresh growing segments only.
 {% endnote %}


@@ -2,13 +2,13 @@
 Growing segment has the following additional interfaces:
-1. `PreInsert(size) -> reseveredOffset`: serial interface, which reserves space for future insertion and returns the `reseveredOffset`.
+1. `PreInsert(size) -> reservedOffset`: serial interface, which reserves space for future insertion and returns the `reservedOffset`.
-2. `Insert(reseveredOffset, size, ...Data...)`: write `...Data...` into range `[reseveredOffset, reseveredOffset + size)`. This interface is allowed to be called concurrently.
+2. `Insert(reservedOffset, size, ...Data...)`: write `...Data...` into range `[reservedOffset, reservedOffset + size)`. This interface is allowed to be called concurrently.
    1. `...Data...` contains row_ids, timestamps two system attributes, and other columns
    2. data columns can be stored either row-based or column-based.
-3. `PreDelete & Delete(reseveredOffset, row_ids, timestamps)` is a delete interface similar to insert interface.
+3. `PreDelete & Delete(reservedOffset, row_ids, timestamps)` is a delete interface similar to insert interface.
 Growing segment stores data in the form of chunk. The number of rows in each chunk is restricted by configs.


@@ -107,7 +107,7 @@ type Session struct {
 }
 // NewSession is a helper to build Session object.
-// ServerID, ServerName, Address, Exclusive will be assigned after registeration.
+// ServerID, ServerName, Address, Exclusive will be assigned after registration.
 // metaRoot is a path in etcd to save session information.
 // etcdEndpoints is to init etcdCli when NewSession
 func NewSession(ctx context.Context, metaRoot string, etcdEndpoints []string) *Session {}


@@ -7,7 +7,7 @@
 ```go
 type Client interface {
     CreateChannels(req CreateChannelRequest) (CreateChannelResponse, error)
-    DestoryChannels(req DestoryChannelRequest) error
+    DestroyChannels(req DestroyChannelRequest) error
     DescribeChannels(req DescribeChannelRequest) (DescribeChannelResponse, error)
 }
 ```
@@ -32,10 +32,10 @@ type CreateChannelResponse struct {
 }
 ```
-- _DestoryChannels_
+- _DestroyChannels_
 ```go
-type DestoryChannelRequest struct {
+type DestroyChannelRequest struct {
     ChannelNames []string
 }
 ```


@@ -105,7 +105,7 @@ type MilvusService interface {
     CreatePartition(ctx context.Context, request *milvuspb.CreatePartitionRequest) (*commonpb.Status, error)
     DropPartition(ctx context.Context, request *milvuspb.DropPartitionRequest) (*commonpb.Status, error)
     HasPartition(ctx context.Context, request *milvuspb.HasPartitionRequest) (*milvuspb.BoolResponse, error)
-    LoadPartitions(ctx context.Context, request *milvuspb.LoadPartitonRequest) (*commonpb.Status, error)
+    LoadPartitions(ctx context.Context, request *milvuspb.LoadPartitionRequest) (*commonpb.Status, error)
     ReleasePartitions(ctx context.Context, request *milvuspb.ReleasePartitionRequest) (*commonpb.Status, error)
     GetPartitionStatistics(ctx context.Context, request *milvuspb.PartitionStatsRequest) (*milvuspb.PartitionStatsResponse, error)
     ShowPartitions(ctx context.Context, request *milvuspb.ShowPartitionRequest) (*milvuspb.ShowPartitionResponse, error)
@@ -225,7 +225,7 @@ type CollectionSchema struct {
     Fields []*FieldSchema
 }
-type LoadPartitonRequest struct {
+type LoadPartitionRequest struct {
     Base *commonpb.MsgBase
     DbID UniqueID
     CollectionID UniqueID


@@ -134,7 +134,7 @@ type PartitionStatesResponse struct {
 - _LoadPartitions_
 ```go
-type LoadPartitonRequest struct {
+type LoadPartitionRequest struct {
     Base *commonpb.MsgBase
     DbID UniqueID
     CollectionID UniqueID


@@ -78,7 +78,7 @@ certs = $dir/certs # Where the issued certs are kept
 crl_dir = $dir/crl # Where the issued crl are kept
 database = $dir/index.txt # database index file.
 #unique_subject = no # Set to 'no' to allow creation of
-# several ctificates with same subject.
+# several certificates with same subject.
 new_certs_dir = $dir/newcerts # default place for new certs.
 certificate = $dir/cacert.pem # The CA certificate
@@ -89,7 +89,7 @@ crl = $dir/crl.pem # The current CRL
 private_key = $dir/private/cakey.pem# The private key
 RANDFILE = $dir/private/.rand # private random number file
-x509_extensions = usr_cert # The extentions to add to the cert
+x509_extensions = usr_cert # The extensions to add to the cert
 # Comment out the following two lines for the "traditional"
 # (and highly broken) format.
@@ -141,7 +141,7 @@ default_bits = 2048
 default_keyfile = privkey.pem
 distinguished_name = req_distinguished_name
 attributes = req_attributes
-x509_extensions = v3_ca # The extentions to add to the self signed cert
+x509_extensions = v3_ca # The extensions to add to the self signed cert
 # Passwords for private keys if not present they will be prompted for
 # input_password = secret
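The deletions in this change set all follow one pattern: a misspelled word in a comment, doc, or identifier replaced by its correction. As a minimal sketch of catching such misspellings mechanically (the `TYPOS` table and `find_typos` helper below are illustrative only, not part of this commit; in practice a dedicated tool such as codespell does this job):

```python
# Hypothetical helper, not part of this commit: scan text for the kinds of
# misspellings fixed above, using a small hand-built wordlist.
TYPOS = {
    "lastest": "latest",
    "embeded": "embedded",
    "direcly": "directly",
    "Maximun": "Maximum",
    "Destory": "Destroy",
    "Partiton": "Partition",
    "extentions": "extensions",
    "registeration": "registration",
}

def find_typos(text: str) -> list[tuple[int, str, str]]:
    """Return (line_number, misspelling, suggestion) for each hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for bad, good in TYPOS.items():
            if bad in line:
                hits.append((lineno, bad, good))
    return hits
```

The entries are chosen so no correct spelling contains a listed misspelling as a substring, which keeps plain substring matching free of false positives on these words.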