ValidateGroupBy was returning an error if a tag did not exist,
but it appears that function was supposed to be validating that
a field name was not used as a group by field.
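A rough sketch of the intended check (all names and the signature
here are hypothetical, not the actual function):

    package query

    import "fmt"

    // validateGroupBy rejects GROUP BY dimensions that name a field
    // rather than a tag. A dimension naming a nonexistent tag is
    // allowed. Illustrative sketch only.
    func validateGroupBy(dimensions []string, fields map[string]struct{}) error {
        for _, d := range dimensions {
            if _, ok := fields[d]; ok {
                return fmt.Errorf("can not use field %q in GROUP BY clause", d)
            }
        }
        return nil
    }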
Fixes #3326
A short write has occurred and we do not have enough bytes to determine
the size of the payload. This is a corrupted record that we should drop.
Instead of panicking, log the error and advance the queue, since the
error at this location is currently unrecoverable.
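A minimal sketch of that recovery path, assuming a simple
length-prefixed record layout (the real queue format may differ):

    package queue

    import (
        "encoding/binary"
        "log"
    )

    // nextRecord decodes one length-prefixed record from buf. A tail
    // shorter than the 8-byte size header is a short write: log it and
    // consume the remainder so the queue advances past the corrupted
    // record instead of panicking. Hypothetical layout.
    func nextRecord(buf []byte) (payload []byte, advance int) {
        if len(buf) < 8 {
            log.Printf("dropping corrupted record: %d bytes is too short for a size header", len(buf))
            return nil, len(buf)
        }
        sz := binary.BigEndian.Uint64(buf[:8])
        if uint64(len(buf)-8) < sz {
            log.Printf("dropping corrupted record: header claims %d payload bytes, only %d remain", sz, len(buf)-8)
            return nil, len(buf)
        }
        return buf[8 : 8+sz], 8 + int(sz)
    }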
Fixes #3436
Newlines in a string field would cause the parser to return
the line prematurely, causing "unbalanced quotes" errors. This
change makes the line scanning aware of quoted fields so that
the whole line is returned.
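A hedged sketch of a quote-aware scan, assuming double-quoted
string fields with backslash escapes (not the actual scanner):

    package parser

    // scanLine returns the next line from buf, treating newlines inside
    // double-quoted string fields as part of the line rather than as
    // terminators.
    func scanLine(buf []byte) []byte {
        quoted := false
        for i := 0; i < len(buf); i++ {
            switch buf[i] {
            case '\\':
                i++ // skip the escaped character
            case '"':
                quoted = !quoted
            case '\n':
                if !quoted {
                    return buf[:i]
                }
            }
        }
        return buf
    }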
Fixes #3545
Previously, parseRegex could return an empty RegexLiteral
and the expression parser would put that into the right-hand
side of the expression, causing a nil-pointer panic when
the query was later executed. This change adds a check at
the parsing level and returns an error message if a regex
operator (e.g. =~) is not followed by an actual regex.
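A small sketch of that kind of parse-time check, using a
hypothetical helper rather than the parser's actual code:

    package parser

    import (
        "fmt"
        "regexp"
    )

    // parseRegexOperand compiles the text following a regex operator
    // such as =~. An empty operand is rejected here, at parse time, so
    // execution never sees a nil regex literal.
    func parseRegexOperand(lit string) (*regexp.Regexp, error) {
        if lit == "" {
            return nil, fmt.Errorf("found empty regex after operator, expected regex")
        }
        return regexp.Compile(lit)
    }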
With this change, the query engine code gathers information about
shards and tagsets by working with individual shards, collating the
information, and returning that to the client. It does not assume that any
particular shard is local, and accesses all shards through abstracted
Mappers, of which there are two types -- a Mapper type for Raw queries
and a second type for Aggregate queries. There are corresponding
Executors for each type of Mapper, but both types of Executors share the
same interface.
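A rough sketch of those shapes in Go, with illustrative names
rather than the engine's exact types:

    package query

    // Row is a placeholder for a collated query result.
    type Row struct {
        Name   string
        Values [][]interface{}
    }

    // Mapper reads data from a single shard, local or remote, behind
    // one abstraction. Separate implementations exist for raw and
    // aggregate queries.
    type Mapper interface {
        Open() error
        NextChunk() (*Row, error)
        Close() error
    }

    // Executor drives a set of Mappers, one per shard, and collates
    // their output. The raw and aggregate executors both satisfy this
    // interface.
    type Executor interface {
        Execute() <-chan *Row
    }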
Writing points that were not sorted by time could cause very high
CPU usage and increased latencies because each point inserted would
cause the in-memory cache to be re-sorted. The worst case would be
writing a large batch of N points in reverse time order, which would
invoke N sorts of the slice.
This patch keeps track of which slices need to be sorted and sorts
them once at the end. In the previous example, the N sorts become
one. There is still a pathological case that would require N/2 sorts.
For example, 10000 points split across 5000 series, where each series
has two points in reverse time order, would still incur 5000 sorts.
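A sketch of the deferred-sort idea, using hypothetical types
rather than the actual cache code:

    package cache

    import "sort"

    type point struct{ t int64 }

    // seriesCache appends points and defers sorting: a slice that
    // receives an out-of-order point is marked dirty and sorted once
    // per batch instead of once per insert.
    type seriesCache struct {
        series map[string][]point
        dirty  map[string]bool
    }

    func newSeriesCache() *seriesCache {
        return &seriesCache{series: map[string][]point{}, dirty: map[string]bool{}}
    }

    func (c *seriesCache) write(key string, p point) {
        s := c.series[key]
        if n := len(s); n > 0 && p.t < s[n-1].t {
            c.dirty[key] = true // out of order; sort later, once
        }
        c.series[key] = append(s, p)
    }

    func (c *seriesCache) endBatch() {
        for key := range c.dirty {
            s := c.series[key]
            sort.Slice(s, func(i, j int) bool { return s[i].t < s[j].t })
            delete(c.dirty, key)
        }
    }

This turns N per-insert sorts into at most one sort per dirty
series per batch, which is exactly the N/2 bound noted above.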
Fixes #3159
This commit adds a write-ahead log to the shard. Entries are cached
in memory and periodically flushed back into the index. The WAL and
the cache are both partitioned into buckets so that flushing doesn't
stop the world for as long.
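A rough sketch of the bucketing idea, with hypothetical names and
a simplified flush (not the shard's actual WAL implementation):

    package wal

    import (
        "hash/fnv"
        "sync"
    )

    const partitions = 16

    type partition struct {
        mu    sync.Mutex
        cache map[string][][]byte
    }

    // Log partitions the in-memory cache into buckets keyed by series
    // so a flush locks one bucket at a time while the others keep
    // accepting writes.
    type Log struct {
        parts [partitions]partition
    }

    func NewLog() *Log {
        l := &Log{}
        for i := range l.parts {
            l.parts[i].cache = make(map[string][][]byte)
        }
        return l
    }

    func (l *Log) bucket(key string) *partition {
        h := fnv.New32a()
        h.Write([]byte(key))
        return &l.parts[h.Sum32()%partitions]
    }

    func (l *Log) Write(key string, entry []byte) {
        p := l.bucket(key)
        p.mu.Lock()
        p.cache[key] = append(p.cache[key], entry)
        p.mu.Unlock()
    }

    // FlushPartition swaps one bucket's cache out under its lock, then
    // hands the snapshot to the index writer; only that bucket pauses
    // during the swap.
    func (l *Log) FlushPartition(i int, write func(map[string][][]byte)) {
        p := &l.parts[i]
        p.mu.Lock()
        snapshot := p.cache
        p.cache = make(map[string][][]byte)
        p.mu.Unlock()
        write(snapshot)
    }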
Field values that were out of range for their type would panic the
database on insert because the parser accepted them as valid points.
This change prevents those invalid values from being parsed and instead
returns an error.
An alternative fix considered was to handle the error and clamp the value
to the min/max value for the type. This would treat numeric range errors
slightly differently than other type errors, which might lead to confusion.
The simplest fix with the current parser would be to just convert each field
to the type at parse time. Unfortunately, this adds extra memory allocations
and lowers throughput significantly. Since out-of-range values are less common
than in-range values, some heuristics are used to determine when the more
expensive type parsing and range checking is performed. Essentially, we only
do the slow path when we cannot determine that the value is in an acceptable
type range.
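A sketch of that heuristic for integers, assuming the base-10
digits have already been scanned (illustrative, not the parser's
actual code):

    package parser

    import (
        "fmt"
        "strconv"
    )

    // checkIntRange applies the heuristic described above: a signed
    // 64-bit integer has at most 19 decimal digits, so shorter digit
    // runs are in range by construction and skip the expensive parse.
    // Only boundary-length values take the slow path.
    func checkIntRange(digits string, negative bool) error {
        if len(digits) < 19 {
            return nil // fast path: cannot overflow an int64
        }
        s := digits
        if negative {
            s = "-" + s
        }
        if _, err := strconv.ParseInt(s, 10, 64); err != nil {
            return fmt.Errorf("unable to parse integer %s: out of range", s)
        }
        return nil
    }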
Fixes #3127