Previously, tags had a `shouldCopy` flag to indicate if those tags
referenced an underlying buffer and should be copied to allow GC.
Unfortunately, this prevented copying of tags that were created
referencing the mmap, which caused segfaults.
This change removes the `shouldCopy` flag and replaces it with a
`forceCopy` argument in `CreateSeriesIfNotExists()`. This allows
the write path to indicate that tags must be cloned on insert.
This change delays Tag cloning until a new series is found, and will
only clone Tags acquired from `ParsePoints...` and not those referencing
the mmap-ed files (TSM) that are created on startup.
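A rough sketch of the new shape (illustrative only: the index type, its
fields, and the `Clone` call are assumptions, not the actual engine code):

```go
import "github.com/influxdata/influxdb/models"

// Index is a stand-in for the in-memory series index.
type Index struct {
	series map[string]models.Tags
}

// CreateSeriesIfNotExists clones tags at most once: only when the series
// is actually new and the caller asked for a copy. The write path passes
// forceCopy=true for tags parsed out of request buffers; the startup
// path passes false for tags referencing long-lived mmap-ed TSM data.
func (idx *Index) CreateSeriesIfNotExists(key string, tags models.Tags, forceCopy bool) {
	if _, ok := idx.series[key]; ok {
		return // existing series: nothing new is retained
	}
	if forceCopy {
		tags = tags.Clone() // detach from the caller's buffer
	}
	idx.series[key] = tags
}
```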
This leak seems to have been introduced in 8aa224b22d,
present in 1.1.0 and 1.1.1.
When points were parsed from HTTP payloads, their tags and fields
referred to subslices of the request body; if any tag set introduced a
new series, those tags were then stored in the in-memory series
index objects, preventing the HTTP body from being garbage collected. If
there were no new series in the payload, then the request body would be
garbage collected as usual.
Now, we clone the tags before we store them in the index. This is an
imperfect fix because the Point still holds references to the original
tags, and the Point's field iterator also refers to the payload buffer.
However, the current write code path does not retain references to the
Point or its fields; and this change will likely be obsoleted when TSI
is introduced.
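This is standard Go subslice retention; a minimal illustration
(the indexes and helper name are hypothetical):

```go
// cloneTag shows why the clone matters: a subslice keeps the entire
// backing array reachable, so storing it in the index would pin the
// whole request body in memory. Copying just the needed bytes detaches
// them from the body.
func cloneTag(body []byte) []byte {
	tag := body[10:20] // e.g. a tag value parsed in place from the body
	clone := make([]byte, len(tag))
	copy(clone, tag)
	return clone // safe to retain; body can now be garbage collected
}
```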
This change likely fixes #7827, #7810, #7778, and perhaps others.
I haven't been able to reproduce creating a point without any fields,
but we've seen points in the wild that have been marshalled with no
fields - that is, the length header for fields is uint32(0) and a
well-formed encoded time follows.
Attempting to unmarshal points via NewPointFromBytes returns
ErrPointMustHaveAField, so it seems better to fail earlier with the same
error, rather than allowing those points to be serialized in the first
place.
A string field with a trailing backslash before the closing quote would
parse incorrectly, because the quote would be seen as escaped. We have
to treat `\\` as an escape sequence within strings in order to handle
this.
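A sketch of the corrected scanning rule inside a quoted string value
(a hypothetical helper, not the actual parser code):

```go
// scanString returns the index of the unescaped closing quote in buf,
// or -1 if the string is unterminated. Consuming `\\` as a two-byte
// escape is what keeps a trailing backslash from hiding the closing
// quote.
func scanString(buf []byte) int {
	for i := 0; i < len(buf); {
		switch {
		case buf[i] == '\\' && i+1 < len(buf):
			i += 2 // escape sequence (`\\`, `\"`, ...) consumes two bytes
		case buf[i] == '"':
			return i // unescaped quote terminates the string
		default:
			i++
		}
	}
	return -1
}
```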
The `partial` tag has been added to the JSON response of a series and
the result so that a client knows when more of the series or result will
be sent in a future JSON chunk.
This helps interactive clients who don't want to wait for all of the
data to know if it is done processing the current series or the current
result. Previously, the client had to guess if the next chunk would
refer to the same result or a new result and it had to match the name
and tags of the two series to know if they were the same series. Now,
the client just needs to check the `partial` field included with the
response to know if it should expect more.
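For illustration, a chunked response might look roughly like this (the
series name, columns, and values are made up; the `partial` flags on the
series and the result are the addition described above):

```json
{
  "results": [
    {
      "series": [
        {
          "name": "cpu",
          "columns": ["time", "value"],
          "values": [["2016-01-01T00:00:00Z", 1]],
          "partial": true
        }
      ],
      "partial": true
    }
  ]
}
```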
Fixed `max-row-limit` so that it counts rows instead of results and
truncates the response when the limit is reached.
The FieldIterator is used to scan over the fields of a point, providing
access to each field's key and type while delaying parsing/decoding of
the value until it is needed.
This change uses this new type to avoid the allocation of a map for the
fields which is then thrown away as soon as the points get converted
into columns within the datastore.
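A hedged usage sketch (method names follow the `models` package of this
era and may differ in detail):

```go
import "github.com/influxdata/influxdb/models"

// sumFloats walks a point's fields without allocating the intermediate
// map; each value is decoded only when its typed accessor is called.
func sumFloats(p models.Point) (float64, error) {
	var sum float64
	it := p.FieldIterator()
	for it.Next() {
		if it.Type() != models.Float {
			continue // skip without decoding the value at all
		}
		v, err := it.FloatValue()
		if err != nil {
			return 0, err
		}
		sum += v
	}
	return sum, nil
}
```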
+ Remove a heap alloc in (Point).HashID() and (Row).tagsHash()
(According to `-gcflags -m`).
+ Direct port from the stdlib.
+ Fuzz test for equivalence to stdlib version.
+ Save one alloc per line when writing with the bulk protocol.
Over a longer period of writes, this allocation shows up quite
a bit in profiles since the slice needs to be resized frequently.
This scans the slice to count how many lines are going to be parsed
in order to pre-allocate the slice capacity. It's slightly slower,
but creates less garbage in the long run.
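Roughly, the pre-allocation looks like this (a sketch assuming
newline-separated line-protocol input; `parsePoints` is a stand-in
name):

```go
import (
	"bytes"

	"github.com/influxdata/influxdb/models"
)

func parsePoints(buf []byte) []models.Point {
	// Size the slice once up front instead of letting append grow and
	// copy it repeatedly; counting '\n' is a cheap single pass.
	n := bytes.Count(buf, []byte{'\n'}) + 1
	points := make([]models.Point, 0, n)
	// ... parse each line and append to points ...
	return points
}
```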
The v2 UDP client will attempt to split points that exceed the
configured payload size. It will only do this for points that have an
explicit timestamp, since without one the server would assign each split
fragment its own arrival time and the fragments would no longer describe
a single point.
Negative timestamps are now supported. We also now reject the two
nanosecond timestamps at the edge of the minimum time window. The first
is rejected because we need MinInt64 for some internal comparisons in
the TSM engine, and using it as the minimum time caused an underflow
when we subtracted one from it. The second is reserved as the one
minimum time that signifies the default minimum nobody can write to (so
we can implicitly rewrite the timestamp on aggregate queries) while
still honoring an explicit timestamp if the user gives us one. We aren't
able to tell whether the user provided the timestamp or it was implicit
unless those values differ.
If the default minimum time is used with an aggregate query, we rewrite
the time to be the epoch for backwards compatibility since we believe
that's more important than supporting that extra nanosecond.
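The resulting lower bound works out as follows (a sketch consistent with
the description above; the constant name is an assumption):

```go
import "math"

// math.MinInt64 stays reserved for internal TSM comparisons (so that
// subtracting one from the minimum time cannot underflow), and
// MinInt64+1 is the sentinel default minimum that nobody can write to,
// leaving MinInt64+2 as the smallest acceptable timestamp.
const MinNanoTime = int64(math.MinInt64) + 2
```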
Truncate the time interval output of the monitor service so that it
falls on even interval boundaries, rather than on every minute relative
to the service start time. This normalizes the output from the monitor
service.
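A sketch of the alignment (a hypothetical helper, not the service's
actual code):

```go
import "time"

// alignedWait sleeps until the next even multiple of interval, so the
// monitor emits on normalized boundaries regardless of start time.
func alignedWait(interval time.Duration) {
	now := time.Now()
	next := now.Truncate(interval).Add(interval)
	time.Sleep(next.Sub(now))
}
```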
The tsdb package had a substantial amount of dead code related to the
old query engine still in there. It is no longer used, so it was removed
since it was left unmaintained. There is likely more dead code remaining
that wasn't found as part of this cleanup.
influxql appears to contain dead code because of code generation, so it
is not included in this pruning.
The highest nanosecond time needs to be usable as the upper bound of an
exclusive range, so the maximum time must be one less than the maximum
number of nanoseconds representable by an int64, so that we don't lose a
point at that one time.
This previously worked in the open source version because the timestamp
used for finding a shard would be truncated by the retention policy, so
the lookup time didn't run into this edge case because it didn't rest on
the truncation boundary. Since that point didn't really belong in that
shard group and was placed there by mistake, it's best to fix this bug:
the timestamp used to create the shard group should be capable of
retrieving it.
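The corresponding upper bound (again a sketch; the constant name is an
assumption):

```go
import "math"

// The upper bound of a time range is treated as exclusive, so the
// largest storable timestamp must leave max+1 representable in an
// int64.
const MaxNanoTime = int64(math.MaxInt64) - 1
```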
The code for parsing a key out of the WAL or TSM files in the engine
was naive and didn't account for measurements with escape characters.
This change uses the proper parsing code to parse and load them
correctly.
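For example, the key `cpu\,load,host=serverA` encodes the measurement
`cpu,load`; the parser must split on the first unescaped comma, since
splitting on the first comma outright would truncate the measurement to
`cpu\`.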
Fixes #6496
Influx does not support fields with empty names or points
with no fields.
NewPoint is changed to validate that all field names are non-empty.
AddField is removed because we now require that all fields are
specified on construction.
NewPointFromBytes is changed to return an error if an unmarshalled
binary point does not have any fields.
newFieldsFromBinary is changed to prevent an infinite loop that
can arise while attempting to parse corrupt binary point data.
TestNewPointsWithBytesWithCorruptData is changed to reflect the
change in the behaviour of NewPointFromBytes.
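A brief usage sketch of the new validation (the constructor signature
follows the `models` package of this era; treat the details as
approximate):

```go
import (
	"time"

	"github.com/influxdata/influxdb/models"
)

func examples() {
	// A point with no fields is now rejected at construction time,
	// instead of surfacing later via NewPointFromBytes.
	_, err := models.NewPoint("cpu", nil, models.Fields{}, time.Now())
	// err is non-nil: a point must have at least one field

	// An empty field name is rejected as well.
	_, err = models.NewPoint("cpu", nil, models.Fields{"": 1.0}, time.Now())
	// err is non-nil: field names must be non-empty
	_ = err
}
```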
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
These tests check that NewPoint and NewPointFromBytes return an error if:
* arguments specify a point with no fields
* arguments specify a point with a field that has an empty name
These tests also check that Point.Fields() always returns (rather than
looping indefinitely), even in the presence of corrupt binary data read
with NewPointFromBytes.
These tests fail at this commit and are fixed by a subsequent commit.
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
bytes.Replace will return a copy of the input even when nothing has
been replaced. It is called in lower-level query execution and creates
garbage that isn't necessary.
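The usual fix is to check before replacing; a minimal sketch:

```go
import "bytes"

// replaceIfNeeded avoids bytes.Replace's unconditional copy when the
// input contains nothing to replace.
func replaceIfNeeded(in, old, repl []byte) []byte {
	if !bytes.Contains(in, old) {
		return in // nothing to replace: return the input unchanged
	}
	return bytes.Replace(in, old, repl, -1)
}
```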
Quotes are handled differently in the line protocol depending on when
they are encountered. Quotes in field values matter, quotes anywhere
else don't.
`scanLine()` didn't understand this difference and treated all quotes
the same as ones for tag values. This resulted in `scanLine()` reading
the wrong amount of data sometimes when quotes were involved.
This fixes #5204.
Quotes are not supposed to be significant in field keys, but are
significant in field values. The code as it stood would consider quotes
in a key to be significant, but the later parser that would unmarshal
the fields from the byte string did not consider those quotes to be
significant. This meant that for the following string:
`"a=1`
the line protocol parser would see a mismatched quote instead of a valid
input to the line protocol. But more nefariously, for the following
string:
`"a=1"=2`
the parser would ignore the first equals sign, since it is located
inside the quotation marks, and think this was a valid input. It would
then pass it on to the field parser, which would panic when it tried to
parse `1"=2` as a number.
Fixes #4076.