* feat(1.x,file_store): port metrics for merge work
This commit ports metrics around merging TSM blocks when executing a
query. These will appear in EXPLAIN ANALYZE results. The new information
records the time spent merging blocks, the number of blocks merged,
roughly the number of values merged into the first block of each
ReadBlock call, and the number of times a single ReadBlock call merges
more than 4 blocks. The multi-block merge is sequential and might
benefit from a tree merge algorithm; the latter stat helps determine
whether that engineering effort would be fruitful. A sketch of this
bookkeeping follows this entry.
* closes #26614
* chore: switch to a timer for printing durations
* chore: rename method
* fix: avoid race and use new atomic primitive
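A minimal sketch of the bookkeeping described above, assuming hypothetical counter names rather than the ported identifiers; it accumulates merge time, blocks merged, values merged, and flags ReadBlock calls that merge more than 4 blocks:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// mergeMetrics is a hypothetical container for the statistics described
// above; the field names are illustrative, not the ported identifiers.
type mergeMetrics struct {
	mergeDuration   atomic.Int64 // nanoseconds spent merging blocks
	blocksMerged    atomic.Int64 // total number of blocks merged
	valuesMerged    atomic.Int64 // rough number of values merged into the first block
	largeMergeCalls atomic.Int64 // ReadBlock calls that merged more than 4 blocks
}

// recordMerge is called once per ReadBlock call that performed merging.
func (m *mergeMetrics) recordMerge(start time.Time, blocks, values int) {
	m.mergeDuration.Add(int64(time.Since(start)))
	m.blocksMerged.Add(int64(blocks))
	m.valuesMerged.Add(int64(values))
	if blocks > 4 {
		// Sequential multi-block merges get expensive; counting these calls
		// helps decide whether a tree merge would pay off.
		m.largeMergeCalls.Add(1)
	}
}

func main() {
	var m mergeMetrics
	m.recordMerge(time.Now().Add(-2*time.Millisecond), 6, 6000)
	fmt.Println(m.blocksMerged.Load(), m.largeMergeCalls.Load())
}
```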
Without this PR, the export-parquet tool would report type conflict
errors but not name conflict errors in the schema if type conflicts were
encountered first: it stopped checking for validation issues once type
conflicts were found.
This PR changes it so that both type and name schema issues are
identified and reported in the command's output. Either still fails an
export to parquet, but in --dry-run mode the validation is a useful
tool for checking for schemas that will be a problem in Parquet or
InfluxDB 3.
* follows #25297
* feat: Adds time_format param for httpd
* This PR adds a time_format parameter that accepts the value "epoch" or "rfc3339"; the default is "epoch". Depending on the value, output timestamps are formatted as epoch or RFC3339. See the sketch after this entry.
Closes FR#615
* feat: Adding some changes
* error if incorrect param
* update naming for converter function
* combine tests
* chore: fmt'ing
* feat: A few modifications
* Rename convertToRfc3339Nano to convertToTimeFormat
* allow the time format to be passed as a parameter
* adjust error handling to use already defined timeFormats
* merge test data for test cases to reduce boilerplate
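A minimal sketch of the conversion described above, assuming a standalone converter function; the real code path is convertToTimeFormat, and the names here are illustrative:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// formatTimestamp is an illustrative stand-in for the converter described
// above; it renders a timestamp either as epoch nanoseconds (the default)
// or as RFC3339, and rejects any other time_format value.
func formatTimestamp(t time.Time, timeFormat string) (string, error) {
	switch timeFormat {
	case "", "epoch": // epoch is the default
		return strconv.FormatInt(t.UnixNano(), 10), nil
	case "rfc3339":
		return t.UTC().Format(time.RFC3339Nano), nil
	default:
		return "", fmt.Errorf("invalid time_format %q: must be \"epoch\" or \"rfc3339\"", timeFormat)
	}
}

func main() {
	ts := time.Unix(0, 1609459200000000000)
	for _, f := range []string{"epoch", "rfc3339", "unix"} {
		s, err := formatTimestamp(ts, f)
		fmt.Println(f, s, err)
	}
}
```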
Something appears to keep CurrentCompactionN() greater than 0, which makes Partition.Wait() hang forever.
Profiles taken where this issue occurs consistently show goroutines stuck in Partition.Wait():
```
-----------+-------------------------------------------------------
1 runtime.gopark
runtime.chanrecv
runtime.chanrecv1
github.com/influxdata/influxdb/tsdb/index/tsi1.(*Partition).Wait
github.com/influxdata/influxdb/tsdb/index/tsi1.(*Partition).Close
github.com/influxdata/influxdb/tsdb/index/tsi1.(*Index).close
github.com/influxdata/influxdb/tsdb/index/tsi1.(*Index).Close
github.com/influxdata/influxdb/tsdb.(*Shard).closeNoLock
github.com/influxdata/influxdb/tsdb.(*Shard).Close
github.com/influxdata/influxdb/tsdb.(*Store).DeleteShard
github.com/influxdata/influxdb/services/retention.(*Service).DeletionCheck.func3
github.com/influxdata/influxdb/services/retention.(*Service).DeletionCheck
github.com/influxdata/influxdb/services/retention.(*Service).run
github.com/influxdata/influxdb/services/retention.(*Service).Open.func1
-----------+-------------------------------------------------------
```
Deferring the compaction count cleanup inside the goroutines should prevent current compaction counts from hanging above zero.
Modify currentCompactionN to be a sync/atomic value.
Add a debug-level log within Compaction.Wait() to aid in debugging.
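A minimal sketch of the shape of this fix, using simplified stand-ins rather than the actual tsi1 code: the counter is a sync/atomic value and is decremented with a defer inside the compaction goroutine, so an early return can no longer leave the count stuck above zero.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// partition is a simplified stand-in for tsi1.Partition, illustrating the
// pattern described above; it is not the actual implementation.
type partition struct {
	currentCompactionN atomic.Int64
	wg                 sync.WaitGroup
}

func (p *partition) compact() {
	p.currentCompactionN.Add(1)
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		defer p.currentCompactionN.Add(-1) // always runs, even on early return
		// ... do compaction work ...
	}()
}

// Wait blocks until no compactions are running, mirroring the intent of
// Partition.Wait in this simplified sketch.
func (p *partition) Wait() {
	p.wg.Wait()
}

func main() {
	var p partition
	p.compact()
	p.Wait()
	fmt.Println("current compactions:", p.currentCompactionN.Load())
}
```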
Stop publishing the nightly changelog since we do not publish nightly
build artifacts. This addresses issues with dependent projects
that check the CI status for influxdb.
Closes: #26538
Stop noisy logging about phantom shards that do not belong to the
current node by checking shard ownership before logging about the
phantom shard. Note that only the logging was inaccurate; checks in
`DropShardMetaRef` implementations prevented shards that were not really
phantom shards from being accidentally removed from the metadata.
closes: #26525
Previously
```go
// StartOptHoldOff will create a hold off timer for OptimizedCompaction
func (e *Engine) StartOptHoldOff(holdOffDurationCheck time.Duration, optHoldoffStart time.Time, optHoldoffDuration time.Duration) {
	startOptHoldoff := func(dur time.Duration) {
		optHoldoffStart = time.Now()
		optHoldoffDuration = dur
		e.logger.Info("optimize compaction holdoff timer started", logger.Shard(e.id), zap.Duration("duration", optHoldoffDuration), zap.Time("endTime", optHoldoffStart.Add(optHoldoffDuration)))
	}
	startOptHoldoff(holdOffDurationCheck)
}
```
was not passing the data by reference, which meant we never actually modified the `optHoldoffDuration` and `optHoldoffStart` vars.
This PR also adds additional logging to optimized (level 5) compactions to clear up some confusion around log messages.
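A minimal sketch of the by-reference fix (signature and names illustrative only, not the actual patch): the holdoff state is passed as pointers so assignments inside the closure are visible to the caller instead of being lost on copies of the arguments.

```go
package main

import (
	"fmt"
	"time"
)

// startOptHoldOff is an illustrative version of the fix described above: the
// holdoff start time and duration are pointers, so the closure mutates the
// caller's variables rather than local copies.
func startOptHoldOff(holdOff time.Duration, optHoldoffStart *time.Time, optHoldoffDuration *time.Duration) {
	startOptHoldoff := func(dur time.Duration) {
		*optHoldoffStart = time.Now()
		*optHoldoffDuration = dur
	}
	startOptHoldoff(holdOff)
}

func main() {
	var (
		start time.Time
		dur   time.Duration
	)
	startOptHoldOff(30*time.Minute, &start, &dur)
	fmt.Println("holdoff ends at", start.Add(dur))
}
```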
Adds a command to export data into per-shard
parquet files. To do so, the command iterates
over the shards, creates a cumulative schema
over the series of a measurement (i.e. a super-set
of tags and fields) and exports the data to a
parquet file per measurement and shard.
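An illustrative sketch of building a cumulative (super-set) schema per measurement, with hypothetical types that are not the exporter's actual structures:

```go
package main

import (
	"fmt"
	"sort"
)

// measurementSchema accumulates the super-set of tag keys and field names
// seen across all series of a measurement; the names and types here are
// illustrative only.
type measurementSchema struct {
	Tags   map[string]struct{}
	Fields map[string]string // field name -> field type
}

func newMeasurementSchema() *measurementSchema {
	return &measurementSchema{Tags: map[string]struct{}{}, Fields: map[string]string{}}
}

// addSeries folds one series' tag keys and fields into the cumulative schema.
func (s *measurementSchema) addSeries(tagKeys []string, fields map[string]string) {
	for _, k := range tagKeys {
		s.Tags[k] = struct{}{}
	}
	for name, typ := range fields {
		s.Fields[name] = typ
	}
}

func main() {
	s := newMeasurementSchema()
	s.addSeries([]string{"host"}, map[string]string{"usage": "float"})
	s.addSeries([]string{"host", "region"}, map[string]string{"idle": "float"})

	tags := make([]string, 0, len(s.Tags))
	for k := range s.Tags {
		tags = append(tags, k)
	}
	sort.Strings(tags)
	fmt.Println("tags:", tags, "fields:", len(s.Fields))
}
```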
Limit the number of concurrent optimized compactions so that level compactions do not get starved. Starved level compactions result in a sudden increase in disk usage.
Add `[data] max-concurrent-optimized-compactions` for configuring the maximum number of concurrent optimized compactions; the default value is 1.
Co-authored-by: davidby-influx <dbyrne@influxdata.com>
Co-authored-by: devanbenz <devandbenz@gmail.com>
Closes: #26315
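An illustrative sketch of the limiting mechanism (not the actual compaction scheduler): a buffered channel used as a semaphore caps how many optimized compactions run at once, leaving level compactions room to make progress.

```go
package main

import (
	"fmt"
	"sync"
)

// runOptimizedCompactions illustrates the idea behind
// max-concurrent-optimized-compactions; the names are illustrative only.
func runOptimizedCompactions(groups []string, maxConcurrent int) {
	sem := make(chan struct{}, maxConcurrent) // default is 1
	var wg sync.WaitGroup
	for _, g := range groups {
		wg.Add(1)
		go func(group string) {
			defer wg.Done()
			sem <- struct{}{}        // block if the limit is reached
			defer func() { <-sem }() // release the slot when done
			fmt.Println("optimized compaction:", group)
		}(g)
	}
	wg.Wait()
}

func main() {
	runOptimizedCompactions([]string{"g1", "g2", "g3"}, 1)
}
```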
Log the reason a point was dropped,
the type of boundary violated, and the
boundary time. Also print the maximum
and minimum points (by time) that were
dropped; see the sketch after this entry.
closes https://github.com/influxdata/influxdb/issues/26252
* fix: better time formatting and additional testing
* fix: differentiate point time boundary violations
* chore: clean up switch statement
* fix: improve error messages
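An illustrative sketch of this reporting, with hypothetical names: it tracks the drop reason, the violated boundary, and the earliest and latest dropped timestamps so they can be reported together.

```go
package main

import (
	"fmt"
	"time"
)

// droppedPointTracker is an illustrative sketch, not the actual code: it
// remembers why points were dropped, which boundary was violated, and the
// earliest/latest dropped timestamps.
type droppedPointTracker struct {
	reason   string
	boundary time.Time
	min, max time.Time
	count    int
}

func (d *droppedPointTracker) drop(t time.Time, reason string, boundary time.Time) {
	d.count++
	d.reason, d.boundary = reason, boundary
	if d.min.IsZero() || t.Before(d.min) {
		d.min = t
	}
	if t.After(d.max) {
		d.max = t
	}
}

func (d *droppedPointTracker) summary() string {
	return fmt.Sprintf("dropped %d points: %s (boundary %s); earliest %s, latest %s",
		d.count, d.reason, d.boundary.Format(time.RFC3339),
		d.min.Format(time.RFC3339), d.max.Format(time.RFC3339))
}

func main() {
	var d droppedPointTracker
	boundary := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	d.drop(boundary.Add(-time.Hour), "point before retention boundary", boundary)
	d.drop(boundary.Add(-2*time.Hour), "point before retention boundary", boundary)
	fmt.Println(d.summary())
}
```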
PlanOptimize is being checked far too frequently. This PR is the simplest change that can be made to ensure that PlanOptimize is not run too often. To reduce the frequency I've added a lastWrite parameter to PlanOptimize and added an additional test that mocks the edge case seen out in the wild that led to this PR.
Previously, the test cases for PlanOptimize did not check whether certain cases would also be picked up by Plan. I've adjusted a few of the existing test cases after modifying Plan and PlanOptimize to have the same lastWrite time.
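A minimal sketch of a lastWrite gate, assuming a cold-duration threshold; the names and threshold are illustrative, not the actual planner code.

```go
package main

import (
	"fmt"
	"time"
)

// shouldPlanOptimize sketches the idea of gating optimized planning on the
// shard's last write time, so PlanOptimize is skipped while the shard is
// still being written to. Hypothetical helper, not the planner itself.
func shouldPlanOptimize(lastWrite time.Time, coldDuration time.Duration) bool {
	return time.Since(lastWrite) >= coldDuration
}

func main() {
	lastWrite := time.Now().Add(-30 * time.Minute)
	fmt.Println(shouldPlanOptimize(lastWrite, time.Hour))      // false: shard written recently
	fmt.Println(shouldPlanOptimize(lastWrite, 10*time.Minute)) // true: shard has gone cold
}
```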
* chore(ci): push artifacts to public bucket (#25435)
Clean cherry-pick of #25435 to master-1.x.
(cherry picked from commit ca80b243ed)
* chore: port #24491 to master-1.x
Port a portion of #24491 that was not included in previous cherry-picks to master-1.x
Multiple subqueries in a FROM clause are
syntactically invalid, but they caused a
panic instead of returning an error. This
corrects that problem.
closes https://github.com/influxdata/influxdb/issues/26139
* feat: Add CompactPointsPerBlock config opt
This PR adds an additional influxd parameter, CompactPointsPerBlock, and
adjusts DefaultAggressiveMaxPointsPerBlock to 10,000. We had discovered
that with the points per block set to 100,000, compacted TSM files were
growing in size; after lowering the points per block to 10,000 the file
sizes decreased. The value is exposed as a parameter that administrators
can adjust, allowing some tuning if compression problems are encountered.
Add more robust temporary file removal
on a failed compaction. Don't halt on
a failed removal, and don't assume a
failed compaction won't generate
temporary files.
closes https://github.com/influxdata/influxdb/issues/26068
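A minimal sketch of this cleanup behavior, assuming TSM-style temporary file naming: removal errors are collected with errors.Join, and a single failed removal does not halt the loop.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// removeTempFiles sketches the cleanup described above; names and structure
// are illustrative, not the actual compaction code.
func removeTempFiles(files []string) error {
	var errs []error
	for _, f := range files {
		if err := os.Remove(f); err != nil && !errors.Is(err, os.ErrNotExist) {
			errs = append(errs, fmt.Errorf("remove %s: %w", f, err))
			continue // keep going; don't halt on a failed removal
		}
	}
	return errors.Join(errs...)
}

func main() {
	dir, _ := os.MkdirTemp("", "compact")
	defer os.RemoveAll(dir)

	tmp := filepath.Join(dir, "000000006-000000002.tsm.tmp")
	_ = os.WriteFile(tmp, nil, 0o600)
	fmt.Println(removeTempFiles([]string{tmp, filepath.Join(dir, "missing.tsm.tmp")}))
}
```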
A field could be created in memory but not
saved to disk if a later field in that
point was invalid (type conflict, too big).
Ensure that if a field is created, it is
saved.
There was a window in which writes with
differing types for the same field could race
while being validated. Lock the MeasurementFields
struct during field validation to avoid this.
closes https://github.com/influxdata/influxdb/issues/23756
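A simplified sketch of the locking fix, using a stand-in for MeasurementFields rather than the actual struct: validation and creation happen under one lock, so two racing writes with different types for the same field cannot both pass.

```go
package main

import (
	"fmt"
	"sync"
)

// measurementFields is a simplified stand-in for tsdb.MeasurementFields,
// illustrating the pattern only.
type measurementFields struct {
	mu     sync.Mutex
	fields map[string]string // field name -> type
}

// createFieldIfNotExists validates the type and creates the field under one lock.
func (m *measurementFields) createFieldIfNotExists(name, typ string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if existing, ok := m.fields[name]; ok {
		if existing != typ {
			return fmt.Errorf("field type conflict: field %q is %s, got %s", name, existing, typ)
		}
		return nil
	}
	m.fields[name] = typ
	return nil
}

func main() {
	m := &measurementFields{fields: map[string]string{}}
	fmt.Println(m.createFieldIfNotExists("value", "float"))
	fmt.Println(m.createFieldIfNotExists("value", "integer"))
}
```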
The error type check for errBlockRead was incorrect,
and bad TSM files were not being moved aside when
that error was encountered. Use errors.Join,
errors.Is, and errors.As to correctly unwrap multiple
errors.
Closes https://github.com/influxdata/influxdb/issues/25838
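A small sketch of why the check needed errors.Is/errors.As: once errors are wrapped or joined, a direct sentinel comparison no longer matches. The sentinel and functions here are illustrative stand-ins, not the file store's code.

```go
package main

import (
	"errors"
	"fmt"
)

// errBlockRead stands in for the sentinel used by the file store.
var errBlockRead = errors.New("block read error")

func readBlocks() error {
	// Multiple failures are combined, so the sentinel is buried inside a join.
	return errors.Join(
		fmt.Errorf("reading block 3: %w", errBlockRead),
		errors.New("checksum mismatch"),
	)
}

func main() {
	err := readBlocks()
	if err == errBlockRead {
		fmt.Println("direct comparison: matched") // never fires for wrapped errors
	}
	if errors.Is(err, errBlockRead) {
		fmt.Println("errors.Is: matched; move the bad TSM file aside")
	}
}
```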
* feat: Modify optimized compaction to cover edge cases
This PR changes the compaction algorithm to account for the following
cases that were not previously handled (here, group size is the total
size of the TSM files in the shard directory):
- Many generations with a group size over 2 GB
- A single generation with many files and a group size under 2 GB
- Shards that may have a group size over 2 GB but many fragmented files
(under 2 GB and under the aggressive points-per-block count)
closes https://github.com/influxdata/influxdb/issues/25666
* feat: This PR adds a -tsmfile flag to export
Adds the ability to use influx_inspect export to export data from a single TSM file, for example `influx_inspect export -out - -tsmfile 000000006-000000002.tsm.bad -database thermo -retention autogen`.
There are a number of code paths in Compactor.write which
on error can lead to leaked file handles to temporary files.
This, in turn, prevents the removal of the temporary files until
InfluxDB is rebooted, releasing the file handles.
closes https://github.com/influxdata/influxdb/issues/25724
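A minimal sketch of the pattern the fix applies (illustrative, not Compactor.write itself): every return path closes the temporary file, and failed writes also remove it, so a failed compaction no longer leaves handles open until the process restarts.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeCompacted illustrates the cleanup pattern; names are hypothetical.
func writeCompacted(dir string, data []byte) (err error) {
	tmp, err := os.CreateTemp(dir, "*.tsm.tmp")
	if err != nil {
		return err
	}
	defer func() {
		// Runs on every return path, success or failure.
		if cerr := tmp.Close(); cerr != nil && err == nil {
			err = cerr
		}
		if err != nil {
			os.Remove(tmp.Name()) // best-effort cleanup after a failed write
		}
	}()

	if _, err = tmp.Write(data); err != nil {
		return fmt.Errorf("writing %s: %w", filepath.Base(tmp.Name()), err)
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "compactor")
	defer os.RemoveAll(dir)
	fmt.Println(writeCompacted(dir, []byte("tsm data")))
}
```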