... by extracting the db/rp from the given path.
Now that the code has "standardized" on extracting db/rp this way, the
ShardLocation struct is no longer necessary and thus has been removed.
We're back to the previous style of passing the path and walPath to
NewShard.
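For reference, a minimal sketch of deriving db/rp from a shard path, assuming the
conventional <data-dir>/<db>/<rp>/<shard-id> layout; the helper name shardDBRP is
hypothetical and only for illustration:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // shardDBRP extracts the database and retention policy from a shard
    // path, assuming a <data-dir>/<db>/<rp>/<shard-id> layout (hypothetical
    // helper, not the actual code).
    func shardDBRP(path string) (db, rp string) {
        rpDir := filepath.Dir(path)  // <data-dir>/<db>/<rp>
        dbDir := filepath.Dir(rpDir) // <data-dir>/<db>
        return filepath.Base(dbDir), filepath.Base(rpDir)
    }

    func main() {
        db, rp := shardDBRP("/var/lib/influxdb/data/mydb/autogen/1")
        fmt.Println(db, rp) // mydb autogen
    }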
Use sync.Pool for some temporary buffers used while encoding instead of
allocating new ones each time. Also increased the default buffer size, which
might be too small. Probably need to make this a config var.
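A minimal sketch of the sync.Pool approach for reusing encoding buffers; the
buffer size and names here are illustrative, not the actual values used:

    package main

    import (
        "fmt"
        "sync"
    )

    // defaultBufSize is illustrative; the real default (and whether it
    // becomes a config var) may differ.
    const defaultBufSize = 1 << 20 // 1MB

    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, defaultBufSize) },
    }

    func main() {
        buf := bufPool.Get().([]byte) // reuse a pooled buffer instead of allocating
        // ... encode the WAL entry into buf ...
        fmt.Println(cap(buf))
        bufPool.Put(buf) // return it for the next encode
    }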
The check to see whether the destination buffer for encoding and decoding a WAL
entry was large enough was broken and would cause a panic when large batches
overflowed the buffer size.
Fixes #5075
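A minimal sketch of the kind of size check intended here, assuming a pooled
destination buffer that has to be grown when a large batch exceeds its capacity
(ensureCapacity is a hypothetical name):

    package tsm1

    // ensureCapacity returns a destination buffer that can hold n bytes,
    // allocating a larger one when a big batch would overflow the pooled
    // buffer instead of panicking on the copy.
    func ensureCapacity(buf []byte, n int) []byte {
        if n > cap(buf) {
            return make([]byte, n)
        }
        return buf[:n]
    }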
* Update cache loader to delete entries from cache
* Add cache.Delete() (a rough sketch appears after this list)
* Update delete to look at keys in the Cache in addition to the FileStore
* Update cache compaction to never happen if the cache is empty
* remove rolloverTSMFileSize constant that is no longer used
* remove the maxGenerationFileCount since it is no longer a limitation that's necessary with the new compaction scheme. We no longer read WAL segments as part of the compaction so memory is only used as we read in each individual key
* remove minFileCount and switch to a user configurable variable
* remove the mutex from WALSegmentWriter. There's never more than one open in the WAL at one time and it's not exported through any function so the lock on the WAL should be used. This simplified keeping track of the last write time and removed a bunch of unnecessary locks.
* update WALSegmentWriter.Write to take the compressed bytes so that encoding and compression can occur before the call to write (while we don't hold the WAL lock)
* remove a bunch of unnecessary locking in WAL.writeToLog
* Add check for TSM file magic number and version
* Remove old tsm, log, and unused cursor code
* Remove references to tsm1dev everywhere except in the inspector
* Clean up config options for compaction and snapshotting
* Remove old TSM configuration options
* Update the config.sample.toml with TSM options
* Update WAL compact to force if it has been cold for writes for a configurable period of time (1h by default)
* Add InfluxQLType to Values to map the TSM type to InfluxQL
* Fix bug in WAL where close wouldn't nil out the currentSegment after closing it
* Export writeSnapshot to be used in tests, add argument to run it async or not
* Update reloadCache to load temporary metadata information in the engine
* Update LoadMetadataIndex to use the temp WAL metadata information
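As mentioned in the cache.Delete() item above, a rough sketch of what the cache
delete can look like, using a simplified stand-in for the tsm1 cache (type and
field names are illustrative, not the engine's actual ones):

    package tsm1

    import "sync"

    // Value stands in for the engine's value type.
    type Value interface{}

    // Cache is a simplified stand-in: series key -> values, guarded by a mutex.
    type Cache struct {
        mu    sync.RWMutex
        store map[string][]Value
    }

    // Delete removes the given keys from the cache, which the delete path
    // needs to do in addition to removing them from the FileStore.
    func (c *Cache) Delete(keys []string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        for _, k := range keys {
            delete(c.store, k)
        }
    }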
Something broke with writing to the WAL now that compactions are running
concurrently. There was also a performance problem with Next/Prev doing
twice as many searches as necessary.
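For illustration only, the general shape of the Next/Prev fix, assuming the
problem was re-searching on every call: a cursor that seeks once and then steps
its saved position (a simplified stand-in, not the engine's cursor):

    package tsm1

    import "sort"

    // sliceCursor walks a sorted slice of timestamps. Seek does one binary
    // search; Next and Prev then move the saved index instead of searching
    // again on every call.
    type sliceCursor struct {
        ts  []int64
        pos int
    }

    func (c *sliceCursor) Seek(t int64) (int64, bool) {
        c.pos = sort.Search(len(c.ts), func(i int) bool { return c.ts[i] >= t })
        return c.at()
    }

    func (c *sliceCursor) Next() (int64, bool) { c.pos++; return c.at() }

    func (c *sliceCursor) Prev() (int64, bool) { c.pos--; return c.at() }

    func (c *sliceCursor) at() (int64, bool) {
        if c.pos < 0 || c.pos >= len(c.ts) {
            return 0, false
        }
        return c.ts[c.pos], true
    }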
* Update cache to have a single slice of values for a key (removed checkpoints)
* Changed compact.Plan to only worry about TSM files.
* Updated Plan to not return an error since there was no case in which it would.
* Update WAL to not keep stats since they're no longer needed.
* Update engine to flush the Cache/WAL to a new TSM file when the min threshold is hit.
* Split compact logic between TSM compacts and WAL/Cache writes.
* Remove unnecessary merge iterator, wal segment iterator, and other no longer necessary stuff.
* Remove the ascending bool from the Dedupe method. Values should always be in ascending order. It's up to the cursor to iterate through values based on the direction. Giving the cursor responsibility makes it so we don't need to sort, dedupe or reallocate anything for different query orders (see the sketch after this list).
* Updated engine to use its locks to ensure writes and cache flushes don't cause a race.
* Update all tests with new signatures. Removed a bunch of tests around TSM rewrites and WAL segment iteration that are no longer necessary.
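As referenced in the Dedupe bullet above, a sketch of deduplication that assumes
values are always kept in ascending time order, with direction left entirely to
the cursor (Value here is a simplified stand-in for the engine's value type):

    package tsm1

    import "sort"

    // Value is a simplified stand-in for the engine's value type.
    type Value struct {
        Time int64
        Val  float64
    }

    // Deduplicate sorts values ascending by time and keeps the last value
    // written for each timestamp. Query direction is handled by the cursor,
    // so nothing is re-sorted or reallocated for descending reads.
    func Deduplicate(vals []Value) []Value {
        sort.SliceStable(vals, func(i, j int) bool { return vals[i].Time < vals[j].Time })
        out := vals[:0]
        for i, v := range vals {
            if i+1 < len(vals) && vals[i+1].Time == v.Time {
                continue // a later write for the same timestamp wins
            }
            out = append(out, v)
        }
        return out
    }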
This is the existing WAL + cache implementation. Moving it to a separate file
so that it can remain intact while a refactoring to an independent WAL can occur.
The WAL was also named Log in the code, so this names the file more closely to the
concept in the code.
Mainly for debugging, since this should not happen going forward. Since
there may be points with NaN already stored in the WAL, this is helpful for
troubleshooting panics.
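A minimal sketch of the kind of check this adds, assuming float values are
inspected on write (or WAL replay) so a NaN can be logged instead of surfacing
later as a panic; containsNaN is a hypothetical helper:

    package tsm1

    import "math"

    // containsNaN reports whether any float value in a batch is NaN, so the
    // write path can log or reject it for troubleshooting.
    func containsNaN(vals []float64) bool {
        for _, v := range vals {
            if math.IsNaN(v) {
                return true
            }
        }
        return false
    }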
When a database is dropped, removing old segments returns an error
because the files are already gone. Using RemoveAll handles this
case more gracefully.
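The relevant difference, roughly (removeSegment is a hypothetical wrapper):

    package tsm1

    import "os"

    // removeSegment deletes an old WAL segment. os.Remove would return an
    // error if the file is already gone (e.g. DROP DATABASE removed the
    // whole directory); os.RemoveAll treats a missing path as success.
    func removeSegment(path string) error {
        return os.RemoveAll(path)
    }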
If a drop database is executed while writes are in flight, a panic
could occur because the WAL would fail to write to the DB dirs that
had been removed.
Partial fix for #4538
Close acquired the cacheLock and writeLock in a different order than flush. If addToCache was also
running in a goroutine (acquiring cacheLock), a deadlock could happen.
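A sketch of the lock-ordering issue, with illustrative names; the assumption here
is that Close is changed to take the locks in the same order as flush (which
order the real fix uses isn't shown in this message):

    package tsm1

    import "sync"

    // Log is a simplified stand-in with the two locks in question.
    type Log struct {
        writeLock sync.Mutex
        cacheLock sync.Mutex
    }

    // Close takes the locks in the same fixed order as flush. If one path
    // takes writeLock then cacheLock while another takes cacheLock then
    // writeLock, a goroutine holding one and waiting on the other deadlocks.
    func (l *Log) Close() error {
        l.writeLock.Lock()
        defer l.writeLock.Unlock()
        l.cacheLock.Lock()
        defer l.cacheLock.Unlock()
        // ... close the current segment and clear the cache ...
        return nil
    }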
panic: error opening new segment file for wal: open /var/folders/lj/vlbynqp52pxdxxlxx64j6bk80000gn/T/tsm1-test709000715/_00002.wal: no such file or directory
goroutine 8 [running]:
github.com/influxdb/influxdb/tsdb/engine/tsm1.(*Log).writeToLog(0xc820098500, 0x1, 0xc8201584b0, 0x1c, 0x45, 0x0, 0x0)
/Users/jason/go/src/github.com/influxdb/influxdb/tsdb/engine/tsm1/wal.go:427 +0xc19