* feat: Add last cache create/delete to WAL
This moves the LastCacheDefinition into the WAL so that it can be serialized there. This ended up being a pretty large refactor to get the last cache creation to work through the WAL.
I think I also stumbled on a bug where the last cache wasn't getting initialized from the catalog on reboot, which meant it wouldn't actually end up caching values. The refactored last cache persistence test in write_buffer/mod.rs surfaced this.
Finally, I also had to update the WAL so that it would persist if there were only catalog updates and no writes.
Fixes #25203
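As a rough illustration of the shape of this change, here is a minimal sketch (hypothetical types and field names, not the actual influxdb3 structs) of how last cache create/delete can travel through the WAL as serializable operations alongside writes, which is also why the WAL now has to persist a file even when a flush contains only catalog updates:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical stand-in for the real LastCacheDefinition.
#[derive(Debug, Clone, Serialize, Deserialize)]
struct LastCacheDefinition {
    table: String,
    name: String,
    key_columns: Vec<String>,
    value_columns: Vec<String>,
    count: usize,
    ttl_seconds: u64,
}

/// A WAL entry is either row data or a catalog change; both get serialized
/// into WAL files, so cache definitions survive a reboot and are replayed.
#[derive(Debug, Clone, Serialize, Deserialize)]
enum WalOp {
    Write { db: String, lp: String },
    CreateLastCache(LastCacheDefinition),
    DeleteLastCache { table: String, name: String },
}

fn main() {
    let ops = vec![
        WalOp::Write {
            db: "db1".into(),
            lp: "cpu,host=a usage=0.5 123".into(),
        },
        WalOp::CreateLastCache(LastCacheDefinition {
            table: "cpu".into(),
            name: "cpu_last_cache".into(),
            key_columns: vec!["host".into()],
            value_columns: vec!["usage".into()],
            count: 1,
            ttl_seconds: 14400,
        }),
        WalOp::DeleteLastCache { table: "cpu".into(), name: "cpu_last_cache".into() },
    ];
    // Serialize the ops roughly as they would be appended to a WAL file.
    println!("{}", serde_json::to_string_pretty(&ops).unwrap());
}
```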
* fix: typos
* refactor: Make Level0Duration part of WAL
I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.
As an added bonus, this surfaced a bug where persisting multiple L0 blocks in the same snapshot wasn't supported. So now snapshot details can have many files per table.
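Here is a minimal sketch, with hypothetical types rather than the real influxdb3 structs, of the two ideas in this commit: the WAL config owning the level-0 duration so the write buffer can't disagree with it, and snapshot details that allow many persisted files per table:

```rust
use std::collections::HashMap;
use std::time::Duration;

/// Hypothetical WAL config that owns the level-0 duration, so the WAL and
/// the write buffer always agree on how data is chunked by time.
struct WalConfig {
    level_0_duration: Duration,
    flush_interval: Duration,
}

/// Hypothetical snapshot details: each table maps to *many* persisted files,
/// since several L0 blocks can be persisted in the same snapshot.
#[derive(Default)]
struct SnapshotDetails {
    files_by_table: HashMap<String, Vec<String>>,
}

impl SnapshotDetails {
    fn add_file(&mut self, table: &str, path: String) {
        self.files_by_table.entry(table.to_string()).or_default().push(path);
    }
}

fn main() {
    let _config = WalConfig {
        level_0_duration: Duration::from_secs(600),
        flush_interval: Duration::from_secs(1),
    };

    let mut snapshot = SnapshotDetails::default();
    snapshot.add_file("cpu", "dbs/db1/cpu/0001.parquet".to_string());
    snapshot.add_file("cpu", "dbs/db1/cpu/0002.parquet".to_string());
    // Two L0 files for the same table in one snapshot is now fine.
    assert_eq!(snapshot.files_by_table["cpu"].len(), 2);
}
```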
* fix: have persisted files always return in descending data time order
* fix: sort record batches for test verification
This extends the available system tables with a new `parquet_files` table,
which will list the parquet files associated with a given table in a
database.
Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.
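A small sketch of that requirement, using simplified stand-in filter types rather than the real DataFusion expressions the system table inspects:

```rust
/// Simplified stand-in for a pushed-down filter; the real implementation
/// walks DataFusion `Expr`s looking for an equality on `table_name`.
#[allow(dead_code)]
enum Filter {
    Eq { column: String, value: String },
    Other,
}

/// Queries against system.parquet_files must name a table; return the table
/// of interest, or an error explaining the required predicate.
fn required_table_name(filters: &[Filter]) -> Result<String, String> {
    filters
        .iter()
        .find_map(|f| match f {
            Filter::Eq { column, value } if column == "table_name" => Some(value.clone()),
            _ => None,
        })
        .ok_or_else(|| {
            "queries to system.parquet_files must provide a table_name predicate, \
             e.g. WHERE table_name = 'cpu'"
                .to_string()
        })
}

fn main() {
    let filters = vec![Filter::Eq {
        column: "table_name".to_string(),
        value: "cpu".to_string(),
    }];
    assert_eq!(required_table_name(&filters).unwrap(), "cpu");
    assert!(required_table_name(&[]).is_err());
}
```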
The files are accessed through the QueryableBuffer.
In addition, a test was added to check success and failure modes of the
new system table query.
Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
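Roughly the shape of that trait change, sketched with hypothetical method names (the real Persister has more methods):

```rust
// Before: every implementor declared its own associated error type.
#[allow(dead_code)]
trait PersisterWithAssociatedError {
    type Error;
    fn persist_catalog(&self, bytes: &[u8]) -> Result<(), Self::Error>;
}

// After: a single concrete error type shared by all implementations, which
// keeps callers and trait objects simpler.
#[derive(Debug)]
struct PersisterError(String);

trait Persister {
    fn persist_catalog(&self, bytes: &[u8]) -> Result<(), PersisterError>;
}

struct ObjectStorePersister;

impl Persister for ObjectStorePersister {
    fn persist_catalog(&self, bytes: &[u8]) -> Result<(), PersisterError> {
        if bytes.is_empty() {
            return Err(PersisterError("refusing to persist an empty catalog".into()));
        }
        // The real implementation would write to object store here.
        Ok(())
    }
}

fn main() {
    let p = ObjectStorePersister;
    assert!(p.persist_catalog(b"{}").is_ok());
    assert!(p.persist_catalog(b"").is_err());
}
```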
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).
The persister retains the host identifier prefix, and uses it when
constructing paths.
The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.
The influxdb3 binary requires a new 'host-id' argument to be passed,
which is used to specify the prefix.
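As a sketch of what this looks like in practice (the directory layout and names below are assumptions, not the actual influxdb3 path scheme), the persister keeps the prefix and prepends it to every path it builds:

```rust
/// Hypothetical persister holding the prefix passed via the `host-id` argument.
/// Plain strings are used here; the real code builds object_store paths.
struct Persister {
    host_identifier_prefix: String,
}

impl Persister {
    fn catalog_path(&self, sequence: u64) -> String {
        format!("{}/catalogs/{:020}.json", self.host_identifier_prefix, sequence)
    }

    fn wal_path(&self, wal_file_number: u64) -> String {
        format!("{}/wal/{:011}.wal", self.host_identifier_prefix, wal_file_number)
    }

    fn parquet_path(&self, db: &str, table: &str, file: &str) -> String {
        format!("{}/dbs/{}/{}/{}", self.host_identifier_prefix, db, table, file)
    }
}

fn main() {
    let p = Persister { host_identifier_prefix: "my-host".to_string() };
    // Every object store path is rooted under the host identifier prefix.
    println!("{}", p.catalog_path(1));
    println!("{}", p.wal_path(42));
    println!("{}", p.parquet_path("db1", "cpu", "0001.parquet"));
}
```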
* fix: query bugs with buffer
This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.
I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining different states: only data in the buffer, only data in parquet files, and data across both. This surfaced two bugs: one where the parquet data was being doubled up (parquet chunks were being created in both write_buffer/mod.rs and the queryable buffer), and a second where the timestamp min/max on the table buffer would panic if the buffer was empty.
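The empty-buffer panic is the kind of thing a guard like this fixes; a minimal sketch with hypothetical names, not the actual table buffer code:

```rust
/// Return the min/max timestamp of the buffered rows, or None when the
/// buffer is empty instead of panicking.
fn timestamp_min_max(timestamps: &[i64]) -> Option<(i64, i64)> {
    let first = *timestamps.first()?; // empty buffer -> None, no panic
    let (min, max) = timestamps
        .iter()
        .fold((first, first), |(min, max), &t| (min.min(t), max.max(t)));
    Some((min, max))
}

fn main() {
    assert_eq!(timestamp_min_max(&[]), None);
    assert_eq!(timestamp_min_max(&[30, 10, 20]), Some((10, 30)));
}
```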
* refactor: PR feedback
* fix: fix wal replay and buffer snapshot
Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay WAL data and that snapshots work properly with replayed data.
* fix: run cargo update to fix audit
* fix: wal skip persist and notify if empty buffer
This fixes the WAL so that it will skip persisting a file and notifying the file notifier if the WAL buffer is empty.
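A minimal sketch of that early return, with hypothetical names rather than the real WAL types:

```rust
/// Stand-in for the in-memory WAL buffer of pending operations.
struct WalBuffer {
    ops: Vec<String>,
}

/// Flush the buffer: if there is nothing buffered, skip both the object
/// store write and the file-notifier notification.
fn flush_buffer(buffer: &mut WalBuffer) -> Option<String> {
    if buffer.ops.is_empty() {
        return None; // nothing to persist, nobody to notify
    }
    let contents = buffer.ops.join("\n");
    buffer.ops.clear();
    // The real code would persist a WAL file here and then notify the
    // file notifier; this sketch just returns the serialized contents.
    Some(contents)
}

fn main() {
    let mut empty = WalBuffer { ops: vec![] };
    assert!(flush_buffer(&mut empty).is_none());

    let mut buf = WalBuffer { ops: vec!["write cpu,host=a usage=0.5 123".to_string()] };
    assert!(flush_buffer(&mut buf).is_some());
}
```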
* fix: fix last cache persist test
* feat: refactor WAL and WriteBuffer
There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.
The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.
The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).
The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works as long as data is getting written in around "now", with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.
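Putting the defaults above together (1s flush interval, snapshot the oldest 600 of 900 WAL files), a snapshot normally covers about 10 minutes of data and leaves roughly 5 minutes in the WAL. A small sketch of that trigger arithmetic, with hypothetical names:

```rust
/// Hypothetical snapshot settings matching the defaults described above.
struct SnapshotConfig {
    snapshot_size: usize,      // WAL files covered by one snapshot (default 600)
    trigger_file_count: usize, // total WAL files before snapshotting (default 900)
}

/// Once enough WAL files exist, snapshot the oldest `snapshot_size` of them;
/// at a 1s flush interval, 600 files is ~10 minutes of data.
fn snapshot_range(config: &SnapshotConfig, wal_file_count: usize) -> Option<std::ops::Range<usize>> {
    if wal_file_count >= config.trigger_file_count {
        Some(0..config.snapshot_size)
    } else {
        None
    }
}

fn main() {
    let config = SnapshotConfig { snapshot_size: 600, trigger_file_count: 900 };
    assert_eq!(snapshot_range(&config, 899), None);
    assert_eq!(snapshot_range(&config, 900), Some(0..600));
}
```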
Overall, this structure of a single WAL, with flushes, snapshots, and chunks in the queryable buffer, led to a simpler setup for the write buffer. I was able to clear out quite a bit of code related to the old segment organization.
Fixes #25142 and fixes #25173
* refactor: address PR feedback
* refactor: wal to replay and background flush on new
* chore: remove stray println