* test: add test for operations.system_tables
* fix: only show operations for current database
* fix: update test
* fix: improve test
* refactor: filter in Schema provider rather than in job tracker
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
When we receive the flatbuffers, we won't have the server ID separately;
it's inside the flatbuffers. But we want to validate the `ServerId` once,
when the `SequencedEntry` is created, so that its `server_id` method can
assume it holds a valid `ServerId`.
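A minimal sketch of that pattern, with names taken from the description above but the details assumed:
```
use std::num::NonZeroU32;

/// Sketch: a validated server ID. `ServerId::new` is the only place the
/// raw u32 is checked, so anything holding one can assume it is valid.
#[derive(Debug, Clone, Copy)]
struct ServerId(NonZeroU32);

impl ServerId {
    fn new(raw: u32) -> Result<Self, String> {
        NonZeroU32::new(raw)
            .map(Self)
            .ok_or_else(|| "server ID must be non-zero".to_string())
    }
}

struct SequencedEntry {
    server_id: ServerId,
    // flatbuffer payload would live here
}

impl SequencedEntry {
    /// Infallible: validation already happened in `ServerId::new`.
    fn server_id(&self) -> ServerId {
        self.server_id
    }
}
```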
Pre-sized channels fill up when the results sent through them exceed their capacity. This causes significant runtime overhead and slows down query performance.
This commit removes the intermediate channels. The potential downside to this approach is that there may be more buffering, which could increase memory usage during queries and also block a thread for longer periods of time.
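To illustrate the bottleneck, a minimal sketch (not the actual query code) of how a bounded tokio channel stalls the producer once it is full:
```
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // A bounded channel with capacity 2: once full, the next send
    // awaits until the receiver drains an item, stalling the producer.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(2);

    tokio::spawn(async move {
        for i in 0..4u8 {
            // With more results than capacity, this send blocks; that
            // is the overhead removed by dropping the channel and
            // returning the result stream directly.
            tx.send(vec![i]).await.unwrap();
        }
    });

    while let Some(batch) = rx.recv().await {
        println!("received {:?}", batch);
    }
}
```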
Chunks now always have "data" (aka exactly 1 table including
schema/columns). Open chunks can only be created with data. Rollovers do
NOT create open chunks anymore (this is now only done for incoming
data).
Closes #1296.
This changes the hierarchy from
```
database -> partition -> chunk -> table
```
to
```
database -> partition -> table -> chunk
```
Only the high-level APIs are changed for now. The chunk states (like
MutableBuffer and ReadBuffer) still multiplex tables, although they will
always only get a single table assigned (or no table if no data was
presented yet).
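Sketched as illustrative (not the real) Rust types, the new hierarchy looks roughly like this:
```
use std::collections::BTreeMap;

/// Sketch: a partition owns tables, and each table owns its chunks.
struct Database {
    partitions: BTreeMap<String, Partition>,
}

struct Partition {
    tables: BTreeMap<String, Table>,
}

struct Table {
    chunks: Vec<Chunk>,
}

struct Chunk {
    id: u32,
}
```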
Closes #1256.
* refactor: plumb executor into all Db instances
* refactor: Route all query executions through worker pool
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: use a dedicated tokio threadpool for running queries
* feat: plumb the number of executor threads through to the command line
* fix: Logical merge conflict
* fix: another logical conflict
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
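A rough sketch of the dedicated-threadpool idea (the thread count and wiring here are assumed, not the actual CLI):
```
use tokio::runtime::Builder;

fn main() {
    // Sketch: a separate multi-threaded runtime dedicated to query
    // execution, sized from a command-line value, so long-running
    // queries don't starve the main IO runtime.
    let num_query_threads = 4; // would come from the CLI

    let query_runtime = Builder::new_multi_thread()
        .worker_threads(num_query_threads)
        .thread_name("query-executor")
        .enable_all()
        .build()
        .expect("failed to build query runtime");

    let answer = query_runtime.block_on(async { 6 * 7 });
    println!("query result: {}", answer);
}
```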
The initial benchmarks look like this on my i9 MBP:
```
Data in one open chunk and one closed chunk of mutable buffer/tag0/no_pred 1.00 91.0±2.55ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag0/with_pred 1.00 11.5±0.72ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag1/no_pred 1.00 120.3±5.10ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag1/with_pred 1.00 11.2±0.22ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag2/no_pred 1.00 203.2±8.45ms ? ?/sec
Data in one open chunk and one closed chunk of mutable buffer/tag2/with_pred 1.00 11.2±0.21ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag0/no_pred 1.00 100.3±3.73ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag0/with_pred 1.00 31.2±1.80ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag1/no_pred 1.00 126.7±2.29ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag1/with_pred 1.00 33.0±1.70ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag2/no_pred 1.00 212.0±6.86ms ? ?/sec
Data in open chunk of mutable buffer, and one chunk of read buffer/tag2/with_pred 1.00 18.1±0.99ms ? ?/sec
Data in single open chunk of mutable buffer/tag0/no_pred 1.00 98.7±6.08ms ? ?/sec
Data in single open chunk of mutable buffer/tag0/with_pred 1.00 11.2±0.37ms ? ?/sec
Data in single open chunk of mutable buffer/tag1/no_pred 1.00 118.9±3.97ms ? ?/sec
Data in single open chunk of mutable buffer/tag1/with_pred 1.00 11.7±0.64ms ? ?/sec
Data in single open chunk of mutable buffer/tag2/no_pred 1.00 202.1±8.49ms ? ?/sec
Data in single open chunk of mutable buffer/tag2/with_pred 1.00 11.1±0.27ms ? ?/sec
Data in two read buffer chunks/tag0/no_pred 1.00 109.2±5.20ms ? ?/sec
Data in two read buffer chunks/tag0/with_pred 1.00 44.2±1.83ms ? ?/sec
Data in two read buffer chunks/tag1/no_pred 1.00 132.9±3.79ms ? ?/sec
Data in two read buffer chunks/tag1/with_pred 1.00 41.7±2.43ms ? ?/sec
Data in two read buffer chunks/tag2/no_pred 1.00 222.4±7.00ms ? ?/sec
Data in two read buffer chunks/tag2/with_pred 1.00 27.9±0.92ms ? ?/sec
```
Rationale
---------
We use `u32` throughout the codebase to reference interned dictionary strings.
We also use `u32` for other purposes, and it would be nice to get some help
from the compiler to avoid mixing them up.
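For example, a minimal newtype sketch of the kind of compiler help meant here (illustrative names):
```
/// Sketch: a newtype so the compiler distinguishes dictionary IDs from
/// every other u32 in the codebase.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct DictionaryId(u32);

fn lookup(strings: &[String], id: DictionaryId) -> Option<&str> {
    strings.get(id.0 as usize).map(|s| s.as_str())
}

fn main() {
    let strings = vec!["cpu".to_string(), "mem".to_string()];
    // lookup(&strings, 1); // would not compile: a bare u32 is rejected
    assert_eq!(lookup(&strings, DictionaryId(1)), Some("mem"));
}
```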
This removes the old `ReplicatedWrite` structure and implements the writing of an `Entry` to the `Db`. I also call out in `server/lib.rs` and in the `Db` where sharding and replication might happen.
I've also added helpers in various places to write line protocol to chunks, tables, and databases. That enabled removing a good amount of code from the test helpers crate.
* fix: avoid spamming the logs with errors that are not errors
* fix: make it compile
* fix: change to warning, add database name
* fix: include buffer_size and soft_limit as well
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: Move test for missing tags as null to db
* refactor: move last_write_at and create_time to catalog partition
* refactor: remove mutable buffer partition and port tests
* fix: port over partition sorting logic
* fix: remove stale comment
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: inline catalog crate to server
* refactor: Add fine-grained (object-level) catalog locking
* fix: Move mod definition and use to top of file
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
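A rough sketch of what object-level locking can look like (illustrative types, not the actual catalog):
```
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

/// Sketch: the catalog map has its own lock, and each table is wrapped
/// in its own RwLock, so writing one table doesn't block reading another.
#[derive(Default)]
struct Catalog {
    tables: RwLock<BTreeMap<String, Arc<RwLock<Table>>>>,
}

#[derive(Default)]
struct Table {
    rows: usize,
}

impl Catalog {
    fn table(&self, name: &str) -> Option<Arc<RwLock<Table>>> {
        // The map lock is held only briefly; callers then take the
        // per-table lock they actually need.
        self.tables.read().unwrap().get(name).cloned()
    }
}
```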
* feat: Rework Db to use Catalog for chunk state
* docs: Update server/src/db.rs
* fix: fmt
* fix: fmt
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: Management API + CLI command to close a chunk and move to read buffer
* refactor: Less copy-pasta
* fix: track only once, use `let _` instead of `.ok()`
* docs: Apply suggestions from code review
fix comments (🤦♀️ for copy/pasta)
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* docs: Update server/src/lib.rs
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Use DatabaseName rather than impl Into<String>
* fix: Fixup logical merge conflicts
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: remove interior mutability from TrackerRegistry
* fix: don't hold mutex across await point
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
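The general pattern behind that fix, as a sketch (hypothetical buffer and flush names):
```
use std::sync::{Arc, Mutex};

async fn flush(_data: Vec<u8>) {
    // e.g. write to object store
}

/// Sketch: take what you need, drop the guard, then await. Holding a
/// std::sync::Mutex guard across an .await point can block the
/// executor thread or deadlock.
async fn flush_buffer(buffer: Arc<Mutex<Vec<u8>>>) {
    let data = {
        let mut guard = buffer.lock().unwrap();
        std::mem::take(&mut *guard)
    }; // guard dropped here, before the await
    flush(data).await;
}
```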
The replication, query, and subscription concepts here are going to be significantly different. I thought it would be best to just remove this cruft for now to avoid confusion.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: Create `ChunkPredicate` predicate in the Server crate
* refactor: move chunk predicate out of chunk
* fix: remove comment in server/src/db/pred.rs
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: make read_group and read_window_aggregate work across chunks
* refactor: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Update query/src/frontend/influxrpc.rs
Improve logic and use strings directly
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: fmt
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This function ultimately returns the keys of a HashMap, and that order
isn't guaranteed. I'm not sure whether the function needs a stable
ordering, so I didn't want to add the performance hit of sorting to the
function itself. Sorting in the test instead ensures it keeps passing (I
saw it fail a few times locally).
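A sketch of the approach (illustrative function and test, not the actual code):
```
use std::collections::HashMap;

fn table_names(tables: &HashMap<String, usize>) -> Vec<String> {
    // Ultimately just the map's keys: iteration order is unspecified.
    tables.keys().cloned().collect()
}

#[test]
fn test_table_names() {
    let mut tables = HashMap::new();
    tables.insert("cpu".to_string(), 1);
    tables.insert("mem".to_string(), 2);

    // Sort in the test, not the function, so every caller doesn't pay
    // for an ordering the function may not need.
    let mut names = table_names(&tables);
    names.sort();
    assert_eq!(names, vec!["cpu", "mem"]);
}
```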
Addresses the API aspect of #818
Adds a utility module that helps compute the length of a stream while buffering it
for later replay (in memory or spilled to a temporary file).
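A minimal sketch of the in-memory variant (assumed shapes; the real module can also spill to a temporary file):
```
use futures::stream::{self, Stream, StreamExt};

/// Sketch: drain a byte-chunk stream, recording its total length while
/// buffering the chunks in memory for later replay.
async fn buffer_and_measure(
    mut input: impl Stream<Item = Vec<u8>> + Unpin,
) -> (usize, Vec<Vec<u8>>) {
    let mut len = 0;
    let mut buffered = Vec::new();
    while let Some(chunk) = input.next().await {
        len += chunk.len();
        buffered.push(chunk);
    }
    (len, buffered)
}

#[tokio::main]
async fn main() {
    let input = stream::iter(vec![vec![1u8, 2], vec![3, 4, 5]]);
    let (len, chunks) = buffer_and_measure(input).await;
    assert_eq!(len, 5);
    // `chunks` can now be replayed, e.g. via stream::iter(chunks).
    assert_eq!(chunks.len(), 2);
}
```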
* feat: Enable/Disable logging in tests via RUST_LOG environment variable
* docs: Add section to contributing
* docs: tweak readme
* fix: Use same logging system in tests as in influxdb_ioxd
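A sketch of such a helper, assuming the `tracing-subscriber` crate with its env-filter feature:
```
use tracing_subscriber::EnvFilter;

/// Sketch: test logging driven by RUST_LOG, e.g. `RUST_LOG=debug cargo
/// test`. try_init() tolerates being called once per test in the same
/// binary.
fn maybe_start_logging() {
    let _ = tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .try_init();
}

#[test]
fn my_test() {
    maybe_start_logging();
    tracing::debug!("only visible when RUST_LOG allows it");
}
```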
* feat: Add database rules configuration for mutable buffer
* refactor: change all database rules usage to use mutable_buffer_config rather than store_locally
* feat: Change table_names to return either Some(set) or None, rather than a plan
* docs: improve comments
* docs: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: merge conflict
* fix: don't clone a string unless needed
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: test_helpers crate should only be a dev-dep
* fix: object_store no longer has a build script, so no longer needs a build dep
* chore: Alphabetize all Cargo.tomls
* feat: Implement TableProvider for Db
Gets us selection pushdown in plans, sets us up for predicate pushdown
Includes: SendableRecordBatchStreams for mutable buffer and read buffer results
fixup snapshots
* docs: comments
This is the promised cleanup. This structure gets rid of a lot of
intermediate structures and encodes through associated types how the
object stores and path types are related.
The enums are still necessary to avoid having generics leak all over
the place, but the object store variants and path variants should always
match because they'll always come from the object store trait
implementations that use the associated types.
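A condensed sketch of the associated-type pattern (illustrative trait and types):
```
use std::collections::HashMap;

/// Sketch: each object store implementation names its own path type,
/// so a store and a mismatched path type can't be combined at compile
/// time.
trait ObjectStoreApi {
    type Path;

    fn put(&mut self, location: Self::Path, bytes: Vec<u8>);
}

#[derive(Default)]
struct InMemory {
    objects: HashMap<String, Vec<u8>>,
}

struct MemoryPath(String);

impl ObjectStoreApi for InMemory {
    type Path = MemoryPath;

    fn put(&mut self, location: MemoryPath, bytes: Vec<u8>) {
        self.objects.insert(location.0, bytes);
    }
}
```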
* chore: Update arrow + tokio deps
* chore: Use bleeding edge azure
* chore: Update aws + other deps
* fix: fmt
* fix: Switch to in-house version of routerify
* fix: Upgrade to hyper 0.14
The hyper::error module is now private; hyper::Error is the public
re-export
* fix: Upgrade cloud storage to get tokio upgrade
* fix: Upgrade open_telemetry
* fix: Do not call `panic::set_hook` during another panic
Doing so leads to a double panic which aborts the process.
* fix: new h2 error who dis
Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@integer32.com>
Co-authored-by: Jake Goulding <jake.goulding@integer32.com>
* feat: Add selection interface to mutable buffer and query interface
* docs: Update mutable_buffer/src/table.rs
* refactor: rename for consistency
* refactor: use map and filter_map rather than fold
* test: revamp testing so it works in multiple scenarios, fix bug found by same
* fix: Update docs in server/src/db.rs
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: use tsp rather than different functions
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
This adds functionality to the server to load database rules on startup. Follow-on work will update the rules to store additional data (the catalog) and ensure that updates to the catalog can occur as outlined in #651. This work also updates the configuration to not require a database directory, so the server can run entirely in memory. I needed this to get the end-to-end test passing since the file object store API doesn't yet have the functionality needed. I've logged #688 to track adding that in.
* refactor: Remove import of unimplemented macro that's in the prelude
* refactor: Remove allowing of dead code that isn't dead anymore
Co-authored-by: Andrew Lamb <alamb@influxdata.com>
* feat: Chunk Migration APIs and query data in the read buffer via SQL
* fix: Make code more consistent
* fix: fmt / clippy
* chore: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Remove unnecessary Result and make chunks() infallible
* chore: Apply more suggestions from code review
Co-authored-by: Edd Robinson <me@edd.io>
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
This pulls the server configuration into its own module. It updates the config so that databases can be created and saved without holding a write lock across an async call. Follow-on work will add reading of these rules from the object store on server initialization.
This hooks the WAL buffer up to the server. When segments go over their max size, they will be persisted if that configuration was set. A follow-on commit will add persistence based on time (how long the segment has been open).
Adds serialization with compression and checksum for WAL buffer segments.
This required a weird structure where the flatbuffer bytes of ReplicatedWrite were kept as a raw payload. I did this because otherwise each of the replicated writes would have been rebuilt in the segment.
The other thing that isn't ideal is that deserializing a segment actually marshals it into a Rust struct as opposed to keeping the entire thing as raw flatbuffers. We could update this later to have a concept of an open segment (a regular Rust struct) and closed segments that are just the flatbuffers.
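A sketch of the encode side, assuming the `snap` and `crc32fast` crates (the actual layout and crates may differ):
```
/// Sketch: compress the raw flatbuffer payload, then prefix a checksum
/// so corruption can be detected when the segment is read back.
fn serialize_segment(payload: &[u8]) -> Vec<u8> {
    let compressed = snap::raw::Encoder::new()
        .compress_vec(payload)
        .expect("compression failed");

    let mut hasher = crc32fast::Hasher::new();
    hasher.update(&compressed);
    let checksum = hasher.finalize();

    let mut out = Vec::with_capacity(4 + compressed.len());
    out.extend_from_slice(&checksum.to_le_bytes());
    out.extend_from_slice(&compressed);
    out
}
```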
Adds a new endpoint /iox/api/v1/id that accepts a JSON object in the form:
{"id":42}
and calls into the server's `set_id` method to assign the writer ID to the server.
* fix: do not overwrite databases
Do not overwrite an existing database when attempting to create a DB with an
existing name.
This should probably update the existing database config, but for the time being
this prevents the existing database from being silently dropped.
Fixes #643
* test: ensure duplicate DB names returns an error
Covers #643 with a test case.
Now you have to designate whether you're adding a directory or a file
name, with some assumptions based on whether paths come from cloud
object storage or the file system.
A notable difference: checking to see if "apple/b" is a prefix of
"apple/bear/cow.json" will now say no; only whole directories are
matched.
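A sketch of component-wise prefix matching that behaves this way (illustrative, operating on plain strings rather than the real path types):
```
/// Sketch: "apple/b" is NOT a prefix of "apple/bear/cow.json" because
/// only whole directory components are compared.
fn is_dir_prefix(prefix: &str, path: &str) -> bool {
    let prefix_parts: Vec<&str> = prefix.split('/').collect();
    let path_parts: Vec<&str> = path.split('/').collect();
    path_parts.len() >= prefix_parts.len()
        && prefix_parts.iter().zip(&path_parts).all(|(a, b)| a == b)
}

fn main() {
    assert!(is_dir_prefix("apple", "apple/bear/cow.json"));
    assert!(!is_dir_prefix("apple/b", "apple/bear/cow.json"));
}
```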
* feat: add create and get database to API
This commit is the start of the IOx-specific API; it puts everything under /iox/api/v1. Creating a database is done with a PUT, and a GET request retrieves the `DatabaseRules` details.
* feat: add defaults for DatabaseRules for create_database
This pulls the different backing implementations into their own modules. They're about to get more complex, so it felt like it was time to separate them out rather than building towards a single multi-thousand-line lib.rs. The error type is only defined in lib and imported by the individual modules, which I think makes it easier to work with.
* refactor: Create database with mutable buffer, read buffer and parquet files
* docs: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: rename planners to clarify what they are
* refactor: simplify traits
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* feat: implement partition chunk rollover + ids and timestamps
* feat: add last_write_timestamp
* refactor: Use DateTime<Utc> rather than Instant
* refactor: avoid use of structure to generate ids
* refactor: Update docs, remove unused field
* refactor: rename partition -> chunk
* feat: Introduce new partition, which is a holder for Chunks
* refactor: Remove use of wal from mutable database
* refactor: cleanups, remove last direct use of chunks
* fix: delete old benchmarks
* fix: clippy sacrifice
* docs: tidy up comments
* refactor: remove unused error types
* chore: remove commented out tests
This moves the HTTP API over to Routerify, which has the basic route parsing logic that will enable the API design for IOx.
I had a little trouble with the error handling in Routerify, so I ended up creating a macro for constructing error responses in the HTTP API. I'm not sure what I think of this pattern, so I'm interested in what others think. Another option would be to have two functions for each API endpoint: one called x_handler with a Routerify-compatible function signature, and another called x with the Result<Response<Body>, ApplicationError> return type, which would make the ? operator work in those functions. That would eliminate the need for the return_err macro.
I'm happy to refactor to that if people prefer it.
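A condensed sketch of that two-function option, with stand-in types in place of the real hyper/Routerify ones:
```
use std::convert::Infallible;

// Stand-ins for hyper's Response<Body> and the crate's error type.
struct Response(String);

enum ApplicationError {
    BadRequest(String),
}

impl ApplicationError {
    fn into_response(self) -> Response {
        match self {
            ApplicationError::BadRequest(msg) => Response(format!("400: {}", msg)),
        }
    }
}

/// The inner function returns a Result, so `?` works inside it.
async fn create_db(body: &str) -> Result<Response, ApplicationError> {
    if body.is_empty() {
        return Err(ApplicationError::BadRequest("empty body".into()));
    }
    Ok(Response("created".into()))
}

/// The outer `_handler` has the router-friendly signature and maps
/// errors to responses, replacing the return_err macro.
async fn create_db_handler(body: &str) -> Result<Response, Infallible> {
    Ok(create_db(body).await.unwrap_or_else(|e| e.into_response()))
}
```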
Moves the server ID value outside of the mutex-wrapped `Config` type and
into the `Server` struct, synchronising threads using an atomic instead.
This prevents any calls to `require_id()` from having to contend for a
read lock on the config mutex while maintaining linearisation of ID
values: all threads see the change as soon as it is set.
On x86 this synchronisation is actually completely "free": memory
accesses are totally ordered, so it turns into a compiler fence that
prevents LLVM from reordering the program at compile time, with no
runtime overhead at all.
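A minimal sketch of the atomic scheme (types assumed; 0 standing in for "not set"):
```
use std::num::NonZeroU32;
use std::sync::atomic::{AtomicU32, Ordering};

/// Sketch: the server ID lives in an atomic on the Server struct,
/// outside the config mutex; 0 means "not set yet".
struct Server {
    id: AtomicU32,
}

impl Server {
    fn set_id(&self, id: NonZeroU32) {
        self.id.store(id.get(), Ordering::SeqCst);
    }

    /// No lock contention: just an atomic load.
    fn require_id(&self) -> Option<NonZeroU32> {
        NonZeroU32::new(self.id.load(Ordering::SeqCst))
    }
}
```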
Previously the ID value was written as part of the config, but this is no
longer the case. It should be easy to add back in, but it appears to be
redundant, as the ID is required to construct the path to the config when
loading it in the first place.
* chore: Connect server crate to http routes and server command
This updates the http_routes and main server to use the Server crate. This is the first step in a larger effort to start hooking up the initial IOx API and get things running end to end with in-memory database, WAL buffer, and object storage.
For the time being, this disables the previous disk-based WAL. Or rather, it uses the WriteBufferDb without it. That means that this IOx server has no persistence until later. Because of this, the restart step in the end-to-end test was removed.
Later PRs will add the WAL buffer and restart logic that loads from object store. We can opt to bring the local disk based WAL back later, but it will likely require some refactoring to work with how the WAL Buffer will operate.
* feat: Implement write buffer to Parquet snapshotting
This introduces snapshot to the server packages to manage snapshotting. It also introduces a new trait for representing a Partition. There is a very crude API wired up in http_routes for testing purposes. Follow on work will bring the server package into http_routes and rework the snapshot API.
Previously, $ORG and $BUCKET were joined as:
$ORG + "_" + $BUCKET
This is fine unless either $ORG or $BUCKET includes a "_", such as:
$ORG = "org_a"
$BUCKET = "bucket"
and
$ORG = "org"
$BUCKET = "a_bucket"
both of which join to the same name, "org_a_bucket".
This change continues to join $ORG and $BUCKET with an underscore, but
disallows underscores in either $ORG or $BUCKET. It appears these values
are non-zero u64s in the gRPC protocol converted to their base-10 string
representations for the DB name, so this seems safe to enforce.
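A sketch of the validation described above (illustrative error handling):
```
/// Sketch: join org and bucket into a database name, rejecting
/// underscores in either part so the joined name stays unambiguous.
fn org_and_bucket_to_database(org: &str, bucket: &str) -> Result<String, String> {
    if org.contains('_') || bucket.contains('_') {
        return Err("org and bucket must not contain '_'".to_string());
    }
    Ok(format!("{}_{}", org, bucket))
}

fn main() {
    assert_eq!(
        org_and_bucket_to_database("org", "bucket").unwrap(),
        "org_bucket"
    );
    assert!(org_and_bucket_to_database("org_a", "bucket").is_err());
}
```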
In addition, this change introduces a `DatabaseName` type to avoid
passing bare strings around and to allow consuming code to ensure only
valid database names are provided, checked at compile time. This type
works with both owned and borrowed content so it doesn't force a string
copy where we can avoid it, and it derefs to `str` to make it easier to
use with existing code.
I've been minimally invasive in pushing the `DatabaseName` through the
existing code and figured I'd see what the sentiment is first.
Candidates for conversion from `str` to `DatabaseName` that seem to make
sense to me include:
- `DatabaseStore` trait
- `RemoteServer` trait
- Others? Basically anywhere other than the "edge" API inputs
Fixes #436 (thanks @zeebo)
* chore(server): add logs for dropped WAL segments
Added logging for dropped writes and old segments in rollover scenarios
Also including a dep on tracing and dev-dep on test_helpers
Refs: #466
* chore(server): Add more context to logs
Minor cleanup around remove_oldest_segment usage
Suggestions from @alamb's review