Logs and traces are emitted via one pipeline. For now, it is not
possible to emit both at the same time, but it should be possible in a few weeks, as
tokio/tracing/tracing-subscriber has recently been going through some refactoring.
All affected flags are well-documented, and I have tested all but the
OTLP output flags.
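As a rough illustration of the destination flags, `tracing-subscriber` can be pointed at either stream. This is a minimal sketch, not the actual IOx code; the `init_logging` function, its `destination` parameter, and the `--log-destination` flag name are hypothetical:

```rust
use tracing_subscriber::fmt;

// Minimal sketch: pick the log destination at startup based on a
// hypothetical `--log-destination` flag value.
fn init_logging(destination: &str) {
    match destination {
        "stdout" => fmt().with_writer(std::io::stdout).init(),
        // stderr is the conventional default destination for logs
        _ => fmt().with_writer(std::io::stderr).init(),
    }
}
```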
chore: clippy happy
chore: revert instrumentation changes
feat: add log format logfmt, log destinations stderr, stdout
chore: clippy happy
Closes #1206
__Rationale__
The IOx server can be started with a writer ID flag/env variable, or the writer ID
can be set with an RPC call from conductor.
If the writer ID is passed via the flag, the server immediately loads the database configuration
at startup time.
If the pod networking system is not yet ready during early startup (e.g. when the main
container wins the race against the istio envoy sidecar), network requests to S3 (and to the
local cloud metadata server anycast address used to retrieve the IAM auth token) will fail.
In these cases, generally the simplest thing to do is to crash early; k8s will restart the
container and eventually envoy will be ready.
If the crash is caused by a config error, the crashloop will effectively
inform the deployment controller that the rollout has failed, and it will roll back to the previous version of the deployment (and tools like argocd will roll back the whole rollout).
__Caveat__
This PR doesn't change or address the scenario where the IOx server starts without a writer ID
and conductor sets it later.
* refactor: plumb executor into all Db instances
* refactor: Route all query executions through worker pool
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: use a dedicated tokio threadpool for running queries
* feat: plumb the number of executor threads through to the command line (see the sketch after this list)
* fix: Logical merge conflict
* fix: another logical conflict
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
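For illustration, a dedicated query runtime with a configurable thread count can be built roughly like this. This is a sketch only; the `query_runtime` function name, thread name, and flag plumbing are assumptions, not the IOx implementation:

```rust
use tokio::runtime::Runtime;

// Build a separate multi-threaded runtime for query execution so that
// long-running queries don't compete with the main server runtime.
fn query_runtime(num_threads: usize) -> std::io::Result<Runtime> {
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(num_threads)
        .thread_name("query-executor")
        .enable_all()
        .build()
}
```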
Rationale: it's easier to work with a newline-separated list of names than a comma-separated one.
You can `head` or `tail` it, pipe it into `| while read i; do ...`, or just copy-paste it and manipulate it more easily.
Rationale
---------
Our CLI needs to be able to accept configuration as JSON and render configuration as JSON.
Protobufs technically have an official JSON encoding, usually called `jsonpb`, but prost doesn't
offer native support for it.
`prost` allows us to specify arbitrary derive metadata to be added to generated
code. We emit the `serde` derive directives in the two packages that generate prost code
(`generated_types` and `google_types`).
We use `serde(rename_all = "camelCase")` to approximate `jsonpb`.
We instruct `prost` to use `bytes::Bytes` for some types, hence we must turn on the `serde` feature
on the `bytes` dependency.
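For illustration, the build-script side of this looks roughly like the following. This is a sketch under assumptions: the proto file paths are made up, and the exact attribute strings may differ from the IOx build script:

```rust
// build.rs (sketch): derive serde on all prost-generated types, rename
// fields to camelCase to approximate jsonpb, and map proto `bytes` fields
// to `bytes::Bytes` (which requires the `serde` feature on `bytes`).
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut config = prost_build::Config::new();
    config.type_attribute(
        ".",
        "#[derive(serde::Serialize, serde::Deserialize)] #[serde(rename_all = \"camelCase\")]",
    );
    config.bytes(&["."]);
    // hypothetical paths, for illustration only
    config.compile_protos(&["proto/management.proto"], &["proto"])?;
    Ok(())
}
```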
We also use JSON to serialize the output of the `database get` command, to showcase the feature
and get rid of a TODO. In a subsequent PR I'll teach `database create` (and the yet-to-be-done `database update`) to accept an optional JSON configuration body so we can configure partitioning, lifecycle, sharding rules, etc.
Caveats
-------
This is not technically `jsonpb`. Main issues:
1. default values not omitted
2. no special rendering of special types like `google.protobuf.Any`
Future work
-----------
Figure out if we can get fully compliant `jsonpb`, or at least a decent approximation.
Effect
------
```console
$ cargo run -- database get foobar_weather
{
  "name": "foobar_weather",
  "partitionTemplate": {
    "parts": [
      {
        "part": {
          "time": "%Y-%m-%d %H:00:00"
        }
      }
    ]
  },
  "lifecycleRules": {
    "mutableLingerSeconds": 0,
    "mutableMinimumAgeSeconds": 0,
    "mutableSizeThreshold": 0,
    "bufferSizeSoft": 0,
    "bufferSizeHard": 0,
    "sortOrder": {
      "order": 2,
      "sort": {
        "createdAtTime": {}
      }
    },
    "dropNonPersisted": false,
    "immutable": false
  },
  "walBufferConfig": null,
  "shardConfig": {
    "specificTargets": null,
    "hashRing": null,
    "ignoreErrors": false
  }
}
```
* refactor: inline catalog crate to server
* refactor: Add fine grained (object level) catalog locking
* fix: Move mod definition and use to top of file
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: Rework Db to use Catalog for chunk state
* docs: Update server/src/db.rs
* fix: fmt
* fix: fmt
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: Management API + CLI command to close a chunk and move to read buffer
* refactor: Less copy-pasta
* fix: track only once, use `let _` instead of `.ok()`
* docs: Apply suggestions from code review
fix comments (🤦‍♀️ for copy/pasta)
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* docs: Update server/src/lib.rs
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Use DatabaseName rather than impl Into<String>
* fix: Fixup logical merge conflicts
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* docs: Update README.md to refer to the command and CLI help
* docs: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
so that, if not present, the AWS client library can use built-in auth providers,
such as the InstanceMetadataProvider, which is commonly used to obtain the credentials
granted to the AWS VM via cloud-native mechanisms.
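As a rough sketch of that fallback (assuming the rusoto crates; the `s3_client` function name and parameter handling are illustrative, not the actual IOx code):

```rust
use rusoto_core::{HttpClient, Region};
use rusoto_credential::{ChainProvider, StaticProvider};
use rusoto_s3::S3Client;

// Only install a static credentials provider when both keys were supplied;
// otherwise fall back to rusoto's default chain, which includes the
// InstanceMetadataProvider used on AWS VMs.
fn s3_client(access_key: Option<String>, secret_key: Option<String>, region: Region) -> S3Client {
    let dispatcher = HttpClient::new().expect("failed to create request dispatcher");
    match (access_key, secret_key) {
        (Some(key), Some(secret)) => {
            S3Client::new_with(dispatcher, StaticProvider::new_minimal(key, secret), region)
        }
        _ => S3Client::new_with(dispatcher, ChainProvider::new(), region),
    }
}
```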
* feat: make read_group and read_window_aggregate work across chunks
* refactor: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* refactor: Update query/src/frontend/influxrpc.rs
Improve logic and use strings directly
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: fmt
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This variant is now much larger than the other variants because of all
the CLI options on `server`. If we don't box this, the total size of the
enum depends on the size of the server config. This seems like a good
suggestion to me.
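Illustrative only (the type names below are hypothetical, not the actual config structs):

```rust
struct DatabaseConfig {
    name: String,
}

// Stand-in for the server config with its many CLI options.
struct ServerConfig {
    flags: [u64; 64],
}

enum Command {
    Database(DatabaseConfig),
    // Boxing keeps this variant pointer-sized; without the Box, `Command`
    // would be at least as large as `ServerConfig`.
    Server(Box<ServerConfig>),
}
```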
There were already too many items in the top-level match that were being
ignored in some match arms, and I'm about to add more, so this switches to an
MLM (Multi-Level Match, not Multi-Level Marketing ;))
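A sketch of the shape of the change (hypothetical enums, not the actual CLI types): match on the outer command first, then make the finer-grained decisions in a nested match, so each arm only names what it actually uses.

```rust
enum LogFormat {
    Full,
    Logfmt,
}

enum Cli {
    Database { name: String },
    Server { format: LogFormat },
}

fn run(cli: Cli) {
    // outer level: which command are we running?
    match cli {
        Cli::Database { name } => println!("operating on database {}", name),
        Cli::Server { format } => {
            // inner level: decisions that only matter for `server`
            match format {
                LogFormat::Full => println!("server with full logs"),
                LogFormat::Logfmt => println!("server with logfmt logs"),
            }
        }
    }
}
```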