Currently heappy reports "bytes allocated" and "inuse_bytes" ("bytes allocated" minus "freed bytes").
Since the allocation site and the free site can be wildly different, it's very hard to correlate these events
and produce a meaningful "inuse_bytes" graph.
I have a plan for how to fix this for good, but in the meantime
I'd like to report raw "freed bytes".
In aed37ab50a I fixed the main
problem that rendered free tracking useless: the allocator actually allocates more bytes than requested, but we were tracking the amount of memory the user requested rather than the size of the block that was returned to the user. When tracking a call to free, however, we used the size of the actual memory block, which was larger than the amount tracked at allocation time; this led to negative "in use memory" numbers.
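A minimal sketch of the fixed accounting, expressed in terms of Rust's `GlobalAlloc` rather than heappy's actual malloc/free hooks (names and structure are illustrative): the key point is that the same block size is recorded on both the allocation and the free side, so "allocated - freed" can never go negative.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
static FREED: AtomicUsize = AtomicUsize::new(0);

struct TrackingAllocator;

unsafe impl GlobalAlloc for TrackingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Record the size of the block handed back to the caller,
        // not merely what was requested before any rounding.
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // The same size is recorded on free, keeping both counters symmetric.
        FREED.fetch_add(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static GLOBAL: TrackingAllocator = TrackingAllocator;
```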
* feat: add jaeger and otlp flags
* chore: add jaeger and otlp features to CI test and deploy image
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: Update deps (including arrow 5.0.0 --> arrow 5.1.0)
* chore: update all the things
* refactor: Update serving readiness check due to change in Tonic API
* chore: update more deps
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: Attempt to dump a stacktrace to stdout prior to process abort
* refactor: rewrite signal handling in terms of libc, remove sig dep
* refactor: print to stderr
* fix: Update src/main.rs
* docs: note provenance
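A rough sketch of what "dump a stacktrace prior to process abort" can look like when signal handling is done directly via `libc` (illustrative only, not the PR's code; capturing a backtrace inside a signal handler is not strictly async-signal-safe, so this is best-effort diagnostics):

```rust
use std::io::Write;

// Best-effort: print a backtrace to stderr when SIGABRT fires, then
// restore the default action and re-raise so the process still aborts.
extern "C" fn dump_backtrace(signum: libc::c_int) {
    let bt = std::backtrace::Backtrace::force_capture();
    let _ = writeln!(std::io::stderr(), "caught signal {signum}:\n{bt}");
    unsafe {
        libc::signal(signum, libc::SIG_DFL);
        libc::raise(signum);
    }
}

fn install_abort_handler() {
    unsafe {
        libc::signal(
            libc::SIGABRT,
            dump_backtrace as extern "C" fn(libc::c_int) as libc::sighandler_t,
        );
    }
}
```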
This refactors the write buffer a bit for:
- **Testing:** Add generic tests for the Kafka and the mock
  implementations. The same interface can easily be used to add new
  implementations (e.g. via Redis, filesystem, ...).
- **Partition on Write:** The caller of the write operation must now
  specify the partition/sequencer ID (see the sketch after this list). The
  implicit partitioning of the Kafka writer would have led to broken data,
  since we must never spill entries w/ the same primary key over multiple
  partitions. At the moment we will only use partition 0, but we can easily
  implement better logic in the future.
- **Improved Mocking:** The mocked implementation now simulates a system
that feels more real. Especially the handling around multiple streams
and "write while read" has been improved. This will be helpful for
testing and for new features like seeking (during replay). A solid
realistic mock also helps us to ensure that the tests using the mock
do not rely on unrealistic behavior too much.
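A hypothetical sketch of the resulting interface (illustrative names, not the exact IOx trait): the caller passes the sequencer/partition ID explicitly, so entries with the same primary key cannot be spread across partitions behind its back.

```rust
use async_trait::async_trait;

#[async_trait]
pub trait WriteBufferWriting: Send + Sync {
    /// Write `entry` to the given sequencer (e.g. Kafka partition) and
    /// return the sequence number assigned to it.
    async fn store_entry(&self, entry: &[u8], sequencer_id: u32) -> Result<u64, std::io::Error>;
}

// For now callers always pass sequencer_id 0; smarter partitioning logic
// can be layered on later without changing the interface.
```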
* feat: make aws, gcp, azure dependencies optional
* fix: only run object store tests if the features are enabled (see the sketch below)
* fix: clean up testing
* fix: rename step
* fix: add to list of jobs
* fix: remove test with object store
* fix: review comments
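A small illustrative example of how such feature-gating can look on the test side (hypothetical module and feature names, assuming Cargo features like `aws` gate the optional dependencies):

```rust
// Only compiled and run when built with `--features aws`; without the
// feature the AWS object store dependency is not pulled in at all.
#[cfg(feature = "aws")]
mod s3_object_store_tests {
    #[test]
    fn put_then_get_roundtrip() {
        // exercise the S3-backed object store here
    }
}
```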
The persistence windows data structures (including the
checkpoints) have nothing to do with the mutable buffer per se, so let's
move them into their own crate. This also means `parquet_file` no
longer depends on `mutable_buffer`.
Instead of waiting for the server ID to be set and only then marking the server
as errored, directly check the object store on startup. This is
important so that we fail fast when Istio isn't up and running yet.
This PR factors out the tracing/logging CLI options into the `trogging` utility crate,
so that multiple binaries from the IOx suite (such as conductor) can use the same (and quite complex)
logging/tracing configuration options (flags and env vars).
Closes influxdata/conductor#343
__Rationale__
We currently use the `tracing` framework to output to both log outputs (e.g. stdout for k8s) and distributed tracing collectors (e.g. opentelemetry jaeger).
However, due to a limitation in the `tracing` SDK, we can only have one "filter" level that applies
to both logs and tracing outputs. This is impractical because tracing collectors are designed
to receive high-verbosity data (which will then be sampled within the opentelemetry library),
while logs are generally limited to the DEBUG level in production.
This PR adds a `FilteredLayer` tracing subscriber layer that wraps another subscriber layer with an independent
filter, which can filter events going to the wrapped subscriber layer more aggressively than the global filter.
This will allow us to emit logs at INFO or DEBUG level while passing all events to opentelemetry at TRACE
level (the opentelemetry SDK will then sample the events so that only a small part is sent to the
OpenTelemetry collector).
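For illustration, here is a minimal sketch of the same idea using the per-layer filters that newer `tracing-subscriber` releases ship (`Layer::with_filter`); the PR's `FilteredLayer` implements this wrapping by hand, so treat the snippet as the intent rather than the PR's code:

```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter, Layer};

fn init_tracing() {
    // Log output stays at INFO, as we would run it in production.
    let log_layer = tracing_subscriber::fmt::layer()
        .with_filter(EnvFilter::new("info"));

    // Stand-in for the OpenTelemetry layer: filtered independently at TRACE,
    // leaving sampling to the OpenTelemetry SDK / collector.
    let trace_layer = tracing_subscriber::fmt::layer()
        .with_writer(std::io::stderr)
        .with_filter(EnvFilter::new("trace"));

    tracing_subscriber::registry()
        .with(log_layer)
        .with(trace_layer)
        .init();
}
```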
__Note__
This PR just implements the `FilteredLayer` and a test. Another PR will integrate this with
our log/tracing setup code.