Commit Graph

31 Commits (0ac6ddf057dfdb98dd27743aa42e25e3ef91b365)

Author SHA1 Message Date
Carol (Nichols || Goulding) 493f331e4b
fix: Remove the max_compact_size knob and hardcode a multiple (#7197)
* fix: Remove the max_compact_size knob and hardcode a multiple

Rather than panic if the user hasn't set this knob in a particular way,
set the max_compact_size to the minimum value we need by multiplying
max_desired_file_size_bytes by MIN_COMPACT_SIZE_MULTIPLE.

Fixes influxdata/idpe#17259.

* refactor: Move computation of max_compact_size_bytes into compactor config

* test: change test setups to reflect the purposes of the tests

---------

Co-authored-by: NGA-TRAN <nga-tran@live.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-03-15 11:21:28 +00:00
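For illustration, a minimal sketch of the derivation described in the commit above: the compaction input ceiling is computed from the desired output file size instead of being configured separately. The concrete multiple (3) and the struct layout are assumptions for the example, not the actual IOx code.

```rust
/// Assumed value for demonstration only; the real constant lives in the compactor crate.
const MIN_COMPACT_SIZE_MULTIPLE: u64 = 3;

#[derive(Debug, Clone)]
struct CompactorConfig {
    /// Desired size of the output parquet files.
    max_desired_file_size_bytes: u64,
}

impl CompactorConfig {
    /// Derive the maximum compaction input size from the desired output file
    /// size, so the value can never be misconfigured below the required minimum.
    fn max_compact_size_bytes(&self) -> u64 {
        self.max_desired_file_size_bytes * MIN_COMPACT_SIZE_MULTIPLE
    }
}

fn main() {
    let config = CompactorConfig {
        max_desired_file_size_bytes: 100 * 1024 * 1024, // 100 MiB
    };
    println!("max_compact_size = {} bytes", config.max_compact_size_bytes());
}
```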
Joe-Blount c87113ccbf
chore(iox/compactor): rename max_input_parquet_bytes_per_partition (#7160) 2023-03-08 17:08:08 +00:00
Joe-Blount 86dd72ef1f
chore: add panic at compactor startup for invalid config options (#7141)
* chore: add panic at compactor startup for invalid config options

* chore: apply comments
2023-03-07 21:02:01 +00:00
Andrew Lamb f3a16a1221
feat(compactor2): add catalog upgrade information to tests (#7075)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-02-28 19:28:42 +00:00
Carol (Nichols || Goulding) faae5eb438 chore: Rerun cargo hakari manage-deps 2023-02-27 11:56:15 +01:00
Andrew Lamb 21a3c8c40d refactor: delete all at once algorithm 2023-02-17 06:24:26 -05:00
Nga Tran f69c8adc7c
feat: Compact partition with many L0 files (#7007)
* feat: initial implementation of the split

* feat: split many L0 files in groups and compact them into new and fewer L0 files

* test: remove inappropriate AllAtOnce test

* refactor: move file classification for initial target to its own function

* fix: pop the branch from start to end

* chore: address review comments

* feat: support splitting to many L1 files

* feat: only add an extra round to compact level-n files into same-level-n files if those files plus the overlapped level-n-plus-1 files exceed the limit

* chore: Apply suggestions from code review

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

* chore: final cleanup and address comments

* chore: run fmt

---------

Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-02-16 21:17:25 +00:00
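A rough sketch of the "split many L0 files in groups" idea from the commit above, chunking a partition's L0 files into consecutive groups under a byte budget. The grouping criterion, the `ParquetFile` shape, and `max_group_bytes` are illustrative assumptions, not the actual IOx implementation.

```rust
/// Hypothetical, simplified stand-in for the catalog's parquet file record.
#[derive(Debug, Clone)]
struct ParquetFile {
    id: i64,
    file_size_bytes: u64,
}

/// Split a long list of L0 files into consecutive groups whose total size stays
/// under `max_group_bytes`, so each group can be compacted on its own into a
/// new, larger L0 file.
fn split_l0_into_groups(files: &[ParquetFile], max_group_bytes: u64) -> Vec<Vec<ParquetFile>> {
    let mut groups = Vec::new();
    let mut current = Vec::new();
    let mut current_bytes = 0u64;

    for file in files {
        if !current.is_empty() && current_bytes + file.file_size_bytes > max_group_bytes {
            groups.push(std::mem::take(&mut current));
            current_bytes = 0;
        }
        current_bytes += file.file_size_bytes;
        current.push(file.clone());
    }
    if !current.is_empty() {
        groups.push(current);
    }
    groups
}

fn main() {
    let files: Vec<ParquetFile> = (0..10)
        .map(|id| ParquetFile { id, file_size_bytes: 40 * 1024 * 1024 })
        .collect();
    // With a 100 MiB budget, ten 40 MiB files end up in five groups of two.
    let groups = split_l0_into_groups(&files, 100 * 1024 * 1024);
    let first_ids: Vec<i64> = groups[0].iter().map(|f| f.id).collect();
    println!("{} groups, first group: {:?}", groups.len(), first_ids);
}
```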
Nga Tran 5c506058da
feat: skip partitions of wide tables (#6978)
* feat: skip partitions of wide tables

* test: one more test

* refactor: address review comments
2023-02-14 16:42:13 +00:00
Andrew Lamb 263d8fe21f
chore: Layout tests with `TargetLevel` algorithm + update display (#6977)
* refactor: move ParquetFileSimulator to compactor2_test_utils

* chore: Test with new algorithm + update display

* chore: Updates

* chore: Update setting to match prod
2023-02-13 22:12:55 +00:00
dependabot[bot] 0cbd9f6a82
chore(deps): Bump tokio-util from 0.7.5 to 0.7.7 (#6964)
---
updated-dependencies:
- dependency-name: tokio-util
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-13 10:10:53 +00:00
dependabot[bot] c0c9b51b9e
chore(deps): Bump tokio-util from 0.7.4 to 0.7.5 (#6941)
Bumps [tokio-util](https://github.com/tokio-rs/tokio) from 0.7.4 to 0.7.5.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-util-0.7.4...tokio-util-0.7.5)

---
updated-dependencies:
- dependency-name: tokio-util
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-10 09:42:00 +00:00
Marco Neumann 18d5924dfd
test: allow testing the compactor w/o any real data (#6908)
* test: allow testing the compactor w/o any real data

Things that are missing:

- output files have nondeterministic IDs, which interfere w/ snapshot
  testing. We should probably normalize the IDs somehow.
- time ranges of output files are not captured correctly (because the
  mock sink doesn't know how to calculate them)

* fix: Add output assertion

* fix: fmt

* docs: improve

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

* fix: fmt

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: Andrew Lamb <alamb@influxdata.com>
2023-02-08 19:10:28 +00:00
Marco Neumann dcba47ab58
feat: allow the compactor to process all known partitions (#6887)
* feat: `PartitionRepo::list_ids`

* refactor: `CatalogPartitionsSource` => `CatalogToCompactPartitionsSource`

* feat: allow the compactor to process all known partitions

Closes #6648.

* docs: improve

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

---------

Co-authored-by: Andrew Lamb <alamb@influxdata.com>
2023-02-08 09:32:21 +00:00
Marco Neumann 52b43c40bc
refactor: use "endless" stream for compactor work (#6803)
Instead of looping and polling a fresh set of partitions and
constructing a stream from that, use an endless stream instead. This
helps w/ efficiency during roll-overs since we can already start to
process the next set of partitions while the last ones from the previous
round are still in-progress.

Closes #6750.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-02-06 11:11:39 +00:00
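A minimal sketch of the endless-stream approach described above, using the `futures` crate. The `fetch_batch` closure stands in for the real catalog query; it and the `PartitionId` alias are assumptions for the example.

```rust
use futures::{stream, Stream, StreamExt};

/// Hypothetical stand-in for IOx's partition ID type.
type PartitionId = i64;

/// Build an endless stream of partitions to compact: whenever the current
/// batch is exhausted, the next one is fetched, so downstream work on the new
/// round can start while earlier partitions are still being processed.
fn endless_partitions<F, Fut>(fetch_batch: F) -> impl Stream<Item = PartitionId>
where
    F: Fn() -> Fut,
    Fut: std::future::Future<Output = Vec<PartitionId>>,
{
    stream::unfold(fetch_batch, |fetch_batch| async move {
        let batch = fetch_batch().await;
        Some((stream::iter(batch), fetch_batch))
    })
    .flatten()
}

#[tokio::main]
async fn main() {
    // A toy "source" that always returns the same three partitions.
    let partitions = endless_partitions(|| async { vec![1, 2, 3] });
    futures::pin_mut!(partitions);

    // Take a few items to show the stream never terminates on its own.
    for _ in 0..5 {
        println!("compact partition {:?}", partitions.next().await);
    }
}
```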
Nga Tran e85de74a5d
feat: partition filters for TargetLevel version and a complete test (#6858)
* feat: partition filters for TargetLevel version and a complete test

* chore: Apply suggestions from code review

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

* chore: run fmt after applying review suggestions in git

---------

Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-02-04 19:54:41 +00:00
Carol (Nichols || Goulding) 30fea67701
fix: Move variables within format strings. Thanks clippy!
Changes made automatically using `cargo clippy --fix`.
2023-02-03 13:06:17 -05:00
Nga Tran 1535366666
refactor: rename compact algo versions to reflect their actual work (#6841)
* refactor: rename compact algo versions to reflect their actual work

* chore: Apply suggestions from code review

Co-authored-by: Andrew Lamb <alamb@influxdata.com>

---------

Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-02-03 15:06:38 +00:00
Marco Neumann 0a20bd404e
feat: allow selection of different compactor algo versions (#6800)
Required to lift & shift to hot-cold compaction while keeping the codebase
maintainable.
2023-02-01 15:33:41 +00:00
Marco Neumann 62697265c1
feat: compactor sharding (#6729)
I'm not saying we have to use this, but this is a demonstration of how easy
it would be to add sharding to the compaction tier, and it also acts as a
"backup / insurance" if we ever need it.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-31 14:37:06 +00:00
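One plausible hash-based sharding scheme, sketched for illustration only; the commit may distribute partitions differently, and the function and type names here are assumptions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical stand-in for IOx's partition ID type.
type PartitionId = i64;

/// Decide whether this compactor instance is responsible for a partition by
/// hashing its ID into one of `shard_count` buckets.
fn is_my_partition(partition_id: PartitionId, shard_id: u64, shard_count: u64) -> bool {
    let mut hasher = DefaultHasher::new();
    partition_id.hash(&mut hasher);
    hasher.finish() % shard_count == shard_id
}

fn main() {
    // Shard 0 of 2 only compacts the partitions that hash into its bucket.
    let mine: Vec<PartitionId> = (1..=10).filter(|&p| is_my_partition(p, 0, 2)).collect();
    println!("partitions handled by shard 0: {mine:?}");
}
```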
Marco Neumann 515f0eef64
feat: simple compactor2 self-protection (#6728)
Add a rough "partition is too big" filter for now, until we can deal
with such partitions (the framework allows it, but we need to set up the proper
divide-and-conquer components).

This will hopefully prevent our prod compactor from dying that often.

Note that this is also duct-tape around two issues:

- DataFusion not accounting in-flight data all the time
- Our wide fan-out query plans (see https://github.com/influxdata/idpe/issues/16768#issuecomment-1387056833 )

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-30 10:57:47 +00:00
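A minimal sketch of such a "partition is too big" filter, assuming a hypothetical `ParquetFile` record and byte limit; the real check likely plugs into the compactor's filter components rather than a free function.

```rust
/// Hypothetical, simplified stand-in for a partition's file listing.
struct ParquetFile {
    file_size_bytes: u64,
}

/// Rough self-protection filter: skip a partition whose total input size
/// exceeds a hard limit, instead of letting a huge fan-out plan take the
/// compactor down.
fn partition_is_compactable(files: &[ParquetFile], max_input_bytes: u64) -> bool {
    let total: u64 = files.iter().map(|f| f.file_size_bytes).sum();
    total <= max_input_bytes
}

fn main() {
    let files = vec![
        ParquetFile { file_size_bytes: 200 * 1024 * 1024 },
        ParquetFile { file_size_bytes: 300 * 1024 * 1024 },
    ];
    // With a 256 MiB limit this partition is skipped for now.
    println!("compactable: {}", partition_is_compactable(&files, 256 * 1024 * 1024));
}
```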
Marko Mikulicic 0bc7d90ee3 chore: Avoid defining transition shard numbers in multiple crates 2023-01-27 18:30:34 +01:00
Marko Mikulicic f6e7724d19
fix(compactor2): Update other locations of the TRANSITION_SHARD_INDEX (#6736) 2023-01-27 16:59:24 +00:00
Marco Neumann 30d411dc95
feat: shadow mode (#6712)
* refactor: remove untyped durations from `compactor2`

* feat: shadow mode

Closes #6645.

* refactor: split input and output store
2023-01-26 14:20:55 +00:00
Marco Neumann ed694d3be4
feat: introduce scratchpad store for compactor (#6706)
* feat: introduce scratchpad store for compactor

Use an intermediate in-memory store (can be a disk later if we want) to
stage all inputs and outputs of the compaction. The reasons are:

- **fewer IO ops:** DataFusion's streaming IO requires slightly more
  IO requests (at least 2 per file) due to the way it is optimized to
  read as little as possible. It first reads the metadata and then
  decides which content to fetch. In the compaction case this is (esp.
  w/o delete predicates) EVERYTHING. So in contrast to the querier,
  there is no advantage to this approach. On the contrary, it easily adds
  100ms of latency to every single input file.
- **less traffic:** For divide&conquer partitions (i.e. when we need to
  run multiple compaction steps to deal with them) it is kinda pointless
  to upload an intermediate result just to download it again. The
  scratchpad avoids that.
- **higher throughput:** We want to limit the number of concurrent
  DataFusion jobs because we don't wanna blow up the whole process by
  having too much in-flight Arrow data at the same time. However, while
  performing the actual computation, we were waiting on object store
  IO, which limited our throughput substantially.
- **shadow mode:** De-coupling the stores in this way makes it easier to
  implement #6645.

Note that we assume here that the input parquet files are WAY SMALLER
than the uncompressed Arrow data during compaction itself.

Closes #6650.

* fix: panic on shutdown

* refactor: remove shadow scratchpad (for now)

* refactor: make scratchpad safe to use
2023-01-26 10:03:08 +00:00
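A very rough sketch of the scratchpad idea: stage inputs in memory once, let compaction read from memory, and buffer outputs until they are uploaded in one go. The `Scratchpad` API shown here is an assumption for illustration, not the real IOx interface.

```rust
use std::collections::HashMap;

/// Hypothetical in-memory scratchpad: inputs are downloaded once up front and
/// outputs are buffered until the whole compaction job has succeeded.
#[derive(Default)]
struct Scratchpad {
    files: HashMap<String, Vec<u8>>,
}

impl Scratchpad {
    /// Stage the full content of each input file, one download per file.
    fn stage_inputs(&mut self, inputs: Vec<(String, Vec<u8>)>) {
        for (path, bytes) in inputs {
            self.files.insert(path, bytes);
        }
    }

    /// Compaction reads from memory instead of issuing object-store requests.
    fn read(&self, path: &str) -> Option<&[u8]> {
        self.files.get(path).map(|bytes| bytes.as_slice())
    }

    /// Buffer an output file; it is only uploaded once the job is done.
    fn write_output(&mut self, path: String, bytes: Vec<u8>) {
        self.files.insert(path, bytes);
    }

    /// Drop everything after the results have been uploaded.
    fn clean(&mut self) {
        self.files.clear();
    }
}

fn main() {
    let mut pad = Scratchpad::default();
    pad.stage_inputs(vec![("in/1.parquet".to_string(), vec![0u8; 16])]);
    let output = pad.read("in/1.parquet").expect("staged").to_vec();
    // "Compact" the staged bytes into an output file and buffer it.
    pad.write_output("out/1.parquet".to_string(), output);
    pad.clean();
}
```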
Marco Neumann 40e6a1a437
feat: job semaphore (#6696)
* refactor: avoid too-many-arguments

* refactor: extract `fetch_partition_info`

* feat: job semaphore
2023-01-25 10:35:07 +00:00
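A minimal sketch of a job semaphore built on `tokio::sync::Semaphore`, bounding how many compaction jobs run concurrently; the limit of 4 and the job body are placeholders, not values from the commit.

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Semaphore;

/// Run one compaction job once a semaphore permit is available, so the amount
/// of in-flight Arrow data stays bounded by the number of permits.
async fn run_job(id: usize, semaphore: Arc<Semaphore>) {
    // Wait for a free slot before starting the expensive part of the job.
    let _permit = semaphore.acquire().await.expect("semaphore not closed");
    println!("job {id} running");
    tokio::time::sleep(Duration::from_millis(10)).await; // stand-in for the real work
    // The permit is released when `_permit` is dropped here.
}

#[tokio::main]
async fn main() {
    // At most 4 jobs at a time; the rest queue on the semaphore.
    let semaphore = Arc::new(Semaphore::new(4));
    let handles: Vec<_> = (0..16)
        .map(|id| tokio::spawn(run_job(id, Arc::clone(&semaphore))))
        .collect();
    for handle in handles {
        handle.await.expect("job panicked");
    }
}
```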
Marco Neumann 4521516147
feat: add per-partition timeout (#6686)
It seems that prod was hanging last night. This is pretty hard to debug,
and in general we should protect the compactor against hanging /
malformed partitions that take forever, similar to how the querier
has a timeout for every query. Let's see if this shows
anything in prod (and if not, it's still a desired safety net).

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-24 16:53:47 +00:00
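A minimal sketch of a per-partition timeout using `tokio::time::timeout`; the 30-minute value and the `compact_partition` body are assumptions for the example.

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Stand-in for the real per-partition compaction work.
async fn compact_partition(partition_id: i64) {
    tokio::time::sleep(Duration::from_millis(50)).await;
    println!("partition {partition_id} compacted");
}

#[tokio::main]
async fn main() {
    // Assumed limit: give up on a partition after 30 minutes.
    let per_partition_timeout = Duration::from_secs(30 * 60);
    match timeout(per_partition_timeout, compact_partition(42)).await {
        Ok(()) => println!("done"),
        Err(_elapsed) => println!("partition 42 timed out, skipping it for now"),
    }
}
```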
Nga Tran 411b3db928
fix: Get shard id from a constant (topic, shard_index) to avoid shard_id FK violation errors (#6658)
* fix: make shard_id FK always 1

* fix: use const shard_index to read its ID

* refactor: read shard_id during compactor initialization
2023-01-22 16:49:06 +00:00
Nga Tran 840923abab
refactor: execute compaction plan (#6654)
* chore: address review comment of previous PR

* refactor: execute compact plan

* refactor: we will now compact all L0 and L1 files of a partition and split them as needed

* chore: comments

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-20 22:34:50 +00:00
Nga Tran 9ae03b16d6
feat: invokes catalog functions for compactor2 (#6619)
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-19 10:33:57 +00:00
Marco Neumann 380a855aab
feat: basic compactor2 algo layout (#6616) 2023-01-18 18:51:59 +00:00
Marco Neumann e72173d58d
feat: very basic compactor2 skeleton (#6614)
Sets up the crate and wires up the main binary. No tests yet, no algorithm
framework, just the bare minimum.

Also, I decided not to offer a gRPC server in `compactor2` at the moment
and hence did not implement any handle/delegate infrastructure. We can add
this later if we need it. This also means compactor2 does NOT provide a
catalog service for now.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2023-01-18 16:36:40 +00:00