Commit Graph

437 Commits (9328ba8c4508b8a25529b24b438f7d76a78288df)

Carol (Nichols || Goulding) 9328ba8c45
feat: Use new extra loading info to load read buffer chunks into cache 2022-06-02 09:22:44 -04:00
Carol (Nichols || Goulding) 054c25de50
refactor: Add more methods to DecodedParquetFile
I'm tired of trying to remember which info is on which metadata.
2022-06-02 09:22:44 -04:00
Dom Dwyer f8b83c5085 test: assert panic behaviour
Modifies the existing test added as part of #4695 to ensure a panic is
emitted when serialising an empty parquet file.
2022-06-01 16:55:53 +01:00
Dom Dwyer 0bfc11f4a1 refactor: always panic for empty parquet files
Moves the panic into the child call to_parquet() so all code paths are
covered (i.e. including callers that do not serialise into memory via
to_parquet_bytes()).
2022-06-01 16:54:36 +01:00
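
A minimal sketch of the pattern these two commits describe, with hypothetical to_parquet() / to_parquet_bytes() helpers standing in for the real ones: the panic lives in the inner call, so the in-memory path inherits it for free, and the test pins the behaviour down:

    // Stand-in: `RecordBatch` is a placeholder for the real Arrow type.
    type RecordBatch = Vec<u8>;

    fn to_parquet(batches: &[RecordBatch]) -> Vec<u8> {
        // The single panic site covering all serialisation entry points.
        assert!(!batches.is_empty(), "serialising an empty parquet file");
        batches.concat()
    }

    fn to_parquet_bytes(batches: &[RecordBatch]) -> Vec<u8> {
        to_parquet(batches) // inherits the panic; no separate check needed
    }

    #[test]
    #[should_panic(expected = "empty parquet file")]
    fn test_empty_file_panics() {
        to_parquet_bytes(&[]);
    }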
Dom Dwyer 5d74ae2ac1 test: fix test_metadata_from_parquet_metadata
Changes the test_metadata_from_parquet_metadata test to embed the IOx
metadata before asserting it can be read back.
2022-05-31 17:34:04 +01:00
Marco Neumann 988bd38e93
refactor: remove unused code (#4742) 2022-05-31 11:36:02 +00:00
Marco Neumann 5a95da7327
refactor: do NOT use ANY file IO for parquet reading (#4741) 2022-05-31 11:18:24 +00:00
Marco Neumann 2bf03e57bf
feat: limit tmp parquet file count and size (#4737)
Fixes #4736.

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-31 08:56:15 +00:00
Marco Neumann 79c054ffc9
fix: do NOT block in parquet file IO (#4727)
* fix: do NOT block in parquet file IO

I think for historical reasons we were using blocking IO to read parquet
files. With the current streaming `SendableRecordStream` approach this
is technically NOT required anymore.

Now one might think that the sync-async dance that we did is kinda
harmless, but looking at our production querier I think it is really
bad. The querier seems to be stuck, but looking at `strace` and other
health signals it seems it is not entirely dead. Looking at GDB
backtraces it seems that nearly all threads are busy in
`download_and_scan_parquet`. Looking at the tokio docs
(<https://docs.rs/tokio/1.18.2/tokio/task/fn.spawn_blocking.html>)
for `spawn_blocking` (which is used to start the sync download) this
makes sense: tokio only starts replacement threads for the current
runtime thread (which calls `spawn_blocking`) if this does NOT exceed the
runtime thread limit. However, we set the runtime thread limit to the
number of CPU cores available to IOx, so this is a limiting factor. This
means that there are only a few threads left to do actual work (I've
seen postgres data flowing back and forth, for example) and tokio is not
able to use its full potential anymore. This is especially bad because the
sync code in `download_and_scan_parquet` then uses `futures` `block_on`
functionality to call back into async code, so it waits for tokio
itself.

The change is rather simple: just use async task spawns.

* fix: use async IO to write stream to temp file

* fix: do not block tokio thread during parquet file reading

* refactor: ensure parquet IO tasks are cancelled if they are not needed anymore

There is no REAL way to cancel sync tasks, but at least we can try our
best.
2022-05-30 13:32:20 +00:00
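
The fix described in the commit above swaps a `spawn_blocking` + `block_on` round trip for a plain async spawn. A minimal runnable sketch of the difference, assuming tokio with the "full" feature set; `download_and_scan_parquet` here is a hypothetical stand-in for the querier's real download routine:

    use std::time::Duration;

    // Hypothetical stand-in for the querier's async download/scan routine.
    async fn download_and_scan_parquet(path: &str) -> usize {
        tokio::time::sleep(Duration::from_millis(10)).await; // simulate IO
        path.len()
    }

    #[tokio::main]
    async fn main() {
        // Before (problematic): a blocking-pool thread parks inside
        // `block_on`, waiting on the async runtime it came from:
        //
        //     tokio::task::spawn_blocking(move || {
        //         futures::executor::block_on(download_and_scan_parquet("x"))
        //     });
        //
        // After: a plain async task; no runtime thread is tied up waiting.
        let handle = tokio::spawn(download_and_scan_parquet("a/b.parquet"));
        let n = handle.await.expect("task panicked");
        println!("scanned {n} bytes");
    }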
Andrew Lamb 9f21512296
chore: reduce `debug!` log spew in `parquet_file` (#4718)
* chore: reduce log spew

* chore: trace another overly verbose message

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-27 20:57:10 +00:00
kodiakhq[bot] 842ef8e308
Merge branch 'main' into cn/fetch-from-parquet-file 2022-05-27 17:08:28 +00:00
Nga Tran 16e7a6d596
test: test that hits panic because of no column metadata (#4719)
* test: test that hits panic because of no column metadata

* chore: Apply suggestions from code review

* chore: run format after applying changes

* chore: Apply suggestions from code review

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* chore: run clippy

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
2022-05-27 15:27:03 +00:00
Andrew Lamb dde3c3922c
refactor: use consistent spelling of serialize (#4717) 2022-05-27 14:42:59 +00:00
Nga Tran ea81152fac
refactor: add partition ID to debug info and panic earlier to make the bug easier to identify (#4716)
* chore: point tests to the new ticket

* chore: cleanup

* refactor: add partition ID to debug info and panic earlier to make the bug easier to identify
2022-05-27 12:20:36 +00:00
Nga Tran 09b55a209d
chore: point tests to the new ticket (#4715)
* chore: point tests to the new ticket

* chore: cleanup
2022-05-27 11:12:55 +00:00
Nga Tran 372b262f37
test: parquet meta decoded tests and more debug info (#4713)
* test: reproducer for 4695

* chore: some debug info

* test: test with many columns and rows

* chore: cleanup and add debug info

* chore: cleanup

* chore: cleanup

* chore: more debug info
2022-05-27 09:53:07 +00:00
Carol (Nichols || Goulding) b2905650aa
refactor: Extract extract_range to be a method on TableSummary
So that other kinds of chunks can use this code too.
2022-05-26 16:52:14 -04:00
Nga Tran 05151d5c69
test: reproducer for 4695 (#4706)
* test: reproducer for 4695

* chore: Apply suggestions from code review

Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>

Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-26 15:32:30 +00:00
Dom Dwyer 6aa626ef84 refactor: retry object store upload
Changes the Storage::upload() method to endlessly retry uploading the
generated Parquet file.
2022-05-24 11:29:42 +01:00
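
A minimal sketch of an endless-retry loop like the one described above, assuming a hypothetical fallible `put` closure standing in for the object store upload:

    use std::{future::Future, time::Duration};

    // Retry the upload forever; cancellation happens by dropping the future.
    async fn upload_with_retry<F, Fut>(mut put: F)
    where
        F: FnMut() -> Fut,
        Fut: Future<Output = Result<(), String>>,
    {
        loop {
            match put().await {
                Ok(()) => return,
                Err(e) => {
                    eprintln!("upload failed, retrying: {e}");
                    tokio::time::sleep(Duration::from_millis(100)).await;
                }
            }
        }
    }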
Andrew Lamb e877a64462
feat: Add `ParquetFiles` cache and memory size estimation for ParquetMetadata (#4661)
* feat: Add `ParquetFiles` cache

* fix: Apply suggestions from code review

Co-authored-by: Marko Mikulicic <mkm@influxdata.com>

* fix: remove commented out debugging println

* refactor: Improve size calculation

* fix: mark `ParquetFileCache::clear` test only

* fix: assert on metric count

Co-authored-by: Marko Mikulicic <mkm@influxdata.com>
2022-05-23 17:11:38 +00:00
Dom Dwyer 2e6c49be83 refactor: remove IoxMetadata min & max timestamp
Removes the min/max timestamp fields from the IoxMetadata proto
structure embedded within a Parquet file's metadata.

These values are redundant, as they already exist within the Parquet
column statistics, and they precluded streaming serialisation because the
min/max values had to be known before the file could be serialised.
2022-05-23 16:27:08 +01:00
Dom Dwyer a142a9eb57 refactor: remove row_count from IoxMetadata
Remove the redundant row_count from the IoxMetadata structure that is
serialised into the Parquet file.

The reasoning is twofold:

    * The Parquet file's native metadata already contains a row count
    * Needing to know the number of rows up-front precludes streaming
      serialisation
2022-05-23 16:18:35 +01:00
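
Both removals lean on the fact that the Parquet file's own metadata already carries this information. A sketch (not IOx's code) of recovering it via the parquet crate, assuming a `ParquetMetaData` is already in hand; the exact statistics API has shifted between parquet versions:

    use parquet::file::metadata::ParquetMetaData;

    fn inspect(md: &ParquetMetaData) {
        // Row count comes straight from the file metadata...
        let num_rows: i64 = md.file_metadata().num_rows();
        println!("rows: {num_rows}");
        // ...and per-column min/max from the row group column statistics.
        for rg in md.row_groups() {
            for col in rg.columns() {
                if let Some(stats) = col.statistics() {
                    if stats.has_min_max_set() {
                        let _ = (stats.min_bytes(), stats.max_bytes());
                    }
                }
            }
        }
    }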
Dom Dwyer 71555ee55c test: Parquet metadata integration test
Adds two integration tests covering validation of the embedded IOx
metadata within the Parquet file metadata, and validation of the derived
ParquetFileParams metadata used to populate the catalog.
2022-05-23 16:17:56 +01:00
Dom Dwyer af6d3f4d48 docs: remove clone ref comment 2022-05-23 11:46:06 +01:00
Dom Dwyer 00dc95829d style: enable more lints
Enable more lints on the parquet_file crate to keep it a little cleaner
- adds the following:

    clippy::clone_on_ref_ptr,
    unreachable_pub,
    missing_docs,
    clippy::todo,
    clippy::dbg_macro

This commit includes fixes for any new lint failures.
2022-05-20 15:17:40 +01:00
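
For reference, the lint set above would be enabled crate-wide from the top of lib.rs roughly like this (whether the crate uses `warn` or `deny` here is an assumption):

    #![warn(
        clippy::clone_on_ref_ptr,
        unreachable_pub,
        missing_docs,
        clippy::todo,
        clippy::dbg_macro
    )]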
Dom Dwyer 7df7c4844c refactor: remove redundant ParquetChunk errors
Eliminates unused errors and refactors away unnecessary ones in the
parquet::chunk module.
2022-05-20 15:17:40 +01:00
Dom Dwyer 661f8599a6 refactor: internalise Parquet path generation
Derive the ParquetFilePath from the IoxMetadata within the
ParquetStorage::read_filter() call.

This prevents the "put/get RecordBatches" abstraction from leaking out
the object store path generation concern - an implementation detail of
the ParquetStorage layer.
2022-05-20 15:17:40 +01:00
Dom Dwyer cdb341d45a test: ParquetStorage upload() and read_filter()
Adds tests for the previously untested (directly at least) Parquet
(de)serialisation & persistence layer, provided by the ParquetStorage
type.
2022-05-20 15:17:40 +01:00
Dom Dwyer 302301659e refactor: derive ParquetFilePath from IoxMetadata
Allow directly converting an IoxMetadata to a ParquetFilePath.
2022-05-20 15:17:40 +01:00
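
A sketch of what such a conversion might look like, assuming the uuid crate; the field names below are hypothetical stand-ins, as the real IoxMetadata and ParquetFilePath layouts live in the parquet_file crate:

    use uuid::Uuid;

    struct IoxMetadata {
        namespace_id: i64,
        table_id: i64,
        partition_id: i64,
        object_store_id: Uuid,
    }

    struct ParquetFilePath {
        namespace_id: i64,
        table_id: i64,
        partition_id: i64,
        object_store_id: Uuid,
    }

    impl From<&IoxMetadata> for ParquetFilePath {
        // The path is fully determined by the metadata, so read_filter()
        // can derive it internally instead of taking it as an argument.
        fn from(m: &IoxMetadata) -> Self {
            Self {
                namespace_id: m.namespace_id,
                table_id: m.table_id,
                partition_id: m.partition_id,
                object_store_id: m.object_store_id,
            }
        }
    }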
Dom Dwyer b9a745d42d feat: RecordBatch stream to Parquet file upload
Implements an upload() method on the ParquetStorage type, consuming a
stream of RecordBatch, serialising the Parquet file, and uploading the
result to object storage. Returns the IOx-specific file metadata.

Currently, while the upload() method accepts a stream of RecordBatch, the
resulting Parquet file is buffered in memory before being uploaded to the
object store, due to the lack of streaming upload functionality in the
ObjectStore abstraction. This isn't the end of the world, as the files
tend to be relatively small with our current usage.

This impl should be easily modified to be fully streaming once streaming
object store puts are implemented:

    https://github.com/influxdata/object_store_rs/issues/9
2022-05-20 15:17:40 +01:00
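
A conceptual sketch of that buffer-then-upload shape, with every type and helper below a hedged stand-in (the real upload() works on Arrow RecordBatches and an ObjectStore handle, and also derives the IOx-specific file metadata):

    use futures::{Stream, StreamExt};

    type RecordBatch = Vec<u8>; // stand-in for arrow's RecordBatch

    fn encode_to_vec(batches: &[RecordBatch]) -> Vec<u8> {
        batches.concat() // stand-in for the Parquet encoder
    }

    async fn put_object(_bytes: &[u8]) -> Result<(), String> {
        Ok(()) // stand-in for the object store PUT
    }

    async fn upload<S>(mut batches: S) -> Result<(), String>
    where
        S: Stream<Item = RecordBatch> + Unpin,
    {
        // Drain the stream, buffering the whole file in memory...
        let mut all = Vec::new();
        while let Some(b) = batches.next().await {
            all.push(b);
        }
        // ...then issue a single, non-streaming PUT.
        put_object(&encode_to_vec(&all)).await
    }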
Dom Dwyer 76e08d14a3 perf: IoxParquetMetaData direct from file metadata
Construct a IoxParquetMetaData instance directly from the FileMetaData
instance returned by the ArrowWriter.

This change will allow us to avoid the inefficient impl currently in
use:

    * Serialise batches into memory
    * Wrap buffer in arrow cursor
    * Read parquet metadata with arrow file reader
    * Serialise schema with thrift
    * Serialise each row group's metadata with thrift
    * Construct our own FileMetaData instance
    * Serialise FileMetaData with thrift
    * zstd encode resulting thrift bytes
    * Wrap in IoxParquetMetaData

Now we "only":

    * Stream batches into opaque Write impl
    * Serialise FileMetaData with thrift
    * zstd encode resulting thrift bytes
    * Wrap in IoxParquetMetaData

As before this change, accessing any data within the IoxParquetMetaData
still requires deserialising it first.

There are still a number of easy performance improvements to be had
w.r.t the metadata handling.
2022-05-20 15:17:40 +01:00
Dom Dwyer 70856a645f feat: streaming RecordBatch -> parquet encoding
Implements a streaming RecordBatch to Parquet file serialiser.

This impl automatically discovers the schema of the RecordBatch stream,
and accepts &mut destination types (internalising the handle
cloning/etc) to simplify caller usage.

This encoder returns the resulting FileMetaData to allow callers to
inspect the resulting metadata without reading back the file.

Currently unused / not yet plumbed in.
2022-05-20 15:09:26 +01:00
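
The core of such a streaming serialiser can be sketched with the parquet crate's ArrowWriter: take the schema from the first batch, stream the rest into any `Write` destination, and hand back the FileMetaData that close() returns. This is a sketch, not the IOx implementation, and the FileMetaData module path varies across parquet crate versions:

    use arrow::record_batch::RecordBatch;
    use parquet::arrow::ArrowWriter;

    fn encode<W, I>(
        mut batches: I,
        sink: W,
    ) -> parquet::errors::Result<parquet::format::FileMetaData>
    where
        W: std::io::Write + Send,
        I: Iterator<Item = RecordBatch>,
    {
        // Discover the schema from the first batch in the stream.
        let first = batches.next().expect("stream must be non-empty");
        let mut writer = ArrowWriter::try_new(sink, first.schema(), None)?;
        writer.write(&first)?;
        for batch in batches {
            writer.write(&batch)?;
        }
        // close() yields the FileMetaData without re-reading the file.
        writer.close()
    }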
Marco Neumann addc45327e
fix: ensure that query tokio background tasks are canceled (#4643)
* fix: ensure that query tokio background tasks are canceled

While I am not entirely sure if this explains some of the memory leaks I
am seeing in prod, not canceling the tasks correctly certainly makes
debugging way harder and also renders certain forms of throttling (e.g.
max. concurrent queries) somewhat ineffective.

Note that parquet file downloads are currently NOT canceled because
tokio's `spawn_blocking` cannot be canceled.

* refactor: `Vec` -> `Option`

* refactor: `spawn_blocking` creates a join handle, even though it is useless

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-20 07:18:52 +00:00
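
A minimal sketch of the cancel-on-drop pattern this commit reaches for: hold the JoinHandle and abort the async task when its owner goes away (blocking tasks, as noted above, cannot be cancelled this way):

    struct QueryTask {
        handle: Option<tokio::task::JoinHandle<()>>,
    }

    impl Drop for QueryTask {
        fn drop(&mut self) {
            if let Some(h) = self.handle.take() {
                h.abort(); // works for async tasks, not spawn_blocking ones
            }
        }
    }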
Dom Dwyer baa86d846f refactor: use ParquetStore instead of ObjectStore
Changes the code paths that interact with Parquet files in the object
store to reference the ParquetStorage directly (DRY refactor).

This change takes us from a dependency graph of:

                    ┌─────────────────┐
                    │                 │
                    ▼                 │
            Parquet Consumer          │
                    │         ┌──────────────┐
                    ├────────▶│ParquetStorage│
                    ▼         └──────────────┘
            ┌──────────────┐
            │ ObjectStore  │
            └──────────────┘
                    │
               ┌────┴────┐
               ▼         ▼
             File       s3
            System    (etc)

to:

                Parquet Consumer
                        │
                        ▼
                ┌──────────────┐
                │ParquetStorage│
                └──────────────┘
                        │
                        ▼
                ┌──────────────┐
                │ ObjectStore  │
                └──────────────┘
                        │
                   ┌────┴────┐
                   ▼         ▼
                 File       s3
                System    (etc)

With the ParquetStorage being solely responsible for managing
interactions with the object store when dealing with Parquet files.
2022-05-19 13:52:51 +01:00
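
The resulting layering can be sketched as a thin wrapper type; names simplified, and the real ParquetStorage in the parquet_file crate carries more state than this:

    use std::sync::Arc;
    use object_store::ObjectStore;

    struct ParquetStorage {
        object_store: Arc<dyn ObjectStore>,
    }

    impl ParquetStorage {
        fn new(object_store: Arc<dyn ObjectStore>) -> Self {
            Self { object_store }
        }
        // upload() / read_filter() live here, keeping path generation and
        // Parquet (de)serialisation details out of the consumers.
    }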
Dom Dwyer d3548653d5 refactor: rename Storage -> ParquetStorage
Renames the Storage type so the context is clear at usage sites (i.e. fn
args), rather than relying on knowing the fully-qualified import path to
tell what the type stores.
2022-05-19 13:51:07 +01:00
Dom Dwyer e20b02b914 refactor: tidy ParquetChunk constructor
Removes two unused constructors for a ParquetChunk, and moves the bare
fn constructor that is actually used to be an associated method (a
conventional constructor).
2022-05-19 13:51:07 +01:00
Dom Dwyer 7a8e6d1a38 refactor: remove unused max_row_group_size
The Parquet writer references an unused max_row_group_size property in
the parquet file metadata.
2022-05-18 16:45:15 +01:00
Andrew Lamb 3a33e806c7
chore: Update datafusion + `arrow`/`parquet`/`arrow-flight` to `14.0.0` (#4619)
* chore: Update datafusion deps

* chore: update arrow/parquet/arrow flight deps

* chore: Run cargo hakari tasks

* chore: Update location of utils

* chore: Update some more APIs

Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
2022-05-17 14:13:03 +00:00
Nga Tran 9530e73925
chore: move noisy debug to trace and fix some comments (#4598)
* chore: move noisy debug to trace and fix some comments

* chore: Apply suggestions from code review

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* chore: fix format

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-13 19:18:15 +00:00
Raphael Taylor-Davies f2bb0fdf77
feat: update to crates.io object_store version (#4595)
* feat: update to crates.io object_store version

* chore: Run cargo hakari tasks

* fix: tests

* chore: remove object store integration test plumbing

Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
2022-05-13 16:26:07 +00:00
Carol (Nichols || Goulding) 55313d290a
fix: Update or remove comments that mention NG or OG
Connects to #4450.
2022-05-12 16:09:08 -04:00
Raphael Taylor-Davies 8b379c83cc
refactor: simplify object_store path handling (#4534)
* refactor: simplify object_store path handling

* fix: aws integration tests

* chore: lint

* fix: update gcs tests

* refactor: move errors into submodules

* chore: lint

* chore: review feedback

* refactor: replace provider with Display

* fix: failing tests

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-09 18:43:22 +00:00
Jake Goulding e07bcd40c2 refactor: Remove unused dependencies
These were found by iterating over all of the dependencies of each
Cargo.toml, then grepping that crate for the dependency's name. If it
didn't show up, I attempted to remove it.

I left a few dependencies that this process flagged:

* generated_types
  - `pbjson`,`serde`. Apparently used by the generated code.

* grpc-router-test-gen
  - `prost`. Apparently used by the generated code.

* influxdb_iox
  - `heappy`. Doesn't appear used, but is behind enough feature
    flags that I don't care to reason about and it's already optional.
  - `tikv_jemalloc_sys`. Appears to be setting a feature flag of an
    indirect dependency.

* iox_gitops_adapter
  - `k8s_openapi`. Appears to be setting a feature flag of an indirect
    dependency.
2022-05-06 15:57:58 -04:00
Carol (Nichols || Goulding) 068096e7e1
fix: Rename data_types2 to data_types 2022-05-06 14:45:39 -04:00
Carol (Nichols || Goulding) 0541c6e40f
fix: Remove data_types crate where it's no longer used 2022-05-06 14:45:39 -04:00
Carol (Nichols || Goulding) d2671355c3
fix: Move partition metadata types to data_types2 2022-05-06 14:45:37 -04:00
Carol (Nichols || Goulding) ea46830954
fix: Remove iox_object_store crate; move ParquetFilePath to parquet_file 2022-05-06 14:45:36 -04:00
Carol (Nichols || Goulding) b4894c2b46
fix: Remove unused parts of parquet_file 2022-05-06 11:30:36 -04:00
Carol (Nichols || Goulding) ba8191c1eb
fix: Remove persistence_windows 2022-05-06 11:30:35 -04:00
Andrew Lamb 02893e598c
chore: Update datafusion and upgrade arrow/parquet/arrow-flight to 13 (#4516)
* chore: Tool for automating arrow version update

* chore: Update datafusion and arrow/parquet/arrow-flight

* fix: update for changes in Arrow API

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
2022-05-05 00:21:02 +00:00