This was "internal". The mapping works like this: we take the
`DataFusionError` and call `find_root` which should traverse the
`External(...)` chain (even through Arrow) to find the last error that
is not within the Arrow/DataFusion land. This is then mapped by us.
`DataFusionError::External(...)` is no further inspected and mapped
straight to "internal". I think this if fine because in the end we're
mostly dealing w/ DataFusion stuff anyways.
I've slightly changed the error mapping in the planner to emit
`DataFusionError::Plan(...)` instead, which we map to "invalid argument".
I think this is way better for the user.
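As a rough sketch (not the actual IOx mapping code) of this scheme, assuming
DataFusion's `find_root` and tonic's `Status` constructors:

```rust
use datafusion::error::DataFusionError;
use tonic::Status;

// Walk to the root of the error chain and translate it into a gRPC status:
// planning errors become "invalid argument", everything else (including
// `External(...)`) stays "internal".
fn map_error(e: &DataFusionError) -> Status {
    match e.find_root() {
        DataFusionError::Plan(msg) => Status::invalid_argument(msg.clone()),
        other => Status::internal(other.to_string()),
    }
}
```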
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Within our query tests and our CLI, we used to print empty
query responses as:
```text
++
++
```
This is pretty misleading: why are there no columns? The reason is that
while Flight provides us with schema information, we often receive zero
record batches (because why would the querier send an empty batch?).
Let's fix this by creating an empty batch on the client side based on the
schema we've received. This way, people know that there are columns
but no rows:
```text
+-------+--------+------+------+
| count | system | time | town |
+-------+--------+------+------+
+-------+--------+------+------+
```
An alternative fix would be to pass the schema in addition to
`Vec<RecordBatch>` to the formatting code, but that seemed like more
effort.
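A minimal sketch of the chosen client-side fix, assuming arrow's
`RecordBatch::new_empty` (the helper function here is made up for
illustration):

```rust
use arrow::datatypes::SchemaRef;
use arrow::record_batch::RecordBatch;

// If the Flight response carried a schema but no batches, synthesize one
// zero-row batch so the formatter still prints the column headers.
fn ensure_columns_visible(schema: SchemaRef, mut batches: Vec<RecordBatch>) -> Vec<RecordBatch> {
    if batches.is_empty() {
        batches.push(RecordBatch::new_empty(schema));
    }
    batches
}
```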
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: add CommandGetPrimaryKeys metadata endpoint and tests
* chore: update schema for the returned record batch
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: end-to-end tests for authorization
Add tests to validate the behaviour of the authorization machinery
in the write and query paths.
To facilitate this, an authorizer implementation has been added to the
test helpers that runs an authorizer gRPC service for use by tests. The
gRPC service is started in the process that runs the test and listens on
an OS-assigned port number (a rough sketch of this pattern is shown
below). The authorization service cannot be shared between tests, so a
non-shared cluster must be used when the authorizer is configured.
The influxdb_iox_client has been enhanced so that the user can
configure additional headers in the flight client, which is used
for SQL and InfluxQL queries. This uses the same interface the
Flight SQL client already has for this job.
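A rough sketch of that pattern, assuming tonic and tokio; `MockAuthorizer`
and `MockAuthorizerServer` are hypothetical placeholders for whatever
generated authorizer service type the test helper actually uses, not IOx
names:

```rust
use tokio::net::TcpListener;
use tokio_stream::wrappers::TcpListenerStream;
use tonic::transport::Server;

async fn start_test_authorizer() -> Result<std::net::SocketAddr, Box<dyn std::error::Error>> {
    // Binding to port 0 lets the OS pick a free port, so parallel tests
    // don't collide with each other.
    let listener = TcpListener::bind("127.0.0.1:0").await?;
    let addr = listener.local_addr()?;

    // `MockAuthorizerServer`/`MockAuthorizer` stand in for the generated
    // gRPC authorizer service used by the test helper (hypothetical names).
    tokio::spawn(
        Server::builder()
            .add_service(MockAuthorizerServer::new(MockAuthorizer::default()))
            .serve_with_incoming(TcpListenerStream::new(listener)),
    );

    // Tests hand `addr` to a non-shared cluster configuration so the write
    // and query paths call back into this in-process authorizer.
    Ok(addr)
}
```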
* chore: fix lint errors
* chore: review suggestion
Consolidate the authorization tests into fewer tests in order to avoid
repeating set-up and tear-down unnecessarily.
This commit adds a client method to invoke the
UpdateNamespaceServiceProtectionLimits RPC API, providing a
user-friendly way to do this through the IOx command line.
* test: Add an e2e test for write replication
* fix: Pass through rpc_write_replicas configuration to RpcWrite handler
---------
Co-authored-by: Dom <dom@itsallbroken.com>
* feat(flightsql): Add support for table_schema in GetTables; it compiles and returns the actual schema (see the sketch after this list)
* chore: resolve merge conflict
* chore: make table_schema optional
* test: update e2e test for `include_schema` = true
* chore: remove info!() and update test `flightsql_schema_matches`
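For reference, a hedged illustration of the request these changes handle,
assuming the `arrow_flight::sql::CommandGetTables` message (the filter
values are made up):

```rust
use arrow_flight::sql::CommandGetTables;

// Example request: filter on schema/table-name patterns and table type, and
// ask for the (now optional) table schema to be included in the response.
fn example_get_tables() -> CommandGetTables {
    CommandGetTables {
        catalog: None,
        db_schema_filter_pattern: Some("public".to_string()),
        table_name_filter_pattern: Some("cpu%".to_string()),
        table_types: vec!["BASE TABLE".to_string()],
        include_schema: true,
    }
}
```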
* chore(deps): Bump rustix from 0.36.11 to 0.37.3 (#7308)
* chore(deps): Bump rustix from 0.36.11 to 0.37.3
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.36.11 to 0.37.3.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.36.11...v0.37.3)
---
updated-dependencies:
- dependency-name: rustix
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
* chore: Run cargo hakari tasks
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: make real error for existing columns
* chore: use match instead of unwrap() on column names
* chore: use datafusion::physical_plan::collect() to get record batches
* chore: use `concat_batches` to combine multiple batches into a single one and fix the db schema test (see the sketch after this list)
* chore: add doc comment for GetTables
* chore: remove pretty print
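A minimal sketch of how those two calls fit together (the function and its
inputs are illustrative, not the IOx code):

```rust
use std::sync::Arc;

use arrow::compute::concat_batches;
use arrow::record_batch::RecordBatch;
use datafusion::error::Result;
use datafusion::execution::context::TaskContext;
use datafusion::physical_plan::{collect, ExecutionPlan};

// Execute a physical plan, then fold all output batches into a single batch
// with the plan's schema.
async fn run_to_single_batch(
    plan: Arc<dyn ExecutionPlan>,
    ctx: Arc<TaskContext>,
) -> Result<RecordBatch> {
    let batches: Vec<RecordBatch> = collect(Arc::clone(&plan), ctx).await?;
    Ok(concat_batches(&plan.schema(), &batches)?)
}
```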
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: add optional param to GetTables
* chore: add the third param to query plan
* feat: add table_types param
* chore: clippy
* test: add test cases with filters
* chore: update query to avoid SQL injection
* refactor: update where clause and cleanup
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* test: Add getTables jdbc_client example
* feat: add `CommandGetTables` in FlightSqlClient
* feat: add `CommandGetTables` in flightsql cmd and planner
* test: add e2e test for `CommandGetTables`
* chore: clippy
* chore: comment out the test with filters
* test: update jdbc test expected value for tables
---------
Co-authored-by: Chunchun <14298407+appletreeisyellow@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This debugging tool was more useful in previous situations where it was
harder to get real data as input for the compactor.
It's currently causing a flaky test that isn't worth investigating.
Fixes #6190 by making it moot.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Ensure an HTTP error response contains a well-formed JSON structure
with "code" and "message" fields (for backwards compatibility with
existing InfluxDB versions) and a correct "content-type" header.
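A hedged illustration of that response shape, using serde_json and made-up
code/message values:

```rust
use serde_json::json;

// Hypothetical error response: a JSON body with "code" and "message" fields,
// served with a JSON content-type header.
fn example_error_response() -> (&'static str, serde_json::Value) {
    let content_type = "application/json";
    let body = json!({
        "code": "invalid",
        "message": "failed to parse line protocol",
    });
    (content_type, body)
}
```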