Add passexec_cmd, passexec_expiration, kerberos_conn, tags, and
post_connection_sql to SharedServer so non-owners get their own
per-user values instead of inheriting the owner's. Drop the unused
db_res column, which was never overlaid and was never writable by
non-owners.
Key changes:
- New Alembic migration (sharedserver_feature_parity) adds 5 columns,
drops db_res, cleans up orphaned records. All operations idempotent.
- Overlay copies new fields from SharedServer instead of suppressing them
- _owner_only_fields guard blocks non-owners from setting passexec_cmd,
passexec_expiration, db_res, db_res_type via API
- Non-owners can set post_connection_sql (runs under their own creds)
- update_tags and flag_modified use sharedserver for non-owners
- update() response returns sharedserver tags for non-owners
- ServerManager passexec suppression with config.SERVER_MODE guard
- UI: post_connection_sql editable for non-owners (readonly only when
connected, not when shared)
- SCHEMA_VERSION bumped to 51
- Comprehensive unit tests for overlay, write guards, and tag deltas
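The _owner_only_fields write guard above can be sketched as follows. The field names come from this commit; the helper's shape and name are assumptions, not the actual pgAdmin code:

```python
# Hypothetical sketch of the owner-only write guard. Field names are from
# the commit message; the function shape is illustrative.
_OWNER_ONLY_FIELDS = frozenset({
    'passexec_cmd', 'passexec_expiration', 'db_res', 'db_res_type',
})

def reject_owner_only_fields(data, is_owner):
    """Return the set of owner-only keys a non-owner tried to set."""
    if is_owner:
        return set()
    return _OWNER_ONLY_FIELDS & set(data)

# A non-owner payload touching passexec_cmd is flagged, while
# post_connection_sql (allowed for non-owners) passes through:
blocked = reject_owner_only_fields(
    {'passexec_cmd': '/bin/evil', 'post_connection_sql': 'SET ROLE x'},
    is_owner=False)
```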
pgAdmin 4 in server mode had no data isolation between users — any
authenticated user could access other users' private servers,
background processes, and debugger state by guessing object IDs.
The shared server feature had 21 vulnerabilities including credential
leaks, privilege escalation via passexec_cmd, and owner data
corruption via SQLAlchemy session mutations.
Centralized access control:
- New server_access.py with get_server(), get_server_group(),
get_user_server_query() replacing ~20 unfiltered queries
- connection_manager() raises ObjectGone (HTTP 410) in server mode
when access is denied — fixes 155+ unguarded callers
- UserScopedMixin.for_user() on 10 models replaces scattered
user_id filters
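The get_server() contract described above can be sketched like this. A toy in-memory store stands in for the SQLAlchemy query, and ObjectGone here is a local placeholder for pgAdmin's exception, not the real class:

```python
# Minimal sketch of the centralized access-control helper. The real code
# filters a SQLAlchemy query; a dict stands in for the database here.
class ObjectGone(Exception):
    """Placeholder for pgAdmin's ObjectGone (rendered as HTTP 410)."""

SERVERS = {1: {'id': 1, 'user_id': 7, 'shared': False}}

def get_server(sid, current_user_id):
    """Return the server only if owned by (or shared with) the user."""
    server = SERVERS.get(sid)
    if server is None or (
            server['user_id'] != current_user_id
            and not server['shared']):
        # Denied and missing look identical to the caller, so object
        # IDs cannot be probed by guessing.
        raise ObjectGone('Server not found or access denied')
    return server
```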
Shared server isolation (all 21 audit issues):
- Expunge server from session before property merge to prevent
owner data corruption
- Suppress passexec_cmd, post_connection_sql for non-owners in
merge, API response, and ServerManager
- Override all 6 SSL/passfile connection_params keys from
SharedServer; strip owner-only keys; sanitize on creation
- _is_non_owner() helper centralises 15+ inline ownership checks
- SharedServer lookup uses (osid, user_id) not name
- Unique constraint on SharedServer(osid, user_id)
- Tunnel/DB password save, change_password, clear_saved_password,
clear_sshtunnel_password all branch on ownership
- Only owner can unshare (delete_shared_server guard)
- Session restore includes shared servers
- tunnel_port/tunnel_keep_alive copied from owner, not hardcoded
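The expunge-before-merge fix can be illustrated with a toy unit-of-work. ToySession is a deliberate simplification of a SQLAlchemy session (commit() flushes attached objects back to the store); the point is only why detaching before the overlay protects the owner's row:

```python
# Toy model of the bug and fix: a "session" that writes attached objects
# back to the store on commit, like SQLAlchemy's unit of work. Expunging
# the server before the non-owner overlay keeps the owner's row intact.
class ToySession:
    def __init__(self, store):
        self._store, self._attached = store, {}

    def get(self, oid):
        obj = dict(self._store[oid])   # working copy, tracked for flush
        self._attached[oid] = obj
        return obj

    def expunge(self, oid):
        self._attached.pop(oid, None)  # detach: commit() ignores it

    def commit(self):
        self._store.update(
            {k: dict(v) for k, v in self._attached.items()})

store = {1: {'passexec_cmd': '/usr/bin/getpass', 'owner': 'alice'}}
session = ToySession(store)
server = session.get(1)
session.expunge(1)                     # the fix described above
server['passexec_cmd'] = None          # suppress for the non-owner
session.commit()
assert store[1]['passexec_cmd'] == '/usr/bin/getpass'  # owner untouched
```

Skipping the expunge() call replays the original corruption: the suppressed value is flushed back onto the owner's record.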
Tool/module hardening:
- All tool endpoints use get_server()
- Debugger function arguments scoped by user_id
- Background processes use Process.for_user()
- Workspace adhoc servers scoped to current user
Migration (schema version 49 -> 50):
- Add user_id to debugger_function_arguments composite PK
- Add indexes on server, sharedserver, servergroup
- Add unique constraint on sharedserver(osid, user_id)
* Support /v1/responses for OpenAI models. #9795
* Address CodeRabbit review feedback on OpenAI provider.
- Preserve exception chains with 'raise ... from e' in all
exception handlers for better debugging tracebacks.
- Use f-string !s conversion instead of str() calls.
- Extract duplicated max_tokens error handling into a shared
_raise_max_tokens_error() helper method.
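The three patterns above can be sketched together. The class and method names mirror the commit, but the surrounding code is illustrative, not the provider's actual implementation:

```python
# Sketch of exception chaining, !s conversion, and the shared
# _raise_max_tokens_error() helper named in the commit above.
class LLMClientError(Exception):
    pass

class Client:
    def _raise_max_tokens_error(self, limit):
        # Shared helper replacing duplicated error-handling branches.
        raise LLMClientError(
            f'Response truncated at max_tokens={limit!s}')

    def complete(self, payload):
        try:
            return payload['choices'][0]
        except (KeyError, IndexError) as e:
            # 'from e' preserves the original traceback for debugging.
            raise LLMClientError(f'Malformed response: {e!s}') from e
```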
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Validate api_url and use incomplete_details from Responses API.
- Strip known endpoint suffixes (/chat/completions, /responses) from
api_url in __init__ to prevent doubled paths if a user provides a
full endpoint URL instead of a base URL.
- Use incomplete_details.reason from the Responses API to properly
distinguish between max_output_tokens and content_filter when the
response status is 'incomplete', in both the non-streaming and
streaming parsers.
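The suffix stripping can be sketched as below. The suffix list is taken from the commit; the function name and exact normalization order are assumptions:

```python
# Hypothetical sketch of the api_url normalization described above:
# reduce a full endpoint URL back to the base URL the client expects.
KNOWN_SUFFIXES = ('/chat/completions', '/responses')

def normalize_api_url(api_url):
    url = api_url.rstrip('/')
    for suffix in KNOWN_SUFFIXES:
        if url.endswith(suffix):
            url = url[:-len(suffix)]
            break
    return url
```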
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Address CodeRabbit review feedback for streaming and SQL extraction.
- Anthropic: preserve separators between text blocks in streaming to
match _parse_response() behavior.
- Docker: validate that the API URL points to a loopback address to
constrain the request surface.
- Docker/OpenAI: raise LLMClientError on empty streams instead of
yielding blank LLMResponse objects, matching non-streaming behavior.
- SQL extraction: strip trailing semicolons before joining blocks to
avoid double semicolons in output.
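The loopback validation for the Docker API URL can be sketched as follows. This is a textual check only; it treats 'localhost' and loopback IP literals as valid and deliberately does not resolve DNS, which the real implementation may handle differently:

```python
# Sketch of the loopback-address check described above. Assumption:
# the URL's host must be 'localhost' or a loopback IP literal.
import ipaddress
from urllib.parse import urlparse

def is_loopback_url(url):
    host = urlparse(url).hostname
    if host is None:
        return False
    if host == 'localhost':
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False   # not an IP literal; reject rather than resolve
```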
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Address remaining CodeRabbit review feedback for streaming and rendering.
- Use distinct 3-tuple ('complete', text, messages) for completion events
to avoid ambiguity with ('tool_use', [...]) 2-tuples in chat streaming.
- Pass conversation history from request into chat_with_database_stream()
so follow-up NLQ turns retain context.
- Add re.IGNORECASE to SQL fence regex for case-insensitive matching.
- Render MarkdownContent as block element instead of span to avoid
invalid DOM when response contains paragraphs, lists, or tables.
- Keep stop notice as a separate message instead of appending to partial
markdown, preventing it from being swallowed by open code fences.
- Snapshot streamingIdRef before setMessages in error handler to avoid
race condition where ref is cleared before React executes the updater.
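The case-insensitive fence matching (and the semicolon handling from the earlier SQL-extraction fix) can be sketched together. The exact production regex is an assumption; only the re.IGNORECASE flag and the rstrip-before-join behavior come from the commits:

```python
# Sketch of SQL fence extraction with case-insensitive matching.
# Blocks are stripped of trailing semicolons before joining, so the
# separator never produces ';;' and the last block ends cleanly.
import re

_SQL_FENCE = re.compile(r'```sql\s*\n(.*?)```', re.DOTALL | re.IGNORECASE)

def extract_sql(markdown):
    blocks = [b.strip().rstrip(';') for b in _SQL_FENCE.findall(markdown)]
    return ';\n'.join(b for b in blocks if b)
```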
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Address CodeRabbit review feedback for streaming providers and history.
- Fix critical NameError: use self._api_url instead of undefined API_URL
in anthropic and openai streaming _process_stream() methods.
- Match sync path auth handling: conditionally set API key headers in
streaming paths for both anthropic and openai providers.
- Remove unconditional temperature from openai streaming payload to
match sync path compatibility approach.
- Add URL scheme validation to OllamaClient.__init__ to prevent unsafe
local/resource access via non-http schemes.
- Guard ollama streaming finalizer: raise error when stream drops
without a done frame and no content was received.
- Update chat.py type hint and docstring for 3-tuple completion event.
- Serialize and return filtered conversation history in the complete
SSE event so the client can round-trip it on follow-up turns.
- Store and send conversation history from NLQChatPanel, clear on
conversation reset.
- Fix JSON-fallback SQL render path: clear content when SQL was
extracted without fenced blocks so ChatMessage uses sql-only renderer.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix missing closing brace in NLQChatPanel switch statement.
Adding block scoping to the error case introduced an unmatched brace
that prevented the switch statement from closing properly, causing
an ESLint parse error.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix missing compaction module and SQL extraction test.
- Replace compaction module imports with inline history deserialization
and filtering since compaction.py is on a different branch.
- Add rstrip(';') to SQL extraction test to match production code,
fixing double-semicolon assertion failure.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix SQL extraction test expected values after rstrip(';') change.
The rstrip(';') applied to each block before joining means single
blocks and the last block in multi-block joins no longer have
trailing semicolons. Update expected values to match.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Strictly guard Ollama stream: raise if no terminal done frame received.
Truncated content from a dropped connection should not be treated as
a complete response, even if partial text was streamed. Always raise
when final_data is None, matching CodeRabbit's recommendation.
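The strict guard can be sketched as a streaming finalizer. The frame shape is a simplification of Ollama's NDJSON chunks, and the error class is a stand-in:

```python
# Sketch of the strict stream guard: raise when the stream ends without
# a terminal done frame, even if partial text was already yielded.
class LLMClientError(Exception):
    pass

def process_stream(frames):
    final_data = None
    for frame in frames:
        if frame.get('done'):
            final_data = frame
            break
        yield frame.get('response', '')
    if final_data is None:
        # A dropped connection must not look like a complete response.
        raise LLMClientError('Stream ended without a done frame')
```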
* Address CodeRabbit review feedback for chat context and compaction.
- Track tool-use turns as groups instead of one-to-one pairs, so
multi-tool assistant messages don't leave orphaned results.
- Add fallback to shrink the recent window when protected messages
alone exceed the token budget, preventing compaction no-ops.
- Fix low-value test fixtures to keep transient messages short so
they actually classify as low-importance.
- Guard Clear button against in-flight stream race conditions by
adding a clearedRef flag and cancelling active streams.
- Assert that conversation history is actually passed through to
chat_with_database in the "With History" test.
* Address remaining CodeRabbit review feedback for compaction module.
- Expand protected set to cover full tool groups, preventing orphaned
tool call/result messages when a turn straddles the recent window.
- Add input validation in deserialize_history() for non-list/non-dict data.
- Strengthen test assertion for preserved recent window tail.
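The deserialize_history() validation can be sketched as below. The function name comes from the commit; the exact filtering rules of the real module are assumptions:

```python
# Sketch of input validation for deserialized history: anything that is
# not a list of dict messages is dropped rather than passed downstream.
def deserialize_history(data):
    if not isinstance(data, list):
        return []
    return [m for m in data if isinstance(m, dict)]
```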
* Fix CI test failures in compaction and NLQ chat tests.
- Lower max_tokens budget in test_drops_low_value to reliably force
compaction (500 was borderline, use 200).
- Consume SSE response data before asserting mock calls in NLQ chat
test, since Flask's streaming generator only executes on iteration.
* Clarify mock patch target in NLQ chat test.
Add comment explaining why we patch the source module rather than the
use site: the endpoint uses a local import inside the function body,
so there is no module-level binding to patch.
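Why the patch target matters can be shown with a self-contained example. The module and function names here are illustrative, not the NLQ test's actual names; a synthetic module stands in for the source module:

```python
# Demonstrates patching the source module when the use site does a
# local import: the import resolves at call time, so there is no
# module-level binding at the use site for mock.patch to replace.
import sys
import types
from unittest import mock

# Synthetic source module standing in for the real one.
fake_llm = types.ModuleType('fake_llm')
fake_llm.chat = lambda: 'real'
sys.modules['fake_llm'] = fake_llm

def endpoint():
    from fake_llm import chat   # local import inside the function body
    return chat()

with mock.patch('fake_llm.chat', return_value='mocked'):
    patched = endpoint()        # patching the source module works
```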
* Don't let auto-selection override an explicit default_provider choice.
If the same save payload includes a default_provider update (including
setting it to empty/disabled), skip the auto-selection logic so the
user's explicit choice is respected.
The previous messages, such as "Vacuuming the catalog..." and "Analyzing
table statistics...", could be mistaken for actual database operations.
Replace them with clearly whimsical elephant-themed messages, expand
the pool to 32 messages, and consolidate them into a single shared
module with gettext() support.
Add a wait for the FormView autofocus timer (200ms) to complete before
typing, preventing a race condition where the autofocus moves focus away
from the target field on slow CI machines. This matches the pattern
already used by simulateValidData in the same test file.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Removed the temperature parameter from all LLM provider clients and
pipeline calls, allowing each model to use its default. This fixes
compatibility with GPT-5-mini/nano and future models that don't
support user-configurable temperature.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Fix NLQ system prompt to work with models that prioritize text instructions over tool calls.
The previous prompt told the model to "Return ONLY the JSON object, nothing else"
while also providing tool definitions. Models like Qwen 3.5 would follow the text
instruction and never use tools. The updated prompt clearly separates the tool-use
phase from the final JSON response phase, and explicitly instructs the model to
call tools directly rather than describing them in text.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update release notes for NLQ prompt fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix issue number in release notes for NLQ prompt fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add configurable API URL fields for OpenAI and Anthropic providers
- Make API keys optional when using custom URLs (for local providers)
- Auto-clear model dropdown when provider settings change
- Refresh button uses current unsaved form values
- Update documentation and release notes
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>