* Address CodeRabbit review feedback for streaming and SQL extraction.
- Anthropic: preserve separators between text blocks in streaming to
match _parse_response() behavior.
- Docker: validate that the API URL points to a loopback address to
constrain the request surface.
- Docker/OpenAI: raise LLMClientError on empty streams instead of
yielding blank LLMResponse objects, matching non-streaming behavior.
- SQL extraction: strip trailing semicolons before joining blocks to
avoid double semicolons in output.
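A minimal sketch of the semicolon handling; `join_sql_blocks` is an illustrative name, not the actual function in the codebase, and the `";\n"` joiner is an assumption:

```python
def join_sql_blocks(blocks: list[str]) -> str:
    """Join extracted SQL blocks, stripping each block's trailing
    semicolon first so the joined output never contains ';;'."""
    cleaned = [b.strip().rstrip(";") for b in blocks]
    return ";\n".join(c for c in cleaned if c)
```

Note this also means a single block (or the last block of a join) carries no trailing semicolon, which is what the later test-expectation commit accounts for.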
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Address remaining CodeRabbit review feedback for streaming and rendering.
- Use a distinct 3-tuple ('complete', text, messages) for completion events
to avoid ambiguity with ('tool_use', [...]) 2-tuples in chat streaming.
- Pass conversation history from request into chat_with_database_stream()
so follow-up NLQ turns retain context.
- Add re.IGNORECASE to SQL fence regex for case-insensitive matching.
- Render MarkdownContent as a block element instead of a span to avoid
invalid DOM when response contains paragraphs, lists, or tables.
- Keep stop notice as a separate message instead of appending to partial
markdown, preventing it from being swallowed by open code fences.
- Snapshot streamingIdRef before setMessages in error handler to avoid
race condition where ref is cleared before React executes the updater.
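The fence-matching change can be sketched as below; the pattern is illustrative only, the real regex in the codebase may differ (`` `{3} `` matches a literal run of three backticks):

```python
import re

# Case-insensitive so SQL, Sql, sql fence info strings all match.
SQL_FENCE_RE = re.compile(r"`{3}sql\s*\n(.*?)`{3}", re.DOTALL | re.IGNORECASE)

def extract_fenced_sql(text: str) -> list[str]:
    """Return the bodies of sql-tagged code fences regardless of case."""
    return [m.strip() for m in SQL_FENCE_RE.findall(text)]
```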
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Address CodeRabbit review feedback for streaming providers and history.
- Fix critical NameError: use self._api_url instead of undefined API_URL
in anthropic and openai streaming _process_stream() methods.
- Match sync path auth handling: conditionally set API key headers in
streaming paths for both anthropic and openai providers.
- Remove unconditional temperature from openai streaming payload to
match sync path compatibility approach.
- Add URL scheme validation to OllamaClient.__init__ to prevent unsafe
local/resource access via non-HTTP schemes.
- Guard ollama streaming finalizer: raise error when stream drops
without a done frame and no content was received.
- Update chat.py type hint and docstring for 3-tuple completion event.
- Serialize and return filtered conversation history in the complete
SSE event so the client can round-trip it on follow-up turns.
- Store and send conversation history from NLQChatPanel, clear on
conversation reset.
- Fix JSON-fallback SQL render path: clear content when SQL was
extracted without fenced blocks so ChatMessage uses sql-only renderer.
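A sketch of the scheme guard, assuming hypothetical names (`LLMClientError` follows the commit text; the exact validation code is an assumption):

```python
from urllib.parse import urlparse

class LLMClientError(Exception):
    pass

def validate_api_url(url: str) -> str:
    """Reject non-HTTP schemes (file://, ftp://, ...) that could expose
    local files or other resources via the client's request path."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ("http", "https"):
        raise LLMClientError(f"Unsupported URL scheme: {scheme!r}")
    return url
```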
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix missing closing brace in NLQChatPanel switch statement.
Adding block scoping to the error case introduced an unmatched brace
that prevented the switch statement from closing properly, causing
an ESLint parse error.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix missing compaction module and SQL extraction test.
- Replace compaction module imports with inline history deserialization
and filtering since compaction.py is on a different branch.
- Add rstrip(';') to SQL extraction test to match production code,
fixing a double-semicolon assertion failure.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix SQL extraction test expected values after rstrip(';') change.
The rstrip(';') applied to each block before joining means single
blocks and the last block in multi-block joins no longer have
trailing semicolons. Update expected values to match.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Strictly guard Ollama stream: raise if no terminal done frame received.
Truncated content from a dropped connection should not be treated as
a complete response, even if partial text was streamed. Always raise
when final_data is None, matching CodeRabbit's recommendation.
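A minimal sketch of that guard, with hypothetical names (`frames` stands in for the parsed streaming chunks; the real frame shape may differ):

```python
class LLMClientError(Exception):
    pass

def collect_stream(frames):
    """Raise if the stream ends without a terminal done frame, even when
    partial text arrived: truncated output is not a complete response."""
    parts = []
    final_data = None
    for frame in frames:
        if frame.get("done"):
            final_data = frame
        else:
            parts.append(frame.get("response", ""))
    if final_data is None:
        raise LLMClientError("Stream ended without a done frame")
    return "".join(parts)
```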
* Address CodeRabbit review feedback for chat context and compaction.
- Track tool-use turns as groups instead of one-to-one pairs, so
multi-tool assistant messages don't leave orphaned results.
- Add fallback to shrink the recent window when protected messages
alone exceed the token budget, preventing compaction no-ops.
- Fix low-value test fixtures to keep transient messages short so
they actually classify as low-importance.
- Guard Clear button against in-flight stream race conditions by
adding a clearedRef flag and cancelling active streams.
- Assert that conversation history is actually passed through to
chat_with_database in the "With History" test.
* Address remaining CodeRabbit review feedback for compaction module.
- Expand protected set to cover full tool groups, preventing orphaned
tool call/result messages when a turn straddles the recent window.
- Add input validation in deserialize_history() for non-list/non-dict data.
- Strengthen test assertion for preserved recent window tail.
* Fix CI test failures in compaction and NLQ chat tests.
- Lower max_tokens budget in test_drops_low_value to reliably force
compaction (500 was borderline, use 200).
- Consume SSE response data before asserting mock calls in NLQ chat
test, since Flask's streaming generator only executes on iteration.
* Clarify mock patch target in NLQ chat test.
Add comment explaining why we patch the source module rather than the
use site: the endpoint uses a local import inside the function body,
so there is no module-level binding to patch.
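A self-contained sketch of that patching rule; the module and function names are made up for illustration:

```python
import sys
import types
from unittest import mock

# Fake "source module" registered so mock.patch can resolve it.
src = types.ModuleType("llm_pipeline")
src.chat_with_database = lambda q: "real answer"
sys.modules["llm_pipeline"] = src

def endpoint(question):
    # Local import: resolved at call time, so patching the *source*
    # module works even though the caller has no module-level binding.
    from llm_pipeline import chat_with_database
    return chat_with_database(question)

with mock.patch("llm_pipeline.chat_with_database", return_value="mocked"):
    assert endpoint("q") == "mocked"
assert endpoint("q") == "real answer"  # patch restored on exit
```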
* Don't let auto-selection override an explicit default_provider choice.
If the same save payload includes a default_provider update (including
setting it to empty/disabled), skip the auto-selection logic so the
user's explicit choice is respected.
The previous messages like "Vacuuming the catalog..." and "Analyzing
table statistics..." could be mistaken for actual database operations.
Replace them with clearly whimsical elephant-themed messages, expand
the pool to 32 messages, and consolidate them into a single shared
module with gettext() support.
Add a wait for the FormView autofocus timer (200ms) to complete before
typing, preventing a race condition where the autofocus moves focus away
from the target field on slow CI machines. This matches the pattern
already used by simulateValidData in the same test file.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Removed the temperature parameter from all LLM provider clients and
pipeline calls, allowing each model to use its default. This fixes
compatibility with GPT-5-mini/nano and future models that don't
support user-configurable temperature.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Fix NLQ system prompt to work with models that prioritize text instructions over tool calls.
The previous prompt told the model to "Return ONLY the JSON object, nothing else"
while also providing tool definitions. Models like Qwen 3.5 would follow the text
instruction and never use tools. The updated prompt clearly separates the tool-use
phase from the final JSON response phase, and explicitly instructs the model to
call tools directly rather than describing them in text.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update release notes for NLQ prompt fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix issue number in release notes for NLQ prompt fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add configurable API URL fields for OpenAI and Anthropic providers
- Make API keys optional when using custom URLs (for local providers)
- Auto-clear model dropdown when provider settings change
- Refresh button uses current unsaved form values
- Update documentation and release notes
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a runtime guard in the postinst so apparmor_parser is only called
when available. Previously, packages built on Ubuntu 24+ would fail to
install on headless servers or systems without AppArmor tools. A warning
is printed when the profile load is skipped to aid debugging.
* Add preference for insert with relations
Co-authored-by: Christian P. <pirnichristian@gmail.com>
* Insert tables with relations on drag and drop
Co-authored-by: Christian P. <pirnichristian@gmail.com>
* Fix test mock not returning Erd Supported Data
Co-authored-by: Christian P. <pirnichristian@gmail.com>
---------
Co-authored-by: Christian P. <pirnichristian@gmail.com>
The fromRaw formatter for the Columns field in unique constraint and
primary key properties used _.filter(allOptions, ...), which preserved
the order of allOptions (table column position) rather than the
constraint-defined column order from backendVal. Replaced with _.find
mapped over backendVal to preserve the correct constraint column order.
Added unit tests for cell and type formatter functions to verify
column ordering is preserved.