pgadmin4/web/pgadmin/llm/providers
Dave Page 9bb96360dd
Support /v1/responses for OpenAI models. #9795

* Address CodeRabbit review feedback on OpenAI provider.

- Preserve exception chains with 'raise ... from e' in all
  exception handlers, so debugging tracebacks retain the
  original cause.
- Use the f-string '!s' conversion instead of explicit str() calls.
- Extract duplicated max_tokens error handling into a shared
  _raise_max_tokens_error() helper method.
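The two idioms above can be sketched as follows; this is a minimal illustration, and the function name and error message are assumptions, not pgAdmin's actual code.

```python
import json


def parse_provider_response(raw):
    """Parse a raw JSON provider response (illustrative helper)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # 'raise ... from e' chains the original exception, so the
        # traceback shows both the JSONDecodeError and this wrapper.
        # The '{e!s}' conversion formats the exception as a string
        # without an explicit str() call.
        raise ValueError(f"Invalid provider response: {e!s}") from e
```

With chaining, the traceback prints the original `JSONDecodeError` followed by "The above exception was the direct cause of the following exception:" and the wrapping `ValueError`, which is what makes the handlers easier to debug.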

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Validate api_url and use incomplete_details from Responses API.

- Strip known endpoint suffixes (/chat/completions, /responses) from
  api_url in __init__ to prevent doubled paths if a user provides a
  full endpoint URL instead of a base URL.
- Use incomplete_details.reason from the Responses API to properly
  distinguish between max_output_tokens and content_filter when the
  response status is 'incomplete', in both the non-streaming and
  streaming parsers.
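The two behaviours above can be sketched roughly as below; the suffix list matches the commit description, but the function names and the returned reason labels are illustrative assumptions, not pgAdmin's actual code.

```python
# Endpoint suffixes to strip from a user-supplied api_url, per the
# commit description; stripping them prevents doubled paths like
# ".../v1/responses/responses" when endpoints are appended later.
KNOWN_SUFFIXES = ("/chat/completions", "/responses")


def normalise_api_url(api_url):
    """Return api_url with any known endpoint suffix removed."""
    url = api_url.rstrip("/")
    for suffix in KNOWN_SUFFIXES:
        if url.endswith(suffix):
            url = url[: -len(suffix)]
            break
    return url


def classify_incomplete(response):
    """Map a Responses API 'incomplete' payload to a stop reason.

    Uses incomplete_details.reason to distinguish hitting
    max_output_tokens from a content filter. The returned labels
    ('length', 'content_filter', 'unknown') are illustrative.
    """
    reason = (response.get("incomplete_details") or {}).get("reason")
    if reason == "max_output_tokens":
        return "length"
    if reason == "content_filter":
        return "content_filter"
    return "unknown"
```

For example, a user who configures `https://api.openai.com/v1/responses` as the base URL would have it normalised back to `https://api.openai.com/v1` before the endpoint path is appended.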


2026-03-30 14:16:22 +05:30
__init__.py   Core LLM integration infrastructure to allow pgAdmin to connect to AI providers. #9641 (2026-02-17 17:16:06 +05:30)
anthropic.py  Fix an issue where LLM responses are not streamed or rendered properly in the AI Assistant. #9734 (2026-03-17 11:41:57 +05:30)
docker.py     Fix an issue where LLM responses are not streamed or rendered properly in the AI Assistant. #9734 (2026-03-17 11:41:57 +05:30)
ollama.py     Fix an issue where LLM responses are not streamed or rendered properly in the AI Assistant. #9734 (2026-03-17 11:41:57 +05:30)
openai.py     Support /v1/responses for OpenAI models. #9795 (2026-03-30 14:16:22 +05:30)