Removed the `temperature` parameter from all LLM provider clients and pipeline calls, allowing each model to use its default. This fixes compatibility with GPT-5-mini/nano and future models that do not support a user-configurable temperature.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
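A minimal sketch of the pattern this change implies: build the request kwargs without a `temperature` key at all, so the provider falls back to the model's default. The function and argument names here are illustrative, not taken from the actual diff.

```python
# Hypothetical helper: assemble provider request kwargs without "temperature",
# so models that reject a user-set temperature (e.g. GPT-5-mini/nano) still work.
def build_chat_kwargs(model: str, messages: list[dict]) -> dict:
    # Previously something like {"temperature": 0.7, ...} was passed here;
    # omitting the key entirely lets each model apply its own default.
    return {"model": model, "messages": messages}


kwargs = build_chat_kwargs("gpt-5-mini", [{"role": "user", "content": "hi"}])
assert "temperature" not in kwargs
```

Passing `temperature=None` is not equivalent for every provider client; leaving the key out of the payload is the safe, provider-agnostic option.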
Files touched:

- __init__.py
- generator.py
- models.py
- pipeline.py
- prompts.py
- queries.py
- sections.py