- Set default values for `ProviderBudget` / `ModelProviderBudget` fields
- Remove redundant field redefinitions on `ModelProviderBudget` class
- Change `ModelProviderUsage.update_usage(..)` and `ModelProviderBudget.update_usage_and_cost(..)` signatures for easier use
- Change `ModelProviderBudget.usage` from `ModelProviderUsage` to `defaultdict[str, ModelProviderUsage]` for per-model usage tracking (see the sketch after this list)
- Fix `ChatModelInfo`/`EmbeddingModelInfo` `service` attribute: rename it from `llm_service` to match the base class, and fix the types.
This makes it unnecessary to specify the `service` field when creating a `ChatModelInfo` or `EmbeddingModelInfo` object.
- Use `defaultdict(ModelProviderBudget)` for task budget tracking in agent_protocol_server.py
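For context, a minimal sketch of the per-model bookkeeping pattern described above; the real `ModelProviderUsage`/`ModelProviderBudget` are fuller schema classes, and the field names here are illustrative only:

```python
from collections import defaultdict


class ModelProviderUsage:
    """Token counters for a single model (illustrative fields only)."""

    def __init__(self) -> None:
        self.input_tokens = 0
        self.output_tokens = 0


class ModelProviderBudget:
    def __init__(self) -> None:
        # usage is keyed by model name, so spending can be broken down per model
        self.usage: defaultdict[str, ModelProviderUsage] = defaultdict(ModelProviderUsage)
        self.total_cost: float = 0.0


# agent_protocol_server.py-style per-task tracking: unknown task IDs get a fresh budget
task_budgets: defaultdict[str, ModelProviderBudget] = defaultdict(ModelProviderBudget)
task_budgets["task-1"].usage["gpt-4-turbo"].input_tokens += 1500
```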
* refactor(agent/core): Rearrange and split up `OpenAIProvider.create_chat_completion`
- Rearrange to reduce complexity, improve separation/abstraction of concerns, and allow multiple points of failure during parsing
- Move conversion from `ChatMessage` to `openai.types.ChatCompletionMessageParam` to `_get_chat_completion_args`
- Move token usage and cost tracking boilerplate code to `_create_chat_completion`
- Move tool call conversion/parsing to `_parse_assistant_tool_calls` (new)
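A bare-bones skeleton of the resulting call structure; signatures and types are simplified stand-ins, not the actual `OpenAIProvider` API:

```python
from typing import Any


class ChatProviderSketch:
    def create_chat_completion(self, prompt_messages: list[dict[str, Any]], model: str) -> dict[str, Any]:
        completion_kwargs = self._get_chat_completion_args(prompt_messages, model)
        assistant_message = self._create_chat_completion(completion_kwargs)
        tool_calls, parse_errors = self._parse_assistant_tool_calls(assistant_message)
        # parse_errors can be handled per tool call instead of failing the whole response
        assistant_message["tool_calls"] = tool_calls
        return assistant_message

    def _get_chat_completion_args(self, prompt_messages: list[dict[str, Any]], model: str) -> dict[str, Any]:
        # convert ChatMessage-style inputs to the API's message format and assemble kwargs
        return {"model": model, "messages": prompt_messages}

    def _create_chat_completion(self, completion_kwargs: dict[str, Any]) -> dict[str, Any]:
        # the API call plus token-usage and cost tracking live here
        return {"role": "assistant", "content": "", "tool_calls": []}

    def _parse_assistant_tool_calls(self, assistant_message: dict[str, Any]) -> tuple[list[Any], list[Exception]]:
        # convert raw tool calls into schema objects, collecting per-call errors
        return list(assistant_message.get("tool_calls", [])), []
```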
* fix(agent/core): Handle decoding of function call arguments in `create_chat_completion`
- Amend `model_providers.schema`: change type of `arguments` from `str` to `dict[str, Any]` on `AssistantFunctionCall` and `AssistantFunctionCallDict`
- Implement robust and transparent parsing in `OpenAIProvider._parse_assistant_tool_calls`
- Remove now unnecessary `json_loads` calls throughout codebase
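A simplified, self-contained illustration of that parsing step; the real `OpenAIProvider._parse_assistant_tool_calls` operates on the OpenAI SDK's response objects and uses the fault-tolerant `json_loads` rather than `json.loads`:

```python
import json
from typing import Any, NamedTuple


class AssistantToolCall(NamedTuple):
    name: str
    arguments: dict[str, Any]  # previously a raw JSON string


def parse_tool_calls(raw_tool_calls: list[dict[str, Any]]) -> tuple[list[AssistantToolCall], list[Exception]]:
    tool_calls: list[AssistantToolCall] = []
    parse_errors: list[Exception] = []
    for call in raw_tool_calls:
        try:
            arguments = json.loads(call["function"]["arguments"])
            tool_calls.append(AssistantToolCall(call["function"]["name"], arguments))
        except ValueError as e:
            # collect errors per tool call so the caller can decide how to recover
            parse_errors.append(e)
    return tool_calls, parse_errors
```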
* feat(agent/utils): Improve conditions and errors in `json_loads`
- Include all decoding errors when raising a ValueError on decode failure
- Use errors returned by `return_errors` instead of an error buffer
- Fix check for decode failure
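A hedged sketch of the adjusted flow, assuming `demjson3.decode(..., return_errors=True)` returns a result exposing `.object` and `.errors` and signals failure with sentinel values; the real `json_loads` in `json_utils.py` may differ in details:

```python
import logging
from typing import Any

import demjson3

logger = logging.getLogger(__name__)


def json_loads(json_str: str) -> Any:
    # return_errors=True is assumed to yield a result object with .object and .errors
    result = demjson3.decode(json_str, return_errors=True)

    if result.errors:
        # include every decoding problem instead of reading a redirected stderr buffer
        logger.debug("JSON parse errors:\n" + "\n".join(str(e) for e in result.errors))

    # undecodable input is assumed to be signalled via sentinel values, not an exception
    if result.object in (demjson3.syntax_error, demjson3.undefined):
        raise ValueError(f"Failed to parse JSON string: {json_str}; errors: {result.errors}")

    return result.object
```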
* Make `Agent.save_state` behave like "save as"
- Leave previously saved state untouched
- Save agent state in new folder corresponding to new `agent_id`
- Copy over workspace contents to new folder
* Add `copy` method to `FileStorage`
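A rough illustration of the "save as" flow using plain `shutil` instead of the project's `FileStorage` abstraction; the folder layout and function name are assumptions for the example:

```python
import shutil
from pathlib import Path


def save_state_as(agents_dir: Path, current_agent_id: str, new_agent_id: str) -> None:
    """Persist the agent under a new ID, leaving the previously saved state untouched."""
    old_dir = agents_dir / current_agent_id
    new_dir = agents_dir / new_agent_id
    new_dir.mkdir(parents=True, exist_ok=True)
    # copy the workspace contents into the new agent's folder; the old folder is not modified
    shutil.copytree(old_dir / "workspace", new_dir / "workspace", dirs_exist_ok=True)
```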
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Update the `web_search` command in both autogpt and forge to accommodate the breaking API change in duckduckgo_search v5 (a sketch of the updated call follows below),
and bump the duckduckgo_search dependency to ^5.0.0
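For reference, a minimal sketch of a 5.x-style call; the exact parameters used by AutoGPT's `web_search` may differ, and the `max_results` keyword reflects my reading of the duckduckgo_search 5.x API:

```python
from duckduckgo_search import DDGS


def web_search(query: str, num_results: int = 8) -> list[dict[str, str]]:
    # with duckduckgo_search 5.x, DDGS().text() returns a plain list of result dicts
    return DDGS().text(query, max_results=num_results)
```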
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
- Move filtering logic from tests/vcr/__init__.py to tests/vcr/vcr_filter.py
- Ignore all `X-Stainless-*` headers for cassette matching, e.g. `X-Stainless-OS` and `X-Stainless-Runtime-Version`
- Remove deprecated OpenAI proxy logic
- Reorder methods in vcr_filter.py for readability
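A hypothetical `before_record_request`-style hook illustrating the header scrubbing; the actual ignored-header list and matcher wiring live in `tests/vcr/vcr_filter.py`:

```python
import re

# headers that vary between environments and should not affect cassette matching
IGNORED_HEADER_PATTERN = re.compile(r"^X-Stainless-", re.IGNORECASE)


def scrub_variable_headers(request):
    request.headers = {
        name: value
        for name, value in request.headers.items()
        if not IGNORED_HEADER_PATTERN.match(name)
    }
    return request
```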
* Better handle missing or invalid API keys
* Catch the exception and exit when an invalid OpenAI API key is provided
* Catch any `APIError` raised while fetching the available OpenAI models, then exit (see the sketch below)
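An illustrative sketch of such a check using the OpenAI v1 client, not the actual AutoGPT startup code:

```python
import sys

from openai import APIError, OpenAI


def validate_openai_api_key(api_key: str) -> None:
    try:
        # listing models is a cheap way to verify the key works at all
        OpenAI(api_key=api_key).models.list()
    except APIError as e:
        print(f"Could not query the OpenAI API with the configured key: {e}")
        sys.exit(1)
```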
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Matrix the AutoGPT Python CI's `test` job across Ubuntu, macOS and Windows
- Set up MinIO in a step rather than specifying it under `jobs[test].services`, because services are only supported on Linux runners
- Add Windows version of step to install Poetry
- Add macOS compatibility patches to 'Install Poetry (Unix)' and `setup_git_auth` steps
**Caveats:**
- **No Docker on macOS or Windows**
* Windows comes with Docker but only supports running Windows containers, while we're mainly interested in using Linux containers for code execution and/or running auxiliary services.
* [The macOS runner doesn't come with Docker](https://github.com/actions/runner-images/issues/17). Setting it up is possible but takes ~3-4 minutes, and the performance of the Colima engine is poor: a `docker pull` that takes 2 seconds on Linux takes 45 seconds on macOS.
- **No S3 service available on Windows**
It seems that running a background process [isn't possible on Windows](https://github.com/actions/runner/issues/598#issuecomment-2011890429), and neither is running Linux-based Docker containers.
* Add `autogpt-agent` and OS-specific flags to Codecov upload step
* Improve caching of Python dependencies in CI by changing the cache key
- Include hash of `poetry.lock` instead of `pyproject.toml` in key
- Remove date component from key; it was included to avoid getting stuck to old cached versions of packages when we were still using `requirements.txt`. With `poetry.lock` that is no longer a concern.
* Fix skip check in test_s3_file_storage.py
* Implement a syntax-fault-tolerant `json_loads` function using `demjson3`
- Add `demjson3` dependency
* Replace `json.loads` with `json_loads` in places where malformed JSON may occur
* Move `json_utils.py` to `autogpt/core/utils`
* Add tests for `json_utils`
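Usage-wise, the point is that mildly malformed model output still parses; the import path assumes the module location after the move, and the exact tolerance depends on demjson3's recovery rules:

```python
from autogpt.core.utils.json_utils import json_loads

# an unquoted key and a trailing comma would make plain json.loads raise;
# json_loads is expected to recover and log the issues instead
result = json_loads('{command: "read_file", "args": {"path": "notes.txt"},}')
print(result)  # -> {'command': 'read_file', 'args': {'path': 'notes.txt'}}
```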
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Replace `session.prompt_async(..)` with `click.prompt(..)` in `clean_input` (autogpt/app/utils.py)
- Convert `clean_input` back to a synchronous function (and amend its usages accordingly)
- Remove `prompt-toolkit` dependency
This mitigates crashes in some shell environments on Windows.
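A simplified synchronous sketch of the new `clean_input`; the real helper in autogpt/app/utils.py does a bit more (e.g. it also takes the app config):

```python
import click


def clean_input(prompt: str = "", default: str = "") -> str:
    try:
        return click.prompt(text=prompt, default=default, show_default=bool(default))
    except click.exceptions.Abort:
        # Ctrl+C / EOF during the prompt: treat it as a request to quit
        raise SystemExit(0)
```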
- Improve error output for failure to load plugin
- Fix logic to determine qualified module name
- Use `importlib` rather than the `__import__` built-in
This unbreaks `scan_plugins` on Windows.
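A hedged sketch of the approach rather than the actual `scan_plugins` code: build the dotted module name from path components and import it with `importlib.import_module`, which sidesteps the kind of path-separator handling that can break on Windows:

```python
import importlib
from pathlib import Path
from types import ModuleType


def import_plugin(plugins_dir: Path, plugin_path: Path) -> ModuleType:
    # e.g. plugins_dir=Path("plugins"), plugin_path=Path("plugins/my_plugin.py")
    #   -> qualified module name "plugins.my_plugin"
    relative = plugin_path.with_suffix("").relative_to(plugins_dir)
    qualified_name = ".".join((plugins_dir.name, *relative.parts))
    return importlib.import_module(qualified_name)
```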