- Fixed `--mock` mode
- Moved the interrupt to the beginning of the step iterator pipeline (from `BuiltinChallenge` to `agent_api_interface.py:run_api_agent`). This ensures that any finish-up code is properly executed after a single step is run.
- Implemented mock mode in `WebArenaChallenge`
- Fixed `fixture 'i_attempt' not found` error when `--attempts`/`-N` is omitted
- Fixed handling of `python`/`pytest` evals in `BuiltinChallenge`
- Disabled left-over Helicone code (see 056163e)
- Fixed a couple of challenge definitions
- WebArena task 107: fix spelling of months (Sepetember, Octorbor *lmao*)
- synthesize/1_basic_content_gen (SynthesizeInfo): remove empty string from `should_contain` list
- Added some debug logging in agent_api_interface.py and challenges/builtin.py
OpenAI's newest models return JSON with markdown fences around it, breaking the `json.loads` parser.
This commit adds an `extract_list_from_response` function to json_utils/utilities.py and uses this function to replace `json.loads` in `_process_text`.
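A minimal sketch of what such a fence-tolerant parser might look like (the exact implementation in json_utils/utilities.py may differ):

```python
import json
import re


def extract_list_from_response(response_content: str) -> list:
    """Parse a JSON list from an LLM response, tolerating a surrounding markdown fence."""
    # Strip a ```json ... ``` (or plain ``` ... ```) fence if the model added one
    fenced = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", response_content, re.DOTALL)
    if fenced:
        response_content = fenced.group(1)

    result = json.loads(response_content)
    if not isinstance(result, list):
        raise ValueError(f"Response content is not a JSON list: {result!r}")
    return result
```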
* Add Sentry integration for telemetry
- Add `sentry_sdk` dependency
- Add setup logic and config flow using `TELEMETRY_OPT_IN` environment variable
- Add app/telemetry.py with `setup_telemetry` helper routine (sketched after this list)
- Call `setup_telemetry` in `cli()` in app/cli.py
- Add `TELEMETRY_OPT_IN` to .env.template
- Add helper function `env_file_exists` and routine `set_env_config_value` to app/utils.py
- Add unit tests for `set_env_config_value` in test_utils.py
- Add prompt to startup to ask whether the user wants to enable telemetry if the env variable isn't set
* Add `capture_exception` statements for LLM parsing errors and command failures
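A rough sketch of how the opt-in Sentry setup might look (the DSN below is a placeholder, and the interactive opt-in prompt is omitted):

```python
import os

import sentry_sdk


def setup_telemetry() -> None:
    # Only initialize Sentry if the user has explicitly opted in
    if os.getenv("TELEMETRY_OPT_IN", "").lower() not in ("true", "1", "yes"):
        return

    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
        traces_sample_rate=1.0,
    )
```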
- Change default `SMART_LLM` from `gpt-4` to `gpt-4-turbo-preview`
- Change default `FAST_LLM` from `gpt-3.5-turbo-16k` to `gpt-3.5-turbo-0125`
- Change default `EMBEDDING_MODEL` from `text-embedding-ada-002` to `text-embedding-3-small`
- Update .env.template, azure.yaml.template, and documentation accordingly
- Add `text-embedding-3-small` and `text-embedding-3-large` as `EMBEDDING_v3_S` and `EMBEDDING_v3_L` respectively
- Add `gpt-3.5-turbo-0125` as `GPT3_v4`
- Add `gpt-4-1106-vision-preview` as `GPT4_v3_VISION`
- Add GPT-4V models to info map
- Change chat model info mapping to derive info for aliases (e.g. `gpt-3.5-turbo`) from specific versions instead of the other way around
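An illustrative sketch of the alias-derivation change, using a simplified stand-in for the real model info class (field names and prices here are examples, not the project's actual definitions):

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ChatModelInfo:  # simplified stand-in for the real model info class
    name: str
    max_tokens: int
    prompt_token_cost: float
    completion_token_cost: float


CHAT_MODELS = {
    # Info is defined once, on the pinned snapshot...
    "gpt-3.5-turbo-0125": ChatModelInfo(
        name="gpt-3.5-turbo-0125",
        max_tokens=16385,
        prompt_token_cost=0.0005,
        completion_token_cost=0.0015,
    ),
}
# ...and the alias entry is derived from it, instead of the other way around.
CHAT_MODELS["gpt-3.5-turbo"] = replace(
    CHAT_MODELS["gpt-3.5-turbo-0125"], name="gpt-3.5-turbo"
)
```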
* Add `_sideload_chrome_extensions` subroutine to `open_page_in_browser` in web_selenium.py
* Sideloads uBlock Origin and "I Still Don't Care About Cookies", downloading them if necessary
* Add 2-second delay to `open_page_in_browser` to allow time for handling cookie walls
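A hedged sketch of the sideloading step, assuming the extensions have already been downloaded as `.crx` files (the real subroutine also handles downloading them):

```python
from pathlib import Path

from selenium.webdriver.chrome.options import Options as ChromeOptions


def _sideload_chrome_extensions(options: ChromeOptions, extensions_dir: Path) -> None:
    """Load pre-downloaded .crx extension files into the Chrome session."""
    extensions_dir.mkdir(parents=True, exist_ok=True)
    for crx_file in extensions_dir.glob("*.crx"):
        options.add_extension(str(crx_file))
```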
Commit 956cdc7 "fix(agent/json_utils): Decode as JSON rather than Python objects" broke these unit tests because they generated "JSON" by stringifying a Python object.
* Compress steps in the prompt to reduce token usage, and to increase longevity when using models with limited context windows
* Move multiple copies of step formatting code to `Episode.format` method
* Add `EpisodicActionHistory.handle_compression` method to handle compression of new steps
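A rough illustration of the idea behind the compression (the real `EpisodicActionHistory.handle_compression` summarizes steps with an LLM; simple truncation stands in for that here):

```python
from typing import Callable


def format_history(steps: list[str], token_budget: int, count_tokens: Callable[[str], int]) -> str:
    """Compress the oldest steps first until the formatted history fits the token budget."""
    formatted = list(steps)
    i = 0
    while sum(count_tokens(s) for s in formatted) > token_budget and i < len(formatted):
        formatted[i] = formatted[i][:200] + " [...]"  # stand-in for LLM summarization
        i += 1
    return "\n\n".join(formatted)
```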
* Implement an `extract_information` function in the `autogpt.processing.text` module (sketched after this list). This function extracts pieces of information from a body of text based on a list of topics of interest.
* Add `topics_of_interest` and `get_raw_content` parameters to `read_webpage` command
* Limit maximum content length if `get_raw_content=true` is specified
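A minimal sketch of the idea behind `extract_information`, assuming an `llm(prompt) -> str` callable; the real implementation builds a proper chat prompt and runs it through the configured LLM provider:

```python
import json
from typing import Callable


def extract_information(
    source_text: str, topics_of_interest: list[str], llm: Callable[[str], str]
) -> list[str]:
    """Extract pieces of information relevant to the given topics from source_text."""
    prompt = (
        "Extract all information relevant to the following topics from the text below.\n"
        f"Topics of interest: {', '.join(topics_of_interest)}\n\n"
        f"Text:\n{source_text}\n\n"
        "Reply with a JSON list of strings, one item per extracted piece of information."
    )
    return json.loads(llm(prompt))
```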
* Replace `ast.literal_eval` with `json.loads` in `extract_dict_from_response`
This fixes a bug where boolean values could not be decoded: JSON uses lowercase `true`/`false`, while Python literals require the capitalized `True`/`False`.
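A small example of the difference:

```python
import ast
import json

response = '{"success": true}'  # LLMs emit JSON, so booleans are lowercase

print(json.loads(response))  # {'success': True}

try:
    ast.literal_eval(response)
except ValueError as e:
    # Python literals require `True`/`False`, so lowercase `true` is rejected
    print(f"ast.literal_eval failed: {e}")
```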
LLMs are probabilistic systems, so reproducibility of completions is not guaranteed. The only sensible way to account for this is to run each challenge multiple times and report a success ratio rather than a boolean success/failure result.
Changes:
- Add `-N`, `--attempts` option to CLI and `attempts_per_challenge` parameter to `main.py:run_benchmark`.
- Add dynamic `i_attempt` fixture through the `pytest_generate_tests` hook in conftest.py to achieve multiple runs per challenge (see the sketch after this list).
- Modify `pytest_runtest_makereport` hook in conftest.py to handle multiple reporting calls per challenge.
- Refactor report_types.py, reports.py, and process_report.py to allow multiple results per challenge.
- Calculate `success_percentage` from results of the current run, rather than all known results ever.
- Add docstrings to a number of models in report_types.py.
- Allow `None` as a success value, e.g. for runs that did not render any results before being cut off.
- Make `SingletonReportManager` thread-safe.
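A hedged sketch of the conftest.py hooks involved (simplified; the real option wiring goes through the benchmark CLI):

```python
def pytest_addoption(parser):
    parser.addoption(
        "-N", "--attempts", type=int, default=1, help="Number of attempts per challenge"
    )


def pytest_generate_tests(metafunc):
    # Parametrize the dynamic `i_attempt` fixture so each challenge runs N times
    if "i_attempt" in metafunc.fixturenames:
        n_attempts = int(metafunc.config.getoption("--attempts"))
        metafunc.parametrize("i_attempt", range(n_attempts))
```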
* feat(benchmark): Add JungleGym WebArena challenges
- Add `WebArenaChallenge`, `WebArenaChallengeSpec`, and other logic to make these challenges work
- Add WebArena challenges to Pytest collection endpoint generate_test.py
* feat(benchmark/webarena): Add hand-picked selection of WebArena challenges
- Pydantic shallow-copies models when they are passed into a parent model, meaning they can't be updated through the original reference. This commit adds a fix for the resulting cost persistence issue (illustrated after this list).
- The `extract_dict_from_response` function, which is supposed to reliably extract a JSON object from an LLM's response, gave preferential treatment to objects defined on a single line, which caused issues.
- `summarize_text` and `QueryLanguageModel.__call__` still tried to access `response["content"]`, which isn't possible since upgrading to the OpenAI v1 client library.
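The Pydantic copy issue mentioned above, illustrated with a minimal example (assuming Pydantic v1's default copy-on-validation behavior):

```python
from pydantic import BaseModel


class Budget(BaseModel):
    total_cost: float = 0.0


class AgentSettings(BaseModel):
    budget: Budget


budget = Budget()
settings = AgentSettings(budget=budget)  # Pydantic v1 copies `budget` here

budget.total_cost += 1.0
print(settings.budget.total_cost)  # 0.0: updates via the original reference are lost
```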
- When an Artifact's file is modified by the agent, set its `agent_created` attribute to `True` instead of registering a new Artifact
- Update the `autogpt-forge` dependency to the newest version, in which `AgentDB.update_artifact` has been implemented
Squashed commit of the following:
commit 7d6476d329
Author: Reinier van der Leer <pwuts@agpt.co>
Date: Tue Jan 9 18:10:45 2024 +0100
refactor(benchmark/challenge): Set up structure to support more challenge providers
- Move `Challenge`, `ChallengeData`, `load_challenges` to `challenges/builtin.py` and rename to `BuiltinChallenge`, `BuiltinChallengeSpec`, `load_builtin_challenges`
- Create `BaseChallenge` to serve as interface and base class for different challenge implementations
- Create `ChallengeInfo` model to serve as universal challenge info object
- Create `get_challenge_from_source_uri` function in `challenges/__init__.py`
- Replace `ChallengeData` by `ChallengeInfo` everywhere except in `BuiltinChallenge`
- Add strong typing to `task_informations` store in app.py
- Use `call.duration` in `finalize_test_report` and remove `timer` fixture
- Update docstring on `challenges/__init__.py:get_unique_categories`
- Add docstring to `generate_test.py`
commit 5df2aa7939
Author: Reinier van der Leer <pwuts@agpt.co>
Date: Tue Jan 9 16:58:01 2024 +0100
refactor(benchmark): Refactor & rename functions in agent_interface.py and agent_api_interface.py
- `copy_artifacts_into_temp_folder` -> `copy_challenge_artifacts_into_workspace`
- `copy_agent_artifacts_into_folder` -> `download_agent_artifacts_into_folder`
- Reorder parameters of `run_api_agent`, `copy_challenge_artifacts_into_workspace`; use `Path` instead of `str`
commit 6a256fef4c
Author: Reinier van der Leer <pwuts@agpt.co>
Date: Tue Jan 9 16:02:25 2024 +0100
refactor(benchmark): Refactor & typefix report generation and handling logic
- Rename functions in reports.py and ReportManager.py to better reflect what they do
  - `get_previous_test_results` -> `get_and_update_success_history`
  - `generate_single_call_report` -> `initialize_test_report`
  - `finalize_reports` -> `finalize_test_report`
  - `ReportManager.end_info_report` -> `SessionReportManager.finalize_session_report`
- Modify `pytest_runtest_makereport` hook in conftest.py to finalize the report immediately after the challenge finishes running instead of after teardown
- Move result processing logic from `initialize_test_report` to `finalize_test_report` in reports.py
- Use `Test` and `Report` types from report_types.py where possible instead of untyped dicts: reports.py, utils.py, ReportManager.py
- Differentiate `ReportManager` into `SessionReportManager`, `RegressionTestsTracker`, `SuccessRateTracker`
- Move filtering of optional challenge categories from challenge.py (`Challenge.skip_optional_categories`) to conftest.py (`pytest_collection_modifyitems`)
- Remove unused `scores` fixture in conftest.py
commit 370d6dbf5d
Author: Reinier van der Leer <pwuts@agpt.co>
Date: Tue Jan 9 15:16:43 2024 +0100
refactor(benchmark): Simplify models in report_types.py
- Removed `ForbidOptionalMeta` and `BaseModelBenchmark` classes.
- Changed model attributes to optional: `Metrics.difficulty`, `Metrics.success`, `Metrics.success_percentage`, `Metrics.run_time`, and `Test.reached_cutoff`.
- Added a validator to the `Metrics` model to require the `success` and `run_time` fields if `attempted=True` (see the sketch below).
- Added default values to all optional model fields.
- Removed duplicate imports.
- Added condition in process_report.py to prevent null lookups if `metrics.difficulty` is not set.
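Roughly how such a validator might look in Pydantic v1 (field set simplified, not the project's exact definition):

```python
from typing import Optional

from pydantic import BaseModel, validator


class Metrics(BaseModel):
    attempted: bool
    difficulty: Optional[str] = None
    success: Optional[bool] = None
    success_percentage: Optional[float] = None
    run_time: Optional[str] = None

    @validator("success", "run_time", always=True)
    def require_if_attempted(cls, v, values):
        # `attempted` is declared first, so it is already available in `values`
        if values.get("attempted") and v is None:
            raise ValueError("'success' and 'run_time' are required when attempted=True")
        return v
```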
- Update `openai` dependency from `^0.27.10` to `^1.7.2`
- Update poetry.lock
- Update code for changed endpoints and new output types of OpenAI library
- Replace uses of `AssistantChatMessageDict` by `AssistantChatMessage`
- Update `PromptStrategy`, `BaseAgent`, and all of their subclasses accordingly
- Update `OpenAIProvider`, `OpenAICredentials`, azure.yaml.template, .env.template and test_config.py to work with new separate `AzureOpenAI` client
- Remove `_OpenAIRetryHandler` and implement the retry mechanism with `tenacity` (sketched after this list)
- Rewrite pytest fixture `cached_openai_client` (renamed from `patched_api_requestor`) for OpenAI v1 library
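A hedged sketch of what the `tenacity`-based retry might look like (exception selection and backoff parameters are illustrative):

```python
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type((openai.APIConnectionError, openai.RateLimitError)),
    wait=wait_exponential(min=1, max=60),
    stop=stop_after_attempt(7),
    reraise=True,
)
def _create_chat_completion(client: openai.OpenAI, **kwargs):
    # Retries only on transient errors; other exceptions propagate immediately
    return client.chat.completions.create(**kwargs)
```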
* Update `openai` dependency from `^0.27.8` to `^1.7.2`
* Update `litellm` dependency from `^0.1.821` to `^1.17.9`
* Migrate llm.py from OpenAI module-level client to client instance
* Update return types in llm.py for new OpenAI and LiteLLM versions
* Also remove `Exception` from the return types, because exceptions are raised, not returned
* Update tutorials/003_crafting_agent_logic.md accordingly
Note: this changes the output types of the functions in `forge.llm`: `chat_completion_request`, `create_embedding_request`, `transcribe_audio`
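A sketch of the general shape of the module-level-client to client-instance migration (simplified; the actual signatures and return handling in `forge.llm` may differ):

```python
from openai import OpenAI
from openai.types.chat import ChatCompletion

# v1 style: an explicit client instance instead of module-level configuration
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat_completion_request(model: str, messages: list[dict], **kwargs) -> ChatCompletion:
    # Previously: openai.ChatCompletion.create(...) returned a dict-like object;
    # now a typed ChatCompletion object is returned and read via attributes.
    return client.chat.completions.create(model=model, messages=messages, **kwargs)
```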