- Change default `SMART_LLM` from `gpt-4` to `gpt-4-turbo-preview`
- Change default `FAST_LLM` from `gpt-3.5-turbo-16k` to `gpt-3.5-turbo-0125`
- Change default `EMBEDDING_MODEL` from `text-embedding-ada-002` to `text-embedding-3-small`
- Update .env.template, azure.yaml.template, and documentation accordingly
- Add `text-embedding-3-small` and `text-embedding-3-large` as `EMBEDDING_v3_S` and `EMBEDDING_v3_L` respectively
- Add `gpt-3.5-turbo-0125` as `GPT3_v4`
- Add `gpt-4-1106-vision-preview` as `GPT4_v3_VISION`
- Add GPT-4V models to info map
- Change chat model info mapping to derive info for aliases (e.g. `gpt-3.5-turbo`) from specific versions instead of the other way around
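A minimal sketch of what deriving alias info from pinned versions can look like (model names and the cost/limit figures below are illustrative, not the project's actual table):

```python
from copy import copy

# Info entries are defined once for pinned model versions...
CHAT_MODEL_INFO = {
    "gpt-3.5-turbo-0125": {"prompt_token_cost": 0.0005, "max_tokens": 16385},
    "gpt-4-turbo-preview": {"prompt_token_cost": 0.01, "max_tokens": 128000},
}

# ...and the plain aliases reuse them, so an alias always tracks a specific version.
ALIASES = {
    "gpt-3.5-turbo": "gpt-3.5-turbo-0125",
    "gpt-4-turbo": "gpt-4-turbo-preview",
}

for alias, version in ALIASES.items():
    CHAT_MODEL_INFO[alias] = copy(CHAT_MODEL_INFO[version])
```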
* Add `_sideload_chrome_extensions` subroutine to `open_page_in_browser` in web_selenium.py
* Sideloads uBlock Origin and I Still Don't Care About Cookies, downloading them if necessary
* Add 2-second delay to `open_page_in_browser` to allow time for handling cookie walls
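A hedged sketch of how such a sideloading helper can work with Selenium's `ChromeOptions.add_extension`; the download URLs and file names below are placeholders, not the ones used in web_selenium.py:

```python
from pathlib import Path
from urllib.request import urlretrieve

from selenium.webdriver.chrome.options import Options as ChromeOptions

def _sideload_chrome_extensions(options: ChromeOptions, dl_folder: Path) -> None:
    # Hypothetical URLs; the real commit resolves the .crx files itself.
    extensions = {
        "ublock-origin.crx": "https://example.com/ublock-origin.crx",
        "i-still-dont-care-about-cookies.crx": "https://example.com/isdcac.crx",
    }
    dl_folder.mkdir(parents=True, exist_ok=True)
    for filename, url in extensions.items():
        crx_path = dl_folder / filename
        if not crx_path.exists():
            urlretrieve(url, crx_path)        # download the extension once
        options.add_extension(str(crx_path))  # sideload it into the browser session
```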
Commit 956cdc7 "fix(agent/json_utils): Decode as JSON rather than Python objects" broke these unit tests because they generated "JSON" by stringifying a Python object.
* Compress steps in the prompt to reduce token usage, and to increase longevity when using models with limited context windows
* Move multiple copies of step formatting code to `Episode.format` method
* Add `EpisodicActionHistory.handle_compression` method to handle compression of new steps
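Roughly, each episode knows how to render itself and falls back to a compressed summary once one has been produced. A simplified sketch (class and field names are assumptions, not the actual `Episode` API):

```python
from dataclasses import dataclass

@dataclass
class Episode:
    action: str                  # the command that was executed
    result: str                  # raw output of that command
    summary: str | None = None   # compressed stand-in, filled in by handle_compression

    def format(self) -> str:
        # Older steps render their summary instead of the full result, keeping the
        # prompt short on models with limited context windows.
        return f"Action: {self.action}\nResult: {self.summary or self.result}"
```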
* Implement `extract_information` function in `autogpt.processing.text` module. This function extracts pieces of information from a body of text based on a list of topics of interest.
* Add `topics_of_interest` and `get_raw_content` parameters to `read_webpage` command
* Limit maximum content length if `get_raw_content=true` is specified
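A sketch of how the `read_webpage` dispatch can fit together, assuming hypothetical helper names (`_scrape_text`, `_extract_information`, `_summarize`) and an arbitrary length cap:

```python
def read_webpage(url: str, *, topics_of_interest: list[str] | None = None,
                 get_raw_content: bool = False, max_raw_length: int = 10_000) -> str:
    text = _scrape_text(url)
    if get_raw_content:
        return text[:max_raw_length]   # cap the raw content length
    if topics_of_interest:
        return "\n".join(_extract_information(text, topics_of_interest))
    return _summarize(text)

def _scrape_text(url: str) -> str:
    ...  # fetch the page and strip the HTML down to readable text

def _extract_information(text: str, topics: list[str]) -> list[str]:
    ...  # ask the LLM for pieces of information relevant to `topics`

def _summarize(text: str) -> str:
    ...  # plain LLM-generated summary of the page
```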
* Replace `ast.literal_eval` with `json.loads` in `extract_dict_from_response`
This fixes a bug where boolean values could not be decoded because of their required capitalization in Python.
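A small illustration of the difference:

```python
import ast
import json

response = '{"command": "finish", "success": true}'  # LLM output uses JSON booleans

json.loads(response)        # {'command': 'finish', 'success': True}
ast.literal_eval(response)  # raises ValueError: `true` is not a Python literal
```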
- Pydantic shallow-copies models when they are passed into a parent model, meaning they can't be updated through the original reference. This commit adds a fix for the resulting cost persistence issue.
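The underlying behaviour can be reproduced with a toy example (assuming pydantic v1 semantics, where nested models are copied on validation; the model names are made up):

```python
from pydantic import BaseModel

class Budget(BaseModel):
    total_cost: float = 0.0

class Settings(BaseModel):
    budget: Budget

budget = Budget()
settings = Settings(budget=budget)  # pydantic v1 shallow-copies `budget` here
budget.total_cost += 1.0            # updates the original object only
assert settings.budget.total_cost == 0.0  # the cost update is invisible via the parent
```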
- The `extract_dict_from_response` function, which is supposed to reliably extract a JSON object from an LLM's response, gave preferential treatment to objects defined on a single line, which caused issues with multi-line responses.
- `summarize_text` and `QueryLanguageModel.__call__` still tried to access `response["content"]`, which isn't possible since upgrading to the OpenAI v1 client library.
- When an Artifact's file is modified by the agent, set its `agent_created` attribute to `True` instead of registering a new Artifact
- Update the `autogpt-forge` dependency to the newest version, in which `AgentDB.update_artifact` has been implemented
- Update `openai` dependency from ^v0.27.10 to ^v1.7.2
- Update poetry.lock
- Update code for changed endpoints and new output types of OpenAI library
- Replace uses of `AssistantChatMessageDict` by `AssistantChatMessage`
- Update `PromptStrategy`, `BaseAgent`, and all of their subclasses accordingly
- Update `OpenAIProvider`, `OpenAICredentials`, azure.yaml.template, .env.template and test_config.py to work with new separate `AzureOpenAI` client
- Remove `_OpenAIRetryHandler` and implement retry mechanism with `tenacity`
- Rewrite pytest fixture `cached_openai_client` (renamed from `patched_api_requestor`) for OpenAI v1 library
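As an illustration of the tenacity-based retry pattern against the OpenAI v1 client (the exception selection, attempt count, and back-off values are assumptions, not the project's exact settings):

```python
import openai
from tenacity import (retry, retry_if_exception_type, stop_after_attempt,
                      wait_random_exponential)

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

@retry(
    retry=retry_if_exception_type((openai.RateLimitError, openai.APIConnectionError)),
    wait=wait_random_exponential(min=1, max=30),
    stop=stop_after_attempt(5),
    reraise=True,
)
def create_chat_completion(**kwargs):
    # Transient failures are retried; all other errors propagate immediately.
    return client.chat.completions.create(**kwargs)
```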
* docs: Add documentation on how to use Agent Protocol in the template
- Added documentation on how to use Agent Protocol and its settings in the `.env.template` file.
- An explanation is provided for the `AP_SERVER_PORT` and `AP_SERVER_DB_URL` settings.
- This change aims to improve the understanding and usage of Agent Protocol in the project.
* docs: Update usage.md with information about configuring the API port
- Update the documentation for the `serve` mode in `usage.md`
- Add information about configuring the port for the API server using the `AP_SERVER_PORT` environment variable.
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
gitpython was installed as an indirect dependency via agbenchmark. The release builds don't contain agbenchmark and thus also lack the gitpython package, which breaks the image.
- Move `auto-gpt-plugin-template` from dev dependencies to regular dependencies in `pyproject.toml`.
- Fixes #6566 - No module named 'auto_gpt_plugin_template'.
- Add logging to capture errors raised during execution of actions in the Agent
- Move `fmt_kwargs` function from `agent_protocol_server.py` to `logs/utils.py`
- Update the `read_file` function in `file_operations.py` to pass the file's extension to the `decode_textual_file` function.
- Modify the `decode_textual_file` function in `file_operations_utils.py` to accept the file extension as an argument.
- Update the `content` property in the `FileContextItem` class in `context_item.py` to pass the file's extension to the `decode_textual_file` function.
- Update the `test_parsers` function in `test_text_file_parsers.py` to pass the file extension to the `decode_textual_file` function.
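In outline, the decoder now dispatches on the extension that the caller passes along with the open file handle. A simplified sketch (the extension-to-parser table shown here is a stand-in for the real one):

```python
from io import BufferedReader
from pathlib import Path

def decode_textual_file(file: BufferedReader, ext: str) -> str:
    # Pick a parser based on the file extension instead of guessing from the content.
    parsers = {
        ".txt": lambda f: f.read().decode("utf-8"),
        ".md": lambda f: f.read().decode("utf-8"),
        ".json": lambda f: f.read().decode("utf-8"),
        # the real table also covers .pdf, .docx, .csv, .html, ...
    }
    parser = parsers.get(ext.lower())
    if parser is None:
        raise ValueError(f"Unsupported file extension: {ext}")
    return parser(file)

# Caller side: the extension travels together with the open file handle.
path = Path("notes.md")
with open(path, "rb") as f:
    text = decode_textual_file(f, path.suffix)
```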
- Prevent the Agent from treating `AgentTerminated` like it would any other exception raised by a command.
- The agent should raise the `AgentTerminated` exception to exit its loop.
- Fix the parsing of invalid LLM responses by appending an error message to the prompt and allowing the LLM to fix its mistakes.
- Update the `OpenAIProvider` to handle the self-correction process and limit the number of attempts to fix parsing errors.
- Update the `BaseAgent` to benefit from the new parsing and parse-fixing mechanism.
This change ensures that the system can handle and recover from errors in parsing LLM responses.
Hopefully this fixes #1407 once and for all.
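Conceptually, the self-correction loop looks something like this (the function names and message wording are assumptions; the real retry limit lives in the `OpenAIProvider`):

```python
def parse_with_self_correction(prompt: list[dict], max_fix_attempts: int = 3) -> dict:
    for _ in range(max_fix_attempts):
        raw = _ask_llm(prompt)  # hypothetical wrapper around the chat completion call
        try:
            return extract_dict_from_response(raw)  # raises on invalid JSON
        except ValueError as e:
            # Feed the error back so the model can fix its own mistake next round.
            prompt.append({"role": "assistant", "content": raw})
            prompt.append({
                "role": "system",
                "content": f"Your response could not be parsed: {e}. Reply with valid JSON only.",
            })
    raise ValueError("LLM did not produce a parsable response")
```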
- Updated the quick links and user guide URLs in the bulletin
- Released v0.5.0 with cloud-readiness, a new UI, support for the newest Agent Protocol version, and other improvements
See the release notes on Github for more details: https://github.com/Significant-Gravitas/AutoGPT/releases
- Removed unnecessary print_attribute calls in configurators.py and configurator.py files
- Consolidated printing of configuration attributes in main.py for improved readability and reduced log spam in Agent Protocol mode
- Update GCSFileWorkspace.initialize() to handle cases where the bucket doesn't exist and create it if necessary
- Add logging to S3FileWorkspace.initialize() and GCSFileWorkspace.initialize()
- Update GCSFileWorkspace.list() and S3FileWorkspace.list() to correctly handle nested paths and return the relative paths of files
- Fix tests for GCSFileWorkspace and S3FileWorkspace to account for the changes in initialization and listing behavior
- Fix S3FileWorkspace.open_file() to correctly switch between binary and text mode
- Added tests to verify the fixes in workspace initialization and listing behavior
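For reference, listing nested objects and returning workspace-relative paths with boto3 can be done along these lines (a sketch, not the actual `S3FileWorkspace.list` implementation):

```python
from pathlib import Path

import boto3

def list_files(bucket_name: str, root: str) -> list[Path]:
    # Return the paths of all objects under `root`, relative to `root` itself,
    # so nested "directories" come back as e.g. Path("sub/dir/file.txt").
    s3 = boto3.resource("s3")
    prefix = root.rstrip("/") + "/"
    return [
        Path(obj.key).relative_to(root)
        for obj in s3.Bucket(bucket_name).objects.filter(Prefix=prefix)
    ]
```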
- Fixes#6553 (`web_search` command not working)
- v3.x.x of the duckduckgo-search library no longer works, so updating to v4.0.0 unbreaks the `web_search` command
feat: Add dependencies required to use PostgreSQL
- Added psycopg2-binary version 2.9.9 to the dependencies in pyproject.toml
- Updated the poetry.lock file with the new package information
- Update the signature of `FileWorkspace.open_file` and fix implementations in every workspace backend
- Replace `open()` with `workspace.open_file` in the `read_file` command to use the workspace's file opening functionality
- Fix the parametrization of the `test_text_file_parsers` test to correctly test text file parsers
- Adjusted path processing and use of `agent.workspace` in the file_operations.py module to prevent double path resolution.
- Updated the `is_duplicate_operation` and `log_operation` functions in file_operations.py to use the `make_relative` argument of the `sanitize_path_arg` decorator.
- Refactored the `write_to_file`, `list_folder`, and `list_files` functions in file_operations.py to accept both string and Path objects as the path argument.
- Modified the GCSFileWorkspace and S3FileWorkspace classes in the file_workspace module to ensure that the root path is always an absolute path.
This commit addresses issues with path processing in the file_operations.py module and across different workspace backend implementations. The changes ensure that relative paths are correctly converted to absolute paths where necessary and that the file operations logic functions consistently handle path arguments as strings or Path objects. Additionally, the GCSFileWorkspace and S3FileWorkspace classes now enforce that the root path is always an absolute path.
- Remove X- prefix from X-AutoGPT-UserID, X-AP-TaskID, and X-AP-StepID headers
- Refactor literal references to "ask_user" to `ask_user.__name__` in AgentProtocolServer
- Fix type annotation for `agent_id` in `BaseAgentSettings` class
- Add assertion to ensure `agent_id` is not an empty string in `AgentManager.get_agent_dir()` method
- Change type of `override_name` and `override_role` to be optional in `apply_overrides_to_ai_settings()` function
- Update `AgentProtocolServer` to include `X-AP-TaskID` and `X-AutoGPT-UserID` headers in outgoing requests for Agent Protocol tasks.
- Modify `ModelProvider` and `OpenAIProvider` to allow configuring extra headers to be added to all outgoing requests.
- Fix the type of the `task_id` parameter in `AgentProtocolServer.get_task`
- Update the return type of the `AgentProtocolServer.get_artifact` method to `StreamingResponse`.
- Fix the Content-Disposition header in the response to include quotes around the filename.
* refactor: Rename FileWorkspace to LocalFileWorkspace and create FileWorkspace abstract class
- Rename `FileWorkspace` to `LocalFileWorkspace` to provide a more descriptive name for the class that represents a file workspace that works with local files.
- Create a new base class `FileWorkspace` to serve as the parent class for `LocalFileWorkspace`. This allows for easier extension and customization of file workspaces in the future.
- Update import statements and references to `FileWorkspace` throughout the codebase to use the new naming conventions.
* feat: Add S3FileWorkspace + tests + test setups for CI and Docker
- Added S3FileWorkspace class to provide an interface for interacting with a file workspace and storing files in an S3 bucket.
- Updated pyproject.toml to include dependencies for boto3 and boto3-stubs.
- Implemented unit tests for S3FileWorkspace.
- Added MinIO service to Docker CI to allow testing S3 features in CI.
- Added autogpt-test service config to docker-compose.yml for local testing with MinIO.
* ci(docker): tee test output instead of capturing
* fix: Improve error handling in S3FileWorkspace.initialize()
- Do not tolerate all `botocore.exceptions.ClientError`s
- Re-raise the exception if the error is not "NoSuchBucket"
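The described error handling boils down to inspecting the botocore error code and treating only a missing bucket as recoverable. A sketch (`head_bucket` reports the code as "404" while other calls report "NoSuchBucket", so both are accepted here):

```python
import boto3
from botocore.exceptions import ClientError

def initialize(bucket_name: str) -> None:
    s3 = boto3.resource("s3")
    try:
        s3.meta.client.head_bucket(Bucket=bucket_name)
    except ClientError as e:
        # Only a missing bucket is recoverable; everything else is re-raised.
        if e.response["Error"]["Code"] not in ("404", "NoSuchBucket"):
            raise
        s3.create_bucket(Bucket=bucket_name)
```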
* feat: Add S3 workspace backend support and S3Credentials
- Added support for S3 workspace backend in the Autogpt configuration
- Added a new sub-config `S3Credentials` to store S3 credentials
- Modified the `.env.template` file to include variables related to S3 credentials
- Added a new `s3_credentials` attribute on the `Config` class to store S3 credentials
- Moved the `unmasked` method from `ModelProviderCredentials` to the parent `ProviderCredentials` class to handle unmasking for S3 credentials
* fix(agent/tests): Fix S3FileWorkspace initialization in test_s3_file_workspace.py
- Update the S3FileWorkspace initialization in the test_s3_file_workspace.py file to include the required S3 Credentials.
* refactor: Remove S3Credentials and add get_workspace function
- Remove `S3Credentials` as boto3 will fetch the config from the environment by itself
- Add `get_workspace` function in `autogpt.file_workspace` module
- Update `.env.template` and tests to reflect the changes
* feat(agent/workspace): Make agent workspace backend configurable
- Modified the `autogpt.file_workspace.get_workspace` function to take either a workspace `id` or a `root_path`.
- Modified `FileWorkspaceMixin` to use the `get_workspace` function to set up the workspace.
- Updated the type hints and imports accordingly.
* feat(agent/workspace): Add GCSFileWorkspace for Google Cloud Storage
- Added support for Google Cloud Storage as a storage backend option in the workspace.
- Created the `GCSFileWorkspace` class to interface with a file workspace stored in a Google Cloud Storage bucket.
- Implemented the `GCSFileWorkspaceConfiguration` class to handle the configuration for Google Cloud Storage workspaces.
- Updated the `get_workspace` function to include the option to use Google Cloud Storage as a workspace backend.
- Added unit tests for the new `GCSFileWorkspace` class.
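A minimal sketch of what a GCS-backed workspace looks like with the `google-cloud-storage` client (the method names and root handling below are simplified assumptions):

```python
from google.cloud import storage

class GCSFileWorkspace:
    """Minimal sketch of a GCS-backed workspace, not the actual implementation."""

    def __init__(self, bucket_name: str, root: str = ""):
        self._gcs = storage.Client()  # uses application-default credentials
        self._bucket = self._gcs.bucket(bucket_name)
        self._root = root.strip("/")

    def _blob(self, path: str) -> storage.Blob:
        return self._bucket.blob(f"{self._root}/{path}".strip("/"))

    def write_file(self, path: str, content: str) -> None:
        self._blob(path).upload_from_string(content)

    def read_file(self, path: str) -> str:
        return self._blob(path).download_as_text()
```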
* fix: Unbreak use of non-local workspaces in AgentProtocolServer
- Modify the `_get_task_agent_file_workspace` method to handle both local and non-local workspaces correctly
* feat: Refactor config loading and initialization to be modular and decentralized
- Refactored the `ConfigBuilder` class to support modular loading and initialization of the configuration from environment variables.
- Implemented recursive loading and initialization of nested config objects.
- Introduced the `SystemConfiguration` base class to provide common functionality for all system settings.
- Added the `from_env` attribute to the `UserConfigurable` decorator to provide environment variable mappings.
- Updated the `Config` class and its related classes to inherit from `SystemConfiguration` and use the `UserConfigurable` decorator.
- Updated `LoggingConfig` and `TTSConfig` to use the `UserConfigurable` decorator for their fields.
- Modified the implementation of the `build_config_from_env` method in `ConfigBuilder` to utilize the new modular and recursive loading and initialization logic.
- Updated applicable test cases to reflect the changes in the config loading and initialization logic.
This refactor improves the flexibility and maintainability of the configuration loading process by introducing modular and recursive behavior, allowing for easier extension and customization through environment variables.
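A rough sketch of the mechanism, assuming pydantic v1 field metadata; the helper and field names here are illustrative, not the actual `autogpt.core.configuration` API:

```python
import os

from pydantic import BaseModel, Field  # pydantic v1 semantics assumed

def UserConfigurable(default=None, *, from_env: str | None = None):
    # Attach the env var name as extra field metadata.
    return Field(default, from_env=from_env)

class LoggingConfig(BaseModel):
    level: str = UserConfigurable("INFO", from_env="LOG_LEVEL")
    log_format: str = UserConfigurable("simple", from_env="LOG_FORMAT")

def build_from_env(model_cls: type[BaseModel]) -> BaseModel:
    """Recursively build a config object, overriding fields from the environment."""
    values = {}
    for name, field in model_cls.__fields__.items():
        if isinstance(field.outer_type_, type) and issubclass(field.outer_type_, BaseModel):
            values[name] = build_from_env(field.outer_type_)  # recurse into sub-configs
            continue
        env_var = field.field_info.extra.get("from_env")
        if env_var and (env_value := os.getenv(env_var)) is not None:
            values[name] = env_value
    return model_cls(**values)
```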
* refactor: Move OpenAI credentials into `OpenAICredentials` sub-config
- Move OpenAI API key and other OpenAI credentials from the global config to a new sub-config called OpenAICredentials.
- Update the necessary code to use the new OpenAICredentials sub-config instead of the global config when accessing OpenAI credentials.
- (Hopefully) unbreak Azure support.
- Update azure.yaml.template.
- Enable validation of assignment operations on SystemConfiguration and SystemSettings objects.
* feat: Update AutoGPT configuration options and setup instructions
- Added new configuration options for logging and OpenAI usage to .env.template
- Removed deprecated configuration options in config/config.py
- Updated setup instructions in Docker and general setup documentation to include information on using Azure's OpenAI services
* fix: Fix image generation with Dall-E
- Fix an issue with image generation via the Dall-E API: the code now correctly retrieves the API key from the agent's legacy configuration.
* refactor(agent/core): Refactor `autogpt.core.configuration.schema` and update docstrings
- Refactor the `schema.py` file in the `autogpt.core.configuration` module.
- Added docstring to `SystemConfiguration.from_env()`
- Updated docstrings for functions `_get_user_config_values`, `_get_non_default_user_config_values`, `_recursive_init_model`, `_recurse_user_config_fields`, and `_recurse_user_config_values`.
- Update the instruction in the prompt strategy to ensure the response is pure JSON.
- Remove unnecessary text and make the instruction clearer.
- Also update the error logging to include the received JSON content.
This commit refactors the code in the `one_shot.py` file and the `utilities.py` file.
- Update the pytest command in the .pre-commit-config.yaml file to use `poetry run` instead of directly running pytest in the autogpts/autogpt directory.
- Refactored the `MemoryItem` class in the `autogpt.memory.vector.memory_item` module to improve code organization and readability.
- Split the `MemoryItem` class into two separate classes: `MemoryItem` and `MemoryItemFactory`.
- Modified the `get_embedding` function in the `autogpt.memory.vector.utils` module to accept an `EmbeddingModelProvider` for creating embeddings.
- Updated the usage of the `get_embedding` function in the `MemoryItem` class to pass the `embedding_provider` parameter.
- Updated the imports in the affected modules.
- Modify check_requirements.py to correctly handle optional dependencies
- Skip optional dependencies when iterating through dependency group dependencies in check_requirements.py
- Update autogpt.bat to use `poetry install` instead of `%PYTHON_CMD% -m poetry install`
- Update autogpt.sh to use `poetry install` instead of `$PYTHON_CMD -m poetry install`
- Use `poetry run` to execute the `autogpt` command in both scripts
- Update the reference to the VCR submodule in the autogpt tests
- Previous reference: 1896d8ac12ff1d27b7e9e5db6549abc38b260b40
- New reference: 9996f1d104a1e4f33c1e10aa664d01ea78db2a06
- Updated the `run` script to also check if `$OPENAI_API_KEY` is empty before copying `.env.example` and prompting the user to set API keys.
- Modified the `setup` script to install `--extras benchmark` separately from the initial `poetry install` command.
- Added `POETRY_INSTALLER_PARALLEL=false` flag to prevent conflicts between `forge` and `agbenchmark` during installation.
* Fix all but one flake8 linting errors
* Remove unused imports
* Wrap strings that are too long
* Add basic autogpts/autogpt/.flake8
* Delete planning_agent.py
* Delete default_prompts.py
* Delete _test_json_parser.py
* Refactor the example function call in AgentProfileGeneratorConfiguration from a string to an object
* Rewrite/update docstrings here and there while I'm at it
* Minor change to the description of the `open_file` command
* Use `user-agent` from config in web_selenium.py
* Delete hardcoded ABILITIES from core/planning/templates.py
* Delete duplicate and superseded test from test_image_gen.py
* Fix parameter definitions in mock_commands.py
* Delete code analysis blocks from test_spinner.py, test_url_validation.py
- Modify the test_url_validation_fails_local_path function to remove the specific match parameter and raise the ValueError without any match requirement.
- Update `test_config.py` to check if `config.smart_llm` starts with "gpt-4"
- Delete `test_retry_provider_openai.py` as it is no longer needed
- Update `test_url_validation.py` to properly test local file URLs
- Update `test_web_search.py` to assert against expected parts of output
- Refactor the `run_auto_gpt_server` function to make the Agent Protocol server database URL configurable.
- Use the `os.getenv` method to retrieve the database URL from the environment variable `AP_SERVER_DB_URL`.
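In effect (the default URL shown here is an assumption):

```python
import os

# Fall back to a local SQLite database unless AP_SERVER_DB_URL overrides it.
database_string = os.getenv("AP_SERVER_DB_URL", "sqlite:///data/ap_server.db")
```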
- Created a new `LoggingConfig` class to represent the logging configuration in the `Config` class.
- Created a new `LogFormatName` enum to represent the available log formats: 'simple', 'debug', and 'structured_google_cloud'.
- Modified the `configure_logging` function to also accept an unpacked `LoggingConfig` object for arguments.
- Updated the `configure_logging` function to use the appropriate log format based on the log level.
- Added a `StructuredLoggingFormatter` class to handle formatting for structured logs.
- Updated the import statements and usages of `configure_logging` etc. in relevant modules to reflect the changes.
- Updated the `config` fixture in the unit tests to include the new logging configuration attributes.
- Updated the CLI with new parameters for log level and format.
- Reordered the parameters of the CLI.
- Removed memory related parameter from CLI.
- Remove the unused debug argument from the functions `inspect_zip_for_modules`, `initialize_openai_plugins`, `instantiate_openai_plugin_clients`, and `scan_plugins`.
- The debug argument was not being used in these functions and was unnecessary.
- Update the `RESTRICT_TO_WORKSPACE` variable in `.env.template` to use the new workspace location
- Update the `.gitignore` files to remove the old workspace directory
- Update the `voice.md` file in the documentation to reflect the new command for speech mode
- Renamed `run.sh` to `autogpt.sh` for consistent naming and easier command usage.
- Updated relevant documentation to reflect the new entrypoint name.
- Adjusted the Docker setup for AutoGPT to expose the full CLI and allow access to the Agent Protocol Server's port.
- Updated the AutoGPT+Docker guide in the documentation to reflect the changes.
Changes made:
- In the `Dockerfile`, removed the `--install-plugin-deps` option from the `ENTRYPOINT` command.
- In the `docker-compose.yml` file, added the `ports` section to expose port `8000`.
- In the `pyproject.toml` file, changed the `run` script to `autogpt.app.cli:cli`.
- In the `docker.md` file, added instructions for the new Docker setup and updated the configuration steps.
* README.md
- Mark evo.ninja as hackathon winner and new Current Best Agent.
- Remove hackathon banner.
- Rewrite sections about Forge, Benchmark, UI, Agent Protocol.
- Add sections about Leaderboard and CLI.
- Add quick links for improved user navigation, including links to documentation, contributing guidelines, and quickstart guide.
- Remove Quickstart.
* docs.agpt.co
- Removed links to outdated pages from navbar.
- Added quick links to several pages.
- Refactored and updated titles in docs site navbar for better readability and consistency.
- Rewrite intros on homepage to be more clear and professional and less cringe-worthy.
- Fix broken links.
- Rewrote setup and usage guides for AutoGPT Agent.
- Removed mentions of Azure support, except in the Docker guide.
- Added page with general information about AutoGPT.
* CONTRIBUTING.md
- Make CONTRIBUTING.md more friendly and accessible: added link to public kanban board, encouraged collaboration, removed section about net-negative PRs.
* autogpt/README.md
- Update description of AutoGPT to mention "modern Large Language Models" instead of GPT-4.
- Add quick links for improved user navigation, including links to documentation and contributing guidelines.
- Add features and setup guide: Agent Protocol, UI features, setup instructions, configuration options, Quickstart, CLI instructions, Agent Protocol server instructions, additional resources (wiki, project board, roadmap), and a note on sustainable development.
- Update links: documentation, setup instructions.
- Remove outdated Twitter accounts section.
- Modify the check_local_file_access function to only check for local file prefixes and return True if the URL starts with any of them.
- Remove the section of code that parsed the URL and checked if the hostname was in a list of local domains.
This change fixes the URL validation in the validators.py file. Previously, only local domains were allowed. After the change, non-local domains are allowed again.
(Note: this change was made in response to PR #5318, where the validation was modified to allow more local domains but also accidentally blocked all non-local domains.)
- Removed Discord, GitHub stars, and Twitter follow badges
- Removed funding and sponsor sections
- Removed contributor and sponsor avatars
- Removed unnecessary line breaks and div tags
- Update `big_brain` attribute in `BaseAgentConfiguration` to default to `True`.
- This change disables hybrid mode in AutoGPT, making it use the configured smart LLM for thinking.
* feat: Add support for new models and features from OpenAI's November 6 update
- Updated the `OpenAIModelName` enum to include new GPT-3.5 Turbo and GPT-4 models
- Added support for the `GPT3_v3` and `GPT4_v3` models in the `OPEN_AI_CHAT_MODELS` dictionary
- Modified the `OpenAIProvider` class to handle the new models and features
- Updated the schema definitions in the `schema.py` module to include the `AssistantToolCall` and `AssistantToolCallDict` models
- Updated the `AssistantChatMessage` and `AssistantChatMessageDict` models to include the `tool_calls` field
- Refactored the code in various modules to handle the new tool calls and function arguments
This adds support for the models and features introduced with OpenAI's November 6 update: the system can now utilize the `GPT3_v3` and `GPT4_v3` models, and the codebase includes all modifications necessary to handle the new models and their associated features.
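For context, consuming the new `tool_calls` output with the OpenAI v1 client looks roughly like this (the tool definition is a generic example, not one of AutoGPT's commands):

```python
import json

import openai

client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

# The November 6 models reply with `tool_calls` instead of the deprecated single
# `function_call`; each call carries its arguments as a JSON-encoded string.
for tool_call in response.choices[0].message.tool_calls or []:
    name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)
```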
* Fix validation error in LLM response handling
* fix: Fix profile generator in-prompt example for functions compatibility mode
- Updated the in-prompt example in the profile generator to be compatible with functions compatibility mode.
- Modified the example call section to correctly reflect the structure of function calls.
- Make minor adjustments to prompts in the OneShotAgentPromptConfiguration class
- Enhance the output format of the execute_result in AgentProtocolServer
- Update the key name from "criticism" to "self_criticism" in print_assistant_thoughts function
- Modify the output format of the web search results in the web_search function
- Updated `agent.py` to check if `command_name` exists before registering an action in `event_history`.
- Updated `agent_protocol_server.py` to handle the scenario when `execute_command` is not provided by LLM.
- Updated `main.py` to check if `command_name` exists before executing the command and logging the result.
- Create a unique container name based on agent ID
- Check if the container with the name exists, otherwise create a new container
- If the container is not running, start it; otherwise, restart it
- Execute the code in the container
- Return the output of the code execution
This change enables reusing the same container for consecutive code execution commands, allowing for iterative changes to the execution environment.
Note: This change also includes handling the case where the Docker image is not found locally by pulling it from Docker Hub. The image used in this case is "python:3-alpine".
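A sketch of the container-reuse flow with the Docker SDK for Python (the container naming scheme and the keep-alive command are assumptions):

```python
import docker
from docker.errors import ImageNotFound, NotFound

def _get_or_create_sandbox(client: docker.DockerClient, agent_id: str):
    image = "python:3-alpine"
    try:
        client.images.get(image)
    except ImageNotFound:
        client.images.pull("python", tag="3-alpine")  # fall back to Docker Hub

    container_name = f"{agent_id}_sandbox"  # unique name derived from the agent ID
    try:
        container = client.containers.get(container_name)
        container.reload()                  # refresh the cached status
        if container.status != "running":
            container.start()
        else:
            container.restart()
    except NotFound:
        container = client.containers.run(
            image, "tail -f /dev/null", name=container_name, detach=True
        )
    return container

client = docker.from_env()
container = _get_or_create_sandbox(client, agent_id="agent-123")
exit_code, output = container.exec_run(["python", "-c", "print('hello')"])
```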
- Change `ErrorInfo` class attribute `_repr` to `repr` for consistent serialization
- Update `__repr__` method to return `self.repr` instead of `self._repr`
- Replaced `error` field in `ActionErrorResult` with `ErrorInfo` model.
- Implemented `ErrorInfo` model with necessary fields (`args`, `message`, `exception_type`, `_repr`).
- Added `from_exception` method to `ErrorInfo` model to create an instance from an Exception object.
- Updated `ActionErrorResult.from_exception` method to utilize `ErrorInfo.from_exception`.
- Ensured that `ActionErrorResult` is now fully serializable and won't cause crashes.
- Made necessary changes in code comments and documentation.
This commit fixes crashes caused by attempted serialization of `AgentException` objects in the `ActionHistory` (as part of `ActionErrorResult`s). To this end, it introduces a new `ErrorInfo` model to encapsulate information about the exception, including the exception type, message, arguments, and representation. The `from_exception` method is added to both `ActionErrorResult` and `ErrorInfo` to create an `ActionErrorResult` object from an exception, extracting the relevant information.
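A condensed sketch of the resulting model, reflecting the later rename of `_repr` to `repr` noted above (field choices follow the description; this is not the verbatim implementation):

```python
from pydantic import BaseModel

class ErrorInfo(BaseModel):
    """Serializable stand-in for an exception raised during command execution."""

    args: tuple
    message: str
    exception_type: str
    repr: str

    @classmethod
    def from_exception(cls, exception: Exception) -> "ErrorInfo":
        return cls(
            args=exception.args,
            message=str(exception),  # human-readable message
            exception_type=exception.__class__.__name__,
            repr=repr(exception),
        )

    def __repr__(self) -> str:
        return self.repr
```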