## Config
- For Supabase, the back end needs `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `SUPABASE_JWT_SECRET`
- For the GitHub integration to work, the back end needs `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- For integration OAuth flows to work in local development, the back end needs `FRONTEND_BASE_URL` to generate login URLs with accurate redirect URLs
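A minimal sketch of how the back end might read this configuration (the field names and the local-dev default are assumptions, not the actual settings class):

```python
import os
from dataclasses import dataclass

@dataclass
class IntegrationSettings:
    """Hypothetical container for the env vars listed above."""
    supabase_url: str
    supabase_service_role_key: str
    supabase_jwt_secret: str
    github_client_id: str
    github_client_secret: str
    frontend_base_url: str

    @classmethod
    def from_env(cls) -> "IntegrationSettings":
        def require(name: str) -> str:
            # Fail fast if a required variable is missing
            value = os.environ.get(name)
            if not value:
                raise RuntimeError(f"Missing required env var: {name}")
            return value

        return cls(
            supabase_url=require("SUPABASE_URL"),
            supabase_service_role_key=require("SUPABASE_SERVICE_ROLE_KEY"),
            supabase_jwt_secret=require("SUPABASE_JWT_SECRET"),
            github_client_id=require("GITHUB_CLIENT_ID"),
            github_client_secret=require("GITHUB_CLIENT_SECRET"),
            # Used to build OAuth redirect URLs; the default is an assumed local-dev value
            frontend_base_url=os.environ.get("FRONTEND_BASE_URL", "http://localhost:3000"),
        )
```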
## REST API
- Tweak output of OAuth `/login` endpoint: add `state_token` separately in response
- Add `POST /integrations/{provider}/credentials` (for API keys)
- Add `DELETE /integrations/{provider}/credentials/{cred_id}`
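For illustration, a hedged sketch of calling these endpoints with `requests`; the base URL, auth header, and payload/response shapes are assumptions:

```python
import requests

BASE_URL = "http://localhost:8000/api"            # assumed local back-end URL
HEADERS = {"Authorization": "Bearer <user-jwt>"}  # placeholder user token

# Store an API key for a provider (payload shape is an assumption)
resp = requests.post(
    f"{BASE_URL}/integrations/github/credentials",
    json={"api_key": "ghp_example", "title": "My GitHub token"},
    headers=HEADERS,
)
resp.raise_for_status()
cred_id = resp.json()["id"]  # assumed response field

# Delete the stored credentials again
requests.delete(
    f"{BASE_URL}/integrations/github/credentials/{cred_id}",
    headers=HEADERS,
).raise_for_status()
```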
## Back end
- Add Supabase support to `AppService`
- Add `FRONTEND_BASE_URL` config option, mainly for local development use
### `autogpt_libs.supabase_integration_credentials_store`
- Add `CredentialsType` alias
- Add `.bearer()` helper methods to `APIKeyCredentials` and `OAuth2Credentials`
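Roughly, the helpers could look like the sketch below (simplified field names; the real models in the credentials store may differ):

```python
from typing import Literal
from pydantic import BaseModel, SecretStr

# Assumed definition of the CredentialsType alias mentioned above
CredentialsType = Literal["api_key", "oauth2"]

class APIKeyCredentials(BaseModel):
    id: str
    provider: str
    api_key: SecretStr

    def bearer(self) -> str:
        """Format the secret as an HTTP Authorization header value."""
        return f"Bearer {self.api_key.get_secret_value()}"

class OAuth2Credentials(BaseModel):
    id: str
    provider: str
    access_token: SecretStr

    def bearer(self) -> str:
        return f"Bearer {self.access_token.get_secret_value()}"

creds = APIKeyCredentials(id="c1", provider="github", api_key="ghp_example")
print(creds.bearer())  # Bearer ghp_example
```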
### Blocks
- Add `CredentialsField(..) -> CredentialsMetaInput`
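As a sketch of how a block's input schema might use this (stand-in definitions; the real `CredentialsField` signature and import paths are not shown here):

```python
from typing import Literal
from pydantic import BaseModel, Field

class CredentialsMetaInput(BaseModel):
    """What a node stores about its credentials: a reference, not the secret itself."""
    id: str
    provider: str
    type: Literal["api_key", "oauth2"]
    title: str | None = None

def CredentialsField(*, provider: str, supported_types: set[str], description: str):
    """Stand-in helper: a pydantic field carrying provider metadata in the schema."""
    return Field(
        description=description,
        json_schema_extra={
            "credentials_provider": provider,
            "credentials_types": sorted(supported_types),
        },
    )

class GithubIssueInput(BaseModel):
    repo: str
    credentials: CredentialsMetaInput = CredentialsField(
        provider="github",
        supported_types={"api_key", "oauth2"},
        description="GitHub credentials with access to the target repo",
    )
```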
## Front end
### UI components
- `CredentialsInput` for use on `CustomNode`: allows user to add/select credentials for a service.
- `APIKeyCredentialsModal`: a dialog for creating API keys
- `OAuth2FlowWaitingModal`: a dialog to indicate that the application is waiting for the user to log in to the 3rd party service in the provided pop-up window
- `NodeCredentialsInput`: wrapper for `CredentialsInput` with the "usual" interface of node input components
- New icons: `IconKey`, `IconKeyPlus`, `IconUser`, `IconUserPlus`
### Data model
- `CredentialsProvider`: introduces the app-level `CredentialsProvidersContext`, which acts as an application-wide store and cache for credentials metadata.
- `useCredentials` for use on `CustomNode`: uses `CredentialsProvidersContext` and provides node-specific credential data and provider-specific data/functions
- `/auth/integrations/oauth_callback` route to close the loop to the `CredentialsInput` after a user completes sign-in to the external service
- Add `BlockIOCredentialsSubSchema`
### API client
- Add `isAuthenticated` method
- Add methods for integration OAuth flow: `oAuthLogin`, `oAuthCallback`
- Add CRD methods for credentials: `createAPIKeyCredentials`, `listCredentials`, `getCredentials`, `deleteCredentials`
- Add mirrored types `CredentialsMetaResponse`, `CredentialsMetaInput`, `OAuth2Credentials`, `APIKeyCredentials`
- Add GitHub blocks + "DEVELOPER_TOOLS" category
- Add `**kwargs` to `Block.run(..)` signature so blocks tolerate additional keyword arguments
- Add support for loading blocks from nested modules (e.g. `blocks/github/issues.py`)
#### Executor
- Add strict support for `credentials` fields on blocks
- Fetch credentials for graph execution and pass them down through to the node execution
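A hedged sketch of how the `**kwargs` hook on `Block.run(..)` and the executor's credentials injection might meet inside a block (the block and field names are illustrative):

```python
from typing import Any, Iterator

class ExampleGithubBlock:
    """Illustrative shape only, not the real Block base class."""

    def run(self, input_data: dict, **kwargs: Any) -> Iterator[tuple[str, Any]]:
        # The executor can inject extras (such as the credentials it fetched for the
        # graph execution) without every block having to declare each one explicitly.
        credentials = kwargs.get("credentials")
        headers = {"Authorization": credentials.bearer()} if credentials else {}
        yield "request_headers", headers
        yield "repo", input_data.get("repo", "")
```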
* move to supabase pg instance
* remove postgres and bind supabase port
* Updated setup
- Switched db name to postgres to work with prisma studio
- Added platform schema
- Added Market migrations
- bound prisma studio port
* remove studio port
* updated .env
* updated readmes
---------
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Restructure the repo to make the difference between classic AutoGPT and the AutoGPT Platform clear:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
* Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
* `rnd/autogpt_builder` -> `autogpt_platform/frontend`
* `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
* move migrations, update networking and .dockerignore
* update docs
* remove sqlite from ci
* remove schema linting checks
* fix formatting
* remove schema linting
* add test script
* formatting and linting
* stop pg instead of taking it down
* separate test db
* diff port
* remove duplicate
* talking head
* linting
* remove clip id, not needed
* add more descriptive name
* add min requirement to polling attempts and intervals
* add docs and link to docs
* remove extra space
* force new tab
* fix linting
* add D-ID key to .env.template
* Add minimal implementation of `LlamafileProvider`, a new `ChatModelProvider` for llamafiles. It extends `BaseOpenAIProvider` and only overrides methods that are necessary to get the system to work at a basic level.
* Add support for `mistral-7b-instruct-v0.2`. This is the only model currently supported by `LlamafileProvider` because this is the only model I tested anything with.
* Add instructions to use AutoGPT with llamafile in the docs at `autogpt/setup/index.md`
* Add helper script to get it running quickly at `scripts/llamafile/serve.py`
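Since llamafiles expose an OpenAI-compatible HTTP API, a quick local smoke test could look like the sketch below (the port and model name are assumptions for a default local setup):

```python
from openai import OpenAI

# Point a stock OpenAI client at the locally running llamafile server
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="mistral-7b-instruct-v0.2",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```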
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Implement message based history in `ActionHistoryComponent`
- Make non-summarized message count configurable (`ActionHistoryComponent.full_message_count`)
- Run `ActionHistoryComponent` after `SystemComponent` so that history messages are last in the prompt
- Omit final instruction message if prompt already contains assistant messages
- Filter `raw_message` from `ActionProposal.schema()`
---------
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
Remove many env vars and use component-level configuration that can be loaded from a file instead.
### Changed
- `BaseAgent` provides `serialize_configs` and `deserialize_configs` that can save and load all component configuration as a JSON `str`. Deserialized components/values overwrite existing values, so not all values need to be present in the serialized config.
- Decoupled `forge/content_processing/text.py` from `Config`
- Kept `execute_local_commands` in `Config` because it's needed to know if OS info should be included in the prompt
- Updated docs to reflect changes
- Renamed `Config` to `AppConfig`
### Added
- Added `ConfigurableComponent` class for components and the following configs:
- `ActionHistoryConfiguration`
- `CodeExecutorConfiguration`
- `FileManagerConfiguration` - the file manager now allows multiple agents to use the same workspace
- `GitOperationsConfiguration`
- `ImageGeneratorConfiguration`
- `WebSearchConfiguration`
- `WebSeleniumConfiguration`
- `BaseConfig` in `forge` and moved `Config` (now inherits from `BaseConfig`) back to `autogpt`
- Required `config_class` attribute on the `ConfigurableComponent` class, which should be set to the component's configuration class
- Added `--component-config-file` CLI option, `COMPONENT_CONFIG_FILE` env var, and a corresponding field in `Config`. This option allows loading configuration from a specific file; the CLI option takes precedence over the env var.
- Added comments to config models
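A minimal sketch of the precedence rule described above (the function name is illustrative):

```python
import os
from pathlib import Path

def resolve_component_config_file(cli_value: str | None) -> Path | None:
    """--component-config-file (CLI) takes precedence over COMPONENT_CONFIG_FILE (env)."""
    raw = cli_value or os.environ.get("COMPONENT_CONFIG_FILE")
    return Path(raw) if raw else None

os.environ["COMPONENT_CONFIG_FILE"] = "config/defaults.json"
print(resolve_component_config_file("config/local.json"))  # config/local.json (CLI wins)
print(resolve_component_config_file(None))                 # config/defaults.json (env fallback)
```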
### Removed
- Unused `change_agent_id` method from `FileManagerComponent`
- Unused `allow_downloads` from `Config` and CLI options (it should be in web component config if needed)
- CLI option `--browser-name` (the option is inside `WebSeleniumConfiguration`)
- Unused `workspace_directory` from CLI options
- Variables from `Config` and docs that are no longer needed
- Unused fields from `Config`: `image_size`, `audio_to_text_provider`, `huggingface_audio_to_text_model`
- Removed `files` and `workspace` class attributes from `FileManagerComponent`
* Update instructions to set up OpenAI / GPT-4 access
* Add instructions to set up Anthropic access
* Add instructions to set up Groq access
* Remove GPT-specific `--gpt3only`, `--gpt4only` CLI flags and related logic
* Remove duplicate config instructions from docker setup page, replace it by a link to the standard setup instructions
- **FIX ALL LINT/TYPE ERRORS IN AUTOGPT, FORGE, AND BENCHMARK**
### Linting
- Clean up linter configs for `autogpt`, `forge`, and `benchmark`
- Add type checking with Pyright
- Create unified pre-commit config
- Create unified linting and type checking CI workflow
### Testing
- Synchronize CI test setups for `autogpt`, `forge`, and `benchmark`
- Add missing pytest-cov to benchmark dependencies
- Mark GCS tests as slow to speed up pre-commit test runs
- Repair `forge` test suite
- Add `AgentDB.close()` method for test DB teardown in `db_test.py`
- Use actual temporary dir instead of `forge/test_workspace/`
- Move left-behind dependencies for moved `forge` code from `autogpt` to `forge`
### Notable type changes
- Replace uses of `ChatModelProvider` by `MultiProvider`
- Remove unnecessary exports from various `__init__.py` files
- Simplify `FileStorage.open_file` signature by removing `IOBase` from return type union
- Implement `S3BinaryIOWrapper(BinaryIO)` type interposer for `S3FileStorage`
- Expand overloads of `GCSFileStorage.open_file` for improved typing of read and write modes (a simplified sketch of the overload pattern follows after this list)
  Had to silence type checking for the extra overloads, because (I think) Pyright is reporting a false positive: https://github.com/microsoft/pyright/issues/8007
- Change `count_tokens`, `get_tokenizer`, `count_message_tokens` methods on `ModelProvider`s from class methods to instance methods
- Move `CompletionModelFunction.schema` method -> helper function `format_function_def_for_openai` in `forge.llm.providers.openai`
- Rename `ModelProvider` -> `BaseModelProvider`
- Rename `ChatModelProvider` -> `BaseChatModelProvider`
- Add type `ChatModelProvider` which is a union of all subclasses of `BaseChatModelProvider`
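The overload pattern referenced above, reduced to a self-contained sketch (an in-memory stand-in, not the actual `FileStorage` implementations):

```python
from io import BytesIO, StringIO
from typing import BinaryIO, Literal, TextIO, overload

class ExampleStorage:
    """Simplified stand-in showing the text/binary typing pattern."""

    @overload
    def open_file(self, path: str, binary: Literal[False] = False) -> TextIO: ...
    @overload
    def open_file(self, path: str, binary: Literal[True]) -> BinaryIO: ...

    def open_file(self, path: str, binary: bool = False):
        # In-memory stand-in for a real backend (local disk, GCS, S3)
        return BytesIO() if binary else StringIO()

with ExampleStorage().open_file("notes.txt") as f:
    f.write("hello")  # the type checker knows this is a text handle
```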
### Removed rather than fixed
- Remove deprecated and broken `autogpt/agbenchmark_config/benchmarks.py`
- Various base classes and properties on base classes in `forge.llm.providers.schema` and `forge.models.providers`
### Fixes for other issues that came to light
- Clean up `forge.agent_protocol.api_router`, `forge.agent_protocol.database`, and `forge.agent.agent`
- Add fallback behavior to `ImageGeneratorComponent`
- Remove test for deprecated failure behavior
- Fix `agbenchmark.challenges.builtin` challenge exclusion mechanism on Windows
- Fix `_tool_calls_compat_extract_calls` in `forge.llm.providers.openai`
- Add support for `any` (= no type specified) in `JSONSchema.typescript_type`
- Moved `autogpt` and `forge` to project root
- Removed `autogpts` directory
- Moved and renamed submodule `autogpts/autogpt/tests/vcr_cassettes` to `autogpt/tests/vcr_cassettes`
- When using the CLI, agents will be created in the `agents` directory (instead of `autogpts`)
- Renamed relevant docs, code and config references from `autogpts/[forge|autogpt]` to `[forge|autogpt]` and from `*../../*` to `*../*`
- Updated `CODEOWNERS`, GitHub Actions and Docker `*.yml` configs
- Updated symbolic links in `docs`
Remove unused `forge` code and improve structure of `forge`.
* Put all Agent Protocol stuff together in `forge.agent_protocol`
* ... including `forge.agent_protocol.database` (was `forge.db`)
* Remove duplicate/unused parts from `forge`
* `forge.actions`, containing old commands; replaced by `forge.components` from `autogpt`
* `forge/agent.py` (the old one, `ForgeAgent`)
* `forge/app.py`, which was used to serve and run the `ForgeAgent`
* `forge/db.py` (`ForgeDatabase`), which was used for `ForgeAgent`
* `forge/llm.py`, which has been replaced by new `forge.llm` module which was ported from `autogpt.core.resource.model_providers`
* `forge.memory`, which is not in use and not being maintained
* `forge.sdk`, much of which was moved into other modules and the rest is deprecated
* `AccessDeniedError`: unused
* `forge_log.py`: replaced with `logging`
* `validate_yaml_file`: not needed
* `ai_settings_file` and associated loading logic and env var `AI_SETTINGS_FILE`: unused
* `prompt_settings_file` and associated loading logic and env var `PROMPT_SETTINGS_FILE`: default directives are now provided by the `SystemComponent`
* `request_user_double_check`, which was only used in `AIDirectives.load`
* `TypingConsoleHandler`: not used
Moved from `autogpt` to `forge`:
- `autogpt.config` -> `forge.config`
- `autogpt.processing` -> `forge.content_processing`
- `autogpt.file_storage` -> `forge.file_storage`
- `autogpt.logs` -> `forge.logging`
- `autogpt.speech` -> `forge.speech`
- `autogpt.agents.(base|components|protocols)` -> `forge.agent.*`
- `autogpt.command_decorator` -> `forge.command.decorator`
- `autogpt.models.(command|command_parameter)` -> `forge.command.(command|parameter)`
- `autogpt.(commands|components|features)` -> `forge.components`
- `autogpt.core.utils.json_utils` -> `forge.json.parsing`
- `autogpt.prompts.utils` -> `forge.llm.prompting.utils`
- `autogpt.core.prompting.(base|schema|utils)` -> `forge.llm.prompting.*`
- `autogpt.core.resource.model_providers` -> `forge.llm.providers`
- `autogpt.llm.providers.openai` + `autogpt.core.resource.model_providers.utils` -> `forge.llm.providers.utils`
- `autogpt.models.action_history:Action*` -> `forge.models.action`
- `autogpt.core.configuration.schema` -> `forge.models.config`
- `autogpt.core.utils.json_schema` -> `forge.models.json_schema`
- `autogpt.core.resource.schema` -> `forge.models.providers`
- `autogpt.models.utils` -> `forge.models.utils`
- `forge.sdk.(errors|utils)` + `autogpt.utils.(exceptions|file_operations_utils|validators)` -> `forge.utils.(exceptions|file_operations|url_validator)`
- `autogpt.utils.utils` -> `forge.utils.const` + `forge.utils.yaml_validator`
Moved within `forge`:
- `forge/prompts/*` -> `forge/llm/prompting/*`
The rest are mostly import updates, and some sporadic removals and necessary updates (for example to fix circular deps):
- Changed `CommandOutput = Any` to remove coupling with `ContextItem` (no longer needed)
- Removed unused `Singleton` class
- Reluctantly moved `speech` to forge due to coupling (TTS needs to be changed into a component)
- Moved `function_specs_from_commands` and `core/resource/model_providers` to `llm/providers` (resources were a `core` thing and are no longer relevant)
- Keep tests in `autogpt` to reduce changes in this PR
- Removed unused memory-related code from tests
- Removed duplicated classes: `FancyConsoleFormatter`, `BelowLevelFilter`
- `prompt_settings.yaml` is in both `autogpt` and `forge` because for some reason it doesn't work when placed in just one dir (needs to be taken care of)
- Removed `config` param from `clean_input`; it wasn't used and caused a circular dependency
- Renamed `BaseAgentActionProposal` to `ActionProposal`
- Updated `pyproject.toml` in `forge` and `autogpt`
- Moved `Action*` models from `forge/components/action_history/model.py` to `forge/models/action.py` as those are relevant to the entire agent and not just `EventHistoryComponent` + to reduce coupling
- Renamed `DEFAULT_ASK_COMMAND` to `ASK_COMMAND` and `DEFAULT_FINISH_COMMAND` to `FINISH_COMMAND`
- Renamed `AutoGptFormatter` to `ForgeFormatter` and moved to `forge`
Includes changes from PR https://github.com/Significant-Gravitas/AutoGPT/pull/7148
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Documentation files were in `docs/content/AutoGPT/components`, symlinks in `autogpts/autogpt/autogpt/(agents|commands)`.
Chef doesn't allow symlinks that point to locations outside of the package dir.
Replacing the documentation files with symlinks, and symlinks with the actual documentation files, should fix this.
- feat(agent/core): Add `AnthropicProvider`
- Add `ANTHROPIC_API_KEY` to .env.template and docs
Notable differences in logic compared to `OpenAIProvider`:
- Merges subsequent user messages in `AnthropicProvider._get_chat_completion_args`
- Merges and extracts all system messages into `system` parameter in `AnthropicProvider._get_chat_completion_args`
- Supports prefill; merges prefill content (if any) into generated response
- Prompt changes to improve compatibility with `AnthropicProvider`
Anthropic has a slightly different API compared to OpenAI, and has much stricter input validation. E.g. Anthropic only supports a single `system` prompt, where OpenAI allows multiple `system` messages. Anthropic also forbids sequences of multiple `user` or `assistant` messages and requires that messages alternate between roles.
- Move response format instruction from separate message into main system prompt
- Fix clock message format
- Add pre-fill to `OneShot` generated prompt
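A sketch of the message normalization this implies (illustrative only, not the actual `AnthropicProvider` code): collect all system messages into one `system` string and merge consecutive same-role messages so the conversation alternates.

```python
def prepare_anthropic_messages(messages: list[dict]) -> tuple[str, list[dict]]:
    system_parts: list[str] = []
    merged: list[dict] = []
    for msg in messages:
        if msg["role"] == "system":
            # Anthropic takes a single `system` parameter instead of system messages
            system_parts.append(msg["content"])
        elif merged and merged[-1]["role"] == msg["role"]:
            # Consecutive same-role messages are forbidden -> merge their content
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return "\n\n".join(system_parts), merged

system, msgs = prepare_anthropic_messages([
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "Hi"},
    {"role": "user", "content": "What's 2 + 2?"},
])
print(system)  # You are concise.
print(msgs)    # [{'role': 'user', 'content': "Hi\n\nWhat's 2 + 2?"}]
```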
- refactor(agent/core): Tweak `model_providers.schema`
- Simplify `ModelProviderUsage`
- Remove attribute `total_tokens` as it is always equal to `prompt_tokens + completion_tokens`
- Modify signature of `update_usage(..)`; no longer requires a full `ModelResponse` object as input
- Improve `ModelProviderBudget`
- Change type of attribute `usage` to `defaultdict[str, ModelProviderUsage]` -> allow per-model usage tracking
- Modify signature of `update_usage_and_cost(..)`; no longer requires a full `ModelResponse` object as input
- Allow `ModelProviderBudget` zero-argument instantiation
- Fix type of `AssistantChatMessage.role` to match `ChatMessage.role` (str -> `ChatMessage.Role`)
- Add shared attributes and constructor to `ModelProvider` base class
- Add `max_output_tokens` parameter to `create_chat_completion` interface
- Add pre-filling as a global feature
- Add `prefill_response` field to `ChatPrompt` model
- Add `prefill_response` parameter to `create_chat_completion` interface
- Add `ChatModelProvider.get_available_models()` and remove `ApiManager`
- Remove unused `OpenAIChatParser` typedef in openai.py
- Remove redundant `budget` attribute definition on `OpenAISettings`
- Remove unnecessary `usage` in `OpenAIProvider` > `default_settings` > `budget`
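A hedged sketch of the per-model usage tracking and zero-argument budget instantiation described above (field and method names are simplified stand-ins):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        # Not stored: always derivable from the two counters above
        return self.prompt_tokens + self.completion_tokens

@dataclass
class Budget:
    total_cost: float = 0.0
    usage: defaultdict[str, Usage] = field(default_factory=lambda: defaultdict(Usage))

    def update_usage_and_cost(
        self, model: str, prompt_tokens: int, completion_tokens: int, cost: float
    ) -> None:
        self.usage[model].prompt_tokens += prompt_tokens
        self.usage[model].completion_tokens += completion_tokens
        self.total_cost += cost

budget = Budget()  # zero-argument instantiation
budget.update_usage_and_cost("claude-3-opus", 120, 30, 0.005)
print(budget.usage["claude-3-opus"].total_tokens)  # 150
```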
- feat(agent): Allow use of any available LLM provider through `MultiProvider`
- Add `MultiProvider` (`model_providers.multi`)
- Replace all references to / uses of `OpenAIProvider` with `MultiProvider`
- Change type of `Config.smart_llm` and `Config.fast_llm` from `str` to `ModelName`
- feat(agent/core): Validate function call arguments in `create_chat_completion`
- Add `validate_call` method to `CompletionModelFunction` in `model_providers.schema`
- Add `validate_tool_calls` utility function in `model_providers.utils`
- Add tool call validation step to `create_chat_completion` in `OpenAIProvider` and `AnthropicProvider`
- Remove (now redundant) command argument validation logic in `agent.py` and `models/command.py`
- refactor(agent): Rename `get_openai_command_specs` to `function_specs_from_commands`
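A sketch of the kind of check this adds, using a plain JSON Schema validator as a stand-in for `CompletionModelFunction.validate_call` / `validate_tool_calls` (the `jsonschema` dependency here is an assumption for illustration):

```python
import jsonschema

def validate_tool_call(call_args: dict, params_schema: dict) -> list[str]:
    """Return validation error messages for a tool call's arguments (empty if valid)."""
    validator = jsonschema.Draft7Validator({"type": "object", **params_schema})
    return [error.message for error in validator.iter_errors(call_args)]

search_params = {
    "properties": {"query": {"type": "string"}, "max_results": {"type": "integer"}},
    "required": ["query"],
}
print(validate_tool_call({"query": "autogpt", "max_results": 5}, search_params))  # []
print(validate_tool_call({"max_results": "five"}, search_params))  # two error messages
```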
### Background
Follow-up after merging https://github.com/Significant-Gravitas/AutoGPT/pull/7054: old plugins will no longer be used.
### Changes 🏗️
- Removed all the dead code that was used to load and use plugins.
- Removed `auto-gpt-plugin-template` dependency
- Removed `rev=` from `autogpt-forge` dependency (the set `rev` had incompatible `duckduckgo-search` versions)
- Kept `--install-plugin-deps` CLI option and its associated dead code (may be needed for new plugins)
* feat: add all the new component docs to the site
* fix(docs): relative links and markdown warnings
* feat(docs): How to contribute to the docs as a docs section
* fix(docs): missed docs page for developer setup
* fix(docs): re-add configurations options
* fix(docs): bad link to components fixed
* fix(docs): bad link to components fixed
* ref(docs): reorder some items to make more sense
* fix(docs): bad indentation and duplicate block
* fix(docs): warning about out of date markdown extension
* fix(docs): broken links fixed
* fix(docs): markdown formatter complaints
This incremental re-architecture unifies Agent code and plugins, so everything is component-based.
## Breaking changes
- Removed command categories and `DISABLED_COMMAND_CATEGORIES` environment variable. Use `DISABLED_COMMANDS` environment variable to disable individual commands.
- Changed `command` decorator; old-style commands are no longer supported. Implement `CommandProvider` on components instead (see the sketch after this list).
- Removed `CommandRegistry`, now all commands are provided by components implementing `CommandProvider`.
- Removed `prompt_config` from `AgentSettings`.
- Removed plugin support: old plugins will no longer be loaded and executed.
- Removed `PromptScratchpad`, it was used by plugins and is no longer needed.
- Changed `ThoughtProcessOutput` from tuple to pydantic `BaseModel`.
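A hedged sketch of the component-based command pattern (simplified stand-ins, not the exact forge API):

```python
from dataclasses import dataclass
from typing import Callable, Iterator, Protocol

@dataclass
class Command:
    name: str
    description: str
    method: Callable[..., str]

class CommandProvider(Protocol):
    def get_commands(self) -> Iterator[Command]: ...

class CalculatorComponent:
    """Example component that provides a single command."""

    def get_commands(self) -> Iterator[Command]:
        yield Command("multiply", "Multiply two integers", self.multiply)

    def multiply(self, a: int, b: int) -> str:
        return str(a * b)

# The agent aggregates commands from all of its components instead of a CommandRegistry
components: list[CommandProvider] = [CalculatorComponent()]
commands = {cmd.name: cmd for comp in components for cmd in comp.get_commands()}
print(commands["multiply"].method(6, 7))  # 42
```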
## Other changes
- Created `AgentComponent`, protocols and logic to execute them.
- `BaseAgent` and `Agent` are now composed of components.
- Moved some logic from `BaseAgent` to `Agent`.
- Moved agent features and commands to components.
- Removed the check for whether the same operation is about to be executed twice in a row.
- Removed file logging from `FileManagerComponent` (formerly `AgentFileManagerMixin`)
- Updated tests
- Added docs
See [Introduction](https://github.com/kcze/AutoGPT/blob/kpczerwinski/open-440-modular-agents/docs/content/AutoGPT/component%20agent/introduction.md) for more information.
- Change default `SMART_LLM` from `gpt-4` to `gpt-4-turbo-preview`
- Change default `FAST_LLM` from `gpt-3.5-turbo-16k` to `gpt-3.5-turbo-0125`
- Change default `EMBEDDING_MODEL` from `text-embedding-ada-002` to `text-embedding-3-small`
- Update .env.template, azure.yaml.template, and documentation accordingly
* docs: Add documentation on how to use Agent Protocol in the template
- Added documentation on how to use Agent Protocol and its settings in the `.env.template` file.
- An explanation is provided for the `AP_SERVER_PORT` and `AP_SERVER_DB_URL` settings.
- This change aims to improve the understanding and usage of Agent Protocol in the project.
* docs: Update usage.md with information about configuring the API port
- Update the documentation for the `serve` mode in `usage.md`
- Add information about configuring the port for the API server using the `AP_SERVER_PORT` environment variable.
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat: Refactor config loading and initialization to be modular and decentralized
- Refactored the `ConfigBuilder` class to support modular loading and initialization of the configuration from environment variables.
- Implemented recursive loading and initialization of nested config objects.
- Introduced the `SystemConfiguration` base class to provide common functionality for all system settings.
- Added the `from_env` attribute to the `UserConfigurable` decorator to provide environment variable mappings.
- Updated the `Config` class and its related classes to inherit from `SystemConfiguration` and use the `UserConfigurable` decorator.
- Updated `LoggingConfig` and `TTSConfig` to use the `UserConfigurable` decorator for their fields.
- Modified the implementation of the `build_config_from_env` method in `ConfigBuilder` to utilize the new modular and recursive loading and initialization logic.
- Updated applicable test cases to reflect the changes in the config loading and initialization logic.
This refactor improves the flexibility and maintainability of the configuration loading process by introducing modular and recursive behavior, allowing for easier extension and customization through environment variables.
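A minimal sketch of the pattern this describes; `UserConfigurable` and the env var names below are emulated stand-ins, not the actual `autogpt.core.configuration` code:

```python
import os
from pydantic import BaseModel, Field

def UserConfigurable(default, *, from_env: str):
    """Stand-in: a field whose value is overridden by an env var when it is set."""
    return Field(default_factory=lambda: os.environ.get(from_env, default))

class LoggingConfig(BaseModel):
    level: str = UserConfigurable("INFO", from_env="LOG_LEVEL")

class ExampleConfig(BaseModel):
    # Nested config objects are built recursively from their own env-mapped fields
    logging: LoggingConfig = Field(default_factory=LoggingConfig)
    workspace_backend: str = UserConfigurable("local", from_env="FILE_STORAGE_BACKEND")

os.environ["LOG_LEVEL"] = "DEBUG"
print(ExampleConfig().logging.level)  # DEBUG
```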
* refactor: Move OpenAI credentials into `OpenAICredentials` sub-config
- Move OpenAI API key and other OpenAI credentials from the global config to a new sub-config called `OpenAICredentials`.
- Update the necessary code to use the new `OpenAICredentials` sub-config instead of the global config when accessing OpenAI credentials.
- (Hopefully) unbreak Azure support.
- Update azure.yaml.template.
- Enable validation of assignment operations on SystemConfiguration and SystemSettings objects.
* feat: Update AutoGPT configuration options and setup instructions
- Added new configuration options for logging and OpenAI usage to .env.template
- Removed deprecated configuration options in config/config.py
- Updated setup instructions in Docker and general setup documentation to include information on using Azure's OpenAI services
* fix: Fix image generation with DALL-E
- Fix issue with image generation using the DALL-E API: the code now correctly retrieves the API key from the agent's legacy configuration.
* refactor(agent/core): Refactor `autogpt.core.configuration.schema` and update docstrings
- Refactor the `schema.py` file in the `autogpt.core.configuration` module.
- Added docstring to `SystemConfiguration.from_env()`
- Updated docstrings for functions `_get_user_config_values`, `_get_non_default_user_config_values`, `_recursive_init_model`, `_recurse_user_config_fields`, and `_recurse_user_config_values`.