We have seen instances where the executor gets stuck in a failing
message-consuming loop due to the upstream RabbitMQ being down. The
current message-consuming pattern is not optimal for handling this.
### Changes 🏗️
* Add a retry limit to the execution loop.
* Use `basic_consume` instead of `basic_get` for handling message
consumption.
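A minimal sketch of the intended pattern, assuming `pika` as the RabbitMQ client; the retry limit, queue name, and callback are illustrative, not the actual executor code:

```python
import logging
import pika
import pika.exceptions

MAX_CONSUME_RETRIES = 5  # illustrative retry limit for the execution loop


def handle_execution(channel, method, properties, body):
    # Process the execution request here, then acknowledge the message.
    channel.basic_ack(delivery_tag=method.delivery_tag)


def consume_executions(queue_name: str = "execution_queue") -> None:
    retries = 0
    while retries < MAX_CONSUME_RETRIES:
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
            channel = connection.channel()
            channel.queue_declare(queue=queue_name, durable=True)
            # basic_consume registers a callback and start_consuming blocks,
            # pushing messages to it instead of polling with basic_get.
            channel.basic_consume(queue=queue_name, on_message_callback=handle_execution)
            retries = 0  # reset after a successful (re)connection
            channel.start_consuming()
        except pika.exceptions.AMQPConnectionError:
            retries += 1
            logging.warning("RabbitMQ unavailable, retry %d/%d", retries, MAX_CONSUME_RETRIES)
    raise RuntimeError("Gave up consuming after repeated RabbitMQ failures")
```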
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agents and cancel them
- Resolves #9823
Key-value pair inputs, like those used in CreateDictionaryBlock, are
assumed to be either a numeric or a string type.
When the type is `any`, the value was arbitrarily assumed to be numeric.
### Changes 🏗️
Only convert a value to a number when the key-value pair input
explicitly defines it as numeric.
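A minimal sketch of the intended behavior (the names and type labels are illustrative, not the actual block code):

```python
from typing import Any


def convert_kv_value(value: Any, declared_type: str) -> Any:
    """Convert a key-value pair value only when its type is explicitly numeric."""
    if declared_type in ("integer", "number") and isinstance(value, str):
        # Explicitly numeric: safe to convert.
        return float(value) if declared_type == "number" else int(value)
    # For "string", "any", or anything else, keep the value as-is.
    return value


assert convert_kv_value("42", "integer") == 42
assert convert_kv_value("42", "any") == "42"  # no longer coerced to a number
```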
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tried two different key-value pair inputs: AiTextGenerator &
CreateDictionary
### Changes 🏗️
The recent execution cancellation fix turns out to only work on the
first request.
This PR fixes it by reworking how `thread_cached` works on async
functions.
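The underlying issue is that caching an async function's return value per thread ends up caching a coroutine object, which can only be awaited once; later calls need the awaited result instead. A rough sketch of that idea (illustrative, not the actual `thread_cached` implementation):

```python
import asyncio
import threading
from functools import wraps


def thread_cached(func):
    """Cache results per thread; for async functions, cache the awaited result,
    not the coroutine object (a coroutine can only be awaited once)."""
    local = threading.local()

    def _cache():
        cache = getattr(local, "cache", None)
        if cache is None:
            cache = local.cache = {}
        return cache

    if asyncio.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            cache = _cache()
            key = (args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = await func(*args, **kwargs)  # store the value, not the coroutine
            return cache[key]
        return async_wrapper

    @wraps(func)
    def wrapper(*args, **kwargs):
        cache = _cache()
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper
```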
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Cancel agent executions multiple times
### Changes 🏗️
Fix this broken behavior:
input data mix-up caused by running two different executions of the same
agent with the same input.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agent with old user
- [x] Running two different executions of the same agent with the same
input.
<!-- Clearly explain the need for these changes: -->
Switch to pooled Supabase connections rather than depending on a fixed
number of max open connections.
### Changes 🏗️
Adds a direct connection URL to be used throughout the system.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Thoroughly test all of the endpoints in the dev env with the infra
switched to match this PR
- [x] Follow the new release plan tests
- [x] Follow the old release plan tests
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>configuration changes</summary>
- Change how we connect to the database: use the direct connection URL
when configured, and the database URL when not
- Update Prisma for this
- Have the default match the database URL
</details>
### Changes 🏗️
- Update onboarding to give user rewards for completing steps
- Remove `canvas-confetti` lib and add `party-js` instead; the former
didn't allow playing confetti from a component
- Add onboarding videos in `frontend/public/onboarding/`
- Remove Balance (`CreditsCard.tsx`) and add openable `Wallet.tsx` (and
accompanying `WalletTaskGroup.tsx`) instead that displays grouped
onboarding tasks with descriptions and short instructional videos
- Further relevant updates to `useOnboarding`, `types.ts`
- Implement onboarding rewards
- Add `onboarding_reward` function in `credit.py` that is used to safely
reward users for finished onboarding tasks; the transaction key is
deterministic, so the same user won't be rewarded twice for the same
step (sketched below).
- Add `reward_user` in `onboarding.py`
- Update `UserOnboarding` model and add a migration
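A minimal sketch of the deterministic-key idea, using an in-memory set as a stand-in for the real credit transaction's unique key (not the actual `credit.py` code):

```python
_granted: set[str] = set()  # stand-in for a DB unique constraint on the transaction key


def onboarding_reward(user_id: str, step: str, amount: int) -> bool:
    """Grant a reward at most once per (user, step).

    The transaction key is deterministic, so retries or duplicate calls for
    the same step resolve to the same key and are ignored.
    """
    transaction_key = f"onboarding-{user_id}-{step}"
    if transaction_key in _granted:
        return False  # already rewarded for this step
    _granted.add(transaction_key)
    # ... credit `amount` to the user here ...
    return True


assert onboarding_reward("user-1", "run_agent", 100) is True
assert onboarding_reward("user-1", "run_agent", 100) is False  # no double reward
```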
<img width="464" alt="Screenshot 2025-04-05 at 6 06 29 PM"
src="https://github.com/user-attachments/assets/fca8d09e-0139-466b-b679-d24117ad01f0"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding works
- [x] Tasks can be completed
- [x] Rewards are added correctly for all completed tasks
Currently, execution tasks are not properly distributed between
executors because we have to send each execution request directly to the
execution server.
The execution manager now accepts execution requests from the message
queue. Thus, we can remove the synchronous RPC system from this service,
let it focus on executing the agent, and not spare any process for the
HTTP API interface.
This also reduces the risk of the execution service being too busy to
accept new execution requests.
### Changes 🏗️
* Remove the RPC system in Agent Executor
* Allow cancellation of an execution that is still waiting in the
queue (by preventing it from being executed).
* Make a unified helper for adding an execution request to the system
and move other execution-related helper functions into
`executor/utils.py`.
* Remove non-db connections (redis / rabbitmq) in Database Manager and
let the client manage this by themselves.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing CI, some agent runs
The graph execution queue is not disk-persisted; when the executor dies,
the executions are lost.
The scope of this issue is migrating the execution queue from an
inter-process queue to a RabbitMQ message queue. A sync client should be
used for this.
- Resolves #9746
- Resolves #9714
### Changes 🏗️
Move the execution manager's queue from `multiprocessing.Queue` to
persisted RabbitMQ.
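A minimal sketch of what persistence means here with a sync `pika` client (the queue name and payload shape are illustrative):

```python
import json
import pika


def enqueue_execution(payload: dict, queue: str = "graph_executions") -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # durable=True makes the queue itself survive a broker restart.
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=json.dumps(payload),
        # delivery_mode=2 marks the message as persistent, so queued
        # executions are written to disk and survive an executor crash.
        properties=pika.BasicProperties(delivery_mode=2),
    )
    connection.close()
```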
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute agents.
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- Clearly explain the need for these changes: -->
We want to be able to filter errors in Sentry according to where they
occur, so we need to track and include that data. We are also not
logging everything from app services correctly, so fix that up.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Adds env tracking for frontend
- Adds Sentry init in app service spawn
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested by running and making sure all events + logs are inserted
into sentry correctly
- fix #9003
- fix #8969
- fix #8970
Add correct margins between headers, divider, and content.
### Changes made
- Remove any vertical padding or margin from the section.
- Add top and bottom margins to the separator, so the spacing between
sections is handled only by the separator.
- Also, add a size prop in AvatarFallback because its size is currently
broken. It’s not able to extract the size properly from the className.
- fix #9222
- fix #9221
- fix #8966
### Changes made
- Standardized the height of store cards.
- Corrected spacing and responsiveness behavior.
- Removed horizontal margin and max-width from the featured section.
- Fixed the aspect ratio of the agent image in the store card.
- Now, a normal desktop screen displays 3 columns of agents instead of
4.
<img width="1512" alt="Screenshot 2025-04-07 at 7 09 40 AM"
src="https://github.com/user-attachments/assets/50d3b5c9-4e7c-456e-b5f1-7c0093509bd3"
/>
Distilled from #9541 to reduce the scope of that PR.
- Part of #9307
- ❗ Blocks #9786
- ❗ Blocks #9541
### Changes 🏗️
- Fix `LibraryAgent` schema (for #9786)
- Fix relationships between `LibraryAgent`, `AgentGraph`, and
`AgentPreset`
- Impose uniqueness constraint on `LibraryAgent`
- Rename things that are called `agent` that actually refer to a
`graph`/`agentGraph`
- Fix singular/plural forms in DB schema
- Simplify reference names of closely related objects (e.g.
`AgentGraph.AgentGraphExecutions` -> `AgentGraph.Executions`)
- Eliminate use of `# type: ignore` in DB statements
- Add `typed` and `typed_cast` utilities to `backend.util.type`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI static type checking (with all risky `# type: ignore` removed)
- [x] Check that column references in views are updated
<!-- Clearly explain the need for these changes: -->
We were duplicating placeholder values across all agents 😨
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
Deep copies the schema instead
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the broken agent in dev
- Resolves #9792
### Changes 🏗️
- Replace all `default=[]` -> `default_factory=list`
- Replace all `default={}` -> `default_factory=dict`
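For context, this is the classic Python mutable-default pitfall that `default_factory` guards against, shown with plain function defaults as an illustration (the exact failure mode in these models isn't spelled out here):

```python
def append_bad(value, bucket=[]):   # one list shared across every call
    bucket.append(value)
    return bucket


def append_good(value, bucket=None):
    if bucket is None:
        bucket = []                 # fresh list per call, like default_factory=list
    bucket.append(value)
    return bucket


assert append_bad(1) == [1]
assert append_bad(2) == [1, 2]      # state leaked between calls
assert append_good(1) == [1]
assert append_good(2) == [2]
```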
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] CI
---------
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
The linter currently exits with exit code 0 even if linting fails. This
makes the CI linter permissive, which isn't good.
Changes:
- Make linter exit with an error code if a linting step fails
- Fix existing formatting issues
This PR adds the new [Meta: Llama 4
Maverick](https://openrouter.ai/meta-llama/llama-4-maverick) and [Meta:
Llama 4 Scout](https://openrouter.ai/meta-llama/llama-4-scout) models
via [OpenRouter](https://openrouter.ai/).
### Changes 🏗️
Added the model names to ``llm.py``
```
META_LLAMA_4_SCOUT = "meta-llama/llama-4-scout"
META_LLAMA_4_MAVERICK = "meta-llama/llama-4-maverick"
```
and the model metadata
```
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
```
and added the model prices to ``block_cost_config.py``
```
LlmModel.META_LLAMA_4_SCOUT: 1,
LlmModel.META_LLAMA_4_MAVERICK: 1,
```
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open the build page, place an AI text block, open the model
select, scroll to the bottom, and select either of the 2 models
- [x] Test them with a prompt and wait for a reply!
This is a prerequisite infra change for
https://github.com/Significant-Gravitas/AutoGPT/issues/9714.
We will need a service where we can maintain our own client (db, redis,
rabbitmq, be it async/sync) and configure our own cadence of
initialization and cleanup.
While refactoring `service.py`, the option to use Pyro as an RPC
protocol is also removed.
### Changes 🏗️
* Decouple resource initialization and cleanup from the parent
AppService logic.
* Remove Pyro.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI
<!-- Clearly explain the need for these changes: -->
Now that we are trying to use Sentry more, cleaning up some errors ->
warnings is a good idea
### Changes 🏗️
- Reduces the log level of retries to warning
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Check it comes through in Sentry
<!-- Clearly explain the need for these changes: -->
We got this error in Sentry:
[AUTOGPT-SERVER-33P](https://significant-gravitas.sentry.io/issues/6462614597/events/bb4871d796b04e759ade55197498cff9/)
```
Level: Error
'Secrets' object has no attribute 'ProviderName.GOOGLE_client_id'
```
### Changes 🏗️
- Follows the pattern used when accessing these in
`_get_provider_oauth_handler` in the router
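The error message suggests the enum member itself was interpolated into the attribute name instead of its value. A small illustration of the safe pattern, using stand-in classes rather than the actual `Secrets` model or router code:

```python
from enum import Enum


class ProviderName(str, Enum):
    GOOGLE = "google"


class Secrets:
    """Stand-in for the real settings object; attributes follow <provider>_client_id."""
    google_client_id = "example-client-id"


secrets = Secrets()
provider = ProviderName.GOOGLE

# Interpolating the enum member itself can produce the attribute name
# "ProviderName.GOOGLE_client_id" (the name seen in the Sentry error);
# using .value always yields "google".
client_id = getattr(secrets, f"{provider.value}_client_id")
assert client_id == "example-client-id"
```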
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test to make sure getting works
Fix https://github.com/Significant-Gravitas/AutoGPT/pull/9632
### Changes 🏗️
- Set default values from input schema to `hardcodedValues` in
`CustomNode.tsx`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Default values are correctly applied to newly created node
- fix #8956
### Changes:
- Updated line height from 28px to 36px for improved readability.
- Ensured that all section headings (“Featured agents”, “Top agents”,
“Featured creators”, and “Become a creator”) now have a uniform style.
- Verified that font-poppins is correctly set in the Tailwind config
file and layout.tsx.
- Changed the color from #282828 to #262626
### Scope:
- This PR only includes typography-related adjustments.

- fix #9425
- Enhance the block search functionality on the build page.
Currently, it only performs exact matching on the block name and
description, so I added a scoring mechanism for searching.
- The scoring algorithm works as follows:
- Returns 1 if no query (all blocks match equally)
- Normalizes the query for case-insensitive matching
- Returns 3 for exact substring matches in block name (highest priority)
- Returns 2 when all query words appear in the block name (regardless of
order)
- Returns 1.X for blocks with names similar to query using Jaro-Winkler
distance (X is similarity score)
- Returns 0.5 when all query words appear in the block description
(lowest priority)
- Returns 0 for no match
Higher scores will appear first in search results.
> I have used an external library for Jaro-Winkler distance -
[link](https://www.npmjs.com/package/jaro-winkler)
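An illustrative sketch of those scoring tiers (written in Python for brevity; the actual frontend code is TypeScript and uses the jaro-winkler package, and the similarity threshold below is an assumption):

```python
def score_block(query: str, name: str, description: str, similarity) -> float:
    """Rank a block for a search query using the tiers described above.

    `similarity` is a stand-in for a Jaro-Winkler implementation returning 0..1.
    """
    if not query:
        return 1.0                     # no query: all blocks match equally
    q, n, d = query.lower(), name.lower(), description.lower()
    words = q.split()
    if q in n:
        return 3.0                     # exact substring match in the block name
    if all(w in n for w in words):
        return 2.0                     # every query word appears in the name
    sim = similarity(q, n)
    if sim > 0.7:                      # threshold is illustrative
        return 1.0 + sim               # 1.X, where X is the similarity score
    if all(w in d for w in words):
        return 0.5                     # every query word appears in the description
    return 0.0                         # no match
```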
Before

After

### Changes 🏗️
- Use the same code as in Library to display inputs for onboarding agent
- Fixes a bug that crashes the frontend when showing onboarding inputs
- Remove no longer needed `OnboardingAgentInput` component
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All input types display correctly
- [x] Onboarding agent runs
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
This series of upgrades caused some code in the repo to become
deprecated; this PR addresses that:
- https://github.com/significant-gravitas/autogpt/pull/9727
- https://github.com/Significant-Gravitas/AutoGPT/pull/9728
- https://github.com/Significant-Gravitas/AutoGPT/pull/9560
### Changes 🏗️
Fix pydantic config, usage of field, usage of the proper prisma
`CreateInput` type, and pytest loop-scope.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI, manual test on running some agents.
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
Sentry just released logs, so let's enrich our details there too.
### Changes 🏗️
- Adds sentry logging
- Adds dependencies tracking all of our sentry integrations
- Adds environment tracking to sentry
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested to make sure events show up in sentry with the correct
environment logging
<!-- Clearly explain the need for these changes: -->
I oopsed and had an extra unneeded parameter (as @majdyz pointed out)
that wasn't respected everywhere it was used.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Remove parameter
- Update all the places AuthFeedback is called
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] Test all pages with authfeedback on it
Co-authored-by: Bently <tomnoon9@gmail.com>
- Resolves #9730
### Changes 🏗️
- feat: Add "Open in builder" run action
- refactor: Add `ActionButtonGroup` to replace boilerplate code in
`AgentRunDetailsView`, `AgentRunDraftView`, `AgentScheduleDetailsView`
- feat: Add link support to `ActionButtonGroup`, `ButtonAction`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]`
- [x] "Run again" button works
- [x] "Open in builder" button-link works
- Resolves #9731
### Changes 🏗️
- feat: Add "Steps" showing `node_execution_count` to agent run view
- Add `GraphExecutionMeta.stats.node_exec_count` attribute
- feat(backend/executor): Send graph execution update after *every* node
execution (instead of only I/O node executions)
- Update graph execution stats after every node execution
- refactor: Move `GraphExecutionMeta` stats into sub-object
(`cost`, `duration`, `total_run_time` -> `stats.cost`, `stats.duration`,
`stats.node_exec_time`)
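A rough sketch of the reshaped metadata (field names follow the list above; everything else, including the model base class, is illustrative):

```python
from pydantic import BaseModel, Field


class GraphExecutionStats(BaseModel):
    cost: float = 0
    duration: float = 0
    node_exec_time: float = 0   # formerly total_run_time
    node_exec_count: int = 0    # new "Steps" counter


class GraphExecutionMeta(BaseModel):
    id: str
    stats: GraphExecutionStats = Field(default_factory=GraphExecutionStats)
```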
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- View an agent run with 1+ steps on `/library/agents/[id]`
- [x] "Info" section layout doesn't break
- [x] Number of steps is shown
- Initiate a new agent run
- [x] "Steps" increments in real time during execution
In this pull request, the following changes have been made in response
to Issue #9190:
Documentation Changes:
- Added a note to the AutoGPT documentation regarding Docker
installation on Windows.
- Specifically, the note advises users to opt for WSL2 (Windows
Subsystem for Linux version 2) instead of Hyper-V during Docker setup to
prevent issues with Supabase, such as the "unhealthy" status for
supabase-db.
---------
Co-authored-by: Madura Herath <madurah@verdentra.com>
Co-authored-by: Bently <tomnoon9@gmail.com>
- Resolves #9679
### Changes 🏗️
Frontend:
- Fix crash on `payload` graph input
- Fix crash on object type agent I/O values
- Hide "+ New run" if `graph.webhook_id` is set
Backend:
- Add computed field `webhook_id` to `GraphModel`
- Add computed property `webhook_input_node` to `GraphModel`
- Refactor:
- Move `Node.webhook_id` -> `NodeModel.webhook_id`
- Move `NodeModel.block` -> `Node.block` (computed property)
- Replace `get_block(node.block_id)` with `node.block` where sensible
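A sketch of what the computed `webhook_id` field might look like with Pydantic v2 (the model fields here are trimmed-down stand-ins, not the real schema):

```python
from typing import Optional

from pydantic import BaseModel, Field, computed_field


class NodeModel(BaseModel):
    id: str
    webhook_id: Optional[str] = None


class GraphModel(BaseModel):
    id: str
    nodes: list[NodeModel] = Field(default_factory=list)

    @computed_field
    @property
    def webhook_id(self) -> Optional[str]:
        # The graph's webhook ID is that of its webhook-triggered node, if any.
        return next((n.webhook_id for n in self.nodes if n.webhook_id), None)
```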
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create and run a simple graph
- [x] Create a graph with a webhook trigger and ensure it works
- [x] Check out the runs of a webhook-triggered graph and ensure the
page works
<!-- Clearly explain the need for these changes: -->
Uvicorn's logs and our logs were ending up in different places; this PR
ensures uvicorn uses our logging config, not its own.
### Changes 🏗️
- Clears uvicorn's loggers for rest and ws
- Always log to stdout/stderr, and additionally log to GCP when
appropriate
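A minimal sketch of that wiring (the app module path and logger names are illustrative; the real setup lives in the services' startup code):

```python
import logging

import uvicorn


def configure_uvicorn_logging() -> None:
    # Drop uvicorn's own handlers so its records propagate to the root logger,
    # which the app configures (stdout/stderr, plus GCP when appropriate).
    for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
        logger = logging.getLogger(name)
        logger.handlers.clear()
        logger.propagate = True


if __name__ == "__main__":
    configure_uvicorn_logging()
    # log_config=None stops uvicorn from installing its default logging config.
    uvicorn.run("backend.server.app:app", log_config=None)
```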
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test all variants of logging (cloud vs. not) and ensure that
uvicorn logs show up in the same place the rest of the system logs do in
every case
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
- Resolves #9716
- Builds on the work done in #9627
### Changes 🏗️
- Remove `safeCopyGraph`; export directly from backend instead
- Explicitly name sanitization functions for *importing* graphs; move to
`@/lib/autogpt-server-api/utils`
- Amend `BackendAPI.getGraph(..)` to delete `.user_id` if `for_export ==
true`
Out-of-scope improvements:
- Add missing `user_id` to frontend `Graph` types
- Add `UserID` branded type for `User.id` + all `user_id` properties
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create and configure an agent with the Publish To Medium block, a
block that uses credentials, and a webhook trigger
- Go to `/monitoring` and click the agent you just created
- [x] -> "Export" button should work
- [x] -> Exported file contains no credentials or secrets
- [x] -> Exported file contains no user IDs
- [x] -> Exported file contains no webhook IDs
- fix #8958
Currently, the arrow button and carousel have a 16px margin, and the
button is placed 12px below the top of the container. This makes the
spacing appear to be 28px. Therefore, place the button and indicator at
the top of the container.