<!-- Clearly explain the need for these changes: -->
Swap to pooled Supabase connections rather than depending on a fixed number of max open connections
### Changes 🏗️
Adds a direct connection URL to be used throughout the system
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Thoroughly test all of the endpoints in the dev env with the switched infra matching this PR
- [x] Follow the new release plan tests
- [x] Follow the old release plan tests
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>configuration changes</summary>
- Change how we connect to the database: use the direct URL when configured, and the database URL when not
- Update Prisma for this
- Default the direct URL to match the database URL default (see the selection sketch below)
</details>
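For illustration, a minimal sketch of the selection logic described above, assuming the env vars are named `DIRECT_URL` and `DATABASE_URL` (the actual variable names and settings layer may differ):

```
import os

def resolve_db_url() -> str:
    """Use the direct (non-pooled) connection URL when configured,
    otherwise fall back to the pooled DATABASE_URL."""
    direct_url = os.getenv("DIRECT_URL")  # assumed variable name
    if direct_url:
        return direct_url
    return os.environ["DATABASE_URL"]  # pooled Supabase connection string
```

Prisma supports the same split natively via a `directUrl` field on the datasource, which is presumably what the schema update covers.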
The DB Manager uses DB connections, and multiple instances of it hog the very limited database connection quota. We need this service to be the single, unified place that controls the limited DB connections.
### Changes 🏗️
Consolidate all database manager usage into a single place, currently running on the REST service (sketched roughly below).
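As a rough illustration of the intended pattern (all names here are hypothetical, not the actual service API): other services keep no database connection of their own and forward database operations to the single manager process.

```
# Hypothetical sketch only -- the real client/service interface may differ.
class DatabaseManagerClient:
    """Thin client used by other services; it forwards calls over the
    existing service RPC layer instead of opening its own DB connection."""

    def __init__(self, rpc):
        self.rpc = rpc  # transport to the single database-manager process

    def get_graph(self, graph_id: str):
        return self.rpc.call("get_graph", graph_id=graph_id)

    def update_execution_status(self, execution_id: str, status: str):
        return self.rpc.call("update_execution_status",
                             execution_id=execution_id, status=status)
```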
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- Clearly explain the need for these changes: -->
Per chat with Toran, we want one-click unsubscribe to work in clients like Gmail to reduce spam. This implements the one-click unsubscribe feature, but it is difficult to test because we can't control when the unsubscribe option is shown.
### Changes 🏗️
- Adds one-click unsubscribe
- Incidentally fixes Google reauth by checking the correct variables (approved for inclusion in this PR by Toran)
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Send an email
- [x] Open the email headers and pull out the unsubscribe link
- [x] `curl -X POST <link>` and ensure the user is unsubscribed from all messages. Note that we can't control when the unsubscribe box is shown by various providers, so for now we only verify it works on our side by checking the headers (see the header sketch below)
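For context, one-click unsubscribe is driven by the RFC 8058 headers on the outgoing mail; a minimal sketch of setting them (the unsubscribe URL here is a placeholder, not our actual endpoint):

```
from email.message import EmailMessage

def add_one_click_unsubscribe(msg: EmailMessage, unsubscribe_url: str) -> None:
    """Add RFC 8058 headers so providers such as Gmail can show an
    'Unsubscribe' action that POSTs straight to our endpoint."""
    msg["List-Unsubscribe"] = f"<{unsubscribe_url}>"
    # Required for one-click: the provider POSTs this exact body to the URL.
    msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"

msg = EmailMessage()
msg["Subject"] = "Weekly summary"
msg.set_content("...")
add_one_click_unsubscribe(msg, "https://example.com/unsubscribe?token=abc123")
```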
#### Configuration changes
The infra PR is coming separately, but we did add the secret default to `.env.example` and the docker compose file
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
We want to send emails on a schedule and in response to events, and we want the system to be extensible without an overbearing amount of implementation effort. We also want this to use RabbitMQ and be easy for other services to send messages into.
This PR adds the first use of the service, which simply logs a message.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Adds a new backend service for notifications
- Adds the first notification type to the service -> Agent Execution
- Adds spawning of the notification service
Also:
- Adds RabbitMQ to CI so we can test against it
- Adds a minor fix for one of the migrations that I thought was causing failures; it isn't, but the change is still useful
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Built and ran an agent and ensured the following log line appeared, which shows the event would have sent an email (a rough publishing sketch follows the log)
```
2025-02-10 15:52:02,232 INFO Processing notification:
user_id='96b8d2f5-a036-437f-bd8e-ba8856028553'
type=<NotificationType.AGENT_RUN: 'AGENT_RUN'>
data=AgentRunData(agent_name='CalculatorBlock', credits_used=0.0,
execution_time=0.0, graph_id='30e5f332-a092-4795-892a-b063a8c7bdd9',
node_count=1) created_at=datetime.datetime(2025, 2, 10, 15, 52, 2,
162865)
```
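For context, publishing such an event to the notification service's queue might look roughly like this (a sketch using `pika`; the queue name and payload shape are assumptions, not the service's actual contract):

```
import json
import pika

def publish_agent_run_notification(user_id: str, agent_name: str, credits_used: float) -> None:
    """Publish an AGENT_RUN notification event to RabbitMQ."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="notifications", durable=True)  # assumed queue name
    event = {
        "user_id": user_id,
        "type": "AGENT_RUN",
        "data": {"agent_name": agent_name, "credits_used": credits_used},
    }
    channel.basic_publish(
        exchange="",
        routing_key="notifications",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()
```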
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
None of the other ports are configurable via the `.env.example` listing, so this was left as is
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
<!-- Clearly explain the need for these changes: -->
We want to use RabbitMQ for email (and for the executor in the future) to ensure message delivery -- something we currently lack with Redis. This PR adds RabbitMQ to the docker-compose and setup details with defaults, so that when we start bringing services up, they have the backing to do so.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Adds a RabbitMQ container (with exposed API and management ports)
- Adds `.env.example` config for the backend
- Adds docker compose config for the backend and passes the variables around as needed (a quick connectivity-check sketch follows)
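As a quick way to check that the new container is reachable with the defaults from `.env.example` (the variable names below are assumptions; check the file for the actual ones):

```
import os
import pika

params = pika.ConnectionParameters(
    host=os.getenv("RABBITMQ_HOST", "localhost"),
    port=int(os.getenv("RABBITMQ_PORT", "5672")),
    credentials=pika.PlainCredentials(
        os.getenv("RABBITMQ_DEFAULT_USER", "guest"),
        os.getenv("RABBITMQ_DEFAULT_PASS", "guest"),
    ),
)
connection = pika.BlockingConnection(params)
print("RabbitMQ reachable:", connection.is_open)
connection.close()
```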
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Start the container using docker compose deps subset
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
* feat(frontend,backend): testing
* feat: testing
* feat(backend): it works for reading email
* feat(backend): more docs on google
* fix(frontend,backend): formatting
* feat(backend): more logging (i know this should be debug)
* feat(backend): make real the default scopes
* feat(backend): tests and linting
* fix: code review prep
* feat: sheets block
* feat: linting
* Update route.ts
* Update autogpt_platform/backend/backend/integrations/oauth/google.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: revert opener change
* feat(frontend): add back opener
required to work on mac edge
* feat(frontend): drop typing list import from gmail
* fix: code review comments
* feat: code review changes
* feat: code review changes
* fix(backend): move from asserts to checks so they don't get optimized away in the future
* fix(backend): code review changes
* fix(backend): remove google specific check
* fix: add typing
* fix: only enable google blocks when oauth is configured for google
* fix: errors are real and valid outputs always when output
* fix(backend): add provider detail for debugging scope declines
* Update autogpt_platform/frontend/src/components/integrations/credentials-input.tsx
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix(frontend): enhance with comment; typeof error isn't known, so this is the best way to ensure the stringification will work
* feat: code review change requests
* fix: linting
* fix: reduce error catching
* fix: doc messages in code
* fix: check the correct scopes object 😄
* fix: remove double (and not needed) try catch
* fix: lint
* fix: scopes
* feat: handle the default scopes better
* feat: better email objectification
* feat: process attachments
turns out an email doesn't need a body
* fix: lint
* Update google.py
* Update autogpt_platform/backend/backend/data/block.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: quit trying and except failure
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat: don't allow expired states
* fix: clarify function name and purpose
* feat: code links updates
* feat: additional docs on adding a block
* fix: type hint missing which means the block won't work
* fix: linting
* fix: docs formatting
* Update issues.py
* fix: improve the naming
* fix: formatting
* Update new_blocks.md
* Update new_blocks.md
* feat: better docs on what the args mean
* feat: more details on yield
* Update new_blocks.md
* fix: remove ignore from docs build
* feat: initial migration
* feat: migration tested with supabase-> prisma data location
* add custom migrations and script
* update migration command
* formatting and linting
* updated migration script
* add direct db url
* add find files
* rename
* use binary instead of source
* temp adding supabase
* remove unused functions
* adding missed merge
* fix: commit hash for lock
* ci: fix lint
* fix: minor bugs that prevented connecting and migrating to dbs and auth
* fix: linting
* fix: missed await
* fix(backend): phase one pr updates
* fix: handle error with returning user object from database_manager
* fix: linting
* Address comments
* Make the migration safe
* Update migration doc
* Move misplaced model functions
* Grammar
* Revert lock
* Remove irrelevant changes
* Remove irrelevant changes
* Avoid adding trigger on public schema
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Aarushi <aarushik93@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
## Config
- For Supabase, the back end needs `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `SUPABASE_JWT_SECRET`
- For the GitHub integration to work, the back end needs `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- For integration OAuth flows to work in local development, the back end needs `FRONTEND_BASE_URL` to generate login URLs with accurate redirect URLs
## REST API
- Tweak output of OAuth `/login` endpoint: add `state_token` separately in response
- Add `POST /integrations/{provider}/credentials` (for API keys)
- Add `DELETE /integrations/{provider}/credentials/{cred_id}`
## Back end
- Add Supabase support to `AppService`
- Add `FRONTEND_BASE_URL` config option, mainly for local development use
### `autogpt_libs.supabase_integration_credentials_store`
- Add `CredentialsType` alias
- Add `.bearer()` helper methods to `APIKeyCredentials` and `OAuth2Credentials`
### Blocks
- Add `CredentialsField(..) -> CredentialsMetaInput`
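As a rough illustration of how a block's input schema might use the new helper (import paths and keyword names here are assumptions, not the definitive API):

```
# Illustrative sketch only; import paths and keyword names are assumptions.
from backend.data.block import BlockSchema                              # assumed path
from backend.data.model import CredentialsField, CredentialsMetaInput   # assumed path

class Input(BlockSchema):
    # The node stores only credentials *metadata* (provider, id, type);
    # the executor resolves and injects the actual secret at run time.
    credentials: CredentialsMetaInput = CredentialsField(
        provider="github",
        supported_credential_types={"api_key", "oauth2"},
        description="GitHub credentials with access to the target repo",
    )
    repo_url: str
```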
## Front end
### UI components
- `CredentialsInput` for use on `CustomNode`: allows user to add/select credentials for a service.
- `APIKeyCredentialsModal`: a dialog for creating API keys
- `OAuth2FlowWaitingModal`: a dialog to indicate that the application is waiting for the user to log in to the 3rd party service in the provided pop-up window
- `NodeCredentialsInput`: wrapper for `CredentialsInput` with the "usual" interface of node input components
- New icons: `IconKey`, `IconKeyPlus`, `IconUser`, `IconUserPlus`
### Data model
- `CredentialsProvider`: introduces the app-level `CredentialsProvidersContext`, which acts as an application-wide store and cache for credentials metadata.
- `useCredentials` for use on `CustomNode`: uses `CredentialsProvidersContext` and provides node-specific credential data and provider-specific data/functions
- `/auth/integrations/oauth_callback` route to close the loop to the `CredentialsInput` after a user completes sign-in to the external service
- Add `BlockIOCredentialsSubSchema`
### API client
- Add `isAuthenticated` method
- Add methods for integration OAuth flow: `oAuthLogin`, `oAuthCallback`
- Add CRD methods for credentials: `createAPIKeyCredentials`, `listCredentials`, `getCredentials`, `deleteCredentials`
- Add mirrored types `CredentialsMetaResponse`, `CredentialsMetaInput`, `OAuth2Credentials`, `APIKeyCredentials`
- Add GitHub blocks + "DEVELOPER_TOOLS" category
- Add `**kwargs` to `Block.run(..)` signature to support additional kwargs
- Add support for loading blocks from nested modules (e.g. `blocks/github/issues.py`)
#### Executor
- Add strict support for `credentials` fields on blocks
- Fetch credentials for graph execution and pass them down through to the node execution
* move to supabase pg instance
* remove postgres and bind supabase port
* Updated setup
- Switched db name to postgres to work with prisma studio
- Added platform schema
- Added Market migrations
- bound prisma studio port
* remove studio port
* updated .env
* updated readmes
---------
Co-authored-by: SwiftyOS <craigswift13@gmail.com>