Merge branch 'master' into bently/secrt-881-find-local-businesses-using-google-maps-list-building

bently/secrt-881-find-local-businesses-using-google-maps-list-building
Zamil Majdy 2024-09-26 13:40:56 -05:00 committed by GitHub
commit 22771490e9
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
86 changed files with 4905 additions and 655 deletions

.github/workflows/codeql.yml (new file)

@ -0,0 +1,97 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "master", "release-*" ]
pull_request:
branches: [ "master", "release-*" ]
schedule:
- cron: '15 4 * * 0'
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: typescript
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
config: |
paths-ignore:
- classic/frontend/build/**
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@ -105,7 +105,7 @@ jobs:
LOG_LEVEL: ${{ runner.debug && 'DEBUG' || 'INFO' }}
DATABASE_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
SUPABASE_JWT_SECRET: ${{ steps.supabase.outputs.JWT_SECRET }}
env:
CI: true


@ -2,14 +2,14 @@ name: AutoGPT Platform - Frontend CI
on:
push:
branches: [ master ]
branches: [master]
paths:
- '.github/workflows/platform-frontend-ci.yml'
- 'autogpt_platform/frontend/**'
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
pull_request:
paths:
- '.github/workflows/platform-frontend-ci.yml'
- 'autogpt_platform/frontend/**'
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
defaults:
run:
@ -17,25 +17,67 @@ defaults:
working-directory: autogpt_platform/frontend
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '21'
- uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Install dependencies
run: |
npm install
- name: Install dependencies
run: |
npm install
- name: Check formatting with Prettier
run: |
npx prettier --check .
- name: Check formatting with Prettier
run: |
npx prettier --check .
- name: Run lint
run: |
npm run lint
- name: Run lint
run: |
npm run lint
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Copy default supabase .env
run: |
cp ../supabase/docker/.env.example ../.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml up -d
- name: Install dependencies
run: |
npm install
- name: Setup Builder .env
run: |
cp .env.example .env
- name: Install Playwright Browsers
run: npx playwright install --with-deps
- name: Run tests
run: |
npm run test
- uses: actions/upload-artifact@v4
if: ${{ !cancelled() }}
with:
name: playwright-report
path: playwright-report/
retention-days: 30


@ -10,6 +10,9 @@ Also check out our [🚀 Roadmap][roadmap] for information about our priorities
[roadmap]: https://github.com/Significant-Gravitas/AutoGPT/discussions/6971
[kanban board]: https://github.com/orgs/Significant-Gravitas/projects/1
## Contributing to the AutoGPT Platform Folder
All contributions to [the autogpt_platform folder](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform) will be under our [Contribution License Agreement](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/Contributor%20License%20Agreement%20(CLA).md). By making a pull request contributing to this folder, you agree to the terms of our CLA for your contribution.
## In short
1. Avoid duplicate work, issues, PRs etc.
2. We encourage you to collaborate with fellow community members on some of our bigger


@ -1,41 +1,68 @@
# AutoGPT: Build & Use AI Agents
# AutoGPT: Build, Deploy, and Run AI Agents
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt)  
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT)  
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**AutoGPT** is a powerful tool that lets you create and run intelligent agents. These agents can perform various tasks automatically, making your life easier.
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## How to Get Started
## Hosting Options
- Download to self-host
- [Join the Waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta
https://github.com/user-attachments/assets/8508f4dc-b362-4cab-900f-644964a96cdf
## How to Setup for Self-Hosting
> [!NOTE]
> Setting up and hosting the AutoGPT Platform yourself is a technical process.
> If you'd rather something that just works, we recommend [joining the waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta.
### 🧱 AutoGPT Builder
https://github.com/user-attachments/assets/d04273a5-b36a-4a37-818e-f631ce72d603
The AutoGPT Builder is the frontend. It allows you to design agents using an easy flowchart style. You build your agent by connecting blocks, where each block performs a single action. It's simple and intuitive!
This tutorial assumes you have Docker, VSCode, git and npm installed.
### 🧱 AutoGPT Frontend
The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:
**Agent Builder:** For those who want to customize, our intuitive, low-code interface allows you to design and configure your own AI agents.
**Workflow Management:** Build, modify, and optimize your automation workflows with ease. You build your agent by connecting blocks, where each block performs a single action.
**Deployment Controls:** Manage the lifecycle of your agents, from testing to production.
**Ready-to-Use Agents:** Don't want to build? Simply select from our library of pre-configured agents and put them to work immediately.
**Agent Interaction:** Whether you've built your own or are using pre-configured agents, easily run and interact with them through our user-friendly interface.
**Monitoring and Analytics:** Keep track of your agents' performance and gain insights to continually improve your automation processes.
[Read this guide](https://docs.agpt.co/server/new_blocks/) to learn how to build your own custom blocks.
### 💽 AutoGPT Server
The AutoGPT Server is the backend. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously.
The AutoGPT Server is the powerhouse of our platform. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously. It contains all the essential components that make AutoGPT run smoothly.
**Source Code:** The core logic that drives our agents and automation processes.
**Infrastructure:** Robust systems that ensure reliable and scalable performance.
**Marketplace:** A comprehensive marketplace where you can find and deploy a wide range of pre-built agents.
### 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Reddit Marketing Agent**
- This agent reads comments on Reddit.
- It looks for people asking about your product.
- It then automatically responds to them.
1. **Generate Viral Videos from Trending Topics**
- This agent reads topics on Reddit.
- It identifies trending topics.
- It then automatically creates a short-form video based on the content.
2. **YouTube Content Repurposing Agent**
2. **Identify Top Quotes from Videos for Social Media**
- This agent subscribes to your YouTube channel.
- When you post a new video, it transcribes it.
- It uses AI to write a search engine optimized blog post.
- Then, it publishes this blog post to your Medium account.
- It uses AI to identify the most impactful quotes to generate a summary.
- Then, it writes a post to automatically publish to your social media.
These examples show just a glimpse of what you can achieve with AutoGPT!
These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.
---
Our mission is to provide the tools, so that you can focus on what matters:


@ -0,0 +1,21 @@
**Determinist Ltd**
**Contributor License Agreement (“Agreement”)**
Thank you for your interest in the AutoGPT open source project at [https://github.com/Significant-Gravitas/AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) stewarded by Determinist Ltd (“**Determinist**”), with offices at 3rd Floor 1 Ashley Road, Altrincham, Cheshire, WA14 2DT, United Kingdom. The form of license below is a document that clarifies the terms under which You, the person listed below, may contribute software code described below (the “**Contribution**”) to the project. We appreciate your participation in our project, and your help in improving our products, so we want you to understand what will be done with the Contributions. This license is for your protection as well as the protection of Determinist and its licensees; it does not change your rights to use your own Contributions for any other purpose.
By submitting a Pull Request which modifies the content of the “autogpt\_platform” folder at [https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt\_platform](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt_platform), You hereby agree:
1\. **You grant us the ability to use the Contributions in any way**. You hereby grant to Determinist a non-exclusive, irrevocable, worldwide, royalty-free, sublicenseable, transferable license under all of Your relevant intellectual property rights (including copyright, patent, and any other rights), to use, copy, prepare derivative works of, distribute and publicly perform and display the Contributions on any licensing terms, including without limitation: (a) open source licenses like the GNU General Public License (GPL), the GNU Lesser General Public License (LGPL), the Common Public License, or the Berkeley Science Division license (BSD); and (b) binary, proprietary, or commercial licenses.
2\. **Grant of Patent License**. You hereby grant to Determinist a worldwide, non-exclusive, royalty-free, irrevocable, license, under any rights you may have, now or in the future, in any patents or patent applications, to make, have made, use, offer to sell, sell, and import products containing the Contribution or portions of the Contribution. This license extends to patent claims that are infringed by the Contribution alone or by combination of the Contribution with other inventions.
3\. **The Contributions are your original work**. You represent that the Contributions are Your original works of authorship, and to Your knowledge, no other person claims, or has the right to claim, any right in any invention or patent related to the Contributions. You also represent that You are not legally obligated, whether by entering into an agreement or otherwise, in any way that conflicts with the terms of this license. For example, if you have signed an agreement requiring you to assign the intellectual property rights in the Contributions to an employer or customer, that would conflict with the terms of this license.
4\. **Limitations on Licenses**. The licenses granted in this Agreement will continue for the duration of the applicable patent or intellectual property right under which such license is granted. The licenses granted in this Agreement will include the right to grant and authorize sublicenses, so long as the sublicenses are within the scope of the licenses granted in this Agreement. Except for the licenses granted herein, You reserve all right, title, and interest in and to the Contribution.
5\. **You are able to grant us these rights**. You represent that You are legally entitled to grant the above license. If Your employer has rights to intellectual property that You create, You represent that You are authorized to make the Contributions on behalf of that employer, or that Your employer has waived such rights for the Contributions.
6\. **We determine the code that is in our products**. You understand that the decision to include the Contribution in any product or source repository is entirely that of Determinist, and this agreement does not guarantee that the Contributions will be included in any product.
7\. **No Implied Warranties.** Determinist acknowledges that, except as explicitly described in this Agreement, the Contribution is provided on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.


@ -0,0 +1,164 @@
# PolyForm Shield License 1.0.0
<https://polyformproject.org/licenses/shield/1.0.0>
## Acceptance
In order to get any license under these terms, you must agree
to them as both strict obligations and conditions to all
your licenses.
## Copyright License
The licensor grants you a copyright license for the
software to do everything you might do with the software
that would otherwise infringe the licensor's copyright
in it for any permitted purpose. However, you may
only distribute the software according to [Distribution
License](#distribution-license) and make changes or new works
based on the software according to [Changes and New Works
License](#changes-and-new-works-license).
## Distribution License
The licensor grants you an additional copyright license
to distribute copies of the software. Your license
to distribute covers distributing the software with
changes and new works permitted by [Changes and New Works
License](#changes-and-new-works-license).
## Notices
You must ensure that anyone who gets a copy of any part of
the software from you also gets a copy of these terms or the
URL for them above, as well as copies of any plain-text lines
beginning with `Required Notice:` that the licensor provided
with the software. For example:
> Required Notice: Copyright Yoyodyne, Inc. (http://example.com)
## Changes and New Works License
The licensor grants you an additional copyright license to
make changes and new works based on the software for any
permitted purpose.
## Patent License
The licensor grants you a patent license for the software that
covers patent claims the licensor can license, or becomes able
to license, that you would infringe by using the software.
## Noncompete
Any purpose is a permitted purpose, except for providing any
product that competes with the software or any product the
licensor or any of its affiliates provides using the software.
## Competition
Goods and services compete even when they provide functionality
through different kinds of interfaces or for different technical
platforms. Applications can compete with services, libraries
with plugins, frameworks with development tools, and so on,
even if they're written in different programming languages
or for different computer architectures. Goods and services
compete even when provided free of charge. If you market a
product as a practical substitute for the software or another
product, it definitely competes.
## New Products
If you are using the software to provide a product that does
not compete, but the licensor or any of its affiliates brings
your product into competition by providing a new version of
the software or another product using the software, you may
continue using versions of the software available under these
terms beforehand to provide your competing product, but not
any later versions.
## Discontinued Products
You may begin using the software to compete with a product
or service that the licensor or any of its affiliates has
stopped providing, unless the licensor includes a plain-text
line beginning with `Licensor Line of Business:` with the
software that mentions that line of business. For example:
> Licensor Line of Business: YoyodyneCMS Content Management
System (http://example.com/cms)
## Sales of Business
If the licensor or any of its affiliates sells a line of
business developing the software or using the software
to provide a product, the buyer can also enforce
[Noncompete](#noncompete) for that product.
## Fair Use
You may have "fair use" rights for the software under the
law. These terms do not limit them.
## No Other Rights
These terms do not allow you to sublicense or transfer any of
your licenses to anyone else, or prevent the licensor from
granting licenses to anyone else. These terms do not imply
any other licenses.
## Patent Defense
If you make any written claim that the software infringes or
contributes to infringement of any patent, your patent license
for the software granted under these terms ends immediately. If
your company makes such a claim, your patent license ends
immediately for work on behalf of your company.
## Violations
The first time you are notified in writing that you have
violated any of these terms, or done anything with the software
not covered by your licenses, your licenses can nonetheless
continue if you come into full compliance with these terms,
and take practical steps to correct past violations, within
32 days of receiving notice. Otherwise, all your licenses
end immediately.
## No Liability
***As far as the law allows, the software comes as is, without
any warranty or condition, and the licensor will not be liable
to you for any damages arising out of these terms or the use
or nature of the software, under any kind of legal claim.***
## Definitions
The **licensor** is the individual or entity offering these
terms, and the **software** is the software the licensor makes
available under these terms.
A **product** can be a good or service, or a combination
of them.
**You** refers to the individual or entity agreeing to these
terms.
**Your company** is any legal entity, sole proprietorship,
or other kind of organization that you work for, plus all
its affiliates.
**Affiliates** means the other organizations than an
organization has control over, is under the control of, or is
under common control with.
**Control** means ownership of substantially all the assets of
an entity, or the power to direct its management and policies
by vote, contract, or otherwise. Control can be direct or
indirect.
**Your licenses** are all the licenses granted to you for the
software under these terms.
**Use** means anything you do with the software requiring one
of your licenses.


@ -8,39 +8,60 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
To run the AutoGPT Platform, follow these steps:
1. Clone this repository to your local machine.
2. Navigate to autogpt_platform/supabase
1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:
```
git clone https://github.com/Significant-Gravitas/AutoGPT.git
# or, via SSH: git clone git@github.com:Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform
```
2. Run the following command:
```
git submodule update --init --recursive
```
This command will initialize and update the submodules in the repository. The `supabase` folder will be checked out inside the `autogpt_platform` directory.
3. Run the following command:
```
git submodule update --init --recursive
cp supabase/docker/.env.example .env
```
4. Navigate back to autogpt_platform (cd ..)
5. Run the following command:
```
cp supabase/docker/.env.example .env
```
6. Run the following command:
This command will copy the `supabase/docker/.env.example` file to `.env` in the `autogpt_platform` directory. You can modify the `.env` file to add your own environment variables.
4. Run the following command:
```
docker compose up -d
```
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
5. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
6. Run the following command:
```
cp .env.example .env
```
This command will copy the `.env.example` file to `.env` in the `frontend` directory. You can modify the `.env` within this folder to add your own environment variables for the frontend application.
7. Run the following command:
```
npm install
npm run dev
```
This command will install the necessary dependencies and start the frontend application in development mode.
If you are using Yarn, you can run the following commands instead:
```
yarn install && yarn dev
```
This command will start all the necessary backend services defined in the `docker-compose.combined.yml` file in detached mode.
7. Navigate to autogpt_platform/frontend.
8. Run the following command:
```
cp .env.example .env.local
```
9. Run the following command:
```
yarn dev
```
8. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands


@ -7,12 +7,13 @@ from .config import settings
from .jwt_utils import parse_jwt_token
security = HTTPBearer()
logger = logging.getLogger(__name__)
async def auth_middleware(request: Request):
if not settings.ENABLE_AUTH:
# If authentication is disabled, allow the request to proceed
logging.warn("Auth disabled")
logger.warn("Auth disabled")
return {}
security = HTTPBearer()
@ -24,7 +25,7 @@ async def auth_middleware(request: Request):
try:
payload = parse_jwt_token(credentials.credentials)
request.state.user = payload
logging.info("Token decoded successfully")
logger.debug("Token decoded successfully")
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
return payload
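The middleware above maps any `ValueError` from `parse_jwt_token` to an HTTP 401. The actual `jwt_utils` module is not shown in this diff, so as a rough, stdlib-only sketch of what HS256 verification against `SUPABASE_JWT_SECRET` involves (the real implementation most likely uses a JWT library and validates more claims):

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def encode_jwt(payload: dict, secret: str) -> str:
    # Build an HS256 JWT: base64url(header).base64url(payload).base64url(sig)
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"


def parse_jwt_token(token: str, secret: str) -> dict:
    # Verify signature and expiry; raise ValueError on failure, mirroring
    # how the middleware converts ValueError into a 401 response.
    try:
        header, body, sig = token.split(".")
    except ValueError:
        raise ValueError("Malformed token")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise ValueError("Invalid signature")
    payload = json.loads(_b64url_decode(body))
    if "exp" in payload and payload["exp"] < time.time():
        raise ValueError("Token expired")
    return payload
```

`encode_jwt` is only there to make the sketch self-testable; in practice Supabase issues the tokens and the backend only verifies them.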


@ -29,6 +29,9 @@ class OAuth2Credentials(_BaseCredentials):
scopes: list[str]
metadata: dict[str, Any] = Field(default_factory=dict)
def bearer(self) -> str:
return f"Bearer {self.access_token.get_secret_value()}"
class APIKeyCredentials(_BaseCredentials):
type: Literal["api_key"] = "api_key"
@ -36,6 +39,9 @@ class APIKeyCredentials(_BaseCredentials):
expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the API key expires (if at all)"""
def bearer(self) -> str:
return f"Bearer {self.api_key.get_secret_value()}"
Credentials = Annotated[
OAuth2Credentials | APIKeyCredentials,
@ -43,6 +49,9 @@ Credentials = Annotated[
]
CredentialsType = Literal["api_key", "oauth2"]
class OAuthState(BaseModel):
token: str
provider: str
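The diff adds a `bearer()` helper to both credential types so callers can build an `Authorization` header without touching the wrapped secret directly. A minimal stand-alone sketch of the pattern, with a hypothetical stdlib stand-in for pydantic's `SecretStr` (the real models are pydantic classes):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SecretStr:
    # Minimal stand-in for pydantic's SecretStr: hides the value in repr,
    # exposes it only via get_secret_value().
    _value: str

    def get_secret_value(self) -> str:
        return self._value

    def __repr__(self) -> str:
        return "SecretStr('**********')"


@dataclass
class APIKeyCredentials:
    api_key: SecretStr
    expires_at: Optional[int] = None  # Unix timestamp, or None if non-expiring

    def bearer(self) -> str:
        # Same shape as the method added in the diff: a ready-to-use
        # Authorization header value.
        return f"Bearer {self.api_key.get_secret_value()}"


creds = APIKeyCredentials(api_key=SecretStr("sk-test-123"))
headers = {"Authorization": creds.bearer()}
```

Keeping the secret behind `get_secret_value()` means accidental logging of the credentials object never leaks the raw key.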


@ -5,17 +5,36 @@ DB_PORT=5432
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@localhost:${DB_PORT}/${DB_NAME}?connect_timeout=60&schema=platform"
PRISMA_SCHEMA="postgres/schema.prisma"
BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
ENABLE_AUTH=false
ENABLE_CREDIT=false
APP_ENV="local"
PYRO_HOST=localhost
SENTRY_DSN=
# This is needed when ENABLE_AUTH is true
SUPABASE_JWT_SECRET=our-super-secret-jwt-token-with-at-least-32-characters-long
## User auth with Supabase is required for any of the 3rd party integrations with auth to work.
ENABLE_AUTH=false
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
# For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow for integrations to work.
# FRONTEND_BASE_URL=http://localhost:3000
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
# For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
# e.g. http://localhost:3000/auth/integrations/oauth_callback
# GitHub OAuth App server credentials - https://github.com/settings/developers
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
## ===== OPTIONAL API KEYS ===== ##


@ -1,4 +1,3 @@
import glob
import importlib
import os
import re
@ -8,17 +7,17 @@ from backend.data.block import Block
# Dynamically load all modules under backend.blocks
AVAILABLE_MODULES = []
current_dir = os.path.dirname(__file__)
modules = glob.glob(os.path.join(current_dir, "*.py"))
current_dir = Path(__file__).parent
modules = [
Path(f).stem
for f in modules
if os.path.isfile(f) and f.endswith(".py") and not f.endswith("__init__.py")
str(f.relative_to(current_dir))[:-3].replace(os.path.sep, ".")
for f in current_dir.rglob("*.py")
if f.is_file() and f.name != "__init__.py"
]
for module in modules:
if not re.match("^[a-z_]+$", module):
if not re.match("^[a-z_.]+$", module):
raise ValueError(
f"Block module {module} error: module name must be lowercase, separated by underscores, and contain only alphabet characters"
f"Block module {module} error: module name must be lowercase, "
"separated by underscores, and contain only alphabet characters"
)
importlib.import_module(f".{module}", package=__name__)


@ -4,13 +4,7 @@ from typing import Any, List
from jinja2 import BaseLoader, Environment
from pydantic import Field
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockUIType,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.model import SchemaField
from backend.util.mock import MockObject
@ -41,8 +35,7 @@ class StoreValueBlock(Block):
def __init__(self):
super().__init__(
id="1ff065e9-88e8-4358-9d82-8dc91f622ba9",
description="This block forwards the `input` pin to `output` pin. "
"This block output will be static, the output can be consumed many times.",
description="This block forwards an input value as output, allowing reuse without change.",
categories={BlockCategory.BASIC},
input_schema=StoreValueBlock.Input,
output_schema=StoreValueBlock.Output,
@ -57,7 +50,7 @@ class StoreValueBlock(Block):
static_output=True,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "output", input_data.data or input_data.input
@ -79,7 +72,7 @@ class PrintToConsoleBlock(Block):
test_output=("status", "printed"),
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
print(">>>>> Print: ", input_data.text)
yield "status", "printed"
@ -118,7 +111,7 @@ class FindInDictionaryBlock(Block):
categories={BlockCategory.BASIC},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
obj = input_data.input
key = input_data.key
@ -197,10 +190,10 @@ class AgentInputBlock(Block):
("result", "Hello, World!"),
],
categories={BlockCategory.INPUT, BlockCategory.BASIC},
ui_type=BlockUIType.INPUT,
block_type=BlockType.INPUT,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "result", input_data.value
@ -244,14 +237,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description=(
"This block records the graph output. It takes a value to record, "
"with a name, description, and optional format string. If a format "
"string is given, it tries to format the recorded value. The "
"formatted (or raw, if formatting fails) value is then output. "
"This block is key for capturing and presenting final results or "
"important intermediate outputs of the graph execution."
),
description=("Stores the output of the graph for users to see."),
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[
@ -280,10 +266,10 @@ class AgentOutputBlock(Block):
("output", MockObject(value="!!", key="key")),
],
categories={BlockCategory.OUTPUT, BlockCategory.BASIC},
ui_type=BlockUIType.OUTPUT,
block_type=BlockType.OUTPUT,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
"""
Attempts to format the recorded_value using the fmt_string if provided.
If formatting fails or no fmt_string is given, returns the original recorded_value.
@@ -343,7 +329,7 @@ class AddToDictionaryBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# If no dictionary is provided, create a new one
if input_data.dictionary is None:
@@ -414,7 +400,7 @@ class AddToListBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# If no list is provided, create a new one
if input_data.list is None:
@@ -452,8 +438,8 @@ class NoteBlock(Block):
test_output=[
("output", "Hello, World!"),
],
ui_type=BlockUIType.NOTE,
block_type=BlockType.NOTE,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "output", input_data.text
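The signature change repeated throughout this file (`run(self, input_data: Input)` becoming `run(self, input_data: Input, **kwargs)`) lets the executor pass extra keyword arguments, such as resolved credentials, to every block without breaking blocks that do not use them. A minimal standalone sketch of the idea (toy classes, not the real `Block` base class from `backend.data.block`):

```python
from dataclasses import dataclass
from typing import Any, Generator

# Stand-in for backend.data.block.BlockOutput.
BlockOutput = Generator[tuple[str, Any], None, None]

@dataclass
class Input:
    data: Any = None
    input: Any = None

class StoreValueLikeBlock:
    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Extra keyword arguments (credentials, execution context, ...)
        # are accepted but simply ignored by blocks that do not need them.
        yield "output", input_data.data or input_data.input

# The caller can now pass additional kwargs unconditionally.
outputs = list(StoreValueLikeBlock().run(Input(data="hello"), credentials="unused"))
```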


@@ -31,7 +31,7 @@ class BlockInstallationBlock(Block):
disabled=True,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
code = input_data.code
if search := re.search(r"class (\w+)\(Block\):", code):


@@ -70,7 +70,7 @@ class ConditionBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
value1 = input_data.value1
operator = input_data.operator
value2 = input_data.value2


@@ -22,6 +22,7 @@ class ReadCsvBlock(Block):
id="acf7625e-d2cb-4941-bfeb-2819fc6fc015",
input_schema=ReadCsvBlock.Input,
output_schema=ReadCsvBlock.Output,
description="Reads a CSV file and outputs the data as a list of dictionaries and individual rows via rows.",
contributors=[ContributorDetails(name="Nicholas Tindle")],
categories={BlockCategory.TEXT},
test_input={
@@ -40,7 +41,7 @@ class ReadCsvBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
import csv
from io import StringIO


@@ -31,6 +31,7 @@ class ReadDiscordMessagesBlock(Block):
id="d3f4g5h6-1i2j-3k4l-5m6n-7o8p9q0r1s2t", # Unique ID for the node
input_schema=ReadDiscordMessagesBlock.Input, # Assign input schema
output_schema=ReadDiscordMessagesBlock.Output, # Assign output schema
description="Reads messages from a Discord channel using a bot token.",
categories={BlockCategory.SOCIAL},
test_input={"discord_bot_token": "test_token", "continuous_read": False},
test_output=[
@@ -81,14 +82,14 @@ class ReadDiscordMessagesBlock(Block):
await client.start(token)
def run(self, input_data: "ReadDiscordMessagesBlock.Input") -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
while True:
for output_name, output_value in self.__run(input_data):
yield output_name, output_value
if not input_data.continuous_read:
break
def __run(self, input_data: "ReadDiscordMessagesBlock.Input") -> BlockOutput:
def __run(self, input_data: Input) -> BlockOutput:
try:
loop = asyncio.get_event_loop()
future = self.run_bot(input_data.discord_bot_token.get_secret_value())
@@ -148,6 +149,7 @@ class SendDiscordMessageBlock(Block):
id="h1i2j3k4-5l6m-7n8o-9p0q-r1s2t3u4v5w6", # Unique ID for the node
input_schema=SendDiscordMessageBlock.Input, # Assign input schema
output_schema=SendDiscordMessageBlock.Output, # Assign output schema
description="Sends a message to a Discord channel using a bot token.",
categories={BlockCategory.SOCIAL},
test_input={
"discord_bot_token": "YOUR_DISCORD_BOT_TOKEN",
@@ -187,7 +189,7 @@ class SendDiscordMessageBlock(Block):
"""Splits a message into chunks not exceeding the Discord limit."""
return [message[i : i + limit] for i in range(0, len(message), limit)]
def run(self, input_data: "SendDiscordMessageBlock.Input") -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
loop = asyncio.get_event_loop()
future = self.send_message(


@@ -88,7 +88,7 @@ class SendEmailBlock(Block):
except Exception as e:
return f"Failed to send email: {str(e)}"
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
status = self.send_email(
input_data.creds,
input_data.to_email,


@@ -0,0 +1,54 @@
from typing import Literal
from autogpt_libs.supabase_integration_credentials_store.types import (
APIKeyCredentials,
OAuth2Credentials,
)
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput
from backend.util.settings import Secrets
secrets = Secrets()
GITHUB_OAUTH_IS_CONFIGURED = bool(
secrets.github_client_id and secrets.github_client_secret
)
GithubCredentials = APIKeyCredentials | OAuth2Credentials
GithubCredentialsInput = CredentialsMetaInput[
Literal["github"],
Literal["api_key", "oauth2"] if GITHUB_OAUTH_IS_CONFIGURED else Literal["api_key"],
]
def GithubCredentialsField(scope: str) -> GithubCredentialsInput:
"""
Creates a GitHub credentials input on a block.
Params:
scope: The authorization scope needed for the block to work. ([list of available scopes](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps#available-scopes))
""" # noqa
return CredentialsField(
provider="github",
supported_credential_types=(
{"api_key", "oauth2"} if GITHUB_OAUTH_IS_CONFIGURED else {"api_key"}
),
required_scopes={scope},
description="The GitHub integration can be used with OAuth, "
"or any API key with sufficient permissions for the blocks it is used on.",
)
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="github",
api_key=SecretStr("mock-github-api-key"),
title="Mock GitHub API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
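The conditional `Literal` in `GithubCredentialsInput` narrows the accepted credential types at type level: OAuth2 is only advertised when both the GitHub client ID and secret are configured. A small illustration of that pattern in isolation (the helper name is illustrative, not part of the diff):

```python
from typing import Literal, get_args

def supported_credential_types(oauth_configured: bool) -> set[str]:
    # Mirrors the conditional Literal used for GithubCredentialsInput:
    # oauth2 is only offered when GitHub OAuth is configured.
    cred_types = (
        Literal["api_key", "oauth2"] if oauth_configured else Literal["api_key"]
    )
    return set(get_args(cred_types))
```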


@@ -0,0 +1,683 @@
import requests
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentials,
GithubCredentialsField,
GithubCredentialsInput,
)
class GithubCommentBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
placeholder="https://github.com/owner/repo/issues/1",
)
comment: str = SchemaField(
description="Comment to post on the issue or pull request",
placeholder="Enter your comment",
)
class Output(BlockSchema):
id: int = SchemaField(description="ID of the created comment")
url: str = SchemaField(description="URL to the comment on GitHub")
error: str = SchemaField(
description="Error message if the comment posting failed"
)
def __init__(self):
super().__init__(
id="a8db4d8d-db1c-4a25-a1b0-416a8c33602b",
description="This block posts a comment on a specified GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"comment": "This is a test comment.",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("id", 1337),
("url", "https://github.com/owner/repo/issues/1#issuecomment-1337"),
],
test_mock={
"post_comment": lambda *args, **kwargs: (
1337,
"https://github.com/owner/repo/issues/1#issuecomment-1337",
)
},
)
@staticmethod
def post_comment(
credentials: GithubCredentials, issue_url: str, body_text: str
) -> tuple[int, str]:
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ "/comments"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos") + "/comments"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"body": body_text}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
comment = response.json()
return comment["id"], comment["html_url"]
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
id, url = self.post_comment(
credentials,
input_data.issue_url,
input_data.comment,
)
yield "id", id
yield "url", url
except Exception as e:
yield "error", f"Failed to post comment: {str(e)}"
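Several blocks in this file repeat the same URL rewrite: replace `github.com` with `api.github.com/repos`, and map `/pull/` to `/issues/` because PR comments and labels live under the issues endpoint of the GitHub REST API. The logic can be sketched as a standalone helper (illustrative only; the diff keeps it inline in each block):

```python
def issue_api_url(issue_url: str, suffix: str = "") -> str:
    """Rewrite a github.com issue/PR URL to its REST API form.

    PR comments and labels are served from the /issues/ endpoint,
    which is why /pull/ is mapped to /issues/ before appending a path.
    """
    api_url = issue_url.replace("github.com", "api.github.com/repos")
    if "/pull/" in api_url:
        api_url = api_url.replace("/pull/", "/issues/")
    return api_url + suffix
```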
class GithubMakeIssueBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
title: str = SchemaField(
description="Title of the issue", placeholder="Enter the issue title"
)
body: str = SchemaField(
description="Body of the issue", placeholder="Enter the issue body"
)
class Output(BlockSchema):
number: int = SchemaField(description="Number of the created issue")
url: str = SchemaField(description="URL of the created issue")
error: str = SchemaField(
description="Error message if the issue creation failed"
)
def __init__(self):
super().__init__(
id="691dad47-f494-44c3-a1e8-05b7990f2dab",
description="This block creates a new issue on a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeIssueBlock.Input,
output_schema=GithubMakeIssueBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"title": "Test Issue",
"body": "This is a test issue.",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("number", 1),
("url", "https://github.com/owner/repo/issues/1"),
],
test_mock={
"create_issue": lambda *args, **kwargs: (
1,
"https://github.com/owner/repo/issues/1",
)
},
)
@staticmethod
def create_issue(
credentials: GithubCredentials, repo_url: str, title: str, body: str
) -> tuple[int, str]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/issues"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"title": title, "body": body}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
issue = response.json()
return issue["number"], issue["html_url"]
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
number, url = self.create_issue(
credentials,
input_data.repo_url,
input_data.title,
input_data.body,
)
yield "number", number
yield "url", url
except Exception as e:
yield "error", f"Failed to create issue: {str(e)}"
class GithubReadIssueBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
placeholder="https://github.com/owner/repo/issues/1",
)
class Output(BlockSchema):
title: str = SchemaField(description="Title of the issue")
body: str = SchemaField(description="Body of the issue")
user: str = SchemaField(description="User who created the issue")
error: str = SchemaField(
description="Error message if reading the issue failed"
)
def __init__(self):
super().__init__(
id="6443c75d-032a-4772-9c08-230c707c8acc",
description="This block reads the body, title, and user of a specified GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadIssueBlock.Input,
output_schema=GithubReadIssueBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("title", "Title of the issue"),
("body", "This is the body of the issue."),
("user", "username"),
],
test_mock={
"read_issue": lambda *args, **kwargs: (
"Title of the issue",
"This is the body of the issue.",
"username",
)
},
)
@staticmethod
def read_issue(
credentials: GithubCredentials, issue_url: str
) -> tuple[str, str, str]:
api_url = issue_url.replace("github.com", "api.github.com/repos")
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
user = data.get("user", {}).get("login", "No user found")
return title, body, user
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
title, body, user = self.read_issue(
credentials,
input_data.issue_url,
)
yield "title", title
yield "body", body
yield "user", user
except Exception as e:
yield "error", f"Failed to read issue: {str(e)}"
class GithubListIssuesBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class IssueItem(TypedDict):
title: str
url: str
issue: IssueItem = SchemaField(
title="Issue", description="Issues with their title and URL"
)
error: str = SchemaField(description="Error message if listing issues failed")
def __init__(self):
super().__init__(
id="c215bfd7-0e57-4573-8f8c-f7d4963dcd74",
description="This block lists all issues for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListIssuesBlock.Input,
output_schema=GithubListIssuesBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"issue",
{
"title": "Issue 1",
"url": "https://github.com/owner/repo/issues/1",
},
)
],
test_mock={
"list_issues": lambda *args, **kwargs: [
{
"title": "Issue 1",
"url": "https://github.com/owner/repo/issues/1",
}
]
},
)
@staticmethod
def list_issues(
credentials: GithubCredentials, repo_url: str
) -> list[Output.IssueItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/issues"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
issues: list[GithubListIssuesBlock.Output.IssueItem] = [
{"title": issue["title"], "url": issue["html_url"]} for issue in data
]
return issues
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
issues = self.list_issues(
credentials,
input_data.repo_url,
)
yield from (("issue", issue) for issue in issues)
except Exception as e:
yield "error", f"Failed to list issues: {str(e)}"
class GithubAddLabelBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
placeholder="https://github.com/owner/repo/issues/1",
)
label: str = SchemaField(
description="Label to add to the issue or pull request",
placeholder="Enter the label",
)
class Output(BlockSchema):
status: str = SchemaField(description="Status of the label addition operation")
error: str = SchemaField(
description="Error message if the label addition failed"
)
def __init__(self):
super().__init__(
id="98bd6b77-9506-43d5-b669-6b9733c4b1f1",
description="This block adds a label to a specified GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAddLabelBlock.Input,
output_schema=GithubAddLabelBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"label": "bug",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Label added successfully")],
test_mock={"add_label": lambda *args, **kwargs: "Label added successfully"},
)
@staticmethod
def add_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
# Convert the provided GitHub URL to the API URL
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ "/labels"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos") + "/labels"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"labels": [label]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
return "Label added successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.add_label(
credentials,
input_data.issue_url,
input_data.label,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to add label: {str(e)}"
class GithubRemoveLabelBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
placeholder="https://github.com/owner/repo/issues/1",
)
label: str = SchemaField(
description="Label to remove from the issue or pull request",
placeholder="Enter the label",
)
class Output(BlockSchema):
status: str = SchemaField(description="Status of the label removal operation")
error: str = SchemaField(
description="Error message if the label removal failed"
)
def __init__(self):
super().__init__(
id="78f050c5-3e3a-48c0-9e5b-ef1ceca5589c",
description="This block removes a label from a specified GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubRemoveLabelBlock.Input,
output_schema=GithubRemoveLabelBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"label": "bug",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Label removed successfully")],
test_mock={
"remove_label": lambda *args, **kwargs: "Label removed successfully"
},
)
@staticmethod
def remove_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
# Convert the provided GitHub URL to the API URL
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ f"/labels/{label}"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos")
+ f"/labels/{label}"
)
# Log the constructed API URL for debugging
print(f"Constructed API URL: {api_url}")
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.delete(api_url, headers=headers)
response.raise_for_status()
return "Label removed successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.remove_label(
credentials,
input_data.issue_url,
input_data.label,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to remove label: {str(e)}"
class GithubAssignIssueBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
placeholder="https://github.com/owner/repo/issues/1",
)
assignee: str = SchemaField(
description="Username to assign to the issue",
placeholder="Enter the username",
)
class Output(BlockSchema):
status: str = SchemaField(
description="Status of the issue assignment operation"
)
error: str = SchemaField(
description="Error message if the issue assignment failed"
)
def __init__(self):
super().__init__(
id="90507c72-b0ff-413a-886a-23bbbd66f542",
description="This block assigns a user to a specified GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignIssueBlock.Input,
output_schema=GithubAssignIssueBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"assignee": "username1",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Issue assigned successfully")],
test_mock={
"assign_issue": lambda *args, **kwargs: "Issue assigned successfully"
},
)
@staticmethod
def assign_issue(
credentials: GithubCredentials,
issue_url: str,
assignee: str,
) -> str:
# Extracting repo path and issue number from the issue URL
repo_path, issue_number = issue_url.replace("https://github.com/", "").split(
"/issues/"
)
api_url = (
f"https://api.github.com/repos/{repo_path}/issues/{issue_number}/assignees"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"assignees": [assignee]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
return "Issue assigned successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.assign_issue(
credentials,
input_data.issue_url,
input_data.assignee,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to assign issue: {str(e)}"
class GithubUnassignIssueBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
placeholder="https://github.com/owner/repo/issues/1",
)
assignee: str = SchemaField(
description="Username to unassign from the issue",
placeholder="Enter the username",
)
class Output(BlockSchema):
status: str = SchemaField(
description="Status of the issue unassignment operation"
)
error: str = SchemaField(
description="Error message if the issue unassignment failed"
)
def __init__(self):
super().__init__(
id="d154002a-38f4-46c2-962d-2488f2b05ece",
description="This block unassigns a user from a specified GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignIssueBlock.Input,
output_schema=GithubUnassignIssueBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"assignee": "username1",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Issue unassigned successfully")],
test_mock={
"unassign_issue": lambda *args, **kwargs: "Issue unassigned successfully"
},
)
@staticmethod
def unassign_issue(
credentials: GithubCredentials,
issue_url: str,
assignee: str,
) -> str:
# Extracting repo path and issue number from the issue URL
repo_path, issue_number = issue_url.replace("https://github.com/", "").split(
"/issues/"
)
api_url = (
f"https://api.github.com/repos/{repo_path}/issues/{issue_number}/assignees"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"assignees": [assignee]}
response = requests.delete(api_url, headers=headers, json=data)
response.raise_for_status()
return "Issue unassigned successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.unassign_issue(
credentials,
input_data.issue_url,
input_data.assignee,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to unassign issue: {str(e)}"
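Both the assign and unassign blocks above derive the `owner/repo` path and issue number by string-splitting the issue URL. That extraction, sketched as its own helper (hypothetical name, not part of the diff):

```python
def split_issue_url(issue_url: str) -> tuple[str, int]:
    # Split "https://github.com/owner/repo/issues/7" into the repo path
    # and the issue number, as the assign/unassign blocks do inline.
    repo_path, issue_number = issue_url.replace(
        "https://github.com/", ""
    ).split("/issues/")
    return repo_path, int(issue_number)
```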


@@ -0,0 +1,596 @@
import requests
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentials,
GithubCredentialsField,
GithubCredentialsInput,
)
class GithubListPullRequestsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class PRItem(TypedDict):
title: str
url: str
pull_request: PRItem = SchemaField(
title="Pull Request", description="PRs with their title and URL"
)
error: str = SchemaField(description="Error message if listing issues failed")
def __init__(self):
super().__init__(
id="ffef3c4c-6cd0-48dd-817d-459f975219f4",
description="This block lists all pull requests for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListPullRequestsBlock.Input,
output_schema=GithubListPullRequestsBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"pull_request",
{
"title": "Pull request 1",
"url": "https://github.com/owner/repo/pull/1",
},
)
],
test_mock={
"list_prs": lambda *args, **kwargs: [
{
"title": "Pull request 1",
"url": "https://github.com/owner/repo/pull/1",
}
]
},
)
@staticmethod
def list_prs(credentials: GithubCredentials, repo_url: str) -> list[Output.PRItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/pulls"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
pull_requests: list[GithubListPullRequestsBlock.Output.PRItem] = [
{"title": pr["title"], "url": pr["html_url"]} for pr in data
]
return pull_requests
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
pull_requests = self.list_prs(
credentials,
input_data.repo_url,
)
yield from (("pull_request", pr) for pr in pull_requests)
except Exception as e:
yield "error", f"Failed to list pull requests: {str(e)}"
class GithubMakePullRequestBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
title: str = SchemaField(
description="Title of the pull request",
placeholder="Enter the pull request title",
)
body: str = SchemaField(
description="Body of the pull request",
placeholder="Enter the pull request body",
)
head: str = SchemaField(
description="The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace head with a user like this: username:branch.",
placeholder="Enter the head branch",
)
base: str = SchemaField(
description="The name of the branch you want the changes pulled into.",
placeholder="Enter the base branch",
)
class Output(BlockSchema):
number: int = SchemaField(description="Number of the created pull request")
url: str = SchemaField(description="URL of the created pull request")
error: str = SchemaField(
description="Error message if the pull request creation failed"
)
def __init__(self):
super().__init__(
id="dfb987f8-f197-4b2e-bf19-111812afd692",
description="This block creates a new pull request on a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakePullRequestBlock.Input,
output_schema=GithubMakePullRequestBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"title": "Test Pull Request",
"body": "This is a test pull request.",
"head": "feature-branch",
"base": "main",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("number", 1),
("url", "https://github.com/owner/repo/pull/1"),
],
test_mock={
"create_pr": lambda *args, **kwargs: (
1,
"https://github.com/owner/repo/pull/1",
)
},
)
@staticmethod
def create_pr(
credentials: GithubCredentials,
repo_url: str,
title: str,
body: str,
head: str,
base: str,
) -> tuple[int, str]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/pulls"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"title": title, "body": body, "head": head, "base": base}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
pr_data = response.json()
return pr_data["number"], pr_data["html_url"]
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
number, url = self.create_pr(
credentials,
input_data.repo_url,
input_data.title,
input_data.body,
input_data.head,
input_data.base,
)
yield "number", number
yield "url", url
except requests.exceptions.HTTPError as http_err:
if http_err.response.status_code == 422:
error_details = http_err.response.json()
error_message = error_details.get("message", "Unknown error")
else:
error_message = str(http_err)
yield "error", f"Failed to create pull request: {error_message}"
except Exception as e:
yield "error", f"Failed to create pull request: {str(e)}"
class GithubReadPullRequestBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
placeholder="https://github.com/owner/repo/pull/1",
)
include_pr_changes: bool = SchemaField(
description="Whether to include the changes made in the pull request",
default=False,
)
class Output(BlockSchema):
title: str = SchemaField(description="Title of the pull request")
body: str = SchemaField(description="Body of the pull request")
author: str = SchemaField(description="User who created the pull request")
changes: str = SchemaField(description="Changes made in the pull request")
error: str = SchemaField(
description="Error message if reading the pull request failed"
)
def __init__(self):
super().__init__(
id="bf94b2a4-1a30-4600-a783-a8a44ee31301",
description="This block reads the body, title, user, and changes of a specified GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadPullRequestBlock.Input,
output_schema=GithubReadPullRequestBlock.Output,
test_input={
"pr_url": "https://github.com/owner/repo/pull/1",
"include_pr_changes": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("title", "Title of the pull request"),
("body", "This is the body of the pull request."),
("author", "username"),
("changes", "List of changes made in the pull request."),
],
test_mock={
"read_pr": lambda *args, **kwargs: (
"Title of the pull request",
"This is the body of the pull request.",
"username",
),
"read_pr_changes": lambda *args, **kwargs: "List of changes made in the pull request.",
},
)
@staticmethod
def read_pr(credentials: GithubCredentials, pr_url: str) -> tuple[str, str, str]:
api_url = pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
author = data.get("user", {}).get("login", "No user found")
return title, body, author
@staticmethod
def read_pr_changes(credentials: GithubCredentials, pr_url: str) -> str:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/files"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
files = response.json()
changes = []
for file in files:
filename = file.get("filename")
patch = file.get("patch")
if filename and patch:
changes.append(f"File: {filename}\n{patch}")
return "\n\n".join(changes)
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
title, body, author = self.read_pr(
credentials,
input_data.pr_url,
)
yield "title", title
yield "body", body
yield "author", author
if input_data.include_pr_changes:
changes = self.read_pr_changes(
credentials,
input_data.pr_url,
)
yield "changes", changes
except Exception as e:
yield "error", f"Failed to read pull request: {str(e)}"
class GithubAssignPRReviewerBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
placeholder="https://github.com/owner/repo/pull/1",
)
reviewer: str = SchemaField(
description="Username of the reviewer to assign",
placeholder="Enter the reviewer's username",
)
class Output(BlockSchema):
status: str = SchemaField(
description="Status of the reviewer assignment operation"
)
error: str = SchemaField(
description="Error message if the reviewer assignment failed"
)
def __init__(self):
super().__init__(
id="c0d22c5e-e688-43e3-ba43-d5faba7927fd",
description="This block assigns a reviewer to a specified GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignPRReviewerBlock.Input,
output_schema=GithubAssignPRReviewerBlock.Output,
test_input={
"pr_url": "https://github.com/owner/repo/pull/1",
"reviewer": "reviewer_username",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Reviewer assigned successfully")],
test_mock={
"assign_reviewer": lambda *args, **kwargs: "Reviewer assigned successfully"
},
)
@staticmethod
def assign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
# Convert the PR URL to the appropriate API endpoint
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"reviewers": [reviewer]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
return "Reviewer assigned successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.assign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
)
yield "status", status
except requests.exceptions.HTTPError as http_err:
if http_err.response.status_code == 422:
error_msg = (
"Failed to assign reviewer: "
f"The reviewer '{input_data.reviewer}' may not have permission "
"or the pull request is not in a valid state. "
f"Detailed error: {http_err.response.text}"
)
else:
error_msg = f"HTTP error: {http_err} - {http_err.response.text}"
yield "error", error_msg
except Exception as e:
yield "error", f"Failed to assign reviewer: {str(e)}"
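The request that `assign_reviewer` builds can be sketched as a small pure function (illustrative helper, not part of the block). Note that reviewer requests go to the Pulls endpoint, so `/pull/` in the browser URL becomes `/pulls/` in the API path:

```python
def build_reviewer_request(pr_url: str, reviewer: str) -> tuple[str, dict]:
    # Mirrors assign_reviewer: POST target plus the JSON payload
    # expected by the requested_reviewers endpoint.
    api_url = (
        pr_url.replace("github.com", "api.github.com/repos").replace(
            "/pull/", "/pulls/"
        )
        + "/requested_reviewers"
    )
    return api_url, {"reviewers": [reviewer]}
```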
class GithubUnassignPRReviewerBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
placeholder="https://github.com/owner/repo/pull/1",
)
reviewer: str = SchemaField(
description="Username of the reviewer to unassign",
placeholder="Enter the reviewer's username",
)
class Output(BlockSchema):
status: str = SchemaField(
description="Status of the reviewer unassignment operation"
)
error: str = SchemaField(
description="Error message if the reviewer unassignment failed"
)
def __init__(self):
super().__init__(
id="9637945d-c602-4875-899a-9c22f8fd30de",
description="This block unassigns a reviewer from a specified GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignPRReviewerBlock.Input,
output_schema=GithubUnassignPRReviewerBlock.Output,
test_input={
"pr_url": "https://github.com/owner/repo/pull/1",
"reviewer": "reviewer_username",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Reviewer unassigned successfully")],
test_mock={
"unassign_reviewer": lambda *args, **kwargs: "Reviewer unassigned successfully"
},
)
@staticmethod
def unassign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
data = {"reviewers": [reviewer]}
response = requests.delete(api_url, headers=headers, json=data)
response.raise_for_status()
return "Reviewer unassigned successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.unassign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to unassign reviewer: {str(e)}"
class GithubListPRReviewersBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
placeholder="https://github.com/owner/repo/pull/1",
)
class Output(BlockSchema):
class ReviewerItem(TypedDict):
username: str
url: str
reviewer: ReviewerItem = SchemaField(
title="Reviewer",
description="Reviewers with their username and profile URL",
)
error: str = SchemaField(
description="Error message if listing reviewers failed"
)
def __init__(self):
super().__init__(
id="2646956e-96d5-4754-a3df-034017e7ed96",
description="This block lists all reviewers for a specified GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListPRReviewersBlock.Input,
output_schema=GithubListPRReviewersBlock.Output,
test_input={
"pr_url": "https://github.com/owner/repo/pull/1",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"reviewer",
{
"username": "reviewer1",
"url": "https://github.com/reviewer1",
},
)
],
test_mock={
"list_reviewers": lambda *args, **kwargs: [
{
"username": "reviewer1",
"url": "https://github.com/reviewer1",
}
]
},
)
@staticmethod
def list_reviewers(
credentials: GithubCredentials, pr_url: str
) -> list[Output.ReviewerItem]:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
reviewers: list[GithubListPRReviewersBlock.Output.ReviewerItem] = [
{"username": reviewer["login"], "url": reviewer["html_url"]}
for reviewer in data.get("users", [])
]
return reviewers
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
reviewers = self.list_reviewers(
credentials,
input_data.pr_url,
)
yield from (("reviewer", reviewer) for reviewer in reviewers)
except Exception as e:
yield "error", f"Failed to list reviewers: {str(e)}"

@@ -0,0 +1,786 @@
import base64
import requests
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentials,
GithubCredentialsField,
GithubCredentialsInput,
)
class GithubListTagsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class TagItem(TypedDict):
name: str
url: str
tag: TagItem = SchemaField(
title="Tag", description="Tags with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing tags failed")
def __init__(self):
super().__init__(
id="358924e7-9a11-4d1a-a0f2-13c67fe59e2e",
description="This block lists all tags for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListTagsBlock.Input,
output_schema=GithubListTagsBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"tag",
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/tree/v1.0.0",
},
)
],
test_mock={
"list_tags": lambda *args, **kwargs: [
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/tree/v1.0.0",
}
]
},
)
@staticmethod
def list_tags(
credentials: GithubCredentials, repo_url: str
) -> list[Output.TagItem]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/tags"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
tags: list[GithubListTagsBlock.Output.TagItem] = [
{
"name": tag["name"],
"url": f"https://github.com/{repo_path}/tree/{tag['name']}",
}
for tag in data
]
return tags
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
tags = self.list_tags(
credentials,
input_data.repo_url,
)
yield from (("tag", tag) for tag in tags)
except Exception as e:
yield "error", f"Failed to list tags: {str(e)}"
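The URL derivation in `list_tags` can be isolated as a sketch (helper name is illustrative): the tags API returns names only, and the block synthesizes a file-tree browser URL from the repo path and each tag name.

```python
def tags_to_items(repo_url: str, api_tags: list[dict]) -> list[dict]:
    # Mirrors list_tags: derive each tag's browser URL from the
    # repo path and the tag name returned by the API.
    repo_path = repo_url.replace("https://github.com/", "")
    return [
        {"name": t["name"], "url": f"https://github.com/{repo_path}/tree/{t['name']}"}
        for t in api_tags
    ]
```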
class GithubListBranchesBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class BranchItem(TypedDict):
name: str
url: str
branch: BranchItem = SchemaField(
title="Branch",
description="Branches with their name and file tree browser URL",
)
error: str = SchemaField(description="Error message if listing branches failed")
def __init__(self):
super().__init__(
id="74243e49-2bec-4916-8bf4-db43d44aead5",
description="This block lists all branches for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListBranchesBlock.Input,
output_schema=GithubListBranchesBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"branch",
{
"name": "main",
"url": "https://github.com/owner/repo/tree/main",
},
)
],
test_mock={
"list_branches": lambda *args, **kwargs: [
{
"name": "main",
"url": "https://github.com/owner/repo/tree/main",
}
]
},
)
@staticmethod
def list_branches(
credentials: GithubCredentials, repo_url: str
) -> list[Output.BranchItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/branches"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
branches: list[GithubListBranchesBlock.Output.BranchItem] = [
{"name": branch["name"], "url": branch["commit"]["url"]} for branch in data
]
return branches
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
branches = self.list_branches(
credentials,
input_data.repo_url,
)
yield from (("branch", branch) for branch in branches)
except Exception as e:
yield "error", f"Failed to list branches: {str(e)}"
class GithubListDiscussionsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
num_discussions: int = SchemaField(
description="Number of discussions to fetch", default=5
)
class Output(BlockSchema):
class DiscussionItem(TypedDict):
title: str
url: str
discussion: DiscussionItem = SchemaField(
title="Discussion", description="Discussions with their title and URL"
)
error: str = SchemaField(
description="Error message if listing discussions failed"
)
def __init__(self):
super().__init__(
id="3ef1a419-3d76-4e07-b761-de9dad4d51d7",
description="This block lists recent discussions for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListDiscussionsBlock.Input,
output_schema=GithubListDiscussionsBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"num_discussions": 3,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"discussion",
{
"title": "Discussion 1",
"url": "https://github.com/owner/repo/discussions/1",
},
)
],
test_mock={
"list_discussions": lambda *args, **kwargs: [
{
"title": "Discussion 1",
"url": "https://github.com/owner/repo/discussions/1",
}
]
},
)
@staticmethod
def list_discussions(
credentials: GithubCredentials, repo_url: str, num_discussions: int
) -> list[Output.DiscussionItem]:
repo_path = repo_url.replace("https://github.com/", "")
owner, repo = repo_path.split("/")
query = """
query($owner: String!, $repo: String!, $num: Int!) {
repository(owner: $owner, name: $repo) {
discussions(first: $num) {
nodes {
title
url
}
}
}
}
"""
variables = {"owner": owner, "repo": repo, "num": num_discussions}
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.post(
"https://api.github.com/graphql",
json={"query": query, "variables": variables},
headers=headers,
)
response.raise_for_status()
data = response.json()
discussions: list[GithubListDiscussionsBlock.Output.DiscussionItem] = [
{"title": discussion["title"], "url": discussion["url"]}
for discussion in data["data"]["repository"]["discussions"]["nodes"]
]
return discussions
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
discussions = self.list_discussions(
credentials, input_data.repo_url, input_data.num_discussions
)
yield from (("discussion", discussion) for discussion in discussions)
except Exception as e:
yield "error", f"Failed to list discussions: {str(e)}"
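Unlike the other blocks, `list_discussions` uses the GraphQL API, so the result is nested under `data -> repository -> discussions -> nodes`. A minimal sketch of that parsing step (illustrative helper, not part of the block):

```python
def parse_discussions(graphql_data: dict) -> list[dict]:
    # Mirrors list_discussions: walk the nested GraphQL payload
    # down to the list of discussion nodes.
    return [
        {"title": node["title"], "url": node["url"]}
        for node in graphql_data["data"]["repository"]["discussions"]["nodes"]
    ]
```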
class GithubListReleasesBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class ReleaseItem(TypedDict):
name: str
url: str
release: ReleaseItem = SchemaField(
title="Release",
description="Releases with their name and file tree browser URL",
)
error: str = SchemaField(description="Error message if listing releases failed")
def __init__(self):
super().__init__(
id="3460367a-6ba7-4645-8ce6-47b05d040b92",
description="This block lists all releases for a specified GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListReleasesBlock.Input,
output_schema=GithubListReleasesBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"release",
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/releases/tag/v1.0.0",
},
)
],
test_mock={
"list_releases": lambda *args, **kwargs: [
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/releases/tag/v1.0.0",
}
]
},
)
@staticmethod
def list_releases(
credentials: GithubCredentials, repo_url: str
) -> list[Output.ReleaseItem]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/releases"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
data = response.json()
releases: list[GithubListReleasesBlock.Output.ReleaseItem] = [
{"name": release["name"], "url": release["html_url"]} for release in data
]
return releases
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
releases = self.list_releases(
credentials,
input_data.repo_url,
)
yield from (("release", release) for release in releases)
except Exception as e:
yield "error", f"Failed to list releases: {str(e)}"
class GithubReadFileBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
file_path: str = SchemaField(
description="Path to the file in the repository",
placeholder="path/to/file",
)
branch: str = SchemaField(
description="Branch to read from",
placeholder="branch_name",
default="master",
)
class Output(BlockSchema):
text_content: str = SchemaField(
description="Content of the file (decoded as UTF-8 text)"
)
raw_content: str = SchemaField(
description="Raw base64-encoded content of the file"
)
size: int = SchemaField(description="The size of the file (in bytes)")
error: str = SchemaField(description="Error message if the file reading failed")
def __init__(self):
super().__init__(
id="87ce6c27-5752-4bbc-8e26-6da40a3dcfd3",
description="This block reads the content of a specified file from a GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadFileBlock.Input,
output_schema=GithubReadFileBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"file_path": "path/to/file",
"branch": "master",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("raw_content", "RmlsZSBjb250ZW50"),
("text_content", "File content"),
("size", 13),
],
test_mock={"read_file": lambda *args, **kwargs: ("RmlsZSBjb250ZW50", 13)},
)
@staticmethod
def read_file(
credentials: GithubCredentials, repo_url: str, file_path: str, branch: str
) -> tuple[str, int]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/contents/{file_path}?ref={branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
content = response.json()
if isinstance(content, list):
# Multiple entries of different types exist at this path
if not (file := next((f for f in content if f["type"] == "file"), None)):
raise TypeError("Not a file")
content = file
if content["type"] != "file":
raise TypeError("Not a file")
return content["content"], content["size"]
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
raw_content, size = self.read_file(
credentials,
input_data.repo_url,
input_data.file_path.lstrip("/"),
input_data.branch,
)
yield "raw_content", raw_content
yield "text_content", base64.b64decode(raw_content).decode("utf-8")
yield "size", size
except Exception as e:
yield "error", f"Failed to read file: {str(e)}"
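The contents API delivers file bodies base64-encoded, which is why `run` decodes `raw_content` before yielding `text_content`. The decode step in isolation, using the fixture value from the block's test above:

```python
import base64

# read_file returns the content exactly as the contents API delivers it:
# base64-encoded. text_content is the UTF-8 decode of that payload.
raw_content = "RmlsZSBjb250ZW50"  # fixture value from the block's test above
text_content = base64.b64decode(raw_content).decode("utf-8")
# text_content == "File content"
```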
class GithubReadFolderBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
folder_path: str = SchemaField(
description="Path to the folder in the repository",
placeholder="path/to/folder",
)
branch: str = SchemaField(
description="Branch name to read from (defaults to master)",
placeholder="branch_name",
default="master",
)
class Output(BlockSchema):
class DirEntry(TypedDict):
name: str
path: str
class FileEntry(TypedDict):
name: str
path: str
size: int
file: FileEntry = SchemaField(description="Files in the folder")
dir: DirEntry = SchemaField(description="Directories in the folder")
error: str = SchemaField(
description="Error message if reading the folder failed"
)
def __init__(self):
super().__init__(
id="1355f863-2db3-4d75-9fba-f91e8a8ca400",
description="This block reads the content of a specified folder from a GitHub repository.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadFolderBlock.Input,
output_schema=GithubReadFolderBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"folder_path": "path/to/folder",
"branch": "master",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"file",
{
"name": "file1.txt",
"path": "path/to/folder/file1.txt",
"size": 1337,
},
),
("dir", {"name": "dir2", "path": "path/to/folder/dir2"}),
],
test_mock={
"read_folder": lambda *args, **kwargs: (
[
{
"name": "file1.txt",
"path": "path/to/folder/file1.txt",
"size": 1337,
}
],
[{"name": "dir2", "path": "path/to/folder/dir2"}],
)
},
)
@staticmethod
def read_folder(
credentials: GithubCredentials, repo_url: str, folder_path: str, branch: str
) -> tuple[list[Output.FileEntry], list[Output.DirEntry]]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/contents/{folder_path}?ref={branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
content = response.json()
if isinstance(content, list):
# Multiple entries of different types exist at this path
if not (dir := next((d for d in content if d["type"] == "dir"), None)):
raise TypeError("Not a folder")
content = dir
if content["type"] != "dir":
raise TypeError("Not a folder")
return (
[
GithubReadFolderBlock.Output.FileEntry(
name=entry["name"],
path=entry["path"],
size=entry["size"],
)
for entry in content["entries"]
if entry["type"] == "file"
],
[
GithubReadFolderBlock.Output.DirEntry(
name=entry["name"],
path=entry["path"],
)
for entry in content["entries"]
if entry["type"] == "dir"
],
)
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
files, dirs = self.read_folder(
credentials,
input_data.repo_url,
input_data.folder_path.lstrip("/"),
input_data.branch,
)
yield from (("file", file) for file in files)
yield from (("dir", dir) for dir in dirs)
except Exception as e:
yield "error", f"Failed to read folder: {str(e)}"
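The tail of `read_folder` partitions a contents listing into file and directory entries. That step can be sketched on its own (illustrative helper name):

```python
def partition_entries(entries: list[dict]) -> tuple[list[dict], list[dict]]:
    # Mirrors read_folder: files keep (name, path, size),
    # directories keep (name, path).
    files = [
        {"name": e["name"], "path": e["path"], "size": e["size"]}
        for e in entries
        if e["type"] == "file"
    ]
    dirs = [
        {"name": e["name"], "path": e["path"]}
        for e in entries
        if e["type"] == "dir"
    ]
    return files, dirs
```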
class GithubMakeBranchBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
new_branch: str = SchemaField(
description="Name of the new branch",
placeholder="new_branch_name",
)
source_branch: str = SchemaField(
description="Name of the source branch",
placeholder="source_branch_name",
)
class Output(BlockSchema):
status: str = SchemaField(description="Status of the branch creation operation")
error: str = SchemaField(
description="Error message if the branch creation failed"
)
def __init__(self):
super().__init__(
id="944cc076-95e7-4d1b-b6b6-b15d8ee5448d",
description="This block creates a new branch from a specified source branch.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeBranchBlock.Input,
output_schema=GithubMakeBranchBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"new_branch": "new_branch_name",
"source_branch": "source_branch_name",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Branch created successfully")],
test_mock={
"create_branch": lambda *args, **kwargs: "Branch created successfully"
},
)
@staticmethod
def create_branch(
credentials: GithubCredentials,
repo_url: str,
new_branch: str,
source_branch: str,
) -> str:
repo_path = repo_url.replace("https://github.com/", "")
ref_api_url = (
f"https://api.github.com/repos/{repo_path}/git/refs/heads/{source_branch}"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(ref_api_url, headers=headers)
response.raise_for_status()
sha = response.json()["object"]["sha"]
create_branch_api_url = f"https://api.github.com/repos/{repo_path}/git/refs"
data = {"ref": f"refs/heads/{new_branch}", "sha": sha}
response = requests.post(create_branch_api_url, headers=headers, json=data)
response.raise_for_status()
return "Branch created successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.create_branch(
credentials,
input_data.repo_url,
input_data.new_branch,
input_data.source_branch,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to create branch: {str(e)}"
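`create_branch` makes two HTTP calls: a GET to learn the source branch's head SHA, then a POST creating a new ref at that SHA. A sketch of the URLs and payload involved (helper name and the example SHA are illustrative):

```python
def branch_creation_requests(
    repo_url: str, new_branch: str, source_branch: str, source_sha: str
) -> tuple[str, str, dict]:
    # Mirrors create_branch's two-step flow:
    #   1. GET get_ref_url to read the source branch's head commit SHA
    #   2. POST payload to create_ref_url to create the new branch ref
    repo_path = repo_url.replace("https://github.com/", "")
    get_ref_url = (
        f"https://api.github.com/repos/{repo_path}/git/refs/heads/{source_branch}"
    )
    create_ref_url = f"https://api.github.com/repos/{repo_path}/git/refs"
    payload = {"ref": f"refs/heads/{new_branch}", "sha": source_sha}
    return get_ref_url, create_ref_url, payload
```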
class GithubDeleteBranchBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
branch: str = SchemaField(
description="Name of the branch to delete",
placeholder="branch_name",
)
class Output(BlockSchema):
status: str = SchemaField(description="Status of the branch deletion operation")
error: str = SchemaField(
description="Error message if the branch deletion failed"
)
def __init__(self):
super().__init__(
id="0d4130f7-e0ab-4d55-adc3-0a40225e80f4",
description="This block deletes a specified branch.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubDeleteBranchBlock.Input,
output_schema=GithubDeleteBranchBlock.Output,
test_input={
"repo_url": "https://github.com/owner/repo",
"branch": "branch_name",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Branch deleted successfully")],
test_mock={
"delete_branch": lambda *args, **kwargs: "Branch deleted successfully"
},
)
@staticmethod
def delete_branch(
credentials: GithubCredentials, repo_url: str, branch: str
) -> str:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/git/refs/heads/{branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.delete(api_url, headers=headers)
response.raise_for_status()
return "Branch deleted successfully"
def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
status = self.delete_branch(
credentials,
input_data.repo_url,
input_data.branch,
)
yield "status", status
except Exception as e:
yield "error", f"Failed to delete branch: {str(e)}"

@@ -37,7 +37,7 @@ class SendWebRequestBlock(Block):
output_schema=SendWebRequestBlock.Output,
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
if isinstance(input_data.body, str):
input_data.body = json.loads(input_data.body)

@@ -21,6 +21,7 @@ class ListIteratorBlock(Block):
id="f8e7d6c5-b4a3-2c1d-0e9f-8g7h6i5j4k3l",
input_schema=ListIteratorBlock.Input,
output_schema=ListIteratorBlock.Output,
description="Iterates over a list of items and outputs each item with its index.",
categories={BlockCategory.LOGIC},
test_input={"items": [1, "two", {"three": 3}, [4, 5]]},
test_output=[
@@ -31,6 +32,6 @@ class ListIteratorBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
for index, item in enumerate(input_data.items):
yield "item", (index, item)

@@ -30,6 +30,8 @@ class ModelMetadata(NamedTuple):
class LlmModel(str, Enum):
# OpenAI models
O1_PREVIEW = "o1-preview"
O1_MINI = "o1-mini"
GPT4O_MINI = "gpt-4o-mini"
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
@@ -57,6 +59,8 @@ class LlmModel(str, Enum):
MODEL_METADATA = {
LlmModel.O1_PREVIEW: ModelMetadata("openai", 32000, cost_factor=60),
LlmModel.O1_MINI: ModelMetadata("openai", 62000, cost_factor=30),
LlmModel.GPT4O_MINI: ModelMetadata("openai", 128000, cost_factor=10),
LlmModel.GPT4O: ModelMetadata("openai", 128000, cost_factor=12),
LlmModel.GPT4_TURBO: ModelMetadata("openai", 128000, cost_factor=11),
@@ -84,8 +88,16 @@ for model in LlmModel:
class AIStructuredResponseGeneratorBlock(Block):
class Input(BlockSchema):
prompt: str
expected_format: dict[str, str]
model: LlmModel = LlmModel.GPT4_TURBO
expected_format: dict[str, str] = SchemaField(
description="Expected format of the response. If provided, the response will be validated against this format. "
"The keys should be the expected fields in the response, and the values should be the description of the field.",
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4_TURBO,
description="The language model to use for answering the prompt.",
advanced=False,
)
api_key: BlockSecret = SecretField(value="")
sys_prompt: str = ""
retry: int = 3
@@ -132,7 +144,18 @@ class AIStructuredResponseGeneratorBlock(Block):
if provider == "openai":
openai.api_key = api_key
response_format = {"type": "json_object"} if json_format else None
response_format = None
if model in [LlmModel.O1_MINI, LlmModel.O1_PREVIEW]:
sys_messages = [p["content"] for p in prompt if p["role"] == "system"]
usr_messages = [p["content"] for p in prompt if p["role"] != "system"]
prompt = [
{"role": "user", "content": "\n".join(sys_messages)},
{"role": "user", "content": "\n".join(usr_messages)},
]
elif json_format:
response_format = {"type": "json_object"}
response = openai.chat.completions.create(
model=model.value,
messages=prompt, # type: ignore
@@ -185,7 +208,7 @@ class AIStructuredResponseGeneratorBlock(Block):
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
prompt = []
def trim_prompt(s: str) -> str:
@@ -207,11 +230,11 @@ class AIStructuredResponseGeneratorBlock(Block):
format_prompt = ",\n ".join(expected_format)
sys_prompt = trim_prompt(
f"""
|Reply in json format:
|{{
| {format_prompt}
|}}
"""
|Reply strictly only in the following JSON format:
|{{
| {format_prompt}
|}}
"""
)
prompt.append({"role": "system", "content": sys_prompt})
@@ -289,7 +312,12 @@ class AIStructuredResponseGeneratorBlock(Block):
class AITextGeneratorBlock(Block):
class Input(BlockSchema):
prompt: str
model: LlmModel = LlmModel.GPT4_TURBO
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4_TURBO,
description="The language model to use for answering the prompt.",
advanced=False,
)
api_key: BlockSecret = SecretField(value="")
sys_prompt: str = ""
retry: int = 3
@@ -323,7 +351,7 @@ class AITextGeneratorBlock(Block):
raise RuntimeError(output_data)
raise ValueError("Failed to get a response from the LLM.")
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
object_input_data = AIStructuredResponseGeneratorBlock.Input(
**{attr: getattr(input_data, attr) for attr in input_data.model_fields},
@@ -334,10 +362,23 @@ class AITextGeneratorBlock(Block):
yield "error", str(e)
class SummaryStyle(Enum):
CONCISE = "concise"
DETAILED = "detailed"
BULLET_POINTS = "bullet points"
NUMBERED_LIST = "numbered list"
class AITextSummarizerBlock(Block):
class Input(BlockSchema):
text: str
model: LlmModel = LlmModel.GPT4_TURBO
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4_TURBO,
description="The language model to use for summarizing the text.",
)
focus: str = "general information"
style: SummaryStyle = SummaryStyle.CONCISE
api_key: BlockSecret = SecretField(value="")
# TODO: Make this dynamic
max_tokens: int = 4000 # Adjust based on the model's context window
@@ -365,7 +406,7 @@ class AITextSummarizerBlock(Block):
},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
for output in self._run(input_data):
yield output
@@ -408,7 +449,7 @@ class AITextSummarizerBlock(Block):
raise ValueError("Failed to get a response from the LLM.")
def _summarize_chunk(self, chunk: str, input_data: Input) -> str:
prompt = f"Summarize the following text concisely:\n\n{chunk}"
prompt = f"Summarize the following text in a {input_data.style} form. Focus your summary on the topic of `{input_data.focus}` if present, otherwise just provide a general summary:\n\n```{chunk}```"
llm_response = self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
@@ -422,13 +463,10 @@ class AITextSummarizerBlock(Block):
return llm_response["summary"]
def _combine_summaries(self, summaries: list[str], input_data: Input) -> str:
combined_text = " ".join(summaries)
combined_text = "\n\n".join(summaries)
if len(combined_text.split()) <= input_data.max_tokens:
prompt = (
"Provide a final, concise summary of the following summaries:\n\n"
+ combined_text
)
prompt = f"Provide a final summary of the following section summaries in a {input_data.style} form, focus your summary on the topic of `{input_data.focus}` if present:\n\n ```{combined_text}```\n\n Just respond with the final_summary in the format specified."
llm_response = self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
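The `_summarize_chunk`/`_combine_summaries` pair above follows a simple map-reduce pattern. A minimal standalone sketch of the flow (the helper names and signatures here are illustrative, not the block's API):

```python
def summarize(text: str, chunk_size: int, summarize_chunk, combine) -> str:
    # map: split the text into word-bounded chunks and summarize each one
    words = text.split()
    chunks = [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]
    summaries = [summarize_chunk(chunk) for chunk in chunks]
    # reduce: combine the per-chunk summaries into one final summary
    return combine(summaries)

# dummy stand-ins for the LLM calls, to show the data flow
result = summarize(
    "one two three four",
    chunk_size=2,
    summarize_chunk=lambda c: c.upper(),
    combine=lambda s: " | ".join(s),
)
assert result == "ONE TWO | THREE FOUR"
```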
@@ -474,6 +512,7 @@ class AIConversationBlock(Block):
description="List of messages in the conversation.", min_length=1
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4_TURBO,
description="The language model to use for the conversation.",
)
@@ -564,7 +603,7 @@ class AIConversationBlock(Block):
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
api_key = (
input_data.api_key.get_secret_value()


@@ -39,6 +39,7 @@ class CalculatorBlock(Block):
id="b1ab9b19-67a6-406d-abf5-2dba76d00c79",
input_schema=CalculatorBlock.Input,
output_schema=CalculatorBlock.Output,
description="Performs a mathematical operation on two numbers.",
categories={BlockCategory.LOGIC},
test_input={
"operation": Operation.ADD.value,
@@ -51,7 +52,7 @@ class CalculatorBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
operation = input_data.operation
a = input_data.a
b = input_data.b
@@ -98,6 +99,7 @@ class CountItemsBlock(Block):
id="3c9c2f42-b0c3-435f-ba35-05f7a25c772a",
input_schema=CountItemsBlock.Input,
output_schema=CountItemsBlock.Output,
description="Counts the number of items in a collection.",
categories={BlockCategory.LOGIC},
test_input={"collection": [1, 2, 3, 4, 5]},
test_output=[
@@ -105,7 +107,7 @@ class CountItemsBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
collection = input_data.collection
try:


@@ -69,6 +69,7 @@ class PublishToMediumBlock(Block):
id="3f7b2dcb-4a78-4e3f-b0f1-88132e1b89df",
input_schema=PublishToMediumBlock.Input,
output_schema=PublishToMediumBlock.Output,
description="Publishes a post to Medium.",
categories={BlockCategory.SOCIAL},
test_input={
"author_id": "1234567890abcdef",
@@ -136,7 +137,7 @@ class PublishToMediumBlock(Block):
return response.json()
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
response = self.create_post(
input_data.api_key.get_secret_value(),


@@ -116,7 +116,7 @@ class GetRedditPostsBlock(Block):
subreddit = client.subreddit(input_data.subreddit)
return subreddit.new(limit=input_data.post_limit)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
current_time = datetime.now(tz=timezone.utc)
for post in self.get_posts(input_data):
if input_data.last_minutes:
@@ -167,5 +167,5 @@ class PostRedditCommentBlock(Block):
comment = submission.reply(comment.comment)
return comment.id # type: ignore
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "comment_id", self.reply_post(input_data.creds, input_data.data)


@@ -46,6 +46,7 @@ class ReadRSSFeedBlock(Block):
id="c6731acb-4105-4zp1-bc9b-03d0036h370g",
input_schema=ReadRSSFeedBlock.Input,
output_schema=ReadRSSFeedBlock.Output,
description="Reads RSS feed entries from a given URL.",
categories={BlockCategory.INPUT},
test_input={
"rss_url": "https://example.com/rss",
@@ -86,7 +87,7 @@ class ReadRSSFeedBlock(Block):
def parse_feed(url: str) -> dict[str, Any]:
return feedparser.parse(url) # type: ignore
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
keep_going = True
start_time = datetime.now(timezone.utc) - timedelta(
minutes=input_data.time_period
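The time-window filtering above reduces to timezone-aware datetime arithmetic; a standalone illustration:

```python
from datetime import datetime, timedelta, timezone

time_period = 60  # minutes, mirroring the block's time_period input
start_time = datetime.now(timezone.utc) - timedelta(minutes=time_period)

# an entry published 5 minutes ago falls inside a 60-minute window
entry_time = datetime.now(timezone.utc) - timedelta(minutes=5)
assert entry_time > start_time
```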


@@ -93,7 +93,7 @@ class DataSamplingBlock(Block):
)
self.accumulated_data = []
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.accumulate:
if isinstance(input_data.data, dict):
self.accumulated_data.append(input_data.data)


@@ -35,7 +35,7 @@ class GetWikipediaSummaryBlock(Block, GetRequest):
test_mock={"get_request": lambda url, json: {"extract": "summary content"}},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
topic = input_data.topic
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
@@ -72,7 +72,7 @@ class SearchTheWebBlock(Block, GetRequest):
test_mock={"get_request": lambda url, json: "search content"},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# Encode the search query
encoded_query = quote(input_data.query)
@@ -113,7 +113,7 @@ class ExtractWebsiteContentBlock(Block, GetRequest):
test_mock={"get_request": lambda url, json: "scraped content"},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# Prepend the Jina-ai Reader URL to the input URL
jina_url = f"https://r.jina.ai/{input_data.url}"
@@ -148,6 +148,7 @@ class GetWeatherInformationBlock(Block, GetRequest):
id="f7a8b2c3-6d4e-5f8b-9e7f-6d4e5f8b9e7f",
input_schema=GetWeatherInformationBlock.Input,
output_schema=GetWeatherInformationBlock.Output,
description="Retrieves weather information for a specified location using OpenWeatherMap API.",
test_input={
"location": "New York",
"api_key": "YOUR_API_KEY",
@@ -166,7 +167,7 @@ class GetWeatherInformationBlock(Block, GetRequest):
},
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
units = "metric" if input_data.use_celsius else "imperial"
api_key = input_data.api_key.get_secret_value()


@@ -105,7 +105,7 @@ class CreateTalkingAvatarVideoBlock(Block):
response.raise_for_status()
return response.json()
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# Create the clip
payload = {


@@ -25,9 +25,7 @@ class MatchTextPatternBlock(Block):
def __init__(self):
super().__init__(
id="3060088f-6ed9-4928-9ba7-9c92823a7ccd",
description="This block matches the given text with the pattern (regex) and"
" forwards the provided data to positive (if matching) or"
" negative (if not matching) output.",
description="Matches text against a regex pattern and forwards data to positive or negative output based on the match.",
categories={BlockCategory.TEXT},
input_schema=MatchTextPatternBlock.Input,
output_schema=MatchTextPatternBlock.Output,
@@ -45,7 +43,7 @@ class MatchTextPatternBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
output = input_data.data or input_data.text
flags = 0
if not input_data.case_sensitive:
@@ -97,7 +95,7 @@ class ExtractTextInformationBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
flags = 0
if not input_data.case_sensitive:
flags = flags | re.IGNORECASE
@@ -147,7 +145,7 @@ class FillTextTemplateBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
# For python.format compatibility: replace all {...} with {{..}}.
# But avoid replacing {{...}} to {{{...}}}.
fmt = re.sub(r"(?<!{){[ a-zA-Z0-9_]+}", r"{\g<0>}", input_data.format)
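The substitution above doubles single-brace placeholders while leaving already-doubled ones alone, thanks to the negative lookbehind. A quick standalone check:

```python
import re

# "{name}" gains an extra brace pair; "{{name}}" is untouched because no
# match may start immediately after another "{"
fmt = re.sub(r"(?<!{){[ a-zA-Z0-9_]+}", r"{\g<0>}", "Hi {name} and {{name}}")
assert fmt == "Hi {{name}} and {{name}}"
```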
@@ -180,6 +178,6 @@ class CombineTextsBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
combined_text = input_data.delimiter.join(input_data.input)
yield "output", combined_text


@@ -27,7 +27,7 @@ class GetCurrentTimeBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
current_time = time.strftime("%H:%M:%S")
yield "time", current_time
@@ -59,7 +59,7 @@ class GetCurrentDateBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
offset = int(input_data.offset)
except ValueError:
@@ -96,7 +96,7 @@ class GetCurrentDateAndTimeBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
current_date_time = time.strftime("%Y-%m-%d %H:%M:%S")
yield "date_time", current_date_time
@@ -129,7 +129,7 @@ class CountdownTimerBlock(Block):
],
)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
seconds = int(input_data.seconds)
minutes = int(input_data.minutes)
hours = int(input_data.hours)


@@ -26,6 +26,7 @@ class TranscribeYouTubeVideoBlock(Block):
id="f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c",
input_schema=TranscribeYouTubeVideoBlock.Input,
output_schema=TranscribeYouTubeVideoBlock.Output,
description="Transcribes a YouTube video.",
categories={BlockCategory.SOCIAL},
test_input={"youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"},
test_output=[
@@ -62,7 +63,7 @@ class TranscribeYouTubeVideoBlock(Block):
def get_transcript(video_id: str):
return YouTubeTranscriptApi.get_transcript(video_id)
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id
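`extract_video_id` itself is not shown in this hunk; a hypothetical sketch of what such a helper can look like (the regex and function body are assumptions, not the block's actual code):

```python
import re

def extract_video_id(url: str) -> str:
    # handles watch?v=... and youtu.be/... style URLs (illustrative only)
    match = re.search(r"(?:v=|youtu\.be/)([\w-]{11})", url)
    if not match:
        raise ValueError(f"Could not extract video ID from: {url}")
    return match.group(1)

assert extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ") == "dQw4w9WgXcQ"
```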


@@ -1,26 +1,35 @@
import inspect
from abc import ABC, abstractmethod
from enum import Enum
from typing import Any, ClassVar, Generator, Generic, Type, TypeVar, cast
from typing import (
Any,
ClassVar,
Generator,
Generic,
Optional,
Type,
TypeVar,
cast,
get_origin,
)
import jsonref
import jsonschema
from autogpt_libs.supabase_integration_credentials_store.types import Credentials
from prisma.models import AgentBlock
from pydantic import BaseModel
from backend.data.model import ContributorDetails
from backend.util import json
from .model import CREDENTIALS_FIELD_NAME, ContributorDetails, CredentialsMetaInput
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
BlockOutput = Generator[BlockData, None, None] # Output: 1 output pin produces n data.
CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict.
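These aliases encode the streaming contract: a block's `run` is a generator of `(output_name, data)` tuples, and a completed stream collapses into a dict of lists per output pin. A standalone illustration:

```python
from typing import Any, Generator

BlockData = tuple[str, Any]
BlockOutput = Generator[BlockData, None, None]

def run() -> BlockOutput:
    # one output pin ("result") producing two data items
    yield "result", 1
    yield "result", 2

completed: dict[str, list[Any]] = {}
for name, data in run():
    completed.setdefault(name, []).append(data)

assert completed == {"result": [1, 2]}  # the CompletedBlockOutput shape
```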
class BlockUIType(Enum):
"""
The type of Node UI to be displayed in the builder for this block.
"""
class BlockType(Enum):
STANDARD = "Standard"
INPUT = "Input"
OUTPUT = "Output"
@@ -36,6 +45,7 @@ class BlockCategory(Enum):
INPUT = "Block that interacts with input of the graph."
OUTPUT = "Block that interacts with output of the graph."
LOGIC = "Programming logic to control the flow of your agent"
DEVELOPER_TOOLS = "Developer tools such as GitHub blocks."
def dict(self) -> dict[str, str]:
return {"category": self.name, "description": self.value}
@@ -49,7 +59,7 @@ class BlockSchema(BaseModel):
if cls.cached_jsonschema:
return cls.cached_jsonschema
model = jsonref.replace_refs(cls.model_json_schema())
model = jsonref.replace_refs(cls.model_json_schema(), merge_props=True)
def ref_to_dict(obj):
if isinstance(obj, dict):
@@ -122,6 +132,46 @@ class BlockSchema(BaseModel):
if field_info.is_required()
}
@classmethod
def __pydantic_init_subclass__(cls, **kwargs):
"""Validates the schema definition. Rules:
- Only one `CredentialsMetaInput` field may be present.
- This field MUST be called `credentials`.
- A field that is called `credentials` MUST be a `CredentialsMetaInput`.
"""
super().__pydantic_init_subclass__(**kwargs)
credentials_fields = [
field_name
for field_name, info in cls.model_fields.items()
if (
inspect.isclass(info.annotation)
and issubclass(
get_origin(info.annotation) or info.annotation,
CredentialsMetaInput,
)
)
]
if len(credentials_fields) > 1:
raise ValueError(
f"{cls.__qualname__} can only have one CredentialsMetaInput field"
)
elif (
len(credentials_fields) == 1
and credentials_fields[0] != CREDENTIALS_FIELD_NAME
):
raise ValueError(
f"CredentialsMetaInput field on {cls.__qualname__} "
"must be named 'credentials'"
)
elif (
len(credentials_fields) == 0
and CREDENTIALS_FIELD_NAME in cls.model_fields.keys()
):
raise TypeError(
f"Field 'credentials' on {cls.__qualname__} "
f"must be of type {CredentialsMetaInput.__name__}"
)
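The `get_origin(info.annotation) or info.annotation` idiom in the validator lets the `issubclass` check accept both a bare class and a subscripted generic alias. A standalone demonstration with stand-in types:

```python
from typing import Generic, TypeVar, get_origin

T = TypeVar("T")

class Meta(Generic[T]): ...

# get_origin returns the unsubscripted origin for generic aliases,
# and None for bare classes
assert get_origin(Meta[int]) is Meta
assert get_origin(Meta) is None

# so this pattern accepts Meta and Meta[int] alike
for annotation in (Meta, Meta[int]):
    assert issubclass(get_origin(annotation) or annotation, Meta)
```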
BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchema)
BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchema)
@@ -143,9 +193,10 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockData | list[BlockData] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials] = None,
disabled: bool = False,
static_output: bool = False,
ui_type: BlockUIType = BlockUIType.STANDARD,
block_type: BlockType = BlockType.STANDARD,
):
"""
Initialize the block with the given schema.
@@ -170,15 +221,16 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.test_input = test_input
self.test_output = test_output
self.test_mock = test_mock
self.test_credentials = test_credentials
self.description = description
self.categories = categories or set()
self.contributors = contributors or set()
self.disabled = disabled
self.static_output = static_output
self.ui_type = ui_type
self.block_type = block_type
@abstractmethod
def run(self, input_data: BlockSchemaInputType) -> BlockOutput:
def run(self, input_data: BlockSchemaInputType, **kwargs) -> BlockOutput:
"""
Run the block with the given input data.
Args:
@@ -206,16 +258,18 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
contributor.model_dump() for contributor in self.contributors
],
"staticOutput": self.static_output,
"uiType": self.ui_type.value,
"uiType": self.block_type.value,
}
def execute(self, input_data: BlockInput) -> BlockOutput:
def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
raise ValueError(
f"Unable to execute block with invalid input data: {error}"
)
for output_name, output_data in self.run(self.input_schema(**input_data)):
for output_name, output_data in self.run(
self.input_schema(**input_data), **kwargs
):
if error := self.output_schema.validate_field(output_name, output_data):
raise ValueError(f"Block produced an invalid output data: {error}")
yield output_name, output_data
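The effect of the new `**kwargs` plumbing: anything `execute` receives beyond the input data (here, credentials) reaches `run` unchanged. A minimal standalone sketch, not the real `Block` class:

```python
def run(input_data: dict, **kwargs):
    # extra execution kwargs (e.g. credentials) arrive alongside the input
    yield "who", kwargs.get("credentials", "anonymous")

def execute(input_data: dict, **kwargs):
    # validation elided; kwargs are forwarded to run() verbatim
    yield from run(input_data, **kwargs)

assert list(execute({}, credentials="alice")) == [("who", "alice")]
assert list(execute({})) == [("who", "anonymous")]
```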


@@ -3,6 +3,7 @@ from datetime import datetime, timezone
from multiprocessing import Manager
from typing import Any, Generic, TypeVar
from autogpt_libs.supabase_integration_credentials_store.types import Credentials
from prisma.enums import AgentExecutionStatus
from prisma.models import (
AgentGraphExecution,
@@ -25,6 +26,7 @@ class GraphExecution(BaseModel):
graph_exec_id: str
graph_id: str
start_node_execs: list["NodeExecution"]
node_input_credentials: dict[str, Credentials] # dict[node_id, Credentials]
class NodeExecution(BaseModel):


@@ -1,8 +1,9 @@
from __future__ import annotations
import logging
from typing import Any, Callable, ClassVar, Optional, TypeVar
from typing import Any, Callable, ClassVar, Generic, Optional, TypeVar
from autogpt_libs.supabase_integration_credentials_store.types import CredentialsType
from pydantic import BaseModel, Field, GetCoreSchemaHandler
from pydantic_core import (
CoreSchema,
@@ -136,5 +137,50 @@ def SchemaField(
)
CP = TypeVar("CP", bound=str)
CT = TypeVar("CT", bound=CredentialsType)
CREDENTIALS_FIELD_NAME = "credentials"
class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
id: str
title: Optional[str] = None
provider: CP
type: CT
def CredentialsField(
provider: CP,
supported_credential_types: set[CT],
required_scopes: set[str] = set(),
*,
title: Optional[str] = None,
description: Optional[str] = None,
**kwargs,
) -> CredentialsMetaInput[CP, CT]:
"""
`CredentialsField` must and can only be used on fields named `credentials`.
This is enforced by the `BlockSchema` base class.
"""
json_extra = {
k: v
for k, v in {
"credentials_provider": provider,
"credentials_scopes": list(required_scopes) or None, # omit if empty
"credentials_types": list(supported_credential_types),
}.items()
if v is not None
}
return Field(
title=title,
description=description,
json_schema_extra=json_extra,
**kwargs,
)
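The comprehension in `CredentialsField` drops any `None`-valued key, so an empty scope set never appears in the generated JSON schema. The pattern in isolation (provider and type values here are examples):

```python
required_scopes: set[str] = set()  # no scopes requested

json_extra = {
    k: v
    for k, v in {
        "credentials_provider": "github",
        "credentials_scopes": list(required_scopes) or None,  # [] -> None -> omitted
        "credentials_types": ["api_key"],
    }.items()
    if v is not None
}

assert json_extra == {
    "credentials_provider": "github",
    "credentials_types": ["api_key"],
}
```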
class ContributorDetails(BaseModel):
name: str = Field(title="Name", description="The name of the contributor.")


@@ -9,14 +9,16 @@ import threading
from concurrent.futures import Future, ProcessPoolExecutor
from contextlib import contextmanager
from multiprocessing.pool import AsyncResult, Pool
from typing import TYPE_CHECKING, Any, Coroutine, Generator, TypeVar
from typing import TYPE_CHECKING, Any, Coroutine, Generator, TypeVar, cast
from autogpt_libs.supabase_integration_credentials_store.types import Credentials
from pydantic import BaseModel
if TYPE_CHECKING:
from backend.server.rest_api import AgentServer
from backend.blocks.basic import AgentInputBlock
from backend.data import db
from backend.data.block import Block, BlockData, BlockInput, get_block
from backend.data.block import Block, BlockData, BlockInput, BlockType, get_block
from backend.data.credit import get_user_credit_model
from backend.data.execution import (
ExecutionQueue,
@@ -37,6 +39,7 @@ from backend.data.execution import (
upsert_execution_output,
)
from backend.data.graph import Graph, Link, Node, get_graph, get_node
from backend.data.model import CREDENTIALS_FIELD_NAME, CredentialsMetaInput
from backend.util import json
from backend.util.decorator import error_logged, time_measured
from backend.util.logging import configure_logging
@@ -100,6 +103,7 @@ def execute_node(
loop: asyncio.AbstractEventLoop,
api_client: "AgentServer",
data: NodeExecution,
input_credentials: Credentials | None = None,
execution_stats: dict[str, Any] | None = None,
) -> ExecutionStream:
"""
@@ -159,13 +163,19 @@ def execute_node(
update_execution(ExecutionStatus.RUNNING)
user_credit = get_user_credit_model()
extra_exec_kwargs = {}
if input_credentials:
extra_exec_kwargs["credentials"] = input_credentials
output_size = 0
try:
credit = wait(user_credit.get_or_refill_credit(user_id))
if credit < 0:
raise ValueError(f"Insufficient credit: {credit}")
for output_name, output_data in node_block.execute(input_data):
for output_name, output_data in node_block.execute(
input_data, **extra_exec_kwargs
):
output_size += len(json.dumps(output_data))
log_metadata.info("Node produced output", output_name=output_data)
wait(upsert_execution_output(node_exec_id, output_name, output_data))
@@ -460,7 +470,10 @@ class Executor:
@classmethod
@error_logged
def on_node_execution(
cls, q: ExecutionQueue[NodeExecution], node_exec: NodeExecution
cls,
q: ExecutionQueue[NodeExecution],
node_exec: NodeExecution,
input_credentials: Credentials | None,
):
log_metadata = LogMetadata(
user_id=node_exec.user_id,
@@ -473,7 +486,7 @@ class Executor:
execution_stats = {}
timing_info, _ = cls._on_node_execution(
q, node_exec, log_metadata, execution_stats
q, node_exec, input_credentials, log_metadata, execution_stats
)
execution_stats["walltime"] = timing_info.wall_time
execution_stats["cputime"] = timing_info.cpu_time
@@ -488,13 +501,14 @@ class Executor:
cls,
q: ExecutionQueue[NodeExecution],
node_exec: NodeExecution,
input_credentials: Credentials | None,
log_metadata: LogMetadata,
stats: dict[str, Any] | None = None,
):
try:
log_metadata.info(f"Start node execution {node_exec.node_exec_id}")
for execution in execute_node(
cls.loop, cls.agent_server_client, node_exec, stats
cls.loop, cls.agent_server_client, node_exec, input_credentials, stats
):
q.add(execution)
log_metadata.info(f"Finished node execution {node_exec.node_exec_id}")
@@ -624,7 +638,11 @@ class Executor:
)
running_executions[exec_data.node_id] = cls.executor.apply_async(
cls.on_node_execution,
(queue, exec_data),
(
queue,
exec_data,
graph_exec.node_input_credentials.get(exec_data.node_id),
),
callback=make_exec_callback(exec_data),
)
@@ -660,11 +678,17 @@ class ExecutionManager(AppService):
def __init__(self):
super().__init__(port=Config().execution_manager_port)
self.use_db = True
self.use_supabase = True
self.pool_size = Config().num_graph_workers
self.queue = ExecutionQueue[GraphExecution]()
self.active_graph_runs: dict[str, tuple[Future, threading.Event]] = {}
def run_service(self):
from autogpt_libs.supabase_integration_credentials_store import (
SupabaseIntegrationCredentialsStore,
)
self.credentials_store = SupabaseIntegrationCredentialsStore(self.supabase)
self.executor = ProcessPoolExecutor(
max_workers=self.pool_size,
initializer=Executor.on_graph_executor_start,
@@ -705,11 +729,21 @@ class ExecutionManager(AppService):
graph: Graph | None = self.run_and_wait(get_graph(graph_id, user_id=user_id))
if not graph:
raise Exception(f"Graph #{graph_id} not found.")
graph.validate_graph(for_run=True)
node_input_credentials = self._get_node_input_credentials(graph, user_id)
nodes_input = []
for node in graph.starting_nodes:
input_data = {}
if isinstance(get_block(node.block_id), AgentInputBlock):
block = get_block(node.block_id)
# Invalid block & Note block should never be executed.
if not block or block.block_type == BlockType.NOTE:
continue
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
name = node.input_default.get("name")
if name and name in data:
input_data = {"value": data[name]}
@@ -753,6 +787,7 @@ class ExecutionManager(AppService):
graph_id=graph_id,
graph_exec_id=graph_exec_id,
start_node_execs=starting_node_execs,
node_input_credentials=node_input_credentials,
)
self.queue.add(graph_exec)
@@ -799,6 +834,58 @@ class ExecutionManager(AppService):
)
self.agent_server_client.send_execution_update(exec_update.model_dump())
def _get_node_input_credentials(
self, graph: Graph, user_id: str
) -> dict[str, Credentials]:
"""Gets all credentials for all nodes of the graph"""
node_credentials: dict[str, Credentials] = {}
for node in graph.nodes:
block = get_block(node.block_id)
if not block:
raise ValueError(f"Unknown block {node.block_id} for node #{node.id}")
# Find any fields of type CredentialsMetaInput
model_fields = cast(type[BaseModel], block.input_schema).model_fields
if CREDENTIALS_FIELD_NAME not in model_fields:
continue
field = model_fields[CREDENTIALS_FIELD_NAME]
# The BlockSchema class enforces that a `credentials` field is always a
# `CredentialsMetaInput`, so we can safely assume this here.
credentials_meta_type = cast(CredentialsMetaInput, field.annotation)
credentials_meta = credentials_meta_type.model_validate(
node.input_default[CREDENTIALS_FIELD_NAME]
)
# Fetch the corresponding Credentials and perform sanity checks
credentials = self.credentials_store.get_creds_by_id(
user_id, credentials_meta.id
)
if not credentials:
raise ValueError(
f"Unknown credentials #{credentials_meta.id} "
f"for node #{node.id}"
)
if (
credentials.provider != credentials_meta.provider
or credentials.type != credentials_meta.type
):
logger.warning(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch: "
f"{credentials_meta.type}<>{credentials.type};"
f"{credentials_meta.provider}<>{credentials.provider}"
)
raise ValueError(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch"
)
node_credentials[node.id] = credentials
return node_credentials
def llprint(message: str):
"""


@@ -1,4 +1,5 @@
import inspect
import logging
from collections import defaultdict
from contextlib import asynccontextmanager
from functools import wraps
@@ -27,6 +28,7 @@ from backend.util.settings import Config, Settings
from .utils import get_user_id
settings = Settings()
logger = logging.getLogger(__name__)
class AgentServer(AppService):
@@ -65,9 +67,13 @@ class AgentServer(AppService):
if self._test_dependency_overrides:
app.dependency_overrides.update(self._test_dependency_overrides)
logger.debug(
f"FastAPI CORS allow origins: {Config().backend_cors_allow_origins}"
)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Allows all origins
allow_origins=Config().backend_cors_allow_origins,
allow_credentials=True,
allow_methods=["*"], # Allows all methods
allow_headers=["*"], # Allows all headers
@@ -251,7 +257,12 @@ class AgentServer(AppService):
app.include_router(api_router)
uvicorn.run(app, host="0.0.0.0", port=Config().agent_api_port, log_config=None)
uvicorn.run(
app,
host=Config().agent_api_host,
port=Config().agent_api_port,
log_config=None,
)
def set_test_dependency_overrides(self, overrides: dict):
self._test_dependency_overrides = overrides


@@ -1,15 +1,26 @@
import logging
from typing import Annotated, Literal
from typing import Annotated
from autogpt_libs.supabase_integration_credentials_store import (
SupabaseIntegrationCredentialsStore,
)
from autogpt_libs.supabase_integration_credentials_store.types import (
APIKeyCredentials,
Credentials,
CredentialsType,
OAuth2Credentials,
)
from fastapi import APIRouter, Body, Depends, HTTPException, Path, Query, Request
from pydantic import BaseModel
from fastapi import (
APIRouter,
Body,
Depends,
HTTPException,
Path,
Query,
Request,
Response,
)
from pydantic import BaseModel, SecretStr
from supabase import Client
from backend.integrations.oauth import HANDLERS_BY_NAME, BaseOAuthHandler
@@ -28,6 +39,7 @@ def get_store(supabase: Client = Depends(get_supabase)):
class LoginResponse(BaseModel):
login_url: str
state_token: str
@router.get("/{provider}/login")
@@ -43,17 +55,17 @@ async def login(
handler = _get_provider_oauth_handler(request, provider)
# Generate and store a secure random state token
state = await store.store_state_token(user_id, provider)
state_token = await store.store_state_token(user_id, provider)
requested_scopes = scopes.split(",") if scopes else []
login_url = handler.get_login_url(requested_scopes, state)
login_url = handler.get_login_url(requested_scopes, state_token)
return LoginResponse(login_url=login_url)
return LoginResponse(login_url=login_url, state_token=state_token)
class CredentialsMetaResponse(BaseModel):
id: str
type: Literal["oauth2", "api_key"]
type: CredentialsType
title: str | None
scopes: list[str] | None
username: str | None
@@ -127,6 +139,52 @@ async def get_credential(
return credential
@router.post("/{provider}/credentials", status_code=201)
async def create_api_key_credentials(
store: Annotated[SupabaseIntegrationCredentialsStore, Depends(get_store)],
user_id: Annotated[str, Depends(get_user_id)],
provider: Annotated[str, Path(title="The provider to create credentials for")],
api_key: Annotated[str, Body(title="The API key to store")],
title: Annotated[str, Body(title="Optional title for the credentials")],
expires_at: Annotated[
int | None, Body(title="Unix timestamp when the key expires")
] = None,
) -> APIKeyCredentials:
new_credentials = APIKeyCredentials(
provider=provider,
api_key=SecretStr(api_key),
title=title,
expires_at=expires_at,
)
try:
store.add_creds(user_id, new_credentials)
except Exception as e:
raise HTTPException(
status_code=500, detail=f"Failed to store credentials: {str(e)}"
)
return new_credentials
@router.delete("/{provider}/credentials/{cred_id}", status_code=204)
async def delete_credential(
provider: Annotated[str, Path(title="The provider to delete credentials for")],
cred_id: Annotated[str, Path(title="The ID of the credentials to delete")],
user_id: Annotated[str, Depends(get_user_id)],
store: Annotated[SupabaseIntegrationCredentialsStore, Depends(get_store)],
):
creds = store.get_creds_by_id(user_id, cred_id)
if not creds:
raise HTTPException(status_code=404, detail="Credentials not found")
if creds.provider != provider:
raise HTTPException(
status_code=404, detail="Credentials do not match the specified provider"
)
store.delete_creds_by_id(user_id, cred_id)
return Response(status_code=204)
# -------- UTILITIES --------- #
@@ -145,8 +203,9 @@ def _get_provider_oauth_handler(req: Request, provider_name: str) -> BaseOAuthHa
)
handler_class = HANDLERS_BY_NAME[provider_name]
frontend_base_url = settings.config.frontend_base_url or str(req.base_url)
return handler_class(
client_id=client_id,
client_secret=client_secret,
redirect_uri=str(req.url_for("callback", provider=provider_name)),
redirect_uri=f"{frontend_base_url}/auth/integrations/oauth_callback",
)


@@ -20,4 +20,6 @@ def get_user_id(payload: dict = Depends(auth_middleware)) -> str:
def get_supabase() -> Client:
return create_client(settings.secrets.supabase_url, settings.secrets.supabase_key)
return create_client(
settings.secrets.supabase_url, settings.secrets.supabase_service_role_key
)


@@ -20,13 +20,10 @@ app = FastAPI()
event_queue = AsyncRedisEventQueue()
_connection_manager = None
logger.info(f"CORS allow origins: {settings.config.backend_cors_allow_origins}")
app.add_middleware(
CORSMiddleware,
allow_origins=[
"http://localhost:3000",
"http://127.0.0.1:3000",
"https://dev-builder.agpt.co",
],
allow_origins=settings.config.backend_cors_allow_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
@@ -174,4 +171,8 @@ async def websocket_router(
class WebsocketServer(AppProcess):
def run(self):
uvicorn.run(app, host="0.0.0.0", port=Config().websocket_server_port)
uvicorn.run(
app,
host=Config().websocket_server_host,
port=Config().websocket_server_port,
)


@@ -13,7 +13,7 @@ from backend.data import db
from backend.data.queue import AsyncEventQueue, AsyncRedisEventQueue
from backend.util.process import AppProcess
from backend.util.retry import conn_retry
from backend.util.settings import Config
from backend.util.settings import Config, Secrets
logger = logging.getLogger(__name__)
T = TypeVar("T")
@@ -48,6 +48,7 @@ class AppService(AppProcess):
event_queue: AsyncEventQueue = AsyncRedisEventQueue()
use_db: bool = False
use_redis: bool = False
use_supabase: bool = False
def __init__(self, port):
self.port = port
@@ -76,6 +77,13 @@ class AppService(AppProcess):
self.shared_event_loop.run_until_complete(db.connect())
if self.use_redis:
self.shared_event_loop.run_until_complete(self.event_queue.connect())
if self.use_supabase:
from supabase import create_client
secrets = Secrets()
self.supabase = create_client(
secrets.supabase_url, secrets.supabase_service_role_key
)
# Initialize the async loop.
async_thread = threading.Thread(target=self.__start_async_loop)


@@ -1,8 +1,8 @@
import json
import os
from typing import Any, Dict, Generic, Set, Tuple, Type, TypeVar
from typing import Any, Dict, Generic, List, Set, Tuple, Type, TypeVar
from pydantic import BaseModel, Field, PrivateAttr
from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic_settings import (
BaseSettings,
JsonConfigSettingsSource,
@@ -80,6 +80,11 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
extra="allow",
)
websocket_server_host: str = Field(
default="0.0.0.0",
description="The host for the websocket server to run on",
)
websocket_server_port: int = Field(
default=8001,
description="The port for the websocket server to run on",
@@ -100,11 +105,51 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
description="The port for agent server daemon to run on",
)
agent_api_host: str = Field(
default="0.0.0.0",
description="The host for agent server API to run on",
)
agent_api_port: int = Field(
default=8006,
description="The port for agent server API to run on",
)
frontend_base_url: str = Field(
default="",
description="Can be used to explicitly set the base URL for the frontend. "
"This value is then used to generate redirect URLs for OAuth flows.",
)
backend_cors_allow_origins: List[str] = Field(default_factory=list)
@field_validator("backend_cors_allow_origins")
@classmethod
def validate_cors_allow_origins(cls, v: List[str]) -> List[str]:
out = []
port = None
has_localhost = False
has_127_0_0_1 = False
for url in v:
url = url.strip()
if url.startswith(("http://", "https://")):
if "localhost" in url:
port = url.split(":")[2]
has_localhost = True
if "127.0.0.1" in url:
port = url.split(":")[2]
has_127_0_0_1 = True
out.append(url)
else:
raise ValueError(f"Invalid URL: {url}")
if has_127_0_0_1 and not has_localhost:
out.append(f"http://localhost:{port}")
if has_localhost and not has_127_0_0_1:
out.append(f"http://127.0.0.1:{port}")
return out
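The validator above mirrors `localhost` and `127.0.0.1` origins so that allowing one local alias implicitly allows the other. A standalone sketch of that behaviour (plain Python, no pydantic; the helper name `mirror_local_origins` is illustrative, and like the committed code it assumes each local origin carries an explicit port):

```python
def mirror_local_origins(origins):
    # Mirror http://localhost:<port> and http://127.0.0.1:<port> so that
    # whichever local alias the developer configured, the other also passes CORS.
    out, port = [], None
    has_localhost = has_loopback = False
    for url in (u.strip() for u in origins):
        if not url.startswith(("http://", "https://")):
            raise ValueError(f"Invalid URL: {url}")
        if "localhost" in url:
            port = url.split(":")[2]  # assumes an explicit port, e.g. :3000
            has_localhost = True
        if "127.0.0.1" in url:
            port = url.split(":")[2]
            has_loopback = True
        out.append(url)
    if has_loopback and not has_localhost:
        out.append(f"http://localhost:{port}")
    if has_localhost and not has_loopback:
        out.append(f"http://127.0.0.1:{port}")
    return out
```

Note the port-splitting step raises `IndexError` for a local origin without an explicit port, so configured origins should always include one.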
@classmethod
def settings_customise_sources(
cls,
@ -127,7 +172,9 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
"""Secrets for the server."""
supabase_url: str = Field(default="", description="Supabase URL")
supabase_key: str = Field(default="", description="Supabase key")
supabase_service_role_key: str = Field(
default="", description="Supabase service role key"
)
# OAuth server credentials for integrations
github_client_id: str = Field(default="", description="GitHub OAuth client ID")

View File

@ -4,6 +4,7 @@ import time
from backend.data import db
from backend.data.block import Block, initialize_blocks
from backend.data.execution import ExecutionResult, ExecutionStatus
from backend.data.model import CREDENTIALS_FIELD_NAME
from backend.data.queue import AsyncEventQueue
from backend.data.user import create_default_user
from backend.executor import ExecutionManager, ExecutionScheduler
@ -130,10 +131,19 @@ def execute_block_test(block: Block):
else:
log(f"{prefix} mock {mock_name} not found in block")
extra_exec_kwargs = {}
if CREDENTIALS_FIELD_NAME in block.input_schema.model_fields:
if not block.test_credentials:
raise ValueError(
f"{prefix} requires credentials but has no test_credentials"
)
extra_exec_kwargs[CREDENTIALS_FIELD_NAME] = block.test_credentials
for input_data in block.test_input:
log(f"{prefix} in: {input_data}")
for output_name, output_data in block.execute(input_data):
for output_name, output_data in block.execute(input_data, **extra_exec_kwargs):
if output_index >= len(block.test_output):
raise ValueError(f"{prefix} produced output more than expected")
ex_output_name, ex_output_data = block.test_output[output_index]
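The credentials check added to the test harness reduces to a small rule: if a block's input schema declares the credentials field, the harness must have test credentials to inject as an extra `execute()` kwarg, and must fail loudly otherwise. A hedged standalone sketch (field name mirrors the diff; `build_exec_kwargs` is a hypothetical helper):

```python
CREDENTIALS_FIELD_NAME = "credentials"

def build_exec_kwargs(schema_fields, test_credentials):
    # If the block's input schema declares a credentials field, the test
    # harness injects the block's test credentials as an extra kwarg;
    # a credentials-requiring block without test_credentials is an error.
    extra = {}
    if CREDENTIALS_FIELD_NAME in schema_fields:
        if not test_credentials:
            raise ValueError("block requires credentials but has no test_credentials")
        extra[CREDENTIALS_FIELD_NAME] = test_credentials
    return extra
```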

View File

@ -1,7 +1,9 @@
from backend.data.block import get_blocks
import pytest
from backend.data.block import Block, get_blocks
from backend.util.test import execute_block_test
def test_available_blocks():
for block in get_blocks().values():
execute_block_test(type(block)())
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name)
def test_available_blocks(block: Block):
execute_block_test(type(block)())
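The change above converts a single looping test into one `pytest.mark.parametrize` case per block, so each block gets its own readable test id and one failing block no longer aborts the run for the rest. A minimal sketch of that isolation semantics (plain Python stand-in for pytest's per-case collection; names are illustrative):

```python
def run_parametrized(cases, test_fn, ids):
    # Mimic pytest.mark.parametrize: run every case independently and record
    # a per-id result, instead of a for-loop that stops at the first failure.
    results = {}
    for case in cases:
        case_id = ids(case)
        try:
            test_fn(case)
            results[case_id] = "passed"
        except AssertionError:
            results[case_id] = "failed"
    return results
```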

View File

@ -58,7 +58,7 @@ services:
environment:
- SUPABASE_URL=http://kong:8000
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
- SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
@ -66,6 +66,8 @@ services:
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- EXECUTIONMANAGER_HOST=executor
- FRONTEND_BASE_URL=http://localhost:3000
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
ports:
- "8006:8006"
- "8003:8003" # execution scheduler
@ -91,9 +93,9 @@ services:
migrate:
condition: service_completed_successfully
environment:
- NEXT_PUBLIC_SUPABASE_URL=http://kong:8000
- SUPABASE_URL=http://kong:8000
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
- SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
@ -125,15 +127,15 @@ services:
migrate:
condition: service_completed_successfully
environment:
- SUPABASE_URL=http://kong:8000
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=password
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
ports:
- "8001:8001"
networks:
@ -158,6 +160,7 @@ services:
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=market
- BACKEND_CORS_ALLOW_ORIGINS="http://localhost:3000,http://127.0.0.1:3000"
ports:
- "8015:8015"
networks:

View File

@ -96,36 +96,6 @@ services:
file: ./supabase/docker/docker-compose.yml
service: rest
realtime:
<<: *supabase-services
extends:
file: ./supabase/docker/docker-compose.yml
service: realtime
storage:
<<: *supabase-services
extends:
file: ./supabase/docker/docker-compose.yml
service: storage
imgproxy:
<<: *supabase-services
extends:
file: ./supabase/docker/docker-compose.yml
service: imgproxy
meta:
<<: *supabase-services
extends:
file: ./supabase/docker/docker-compose.yml
service: meta
functions:
<<: *supabase-services
extends:
file: ./supabase/docker/docker-compose.yml
service: functions
analytics:
<<: *supabase-services
extends:

View File

@ -37,3 +37,8 @@ next-env.d.ts
# Sentry Config File
.env.sentry-build-plugin
node_modules/
/test-results/
/playwright-report/
/blob-report/
/playwright/.cache/

View File

@ -3,13 +3,20 @@
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "export NODE_ENV=development && next dev",
"dev": "next dev",
"dev:nosentry": "export NODE_ENV=development && export DISABLE_SENTRY=true && next dev",
"dev:test": "export NODE_ENV=test && next dev",
"build": "next build",
"start": "next start",
"lint": "next lint",
"format": "prettier --write ."
"format": "prettier --write .",
"test": "playwright test",
"test-ui": "playwright test --ui",
"gentests": "playwright codegen http://localhost:3000"
},
"browserslist": [
"defaults"
],
"dependencies": {
"@hookform/resolvers": "^3.9.0",
"@next/third-parties": "^14.2.5",
@ -47,7 +54,7 @@
"react-day-picker": "^8.10.1",
"react-dom": "^18",
"react-hook-form": "^7.52.1",
"react-icons": "^5.2.1",
"react-icons": "^5.3.0",
"react-markdown": "^9.0.1",
"react-modal": "^3.16.1",
"react-shepherd": "^6.1.1",
@ -58,6 +65,7 @@
"zod": "^3.23.8"
},
"devDependencies": {
"@playwright/test": "^1.47.1",
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",

View File

@ -0,0 +1,81 @@
import { defineConfig, devices } from "@playwright/test";
/**
* Read environment variables from file.
* https://github.com/motdotla/dotenv
*/
// import dotenv from 'dotenv';
// import path from 'path';
// dotenv.config({ path: path.resolve(__dirname, '.env') });
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: "./src/tests",
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Opt out of parallel tests on CI. */
workers: process.env.CI ? 1 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: "html",
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Base URL to use in actions like `await page.goto('/')`. */
baseURL: "http://localhost:3000/",
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: "on-first-retry",
bypassCSP: true,
},
/* Configure projects for major browsers */
projects: [
{
name: "chromium",
use: { ...devices["Desktop Chrome"] },
},
{
name: "firefox",
use: { ...devices["Desktop Firefox"] },
},
{
name: "webkit",
use: { ...devices["Desktop Safari"] },
},
/* Test against mobile viewports. */
// {
// name: 'Mobile Chrome',
// use: { ...devices['Pixel 5'] },
// },
// {
// name: 'Mobile Safari',
// use: { ...devices['iPhone 12'] },
// },
/* Test against branded browsers. */
{
name: "Microsoft Edge",
use: { ...devices["Desktop Edge"], channel: "msedge" },
},
// {
// name: 'Google Chrome',
// use: { ...devices['Desktop Chrome'], channel: 'chrome' },
// },
],
/* Run your local dev server before starting the tests */
webServer: {
command: "npm run build && npm run start",
url: "http://localhost:3000/",
reuseExistingServer: !process.env.CI,
timeout: 120 * 1000,
},
});

View File

@ -7,7 +7,7 @@ import * as Sentry from "@sentry/nextjs";
Sentry.init({
dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288",
enabled: process.env.NODE_ENV !== "development",
enabled: process.env.DISABLE_SENTRY !== "true",
// Add optional integrations for additional features
integrations: [
@ -31,14 +31,6 @@ Sentry.init({
/^https:\/\/dev\-builder\.agpt\.co\/api/,
],
beforeSend(event, hint) {
// Check if it is an exception, and if so, show the report dialog
if (event.exception && event.event_id) {
Sentry.showReportDialog({ eventId: event.event_id });
}
return event;
},
// Define how likely Replay events are sampled.
// This sets the sample rate to be 10%. You may want this to be 100% while
// in development and sample at a lower rate in production

View File

@ -0,0 +1,38 @@
import { OAuthPopupResultMessage } from "@/components/integrations/credentials-input";
import { NextResponse } from "next/server";
// This route is intended to be used as the callback for integration OAuth flows,
// controlled by the CredentialsInput component. The CredentialsInput opens the login
// page in a pop-up window, which then redirects to this route to close the loop.
export async function GET(request: Request) {
const { searchParams, origin } = new URL(request.url);
const code = searchParams.get("code");
const state = searchParams.get("state");
// Send message from popup window to host window
const message: OAuthPopupResultMessage =
code && state
? { message_type: "oauth_popup_result", success: true, code, state }
: {
message_type: "oauth_popup_result",
success: false,
message: `Incomplete query: ${searchParams.toString()}`,
};
// Return a response with the message as JSON and a script to close the window
return new NextResponse(
`
<html>
<body>
<script>
window.postMessage(${JSON.stringify(message)});
window.close();
</script>
</body>
</html>
`,
{
headers: { "Content-Type": "text/html" },
},
);
}
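The callback's success/failure contract is language-neutral: success requires both the authorization `code` and the CSRF `state` token from the query string. A hedged Python sketch of the same message shape (field names mirror `OAuthPopupResultMessage`; the helper name is illustrative):

```python
def popup_result_message(code, state, query_string=""):
    # Build the message the OAuth popup posts back to the host window.
    # Success requires both the authorization code and the state token.
    if code and state:
        return {"message_type": "oauth_popup_result", "success": True,
                "code": code, "state": state}
    return {"message_type": "oauth_popup_result", "success": False,
            "message": f"Incomplete query: {query_string}"}
```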

View File

@ -50,6 +50,9 @@ export async function signup(values: z.infer<typeof loginFormSchema>) {
const { data, error } = await supabase.auth.signUp(values);
if (error) {
if (error.message.includes("P0001")) {
return "Please join our waitlist for your turn: https://agpt.co/waitlist";
}
return error.message;
}

View File

@ -37,6 +37,49 @@ type FormData = {
selectedAgentId: string;
};
const keywords = [
"Automation",
"AI Workflows",
"Integration",
"Task Automation",
"Data Processing",
"Workflow Management",
"Real-time Analytics",
"Custom Triggers",
"Event-driven",
"API Integration",
"Data Transformation",
"Multi-step Workflows",
"Collaboration Tools",
"Business Process Automation",
"No-code Solutions",
"AI-Powered",
"Smart Notifications",
"Data Syncing",
"User Engagement",
"Reporting Automation",
"Lead Generation",
"Customer Support Automation",
"E-commerce Automation",
"Social Media Management",
"Email Marketing Automation",
"Document Management",
"Data Enrichment",
"Performance Tracking",
"Predictive Analytics",
"Resource Allocation",
"Chatbot",
"Virtual Assistant",
"Workflow Automation",
"Social Media Manager",
"Email Optimizer",
"Content Generator",
"Data Analyzer",
"Task Scheduler",
"Customer Service Bot",
"Personalization Engine",
];
const SubmitPage: React.FC = () => {
const router = useRouter();
const {
@ -292,12 +335,11 @@ const SubmitPage: React.FC = () => {
</MultiSelectorTrigger>
<MultiSelectorContent>
<MultiSelectorList>
<MultiSelectorItem value="keyword1">
Keyword 1
</MultiSelectorItem>
<MultiSelectorItem value="keyword2">
Keyword 2
</MultiSelectorItem>
{keywords.map((keyword) => (
<MultiSelectorItem key={keyword} value={keyword}>
{keyword}
</MultiSelectorItem>
))}
{/* Add more predefined keywords as needed */}
</MultiSelectorList>
</MultiSelectorContent>

View File

@ -5,12 +5,15 @@ import { ThemeProvider as NextThemesProvider } from "next-themes";
import { ThemeProviderProps } from "next-themes/dist/types";
import { TooltipProvider } from "@/components/ui/tooltip";
import SupabaseProvider from "@/components/SupabaseProvider";
import CredentialsProvider from "@/components/integrations/credentials-provider";
export function Providers({ children, ...props }: ThemeProviderProps) {
return (
<NextThemesProvider {...props}>
<SupabaseProvider>
<TooltipProvider>{children}</TooltipProvider>
<CredentialsProvider>
<TooltipProvider>{children}</TooltipProvider>
</CredentialsProvider>
</SupabaseProvider>
</NextThemesProvider>
);

View File

@ -22,10 +22,10 @@ export default function CreditButton() {
<Button
onClick={fetchCredit}
variant="outline"
className="flex items-center space-x-2 text-muted-foreground"
className="flex items-center space-x-2 rounded-xl bg-gray-200"
>
<span className="flex items-center">
<IconCoin /> {credit}
<span className="mr-2 flex items-center text-foreground">
{credit} <span className="ml-2 text-muted-foreground"> credits</span>
</span>
<IconRefresh />
</Button>

View File

@ -255,13 +255,19 @@ export function CustomNode({ data, id, width, height }: NodeProps<CustomNode>) {
return (
(isRequired || isAdvancedOpen || isConnected || !isAdvanced) && (
<div key={propKey} onMouseOver={() => {}}>
<NodeHandle
keyName={propKey}
isConnected={isConnected}
isRequired={isRequired}
schema={propSchema}
side="left"
/>
{"credentials_provider" in propSchema ? (
<span className="text-m green -mb-1 text-gray-900">
Credentials
</span>
) : (
<NodeHandle
keyName={propKey}
isConnected={isConnected}
isRequired={isRequired}
schema={propSchema}
side="left"
/>
)}
{!isConnected && (
<NodeGenericInputField
className="mb-2 mt-1"
@ -534,7 +540,6 @@ export function CustomNode({ data, id, width, height }: NodeProps<CustomNode>) {
value === inputValues[key] || (!value && !inputValues[key]),
),
);
console.debug(`Block cost ${inputValues}|${data.blockCosts}=${blockCost}`);
return (
<div

View File

@ -51,7 +51,7 @@ export default function DataTable({
{beautifyString(key)}
</TableCell>
<TableCell className="cursor-text">
<div className="flex min-h-9 items-center">
<div className="flex min-h-9 items-center whitespace-pre-wrap">
<Button
className="absolute right-1 top-auto m-1 hidden p-2 group-hover:block"
variant="outline"
@ -62,7 +62,7 @@ export default function DataTable({
value
.map((i) =>
typeof i === "object"
? JSON.stringify(i)
? JSON.stringify(i, null, 2)
: String(i),
)
.join(", "),
@ -75,7 +75,9 @@ export default function DataTable({
{value
.map((i) => {
const text =
typeof i === "object" ? JSON.stringify(i) : String(i);
typeof i === "object"
? JSON.stringify(i, null, 2)
: String(i);
return truncateLongData && text.length > maxChars
? text.slice(0, maxChars) + "..."
: text;

View File

@ -34,21 +34,16 @@ import ConnectionLine from "./ConnectionLine";
import { Control, ControlPanel } from "@/components/edit/control/ControlPanel";
import { SaveControl } from "@/components/edit/control/SaveControl";
import { BlocksControl } from "@/components/edit/control/BlocksControl";
import {
IconPlay,
IconUndo2,
IconRedo2,
IconSquare,
IconOutput,
} from "@/components/ui/icons";
import { IconUndo2, IconRedo2 } from "@/components/ui/icons";
import { startTutorial } from "./tutorial";
import useAgentGraph from "@/hooks/useAgentGraph";
import { v4 as uuidv4 } from "uuid";
import { useRouter, usePathname, useSearchParams } from "next/navigation";
import { LogOut } from "lucide-react";
import { useRouter, usePathname } from "next/navigation";
import RunnerUIWrapper, {
RunnerUIWrapperRef,
} from "@/components/RunnerUIWrapper";
import PrimaryActionBar from "@/components/PrimaryActionButton";
import { useToast } from "@/components/ui/use-toast";
// This is the minimum distance a block must move before the change is logged in history.
// It helps prevent spamming the history with small movements, especially when pressing on an input in a block.
@ -108,6 +103,8 @@ const FlowEditor: React.FC<{
const runnerUIRef = useRef<RunnerUIWrapperRef>(null);
const { toast } = useToast();
useEffect(() => {
const params = new URLSearchParams(window.location.search);
@ -557,23 +554,6 @@ const FlowEditor: React.FC<{
icon: <IconRedo2 />,
onClick: handleRedo,
},
{
label: !savedAgent
? "Please save the agent to run"
: !isRunning
? "Run"
: "Stop",
icon: !isRunning ? <IconPlay /> : <IconSquare />,
onClick: !isRunning
? () => runnerUIRef.current?.runOrOpenInput()
: requestStopRun,
disabled: !savedAgent,
},
{
label: "Runner Output",
icon: <LogOut size={18} strokeWidth={1.8} />,
onClick: () => runnerUIRef.current?.openRunnerOutput(),
},
];
return (
@ -614,6 +594,27 @@ const FlowEditor: React.FC<{
onNameChange={setAgentName}
/>
</ControlPanel>
<PrimaryActionBar
onClickAgentOutputs={() => runnerUIRef.current?.openRunnerOutput()}
onClickRunAgent={() => {
if (!savedAgent) {
toast({
title: `Please save the agent using the button in the left sidebar before running it.`,
duration: 2000,
});
return;
}
if (!isRunning) {
runnerUIRef.current?.runOrOpenInput();
} else {
requestStopRun();
}
}}
isDisabled={!savedAgent}
isRunning={isRunning}
requestStopRun={requestStopRun}
runAgentTooltip={!isRunning ? "Run Agent" : "Stop Agent"}
/>
</ReactFlow>
</div>
<RunnerUIWrapper

View File

@ -15,6 +15,9 @@ import {
} from "@/components/ui/icons";
import AutoGPTServerAPI from "@/lib/autogpt-server-api";
import CreditButton from "@/components/CreditButton";
import { BsBoxes } from "react-icons/bs";
import { LuLaptop } from "react-icons/lu";
import { LuShoppingCart } from "react-icons/lu";
export async function NavBar() {
const isAvailable = Boolean(
@ -24,7 +27,7 @@ export async function NavBar() {
const { user } = await getServerUser();
return (
<header className="sticky top-0 z-50 flex h-16 items-center gap-4 border-b bg-background px-4 md:px-6">
<header className="sticky top-0 z-50 flex h-16 items-center gap-4 border-b bg-background px-4 md:rounded-b-3xl md:px-6 md:shadow-md">
<div className="flex flex-1 items-center gap-4">
<Sheet>
<SheetTrigger asChild>
@ -40,64 +43,58 @@ export async function NavBar() {
<SheetContent side="left">
<nav className="grid gap-6 text-lg font-medium">
<Link
href="/"
className="flex flex-row gap-2 text-muted-foreground hover:text-foreground"
href="/marketplace"
className="mt-4 flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
>
<IconSquareActivity /> Monitor
<LuShoppingCart /> Marketplace
</Link>
<Link
href="/"
className="flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
>
<LuLaptop /> Monitor
</Link>
<Link
href="/build"
className="flex flex-row gap-2 text-muted-foreground hover:text-foreground"
className="flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
>
<IconWorkFlow /> Build
</Link>
<Link
href="/marketplace"
className="flex flex-row gap-2 text-muted-foreground hover:text-foreground"
>
<IconPackage2 /> Marketplace
<BsBoxes /> Build
</Link>
</nav>
</SheetContent>
</Sheet>
<nav className="hidden md:flex md:flex-row md:items-center md:gap-5 lg:gap-6">
<nav className="hidden md:flex md:flex-row md:items-center md:gap-7 lg:gap-8">
<div className="flex h-10 w-20 flex-1 flex-row items-center justify-center gap-2">
<a href="https://agpt.co/">
<Image
src="/AUTOgpt_Logo_dark.png"
alt="AutoGPT Logo"
width={100}
height={40}
priority
/>
</a>
</div>
<Link
href="/marketplace"
className="text-basehover:text-foreground flex flex-row items-center gap-2 font-semibold text-foreground"
>
<LuShoppingCart /> Marketplace
</Link>
<Link
href="/"
className="flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
className="text-basehover:text-foreground flex flex-row items-center gap-2 font-semibold text-foreground"
>
<IconSquareActivity /> Monitor
<LuLaptop className="mr-1" /> Monitor
</Link>
<Link
href="/build"
className="flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
className="flex flex-row items-center gap-2 text-base font-semibold text-foreground hover:text-foreground"
>
<IconWorkFlow /> Build
</Link>
<Link
href="/marketplace"
className="flex flex-row items-center gap-2 text-muted-foreground hover:text-foreground"
>
<IconPackage2 /> Marketplace
<BsBoxes className="mr-1" /> Build
</Link>
</nav>
</div>
<div className="relative flex flex-1 justify-center">
<a
className="pointer-events-auto flex place-items-center gap-2"
href="https://news.agpt.co/"
target="_blank"
rel="noopener noreferrer"
>
By{" "}
<Image
src="/AUTOgpt_Logo_dark.png"
alt="AutoGPT Logo"
width={100}
height={20}
priority
/>
</a>
</div>
<div className="flex flex-1 items-center justify-end gap-4">
{isAvailable && user && <CreditButton />}

View File

@ -0,0 +1,77 @@
import React from "react";
import { Button } from "./ui/button";
import { LogOut } from "lucide-react";
import { IconPlay, IconSquare } from "@/components/ui/icons";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
interface PrimaryActionBarProps {
onClickAgentOutputs: () => void;
onClickRunAgent: () => void;
isRunning: boolean;
isDisabled: boolean;
requestStopRun: () => void;
runAgentTooltip: string;
}
const PrimaryActionBar: React.FC<PrimaryActionBarProps> = ({
onClickAgentOutputs,
onClickRunAgent,
isRunning,
isDisabled,
requestStopRun,
runAgentTooltip,
}) => {
const runButtonLabel = !isRunning ? "Run" : "Stop";
const runButtonIcon = !isRunning ? <IconPlay /> : <IconSquare />;
const runButtonOnClick = !isRunning ? onClickRunAgent : requestStopRun;
return (
<div className="absolute bottom-0 left-0 right-0 z-50 flex items-center justify-center p-4">
<div className={`flex gap-4`}>
<Tooltip key="ViewOutputs" delayDuration={500}>
<TooltipTrigger asChild>
<Button
className="flex items-center gap-2"
onClick={onClickAgentOutputs}
size="primary"
variant="outline"
>
<LogOut className="h-5 w-5" />
<span className="text-lg font-medium">Agent Outputs </span>
</Button>
</TooltipTrigger>
<TooltipContent>
<p>View agent outputs</p>
</TooltipContent>
</Tooltip>
<Tooltip key="RunAgent" delayDuration={500}>
<TooltipTrigger asChild>
<Button
className="flex items-center gap-2"
onClick={runButtonOnClick}
size="primary"
style={{
background: isRunning ? "#FFB3BA" : "#7544DF",
opacity: isDisabled ? 0.5 : 1,
}}
>
{runButtonIcon}
<span className="text-lg font-medium">{runButtonLabel}</span>
</Button>
</TooltipTrigger>
<TooltipContent>
<p>{runAgentTooltip}</p>
</TooltipContent>
</Tooltip>
</div>
</div>
);
};
export default PrimaryActionBar;

View File

@ -8,7 +8,7 @@ import RunnerInputUI from "./runner-ui/RunnerInputUI";
import RunnerOutputUI from "./runner-ui/RunnerOutputUI";
import { Node } from "@xyflow/react";
import { filterBlocksByType } from "@/lib/utils";
import { BlockIORootSchema } from "@/lib/autogpt-server-api/types";
import { BlockIORootSchema, BlockUIType } from "@/lib/autogpt-server-api/types";
interface RunnerUIWrapperProps {
nodes: Node[];
@ -31,12 +31,12 @@ const RunnerUIWrapper = forwardRef<RunnerUIWrapperRef, RunnerUIWrapperProps>(
const getBlockInputsAndOutputs = useCallback(() => {
const inputBlocks = filterBlocksByType(
nodes,
(node) => node.data.block_id === "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
(node) => node.data.uiType === BlockUIType.INPUT,
);
const outputBlocks = filterBlocksByType(
nodes,
(node) => node.data.block_id === "363ae599-353e-4804-937e-b2ee3cef3da4",
(node) => node.data.uiType === BlockUIType.OUTPUT,
);
const inputs = inputBlocks.map((node) => ({

View File

@ -45,6 +45,13 @@ export const BlocksControl: React.FC<BlocksControlProps> = ({
}) => {
const [searchQuery, setSearchQuery] = useState("");
const [selectedCategory, setSelectedCategory] = useState<string | null>(null);
const [filteredBlocks, setFilteredBlocks] = useState<Block[]>(blocks);
const resetFilters = React.useCallback(() => {
setSearchQuery("");
setSelectedCategory(null);
setFilteredBlocks(blocks);
}, [blocks]);
// Extract unique categories from blocks
const categories = Array.from(
@ -53,18 +60,25 @@ export const BlocksControl: React.FC<BlocksControlProps> = ({
),
);
const filteredBlocks = blocks.filter(
(block: Block) =>
(block.name.toLowerCase().includes(searchQuery.toLowerCase()) ||
beautifyString(block.name)
.toLowerCase()
.includes(searchQuery.toLowerCase())) &&
(!selectedCategory ||
block.categories.some((cat) => cat.category === selectedCategory)),
);
React.useEffect(() => {
setFilteredBlocks(
blocks.filter(
(block: Block) =>
(block.name.toLowerCase().includes(searchQuery.toLowerCase()) ||
beautifyString(block.name)
.toLowerCase()
.includes(searchQuery.toLowerCase())) &&
(!selectedCategory ||
block.categories.some((cat) => cat.category === selectedCategory)),
),
);
}, [blocks, searchQuery, selectedCategory]);
return (
<Popover open={pinBlocksPopover ? true : undefined}>
<Popover
open={pinBlocksPopover ? true : undefined}
onOpenChange={(open) => open || resetFilters()}
>
<Tooltip delayDuration={500}>
<TooltipTrigger asChild>
<PopoverTrigger asChild>
@ -132,33 +146,31 @@ export const BlocksControl: React.FC<BlocksControlProps> = ({
{filteredBlocks.map((block) => (
<Card
key={block.id}
className={`m-2 ${getPrimaryCategoryColor(block.categories)}`}
className="m-2 my-4 flex h-20 border"
data-id={`block-card-${block.id}`}
onClick={() => addBlock(block.id, block.name)}
>
<div className="m-3 flex items-center justify-between">
{/* This div needs to be 10px wide and the same height as the card and be the primary color showing up on top of the card with matching rounded corners */}
<div
className={`z-20 flex min-w-4 flex-shrink-0 rounded-l-xl border ${getPrimaryCategoryColor(block.categories)}`}
></div>
<div className="mx-3 flex flex-1 items-center justify-between">
<div className="mr-2 min-w-0 flex-1">
<span
className="block truncate font-medium"
className="block truncate text-base font-semibold"
data-id={`block-name-${block.id}`}
>
{beautifyString(block.name)}
</span>
<span className="block break-words text-sm font-normal text-gray-500">
{block.description}
</span>
</div>
<SchemaTooltip description={block.description} />
<div
className="flex flex-shrink-0 items-center gap-1"
data-id={`block-tooltip-${block.id}`}
>
<Button
variant="ghost"
size="icon"
onClick={() => addBlock(block.id, block.name)}
aria-label="Add block"
data-id={`add-block-button-${block.id}`}
>
<PlusIcon />
</Button>
</div>
></div>
</div>
</Card>
))}

View File

@ -0,0 +1,419 @@
import { z } from "zod";
import { cn } from "@/lib/utils";
import { useForm } from "react-hook-form";
import { Input } from "@/components/ui/input";
import { Button } from "@/components/ui/button";
import useCredentials from "@/hooks/useCredentials";
import { zodResolver } from "@hookform/resolvers/zod";
import AutoGPTServerAPI from "@/lib/autogpt-server-api";
import { NotionLogoIcon } from "@radix-ui/react-icons";
import { FaGithub, FaGoogle } from "react-icons/fa";
import { FC, useMemo, useState } from "react";
import {
APIKeyCredentials,
CredentialsMetaInput,
} from "@/lib/autogpt-server-api/types";
import {
IconKey,
IconKeyPlus,
IconUser,
IconUserPlus,
} from "@/components/ui/icons";
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import {
Select,
SelectContent,
SelectItem,
SelectSeparator,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
const providerIcons: Record<string, React.FC<{ className?: string }>> = {
github: FaGithub,
google: FaGoogle,
notion: NotionLogoIcon,
};
export type OAuthPopupResultMessage = { message_type: "oauth_popup_result" } & (
| {
success: true;
code: string;
state: string;
}
| {
success: false;
message: string;
}
);
export const CredentialsInput: FC<{
className?: string;
selectedCredentials?: CredentialsMetaInput;
onSelectCredentials: (newValue: CredentialsMetaInput) => void;
}> = ({ className, selectedCredentials, onSelectCredentials }) => {
const api = useMemo(() => new AutoGPTServerAPI(), []);
const credentials = useCredentials();
const [isAPICredentialsModalOpen, setAPICredentialsModalOpen] =
useState(false);
const [isOAuth2FlowInProgress, setOAuth2FlowInProgress] = useState(false);
const [oAuthPopupController, setOAuthPopupController] =
useState<AbortController | null>(null);
if (!credentials) {
return null;
}
if (credentials.isLoading) {
return <div>Loading...</div>;
}
const {
schema,
provider,
providerName,
supportsApiKey,
supportsOAuth2,
savedApiKeys,
savedOAuthCredentials,
oAuthCallback,
} = credentials;
async function handleOAuthLogin() {
const { login_url, state_token } = await api.oAuthLogin(
provider,
schema.credentials_scopes,
);
setOAuth2FlowInProgress(true);
const popup = window.open(login_url, "_blank", "popup=true");
const controller = new AbortController();
setOAuthPopupController(controller);
controller.signal.onabort = () => {
setOAuth2FlowInProgress(false);
popup?.close();
};
popup?.addEventListener(
"message",
async (e: MessageEvent<OAuthPopupResultMessage>) => {
if (
typeof e.data != "object" ||
!(
"message_type" in e.data &&
e.data.message_type == "oauth_popup_result"
)
)
return;
if (!e.data.success) {
console.error("OAuth flow failed:", e.data.message);
return;
}
if (e.data.state !== state_token) return;
const credentials = await oAuthCallback(e.data.code, e.data.state);
onSelectCredentials({
id: credentials.id,
type: "oauth2",
title: credentials.title,
provider,
});
controller.abort("success");
},
{ signal: controller.signal },
);
setTimeout(
() => {
controller.abort("timeout");
},
5 * 60 * 1000,
);
}
const ProviderIcon = providerIcons[provider];
const modals = (
<>
{supportsApiKey && (
<APIKeyCredentialsModal
open={isAPICredentialsModalOpen}
onClose={() => setAPICredentialsModalOpen(false)}
onCredentialsCreate={(credsMeta) => {
onSelectCredentials(credsMeta);
setAPICredentialsModalOpen(false);
}}
/>
)}
{supportsOAuth2 && (
<OAuth2FlowWaitingModal
open={isOAuth2FlowInProgress}
onClose={() => oAuthPopupController?.abort("canceled")}
providerName={providerName}
/>
)}
</>
);
// No saved credentials yet
if (savedApiKeys.length === 0 && savedOAuthCredentials.length === 0) {
return (
<>
<div className={cn("flex flex-row space-x-2", className)}>
{supportsOAuth2 && (
<Button onClick={handleOAuthLogin}>
<ProviderIcon className="mr-2 h-4 w-4" />
{"Sign in with " + providerName}
</Button>
)}
{supportsApiKey && (
<Button onClick={() => setAPICredentialsModalOpen(true)}>
<ProviderIcon className="mr-2 h-4 w-4" />
Enter API key
</Button>
)}
</div>
{modals}
</>
);
}
function handleValueChange(newValue: string) {
if (newValue === "sign-in") {
// Trigger OAuth2 sign in flow
handleOAuthLogin();
} else if (newValue === "add-api-key") {
// Open API key dialog
setAPICredentialsModalOpen(true);
} else {
const selectedCreds = savedApiKeys
.concat(savedOAuthCredentials)
.find((c) => c.id == newValue)!;
onSelectCredentials({
id: selectedCreds.id,
type: selectedCreds.type,
provider: schema.credentials_provider,
// title: customTitle, // TODO: add input for title
});
}
}
// Saved credentials exist
return (
<>
<Select value={selectedCredentials?.id} onValueChange={handleValueChange}>
<SelectTrigger>
<SelectValue placeholder={schema.placeholder} />
</SelectTrigger>
<SelectContent className="nodrag">
{savedOAuthCredentials.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
{credentials.username}
</SelectItem>
))}
{savedApiKeys.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
<IconKey className="mr-1.5 inline" />
{credentials.title}
</SelectItem>
))}
<SelectSeparator />
{supportsOAuth2 && (
<SelectItem value="sign-in">
<IconUserPlus className="mr-1.5 inline" />
Sign in with {providerName}
</SelectItem>
)}
{supportsApiKey && (
<SelectItem value="add-api-key">
<IconKeyPlus className="mr-1.5 inline" />
Add new API key
</SelectItem>
)}
</SelectContent>
</Select>
{modals}
</>
);
};
export const APIKeyCredentialsModal: FC<{
open: boolean;
onClose: () => void;
onCredentialsCreate: (creds: CredentialsMetaInput) => void;
}> = ({ open, onClose, onCredentialsCreate }) => {
const credentials = useCredentials();
const formSchema = z.object({
apiKey: z.string().min(1, "API Key is required"),
title: z.string().min(1, "Name is required"),
expiresAt: z.string().optional(),
});
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
defaultValues: {
apiKey: "",
title: "",
expiresAt: "",
},
});
if (!credentials || credentials.isLoading || !credentials.supportsApiKey) {
return null;
}
const { schema, provider, providerName, createAPIKeyCredentials } =
credentials;
async function onSubmit(values: z.infer<typeof formSchema>) {
const expiresAt = values.expiresAt
? new Date(values.expiresAt).getTime() / 1000
: undefined;
const newCredentials = await createAPIKeyCredentials({
api_key: values.apiKey,
title: values.title,
expires_at: expiresAt,
});
onCredentialsCreate({
provider,
id: newCredentials.id,
type: "api_key",
title: newCredentials.title,
});
}
return (
<Dialog
open={open}
onOpenChange={(open) => {
if (!open) onClose();
}}
>
<DialogContent>
<DialogHeader>
<DialogTitle>Add new API key for {providerName}</DialogTitle>
{schema.description && (
<DialogDescription>{schema.description}</DialogDescription>
)}
</DialogHeader>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-4">
<FormField
control={form.control}
name="apiKey"
render={({ field }) => (
<FormItem>
<FormLabel>API Key</FormLabel>
{schema.credentials_scopes && (
<FormDescription>
Required scope(s) for this block:{" "}
{schema.credentials_scopes?.map((s, i, a) => (
<span key={i}>
<code>{s}</code>
{i < a.length - 1 && ", "}
</span>
))}
</FormDescription>
)}
<FormControl>
<Input
type="password"
placeholder="Enter API key..."
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="title"
render={({ field }) => (
<FormItem>
<FormLabel>Name</FormLabel>
<FormControl>
<Input
type="text"
placeholder="Enter a name for this API key..."
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<FormField
control={form.control}
name="expiresAt"
render={({ field }) => (
<FormItem>
<FormLabel>Expiration Date (Optional)</FormLabel>
<FormControl>
<Input
type="datetime-local"
placeholder="Select expiration date..."
{...field}
/>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
<Button type="submit" className="w-full">
Save & use this API key
</Button>
</form>
</Form>
</DialogContent>
</Dialog>
);
};
export const OAuth2FlowWaitingModal: FC<{
open: boolean;
onClose: () => void;
providerName: string;
}> = ({ open, onClose, providerName }) => {
return (
<Dialog
open={open}
onOpenChange={(open) => {
if (!open) onClose();
}}
>
<DialogContent>
<DialogHeader>
<DialogTitle>
Waiting on {providerName} sign-in process...
</DialogTitle>
<DialogDescription>
Complete the sign-in process in the pop-up window.
<br />
Closing this dialog will cancel the sign-in process.
</DialogDescription>
</DialogHeader>
</DialogContent>
</Dialog>
);
};


@@ -0,0 +1,164 @@
import AutoGPTServerAPI, {
APIKeyCredentials,
CredentialsMetaResponse,
} from "@/lib/autogpt-server-api";
import {
createContext,
useCallback,
useEffect,
useMemo,
useState,
} from "react";
const CREDENTIALS_PROVIDER_NAMES = ["github", "google", "notion"] as const;
type CredentialsProviderName = (typeof CREDENTIALS_PROVIDER_NAMES)[number];
const providerDisplayNames: Record<CredentialsProviderName, string> = {
github: "GitHub",
google: "Google",
notion: "Notion",
};
type APIKeyCredentialsCreatable = Omit<
APIKeyCredentials,
"id" | "provider" | "type"
>;
export type CredentialsProviderData = {
provider: string;
providerName: string;
savedApiKeys: CredentialsMetaResponse[];
savedOAuthCredentials: CredentialsMetaResponse[];
oAuthCallback: (
code: string,
state_token: string,
) => Promise<CredentialsMetaResponse>;
createAPIKeyCredentials: (
credentials: APIKeyCredentialsCreatable,
) => Promise<CredentialsMetaResponse>;
};
export type CredentialsProvidersContextType = {
[key in CredentialsProviderName]?: CredentialsProviderData;
};
export const CredentialsProvidersContext =
createContext<CredentialsProvidersContextType | null>(null);
export default function CredentialsProvider({
children,
}: {
children: React.ReactNode;
}) {
const [providers, setProviders] =
useState<CredentialsProvidersContextType | null>(null);
const api = useMemo(() => new AutoGPTServerAPI(), []);
const addCredentials = useCallback(
(
provider: CredentialsProviderName,
credentials: CredentialsMetaResponse,
) => {
setProviders((prev) => {
if (!prev || !prev[provider]) return prev;
const updatedProvider = { ...prev[provider] };
if (credentials.type === "api_key") {
updatedProvider.savedApiKeys = [
...updatedProvider.savedApiKeys,
credentials,
];
} else if (credentials.type === "oauth2") {
updatedProvider.savedOAuthCredentials = [
...updatedProvider.savedOAuthCredentials,
credentials,
];
}
return {
...prev,
[provider]: updatedProvider,
};
});
},
[setProviders],
);
/** Wraps `AutoGPTServerAPI.oAuthCallback`, and adds the result to the internal credentials store. */
const oAuthCallback = useCallback(
async (
provider: CredentialsProviderName,
code: string,
state_token: string,
): Promise<CredentialsMetaResponse> => {
const credsMeta = await api.oAuthCallback(provider, code, state_token);
addCredentials(provider, credsMeta);
return credsMeta;
},
[api, addCredentials],
);
/** Wraps `AutoGPTServerAPI.createAPIKeyCredentials`, and adds the result to the internal credentials store. */
const createAPIKeyCredentials = useCallback(
async (
provider: CredentialsProviderName,
credentials: APIKeyCredentialsCreatable,
): Promise<CredentialsMetaResponse> => {
const credsMeta = await api.createAPIKeyCredentials({
provider,
...credentials,
});
addCredentials(provider, credsMeta);
return credsMeta;
},
[api, addCredentials],
);
useEffect(() => {
api.isAuthenticated().then((isAuthenticated) => {
if (!isAuthenticated) return;
CREDENTIALS_PROVIDER_NAMES.forEach((provider) => {
api.listCredentials(provider).then((response) => {
const { oauthCreds, apiKeys } = response.reduce<{
oauthCreds: CredentialsMetaResponse[];
apiKeys: CredentialsMetaResponse[];
}>(
(acc, cred) => {
if (cred.type === "oauth2") {
acc.oauthCreds.push(cred);
} else if (cred.type === "api_key") {
acc.apiKeys.push(cred);
}
return acc;
},
{ oauthCreds: [], apiKeys: [] },
);
setProviders((prev) => ({
...prev,
[provider]: {
provider,
providerName: providerDisplayNames[provider],
savedApiKeys: apiKeys,
savedOAuthCredentials: oauthCreds,
oAuthCallback: (code: string, state_token: string) =>
oAuthCallback(provider, code, state_token),
createAPIKeyCredentials: (
credentials: APIKeyCredentialsCreatable,
) => createAPIKeyCredentials(provider, credentials),
},
}));
});
});
});
}, [api, createAPIKeyCredentials, oAuthCallback]);
return (
<CredentialsProvidersContext.Provider value={providers}>
{children}
</CredentialsProvidersContext.Provider>
);
}


@@ -9,6 +9,7 @@ import {
BlockIOStringSubSchema,
BlockIONumberSubSchema,
BlockIOBooleanSubSchema,
BlockIOCredentialsSubSchema,
} from "@/lib/autogpt-server-api/types";
import React, { FC, useCallback, useEffect, useState } from "react";
import { Button } from "./ui/button";
@@ -23,6 +24,7 @@ import {
import { Input } from "./ui/input";
import NodeHandle from "./NodeHandle";
import { ConnectionData } from "./CustomNode";
import { CredentialsInput } from "./integrations/credentials-input";
type NodeObjectInputTreeProps = {
selfKey?: string;
@@ -47,7 +49,7 @@ const NodeObjectInputTree: FC<NodeObjectInputTreeProps> = ({
className,
displayName,
}) => {
object ??= ("default" in schema ? schema.default : null) ?? {};
object ||= ("default" in schema ? schema.default : null) ?? {};
return (
<div className={cn(className, "w-full flex-col")}>
{displayName && <strong>{displayName}</strong>}
@@ -103,7 +105,7 @@ export const NodeGenericInputField: FC<{
className,
displayName,
}) => {
displayName ??= propSchema.title || beautifyString(propKey);
displayName ||= propSchema.title || beautifyString(propKey);
if ("allOf" in propSchema) {
// If this happens, that is because Pydantic wraps $refs in an allOf if the
@@ -114,6 +116,18 @@ export const NodeGenericInputField: FC<{
console.warn(`Unsupported 'allOf' in schema for '${propKey}'!`, propSchema);
}
if ("credentials_provider" in propSchema) {
return (
<NodeCredentialsInput
selfKey={propKey}
value={currentValue}
errors={errors}
className={className}
handleInputChange={handleInputChange}
/>
);
}
if ("properties" in propSchema) {
return (
<NodeObjectInputTree
@@ -277,6 +291,28 @@ export const NodeGenericInputField: FC<{
}
};
const NodeCredentialsInput: FC<{
selfKey: string;
value: any;
errors: { [key: string]: string | undefined };
handleInputChange: NodeObjectInputTreeProps["handleInputChange"];
className?: string;
}> = ({ selfKey, value, errors, handleInputChange, className }) => {
return (
<div className={cn("flex flex-col", className)}>
<CredentialsInput
onSelectCredentials={(credsMeta) =>
handleInputChange(selfKey, credsMeta)
}
selectedCredentials={value}
/>
{errors[selfKey] && (
<span className="error-message">{errors[selfKey]}</span>
)}
</div>
);
};
const NodeKeyValueInput: FC<{
selfKey: string;
schema: BlockIOKVSubSchema;
@@ -537,6 +573,7 @@ const NodeStringInput: FC<{
className,
displayName,
}) => {
value ||= schema.default || "";
return (
<div className={className}>
{schema.enum ? (
@@ -606,6 +643,7 @@ export const NodeTextBoxInput: FC<{
className,
displayName,
}) => {
value ||= schema.default || "";
return (
<div className={className}>
<div
@@ -650,8 +688,8 @@ const NodeNumberInput: FC<{
className,
displayName,
}) => {
value ??= schema.default;
displayName ??= schema.title || beautifyString(selfKey);
value ||= schema.default;
displayName ||= schema.title || beautifyString(selfKey);
return (
<div className={className}>
<div className="nodrag flex items-center justify-between space-x-3">
@@ -687,7 +725,7 @@ const NodeBooleanInput: FC<{
className,
displayName,
}) => {
value ??= schema.default ?? false;
value ||= schema.default ?? false;
return (
<div className={className}>
<div className="nodrag flex items-center">
@@ -721,6 +759,7 @@ const NodeFallbackInput: FC<{
className,
displayName,
}) => {
value ||= (schema as BlockIOStringSubSchema)?.default;
return (
<NodeStringInput
selfKey={selfKey}


@@ -25,6 +25,7 @@ const buttonVariants = cva(
default: "h-9 px-4 py-2",
sm: "h-8 rounded-md px-3 text-xs",
lg: "h-10 rounded-md px-8",
primary: "h-14 w-44 rounded-2xl",
icon: "h-9 w-9",
},
},


@@ -9,7 +9,7 @@ const Card = React.forwardRef<
<div
ref={ref}
className={cn(
"rounded-xl border border-neutral-200 bg-white text-neutral-950 shadow dark:border-neutral-800 dark:bg-neutral-950 dark:text-neutral-50",
"rounded-xl bg-white text-neutral-950 shadow dark:border-neutral-800 dark:bg-neutral-950 dark:text-neutral-50",
className,
)}
{...props}


@@ -575,4 +575,148 @@ export const IconMegaphone = createIcon((props) => (
</svg>
));
/**
* Key icon component.
*
* @component IconKey
* @param {IconProps} props - The props object containing additional attributes and event handlers for the icon.
* @returns {JSX.Element} - The key icon.
*
* @example
* // Default usage
* <IconKey />
*
* @example
* // With custom color and size
* <IconKey className="text-primary" size="lg" />
*
* @example
* // With custom size and onClick handler
* <IconKey size="sm" onClick={handleOnClick} />
*/
export const IconKey = createIcon((props) => (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
{...props}
>
<path d="M2.586 17.414A2 2 0 0 0 2 18.828V21a1 1 0 0 0 1 1h3a1 1 0 0 0 1-1v-1a1 1 0 0 1 1-1h1a1 1 0 0 0 1-1v-1a1 1 0 0 1 1-1h.172a2 2 0 0 0 1.414-.586l.814-.814a6.5 6.5 0 1 0-4-4z" />
<circle cx="16.5" cy="7.5" r=".5" fill="currentColor" />
</svg>
));
/**
* Key(+) icon component.
*
* @component IconKeyPlus
* @param {IconProps} props - The props object containing additional attributes and event handlers for the icon.
* @returns {JSX.Element} - The key(+) icon.
*
* @example
* // Default usage
* <IconKeyPlus />
*
* @example
* // With custom color and size
* <IconKeyPlus className="text-primary" size="lg" />
*
* @example
* // With custom size and onClick handler
* <IconKeyPlus size="sm" onClick={handleOnClick} />
*/
export const IconKeyPlus = createIcon((props) => (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
{...props}
>
<path d="M2.586 17.414A2 2 0 0 0 2 18.828V21a1 1 0 0 0 1 1h3a1 1 0 0 0 1-1v-1a1 1 0 0 1 1-1h1a1 1 0 0 0 1-1v-1a1 1 0 0 1 1-1h.172a2 2 0 0 0 1.414-.586l.814-.814a6.5 6.5 0 1 0-4-4z" />
{/* <circle cx="16.5" cy="7.5" r=".5" fill="currentColor" /> */}
<line x1="15.6" x2="15.6" y1="5.4" y2="11.4" />
<line x1="12.6" x2="18.6" y1="8.4" y2="8.4" />
</svg>
));
/**
* User icon component.
*
* @component IconUser
* @param {IconProps} props - The props object containing additional attributes and event handlers for the icon.
* @returns {JSX.Element} - The user icon.
*
* @example
* // Default usage
* <IconUser />
*
* @example
* // With custom color and size
* <IconUser className="text-primary" size="lg" />
*
* @example
* // With custom size and onClick handler
* <IconUser size="sm" onClick={handleOnClick} />
*/
export const IconUser = createIcon((props) => (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
{...props}
>
<path d="M19 21v-2a4 4 0 0 0-4-4H9a4 4 0 0 0-4 4v2" />
<circle cx="12" cy="7" r="4" />
</svg>
));
/**
* User(+) icon component.
*
* @component IconUserPlus
* @param {IconProps} props - The props object containing additional attributes and event handlers for the icon.
* @returns {JSX.Element} - The user plus icon.
*
* @example
* // Default usage
* <IconUserPlus />
*
* @example
* // With custom color and size
* <IconUserPlus className="text-primary" size="lg" />
*
* @example
* // With custom size and onClick handler
* <IconUserPlus size="sm" onClick={handleOnClick} />
*/
export const IconUserPlus = createIcon((props) => (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
{...props}
>
<path d="M16 21v-2a4 4 0 0 0-4-4H6a4 4 0 0 0-4 4v2" />
<circle cx="9" cy="7" r="4" />
<line x1="19" x2="19" y1="8" y2="14" />
<line x1="22" x2="16" y1="11" y2="11" />
</svg>
));
export { iconVariants };


@@ -152,6 +152,7 @@ export default function useAgentGraph(
inputSchema: block.inputSchema,
outputSchema: block.outputSchema,
hardcodedValues: node.input_default,
uiType: block.uiType,
connections: graph.links
.filter((l) => [l.source_id, l.sink_id].includes(node.id))
.map((link) => ({


@@ -0,0 +1,77 @@
import { useContext } from "react";
import { CustomNodeData } from "@/components/CustomNode";
import { BlockIOCredentialsSubSchema } from "@/lib/autogpt-server-api";
import { Node, useNodeId, useNodesData } from "@xyflow/react";
import {
CredentialsProviderData,
CredentialsProvidersContext,
} from "@/components/integrations/credentials-provider";
export type CredentialsData =
| {
provider: string;
schema: BlockIOCredentialsSubSchema;
supportsApiKey: boolean;
supportsOAuth2: boolean;
isLoading: true;
}
| (CredentialsProviderData & {
schema: BlockIOCredentialsSubSchema;
supportsApiKey: boolean;
supportsOAuth2: boolean;
isLoading: false;
});
export default function useCredentials(): CredentialsData | null {
const nodeId = useNodeId();
const allProviders = useContext(CredentialsProvidersContext);
if (!nodeId) {
throw new Error("useCredentials must be used within a CustomNode");
}
const data = useNodesData<Node<CustomNodeData>>(nodeId)!.data;
const credentialsSchema = data.inputSchema.properties
.credentials as BlockIOCredentialsSubSchema;
// If block input schema doesn't have credentials, return null
if (!credentialsSchema) {
return null;
}
const provider = allProviders
? allProviders[credentialsSchema?.credentials_provider]
: null;
const supportsApiKey =
credentialsSchema.credentials_types.includes("api_key");
const supportsOAuth2 = credentialsSchema.credentials_types.includes("oauth2");
// No provider means maybe it's still loading
if (!provider) {
return {
provider: credentialsSchema.credentials_provider,
schema: credentialsSchema,
supportsApiKey,
supportsOAuth2,
isLoading: true,
};
}
// Filter by OAuth credentials that have sufficient scopes for this block
const requiredScopes = credentialsSchema.credentials_scopes;
const savedOAuthCredentials = requiredScopes
? provider.savedOAuthCredentials.filter((c) =>
new Set(c.scopes).isSupersetOf(new Set(requiredScopes)),
)
: provider.savedOAuthCredentials;
return {
...provider,
schema: credentialsSchema,
supportsApiKey,
supportsOAuth2,
savedOAuthCredentials,
isLoading: false,
};
}


@@ -1,6 +1,10 @@
import { SupabaseClient } from "@supabase/supabase-js";
import {
AnalyticsMetrics,
AnalyticsDetails,
APIKeyCredentials,
Block,
CredentialsMetaResponse,
Graph,
GraphCreatable,
GraphUpdateable,
@@ -9,9 +13,8 @@ import {
GraphExecuteResponse,
ExecutionMeta,
NodeExecutionResult,
OAuth2Credentials,
User,
AnalyticsMetrics,
AnalyticsDetails,
} from "./types";
export default class BaseAutoGPTServerAPI {
@@ -34,6 +37,14 @@ export default class BaseAutoGPTServerAPI {
this.supabaseClient = supabaseClient;
}
async isAuthenticated(): Promise<boolean> {
if (!this.supabaseClient) return false;
const {
data: { session },
} = await this.supabaseClient?.auth.getSession();
return session != null;
}
async createUser(): Promise<User> {
return this._request("POST", "/auth/user", {});
}
@@ -156,6 +167,53 @@
).map(parseNodeExecutionResultTimestamps);
}
async oAuthLogin(
provider: string,
scopes?: string[],
): Promise<{ login_url: string; state_token: string }> {
const query = scopes ? { scopes: scopes.join(",") } : undefined;
return await this._get(`/integrations/${provider}/login`, query);
}
async oAuthCallback(
provider: string,
code: string,
state_token: string,
): Promise<CredentialsMetaResponse> {
return this._request("POST", `/integrations/${provider}/callback`, {
code,
state_token,
});
}
async createAPIKeyCredentials(
credentials: Omit<APIKeyCredentials, "id" | "type">,
): Promise<APIKeyCredentials> {
return this._request(
"POST",
`/integrations/${credentials.provider}/credentials`,
credentials,
);
}
async listCredentials(provider: string): Promise<CredentialsMetaResponse[]> {
return this._get(`/integrations/${provider}/credentials`);
}
async getCredentials(
provider: string,
id: string,
): Promise<APIKeyCredentials | OAuth2Credentials> {
return this._get(`/integrations/${provider}/credentials/${id}`);
}
async deleteCredentials(provider: string, id: string): Promise<void> {
return this._request(
"DELETE",
`/integrations/${provider}/credentials/${id}`,
);
}
async logMetric(metric: AnalyticsMetrics) {
return this._request("POST", "/analytics/log_raw_metric", metric);
}
@@ -164,14 +222,14 @@
return this._request("POST", "/analytics/log_raw_analytics", analytic);
}
private async _get(path: string) {
return this._request("GET", path);
private async _get(path: string, query?: Record<string, any>) {
return this._request("GET", path, query);
}
private async _request(
method: "GET" | "POST" | "PUT" | "PATCH",
method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE",
path: string,
payload?: { [key: string]: any },
payload?: Record<string, any>,
) {
if (method != "GET") {
console.debug(`${method} ${path} payload:`, payload);
@@ -181,18 +239,25 @@
(await this.supabaseClient?.auth.getSession())?.data.session
?.access_token || "";
const response = await fetch(this.baseUrl + path, {
let url = this.baseUrl + path;
if (method === "GET" && payload) {
// For GET requests, use payload as query
const queryParams = new URLSearchParams(payload);
url += `?${queryParams.toString()}`;
}
const hasRequestBody = method !== "GET" && payload !== undefined;
const response = await fetch(url, {
method,
headers:
method != "GET"
? {
"Content-Type": "application/json",
Authorization: token ? `Bearer ${token}` : "",
}
: {
Authorization: token ? `Bearer ${token}` : "",
},
body: JSON.stringify(payload),
headers: hasRequestBody
? {
"Content-Type": "application/json",
Authorization: token ? `Bearer ${token}` : "",
}
: {
Authorization: token ? `Bearer ${token}` : "",
},
body: hasRequestBody ? JSON.stringify(payload) : undefined,
});
const response_data = await response.json();


@@ -41,6 +41,7 @@ export type BlockIOSubSchema =
type BlockIOSimpleTypeSubSchema =
| BlockIOObjectSubSchema
| BlockIOCredentialsSubSchema
| BlockIOKVSubSchema
| BlockIOArraySubSchema
| BlockIOStringSubSchema
@@ -91,6 +92,14 @@ export type BlockIOBooleanSubSchema = BlockIOSubSchemaMeta & {
default?: boolean;
};
export type CredentialsType = "api_key" | "oauth2";
export type BlockIOCredentialsSubSchema = BlockIOSubSchemaMeta & {
credentials_provider: "github" | "google" | "notion";
credentials_scopes?: string[];
credentials_types: Array<CredentialsType>;
};
export type BlockIONullSubSchema = BlockIOSubSchemaMeta & {
type: "null";
};
@@ -205,6 +214,51 @@ export type NodeExecutionResult = {
end_time?: Date;
};
/* Mirror of backend/server/integrations.py:CredentialsMetaResponse */
export type CredentialsMetaResponse = {
id: string;
type: CredentialsType;
title?: string;
scopes?: Array<string>;
username?: string;
};
/* Mirror of backend/data/model.py:CredentialsMetaInput */
export type CredentialsMetaInput = {
id: string;
type: CredentialsType;
title?: string;
provider: string;
};
/* Mirror of autogpt_libs/supabase_integration_credentials_store/types.py:_BaseCredentials */
type BaseCredentials = {
id: string;
type: CredentialsType;
title?: string;
provider: string;
};
/* Mirror of autogpt_libs/supabase_integration_credentials_store/types.py:OAuth2Credentials */
export type OAuth2Credentials = BaseCredentials & {
type: "oauth2";
scopes: string[];
username?: string;
access_token: string;
access_token_expires_at?: number;
refresh_token?: string;
refresh_token_expires_at?: number;
metadata: Record<string, any>;
};
/* Mirror of autogpt_libs/supabase_integration_credentials_store/types.py:APIKeyCredentials */
export type APIKeyCredentials = BaseCredentials & {
type: "api_key";
title: string;
api_key: string;
expires_at?: number;
};
export type User = {
id: string;
email: string;


@@ -0,0 +1,8 @@
import { test, expect } from "@playwright/test";
test("has title", async ({ page }) => {
await page.goto("/");
// Expect a title "to contain" a substring.
await expect(page).toHaveTitle(/NextGen AutoGPT/);
});

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: {{ include "autogpt-builder.fullname" . }}-frontend-config
spec:
redirectToHttps:
enabled: true
responseCodeName: 301


@@ -25,6 +25,7 @@ ingress:
kubernetes.io/ingress.global-static-ip-name: "agpt-dev-agpt-builder-ip"
networking.gke.io/managed-certificates: "autogpt-builder-cert"
kubernetes.io/ingress.allow-http: "true"
networking.gke.io/v1beta1.FrontendConfig: "autogpt-builder-frontend-config"
hosts:
- host: dev-builder.agpt.co
paths:
@@ -56,7 +57,7 @@ domain: "dev-builder.agpt.co"
env:
APP_ENV: "dev"
NEXT_PUBLIC_AGPT_SERVER_URL: "http://agpt-server:8000/api"
NEXT_PUBLIC_AGPT_SERVER_URL: ["http://agpt-server:8000/api"]
GOOGLE_CLIENT_ID: ""
GOOGLE_CLIENT_SECRET: ""
NEXT_PUBLIC_SUPABASE_URL: ""


@@ -100,3 +100,4 @@ env:
SUPABASE_ANON_KEY: ""
SUPABASE_URL: ""
DATABASE_URL: ""
BACKEND_CORS_ALLOW_ORIGINS: "https://dev-builder.agpt.co"


@@ -85,3 +85,10 @@ env:
NUM_NODE_WORKERS: 5
REDIS_HOST: "redis-dev-master.redis-dev.svc.cluster.local"
REDIS_PORT: "6379"
BACKEND_CORS_ALLOW_ORIGINS: '["https://dev-builder.agpt.co"]'
SUPABASE_SERVICE_ROLE_KEY: ""
GITHUB_CLIENT_ID: ""
GITHUB_CLIENT_SECRET: ""
FRONTEND_BASE_URL: ""
SUPABASE_URL: ""
SUPABASE_JWT_SECRET: ""


@@ -61,3 +61,4 @@ env:
REDIS_HOST: "redis-dev-master.redis-dev.svc.cluster.local"
REDIS_PORT: "6379"
REDIS_PASSWORD: "password"
BACKEND_CORS_ALLOW_ORIGINS: "https://dev-builder.agpt.co"


@@ -2,7 +2,7 @@ DB_USER=postgres
DB_PASS=your-super-secret-and-long-postgres-password
DB_NAME=postgres
DB_PORT=5432
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@localhost:${DB_PORT}/${DB_NAME}?connect_timeout=60&schema=marketplace"
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@localhost:${DB_PORT}/${DB_NAME}?connect_timeout=60&schema=market"
SENTRY_DSN=https://11d0640fef35640e0eb9f022eb7d7626@o4505260022104064.ingest.us.sentry.io/4507890252447744
ENABLE_AUTH=true


@@ -16,9 +16,9 @@ import sentry_sdk.integrations.starlette
import market.config
import market.routes.admin
import market.routes.agents
import market.routes.analytics
import market.routes.search
import market.routes.submissions
import market.routes.analytics
dotenv.load_dotenv()
@@ -62,12 +62,9 @@ app = fastapi.FastAPI(
app.add_middleware(fastapi.middleware.gzip.GZipMiddleware, minimum_size=1000)
app.add_middleware(
middleware_class=fastapi.middleware.cors.CORSMiddleware,
allow_origins=[
"http://localhost:3000",
"http://127.0.0.1:3000",
"http://127.0.0.1:3000",
"https://dev-builder.agpt.co",
],
allow_origins=os.environ.get(
"BACKEND_CORS_ALLOW_ORIGINS", "http://localhost:3000,http://127.0.0.1:3000"
).split(","),
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
@@ -87,6 +84,7 @@ def health():
content="<h1>Marketplace API</h1>", status_code=200
)
@app.get("/")
def default():
return fastapi.responses.HTMLResponse(


@@ -84,7 +84,7 @@ Follow these steps to create and test a new block:
5. **Implement the `run` method with error handling:**, this should contain the main logic of the block:
```python
def run(self, input_data: Input) -> BlockOutput:
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
topic = input_data.topic
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
@ -105,6 +105,145 @@ Follow these steps to create and test a new block:
- **Error handling**: Handle various exceptions that might occur during the API request and data processing.
- **Yield**: Use `yield` to output the results.
### Blocks with authentication
Our system supports auth offloading for API keys and OAuth2 authorization flows.
Adding a block with API key authentication is straightforward, as is adding a block
for a service for which we already have OAuth2 support.
Implementing the block itself is relatively simple. On top of the instructions above,
you're going to add a `credentials` parameter to the `Input` model and the `run` method:
```python
from autogpt_libs.supabase_integration_credentials_store.types import (
APIKeyCredentials,
OAuth2Credentials,
Credentials,
)
from backend.data.block import Block, BlockOutput, BlockSchema
from backend.data.model import CredentialsField
# API Key auth:
class BlockWithAPIKeyAuth(Block):
class Input(BlockSchema):
credentials = CredentialsField(
provider="github",
supported_credential_types={"api_key"},
required_scopes={"repo"},
description="The GitHub integration can be used with "
"any API key with sufficient permissions for the blocks it is used on.",
)
# ...
def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
...
# OAuth:
class BlockWithOAuth(Block):
class Input(BlockSchema):
credentials = CredentialsField(
provider="github",
supported_credential_types={"oauth2"},
required_scopes={"repo"},
description="The GitHub integration can be used with OAuth.",
)
# ...
def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
...
# API Key auth + OAuth:
class BlockWithAPIKeyAndOAuth(Block):
class Input(BlockSchema):
credentials = CredentialsField(
provider="github",
supported_credential_types={"api_key", "oauth2"},
required_scopes={"repo"},
description="The GitHub integration can be used with OAuth, "
"or any API key with sufficient permissions for the blocks it is used on.",
)
# ...
def run(
self,
input_data: Input,
*,
credentials: Credentials,
**kwargs,
) -> BlockOutput:
...
```
The credentials will be automagically injected by the executor in the back end.
The `APIKeyCredentials` and `OAuth2Credentials` models are defined [here](https://github.com/Significant-Gravitas/AutoGPT/blob/master/rnd/autogpt_libs/autogpt_libs/supabase_integration_credentials_store/types.py).
To use them in e.g. an API request, you can either access the token directly:
```python
# credentials: APIKeyCredentials
response = requests.post(
url,
headers={
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
},
)
# credentials: OAuth2Credentials
response = requests.post(
url,
headers={
"Authorization": f"Bearer {credentials.access_token.get_secret_value()}",
},
)
```
or use the shortcut `credentials.bearer()`:
```python
# credentials: APIKeyCredentials | OAuth2Credentials
response = requests.post(
url,
headers={"Authorization": credentials.bearer()},
)
```
#### Adding an OAuth2 service integration
To add support for a new OAuth2-authenticated service, you'll need to add an `OAuthHandler`.
All our existing handlers and the base class can be found [here][OAuth2 handlers].
Every handler must implement the following parts of the [`BaseOAuthHandler`] interface:
- `PROVIDER_NAME`
- `__init__(client_id, client_secret, redirect_uri)`
- `get_login_url(scopes, state)`
- `exchange_code_for_tokens(code)`
- `_refresh_tokens(credentials)`
As you can see, this is modeled after the standard OAuth2 flow.
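A handler skeleton might look like the following. This is a hypothetical sketch that only mirrors the method names listed above; the real `BaseOAuthHandler` base class, its exact signatures, and the `https://example.com` endpoints are not taken from the codebase:

```python
from urllib.parse import urlencode

class ExampleOAuthHandler:
    """Illustrative OAuth2 handler skeleton (does not subclass the real base)."""

    PROVIDER_NAME = "example"

    def __init__(self, client_id: str, client_secret: str, redirect_uri: str):
        self.client_id = client_id
        self.client_secret = client_secret
        self.redirect_uri = redirect_uri

    def get_login_url(self, scopes: list[str], state: str) -> str:
        # Build the provider's authorization URL for the standard code flow.
        params = {
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
            "scope": " ".join(scopes),
            "state": state,
            "response_type": "code",
        }
        return "https://example.com/oauth/authorize?" + urlencode(params)

    def exchange_code_for_tokens(self, code: str):
        # Would POST the code plus client credentials to the provider's token
        # endpoint and build OAuth2Credentials from the response.
        raise NotImplementedError

    def _refresh_tokens(self, credentials):
        # Would use credentials.refresh_token to obtain a fresh access token.
        raise NotImplementedError
```

The login URL, token exchange, and refresh steps map one-to-one onto the interface members listed above.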
Aside from implementing the `OAuthHandler` itself, adding a handler into the system requires two more things:
- Adding the handler class to `HANDLERS_BY_NAME` [here](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/integrations/oauth/__init__.py)
- Adding `{provider}_client_id` and `{provider}_client_secret` to the application's `Secrets` [here](https://github.com/Significant-Gravitas/AutoGPT/blob/e3f35d79c7e9fc6ee0cabefcb73e0fad15a0ce2d/autogpt_platform/backend/backend/util/settings.py#L132)
[OAuth2 handlers]: https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt_platform/backend/backend/integrations/oauth
[`BaseOAuthHandler`]: https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/integrations/oauth/base.py
#### Example: GitHub integration
- GitHub blocks with API key + OAuth2 support: [`blocks/github`](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt_platform/backend/backend/blocks/github/)
- GitHub OAuth2 handler: [`integrations/oauth/github.py`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/integrations/oauth/github.py)
## Key Points to Remember
- **Unique ID**: Give your block a unique ID in the `__init__` method.
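One way to get a unique ID is to generate a UUID once and paste it in as a string literal:

```python
import uuid

# Generate the ID once and hard-code the resulting string in your block's
# __init__; do NOT call uuid4() at runtime, or the block's ID would change
# on every import.
print(str(uuid.uuid4()))
```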
The testing of blocks is handled by `test_block.py`, which does the following:
1. It calls the block with the provided `test_input`.
If the block has a `credentials` field, `test_credentials` is passed in as well.
2. If a `test_mock` is provided, it temporarily replaces the specified methods with the mock functions.
3. It then asserts that the output matches the `test_output`.
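The mock-then-assert mechanics can be pictured with a toy example. The names here (`test_mock` as a dict of method names, a standalone `execute_block_test`) are illustrative, not the real test harness:

```python
class WeatherBlock:
    """Toy block whose run() would normally call an external API."""

    test_input = {"city": "Berlin"}
    test_output = {"temperature": 21}
    # Methods to replace during testing, keyed by method name.
    test_mock = {"fetch_temperature": lambda self, city: 21}

    def fetch_temperature(self, city: str) -> int:
        raise RuntimeError("would hit a real API")

    def run(self, city: str) -> dict:
        return {"temperature": self.fetch_temperature(city)}


def execute_block_test(block_cls) -> None:
    # 2. Temporarily replace the specified methods with the mock functions.
    for name, fake in block_cls.test_mock.items():
        setattr(block_cls, name, fake)
    # 1. Call the block with the provided test_input.
    output = block_cls().run(**block_cls.test_input)
    # 3. Assert the output matches test_output.
    assert output == block_cls.test_output


execute_block_test(WeatherBlock)
print("block test passed")
```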


This guide will help you set up the server and builder for the project.
<!-- The video is listed in the root Readme.md of the repo -->
We also offer this in video format. You can check it out [here](https://github.com/Significant-Gravitas/AutoGPT?tab=readme-ov-file#how-to-setup-for-self-hosting).
!!! warning
**DO NOT FOLLOW ANY OUTSIDE TUTORIALS AS THEY WILL LIKELY BE OUT OF DATE**
To setup the server, you need to have the following installed:
- [Node.js](https://nodejs.org/en/)
- [Docker](https://docs.docker.com/get-docker/)
- [Git](https://git-scm.com/downloads)
### Checking if you have Node.js & NPM installed
We use Node.js to run our frontend application.
If you need assistance installing Node.js:
https://nodejs.org/en/download/
NPM is included with Node.js, but if you need assistance installing NPM:
https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
You can check if you have Node.js & NPM installed by running the following command:
```bash
node -v
npm -v
```
Once you have Node.js installed, you can proceed to the next step.
### Checking if you have Docker & Docker Compose installed
Docker containerizes applications, while Docker Compose orchestrates multi-container Docker applications.
If you need assistance installing Docker:
https://docs.docker.com/desktop/
If you need assistance installing docker compose:
Docker Compose is included in Docker Desktop, but if you need assistance installing Docker Compose:
https://docs.docker.com/compose/install/
You can check if you have Docker installed by running the following command:
```bash
docker -v
docker-compose -v
```
Once you have Docker and Docker Compose installed, you can proceed to the next step.
## Cloning the Repository
The first step is cloning the AutoGPT repository to your computer.
To do this, open a terminal window in a folder on your computer and run:
```
git clone https://github.com/Significant-Gravitas/AutoGPT.git
```
If you get stuck, follow [this guide](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository).
Once that's complete you can close this terminal window.
## Running the backend services
To run the backend services, follow these steps:
* Within the repository, clone the submodules and navigate to the `autogpt_platform` directory:
```bash
git submodule update --init --recursive
cd autogpt_platform
```
This command will initialize and update the submodules in the repository. The `supabase` folder will be cloned to the root directory.
* Copy the `.env.example` file available in the `supabase/docker` directory to `.env` in `autogpt_platform`:
```
cp supabase/docker/.env.example .env
```
This command will copy the `.env.example` file to `.env` in the `autogpt_platform` directory. You can modify the `.env` file to add your own environment variables.
* Run the backend services:
```
docker compose up -d
```
This command will start all the necessary backend services defined in the `docker-compose.combined.yml` file in detached mode.
## Running the frontend application
To run the frontend application, follow these steps:
* Navigate to `frontend` folder within the `autogpt_platform` directory:
```
cd frontend
```
* Copy the `.env.example` file available in the `frontend` directory to `.env` in the same directory:
```
cp .env.example .env
```
You can modify the `.env` within this folder to add your own environment variables for the frontend application.
* Run the following command:
```
npm install
npm run dev
```
This command will install the necessary dependencies and start the frontend application in development mode.
## Checking if the application is running
You can check if the application is running by visiting [http://localhost:3000](http://localhost:3000) in your browser.
### Notes:
By default, the services run on the following ports:

- Frontend UI Server: 3000
- Backend Websocket Server: 8001
- Execution API Rest Server: 8006