Merge branch 'master' into fix-prompt-path
commit c3288dcca8
@@ -1,2 +1,6 @@
 OPENAI_API_KEY=your-openai-api-key
 ELEVENLABS_API_KEY=your-elevenlabs-api-key
+SMART_LLM_MODEL="gpt-4"
+FAST_LLM_MODEL="gpt-3.5-turbo"
+GOOGLE_API_KEY=
+CUSTOM_SEARCH_ENGINE_ID=
@@ -7,3 +7,5 @@ package-lock.json
 scripts/auto_gpt_workspace/*
 *.mpeg
 .env
+last_run_ai_settings.yaml
+outputs/*
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 Toran Bruce Richards
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
README.md
@@ -1,6 +1,7 @@
 # Auto-GPT: An Autonomous GPT-4 Experiment
 ![GitHub Repo stars](https://img.shields.io/github/stars/Torantulino/auto-gpt?style=social)
 ![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)
+[![](https://dcbadge.vercel.app/api/server/PQ7VX6TY4t?style=flat)](https://discord.gg/PQ7VX6TY4t)
 
 Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, autonomously develops and manages businesses to increase net worth. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.
@@ -17,22 +18,33 @@ Your support is greatly appreciated
 
 <p align="center">
 Development of this free, open-source project is made possible by all the <a href="https://github.com/Torantulino/Auto-GPT/graphs/contributors">contributors</a> and <a href="https://github.com/sponsors/Torantulino">sponsors</a>. If you'd like to sponsor this project and have your avatar or company logo appear below <a href="https://github.com/sponsors/Torantulino">click here</a>. 💖
-<p align="center">
+</p>
+<p align="center">
+<a href="https://github.com/nocodeclarity"><img src="https://github.com/nocodeclarity.png" width="50px" alt="nocodeclarity" /></a> <a href="https://github.com/tjarmain"><img src="https://github.com/tjarmain.png" width="50px" alt="tjarmain" /></a> <a href="https://github.com/tekelsey"><img src="https://github.com/tekelsey.png" width="50px" alt="tekelsey" /></a> <a href="https://github.com/robinicus"><img src="https://github.com/robinicus.png" width="50px" alt="robinicus" /></a> <a href="https://github.com/digisomni"><img src="https://github.com/digisomni.png" width="50px" alt="digisomni" /></a>
+</p>
 <p align="center">
-<a href="https://github.com/nocodeclarity"><img src="https://github.com/nocodeclarity.png" width="50px" alt="nocodeclarity" /></a> <a href="https://github.com/tjarmain"><img src="https://github.com/tjarmain.png" width="50px" alt="robjtede" /></a>
-</p>
 <a href="https://github.com/alexisneuhaus"><img src="https://github.com/alexisneuhaus.png" width="30px" alt="alexisneuhaus" /></a> <a href="https://github.com/iokode"><img src="https://github.com/iokode.png" width="30px" alt="iokode" /></a> <a href="https://github.com/jaumebalust"><img src="https://github.com/jaumebalust.png" width="30px" alt="jaumebalust" /></a>
 </p>
 
 
 ## Table of Contents
 
-- [Features](#-features)
-- [Requirements](#-requirements)
-- [Installation](#-installation)
-- [Usage](#-usage)
-- [Limitations](#-limitations)
-- [Disclaimer](#-disclaimer)
-- [Connect with Us on Twitter](#-connect-with-us-on-twitter)
+- [Auto-GPT: An Autonomous GPT-4 Experiment](#auto-gpt-an-autonomous-gpt-4-experiment)
+  - [Demo (30/03/2023):](#demo-30032023)
+  - [💖 Help Fund Auto-GPT's Development](#-help-fund-auto-gpts-development)
+  - [Table of Contents](#table-of-contents)
+  - [🚀 Features](#-features)
+  - [📋 Requirements](#-requirements)
+  - [💾 Installation](#-installation)
+  - [🔧 Usage](#-usage)
+  - [🗣️ Speech Mode](#️-speech-mode)
+  - [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
+    - [Setting up environment variables](#setting-up-environment-variables)
+  - [💀 Continuous Mode ⚠️](#-continuous-mode-️)
+  - [GPT3.5 ONLY Mode](#gpt35-only-mode)
+  - [⚠️ Limitations](#️-limitations)
+  - [🛡 Disclaimer](#-disclaimer)
+  - [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
 
 
 ## 🚀 Features
@@ -93,7 +105,37 @@ python scripts/main.py
 
 ## 🗣️ Speech Mode
 Use this to use TTS for Auto-GPT
 ```
-python scripts/main.py speak-mode
+python scripts/main.py --speak
 ```
 
+## 🔍 Google API Keys Configuration
+
+This section is optional. Use the official Google API if you are running into error 429 when doing a Google search.
+To use the `google_official_search` command, you need to set up your Google API keys in your environment variables.
+
+1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
+2. If you don't already have an account, create one and log in.
+3. Create a new project by clicking on the "Select a Project" dropdown at the top of the page and clicking "New Project". Give it a name and click "Create".
+4. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard) and click "Enable APIs and Services". Search for "Custom Search API" and click on it, then click "Enable".
+5. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page and click "Create Credentials". Choose "API Key".
+6. Copy the API key and set it as an environment variable named `GOOGLE_API_KEY` on your machine. See setting up environment variables below.
+7. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page and click "Add".
+8. Set up your search engine by following the prompts. You can choose to search the entire web or specific sites.
+9. Once you've created your search engine, click on "Control Panel" and then "Basics". Copy the "Search engine ID" and set it as an environment variable named `CUSTOM_SEARCH_ENGINE_ID` on your machine. See setting up environment variables below.
+
+### Setting up environment variables
+For Windows users:
+```
+setx GOOGLE_API_KEY "YOUR_GOOGLE_API_KEY"
+setx CUSTOM_SEARCH_ENGINE_ID "YOUR_CUSTOM_SEARCH_ENGINE_ID"
+```
+For macOS and Linux users:
+```
+export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
+export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
+```
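As a quick sanity check that the two variables are wired up, here is a hypothetical sketch (not part of this changeset) that only mirrors how `config.py` and `execute_command` later in this diff consume the keys:

```python
import os

# Mirrors config.py's os.getenv(...) reads and execute_command's fallback logic.
google_api_key = os.getenv("GOOGLE_API_KEY")
custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID")

if google_api_key and google_api_key.strip():
    print("google_official_search will be used.")
else:
    print("Falling back to the unofficial googlesearch-python scraper.")
```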
 
 ## 💀 Continuous Mode ⚠️
@@ -101,13 +143,19 @@ Run the AI **without** user authorisation, 100% automated.
 Continuous mode is not recommended.
 It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise.
 Use at your own risk.
 
 1. Run the `main.py` Python script in your terminal:
 ```
-python scripts/main.py continuous-mode
+python scripts/main.py --continuous
 ```
 2. To exit the program, press Ctrl + C
 
+## GPT3.5 ONLY Mode
+If you don't have access to the GPT-4 API, this mode allows you to use Auto-GPT!
+```
+python scripts/main.py --gpt3only
+```
+
 ## ⚠️ Limitations
 This experiment aims to showcase the potential of GPT-4 but comes with some limitations:
@@ -0,0 +1,14 @@
+# I wasn't having any luck installing the requirements.txt file on Mac or Linux,
+# but this seems to work.
+# The biggest difference is docker 5 instead of 6, because of this silliness:
+#
+# The conflict is caused by:
+#     The user requested requests>=2.26.0
+#     docker 6.0.1 depends on requests>=2.26.0
+#     googlesearch-python 1.1.0 depends on requests==2.25.1
+docker==5.0.3
+
+# I'd love to fix this in a cleaner way
+
+# Now go ahead and install the rest of what requirements.txt says:
+-r requirements.txt
@@ -1,9 +1,13 @@
-beautifulsoup4==4.9.3
+beautifulsoup4
 colorama==0.4.6
-docker==5.0.3
-googlesearch-python==1.1.0
 openai==0.27.2
 playsound==1.3.0
 python-dotenv==1.0.0
 pyyaml==6.0
 readability-lxml==0.8.1
-requests==2.25.1
+requests
 tiktoken==0.3.3
+docker
+googlesearch-python
+google-api-python-client #(https://developers.google.com/custom-search/v1/overview)
+# Googlesearch python seems to be a bit cursed, anyone good at fixing things like this?
@@ -1,10 +1,10 @@
-import openai
+from llm_utils import create_chat_completion
 
 next_key = 0
 agents = {}  # key, (task, full_message_history, model)
 
 # Create new GPT agent
 # TODO: Centralise use of create_chat_completion() to globally enforce token limit
 
 def create_agent(task, prompt, model):
     global next_key
@@ -13,13 +13,11 @@ def create_agent(task, prompt, model):
     messages = [{"role": "user", "content": prompt}, ]
 
-    # Start GPT-3 instance
-    response = openai.ChatCompletion.create(
+    agent_reply = create_chat_completion(
         model=model,
         messages=messages,
     )
 
-    agent_reply = response.choices[0].message["content"]
-
     # Update full message history
     messages.append({"role": "assistant", "content": agent_reply})
@@ -42,14 +40,11 @@ def message_agent(key, message):
     messages.append({"role": "user", "content": message})
 
-    # Start GPT-3 instance
-    response = openai.ChatCompletion.create(
+    agent_reply = create_chat_completion(
         model=model,
         messages=messages,
     )
 
-    # Get agent response
-    agent_reply = response.choices[0].message["content"]
-
     # Update full message history
     messages.append({"role": "assistant", "content": agent_reply})
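A hypothetical usage sketch for the agent manager above. The module name and return values are assumptions inferred from the `agents.delete_agent(key)` call that appears later in commands.py; the commit itself does not show them:

```python
import agents  # module name assumed from commands.py's agents.delete_agent(key) call

# create_agent is assumed to return the new agent's key plus its first reply.
key, first_reply = agents.create_agent(
    task="summarise text",
    prompt="You are a text summarisation agent.",
    model="gpt-3.5-turbo",
)
print(first_reply)
print(agents.message_agent(key, "Summarise: Auto-GPT chains GPT-4 calls together..."))
```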
@@ -0,0 +1,43 @@
+import yaml
+import data
+
+
+class AIConfig:
+    def __init__(self, ai_name="", ai_role="", ai_goals=[]):
+        self.ai_name = ai_name
+        self.ai_role = ai_role
+        self.ai_goals = ai_goals
+
+    # Soon this will go in a folder where it remembers more stuff about the run(s)
+    SAVE_FILE = "last_run_ai_settings.yaml"
+
+    @classmethod
+    def load(cls, config_file=SAVE_FILE):
+        # Load variables from yaml file if it exists
+        try:
+            with open(config_file) as file:
+                config_params = yaml.load(file, Loader=yaml.FullLoader)
+        except FileNotFoundError:
+            config_params = {}
+
+        ai_name = config_params.get("ai_name", "")
+        ai_role = config_params.get("ai_role", "")
+        ai_goals = config_params.get("ai_goals", [])
+
+        return cls(ai_name, ai_role, ai_goals)
+
+    def save(self, config_file=SAVE_FILE):
+        config = {"ai_name": self.ai_name, "ai_role": self.ai_role, "ai_goals": self.ai_goals}
+        with open(config_file, "w") as file:
+            yaml.dump(config, file)
+
+    def construct_full_prompt(self):
+        prompt_start = """Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications."""
+
+        # Construct full prompt
+        full_prompt = f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n"
+        for i, goal in enumerate(self.ai_goals):
+            full_prompt += f"{i+1}. {goal}\n"
+
+        full_prompt += f"\n\n{data.load_prompt()}"
+        return full_prompt
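A minimal round-trip sketch for `AIConfig` (hypothetical usage; the example name and role are the defaults that appear later in main.py):

```python
from ai_config import AIConfig

config = AIConfig(
    ai_name="Entrepreneur-GPT",
    ai_role="an AI designed to autonomously develop and run businesses",
    ai_goals=["Increase net worth"],
)
config.save()               # writes last_run_ai_settings.yaml
restored = AIConfig.load()  # returns defaults ("", "", []) if the file is missing
print(restored.construct_full_prompt())
```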
@@ -1,43 +1,24 @@
 from typing import List, Optional
 import json
-import openai
-
-
-# This is a magic function that can do anything with no-code. See
-# https://github.com/Torantulino/AI-Functions for more info.
-def call_ai_function(function, args, description, model="gpt-4"):
-    # parse args to comma separated string
-    args = ", ".join(args)
-    messages = [
-        {
-            "role": "system",
-            "content": f"You are now the following python function: ```# {description}\n{function}```\n\nOnly respond with your `return` value.",
-        },
-        {"role": "user", "content": args},
-    ]
-
-    response = openai.ChatCompletion.create(
-        model=model, messages=messages, temperature=0
-    )
-
-    return response.choices[0].message["content"]
-
+from config import Config
+from call_ai_function import call_ai_function
+from json_parser import fix_and_parse_json
+cfg = Config()
 
 # Evaluating code
 
 
 def evaluate_code(code: str) -> List[str]:
     function_string = "def analyze_code(code: str) -> List[str]:"
     args = [code]
     description_string = """Analyzes the given code and returns a list of suggestions for improvements."""
 
     result_string = call_ai_function(function_string, args, description_string)
-    return json.loads(result_string)
+    return result_string
 
 
 # Improving code
 
 
 def improve_code(suggestions: List[str], code: str) -> str:
     function_string = (
         "def generate_improved_code(suggestions: List[str], code: str) -> str:"

@@ -61,3 +42,5 @@ def write_tests(code: str, focus: List[str]) -> str:
 
     result_string = call_ai_function(function_string, args, description_string)
     return result_string
+
+
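The "AI function" trick above sends a Python signature plus a docstring-style description as the system prompt and lets the model role-play the function body. A hypothetical call (requires a configured `OPENAI_API_KEY`; the buggy input is invented for illustration):

```python
from ai_functions import evaluate_code

# The model plays analyze_code() and replies with only its `return` value.
suggestions = evaluate_code("def add(a, b):\n    return a - b")
print(suggestions)  # e.g. '["The function subtracts instead of adding", ...]'
```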
@@ -1,9 +1,9 @@
 from googlesearch import search
 import requests
 from bs4 import BeautifulSoup
 from readability import Document
-import openai
 from config import Config
+from llm_utils import create_chat_completion
 
 cfg = Config()
 
 def scrape_text(url):
     response = requests.get(url)

@@ -99,13 +99,11 @@ def summarize_text(text, is_website=True):
                 chunk},
         ]
 
-        response = openai.ChatCompletion.create(
-            model="gpt-3.5-turbo",
+        summary = create_chat_completion(
+            model=cfg.fast_llm_model,
             messages=messages,
             max_tokens=300,
         )
 
-        summary = response.choices[0].message.content
         summaries.append(summary)
         print("Summarized " + str(len(chunks)) + " chunks.")
@@ -127,11 +125,10 @@ def summarize_text(text, is_website=True):
                 combined_summary},
         ]
 
-    response = openai.ChatCompletion.create(
-        model="gpt-3.5-turbo",
+    final_summary = create_chat_completion(
+        model=cfg.fast_llm_model,
        messages=messages,
        max_tokens=300,
     )
 
-    final_summary = response.choices[0].message.content
     return final_summary
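The summarizer above is a two-stage map-reduce: each chunk is summarized, then the concatenated summaries are summarized once more. A hypothetical miniature of the same shape (chunk contents invented for illustration):

```python
from llm_utils import create_chat_completion

def summarize(text: str) -> str:
    # One LLM call per piece of text, capped at 300 response tokens like above.
    return create_chat_completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize:\n{text}"}],
        max_tokens=300,
    )

chunks = ["first half of a long article ...", "second half ..."]
final_summary = summarize("\n".join(summarize(chunk) for chunk in chunks))
```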
@@ -0,0 +1,25 @@
+from config import Config
+cfg = Config()
+
+from llm_utils import create_chat_completion
+
+
+# This is a magic function that can do anything with no-code. See
+# https://github.com/Torantulino/AI-Functions for more info.
+def call_ai_function(function, args, description, model=cfg.smart_llm_model):
+    # For each arg, if any are None, convert to "None":
+    args = [str(arg) if arg is not None else "None" for arg in args]
+    # parse args to comma separated string
+    args = ", ".join(args)
+    messages = [
+        {
+            "role": "system",
+            "content": f"You are now the following python function: ```# {description}\n{function}```\n\nOnly respond with your `return` value.",
+        },
+        {"role": "user", "content": args},
+    ]
+
+    response = create_chat_completion(
+        model=model, messages=messages, temperature=0
+    )
+
+    return response
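A hypothetical direct call to the relocated helper; the classifier signature and input are invented for illustration. `temperature=0` keeps the role-played "function" as deterministic as the API allows:

```python
from call_ai_function import call_ai_function

result = call_ai_function(
    function="def classify_sentiment(text: str) -> str:",
    args=["I love this project!"],
    description="Classifies sentiment as 'positive', 'negative' or 'neutral'.",
)
print(result)  # expected: positive
```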
@@ -1,13 +1,12 @@
-import os
 import time
 import openai
-from dotenv import load_dotenv
 from config import Config
+import token_counter
 
-# Load environment variables from .env file
-load_dotenv()
 cfg = Config()
 
-# Initialize the OpenAI API client
-openai.api_key = os.getenv("OPENAI_API_KEY")
+from llm_utils import create_chat_completion
 
 
 def create_chat_message(role, content):

@@ -24,6 +23,8 @@ def create_chat_message(role, content):
     return {"role": role, "content": content}
 
 
+# TODO: Change debug from hardcode to argument
 def chat_with_ai(
         prompt,
         user_input,
@@ -46,16 +47,55 @@ def chat_with_ai(
         Returns:
             str: The AI's response.
         """
+        model = cfg.fast_llm_model  # TODO: Change model from hardcode to argument
         # Reserve 1000 tokens for the response
+        if debug:
+            print(f"Token limit: {token_limit}")
         send_token_limit = token_limit - 1000
 
         current_context = [
             create_chat_message(
-                "system", prompt), create_chat_message(
-                "system", f"Permanent memory: {permanent_memory}")]
-        current_context.extend(
-            full_message_history[-(token_limit - len(prompt) - len(permanent_memory) - 10):])
+                "system", prompt),
+            create_chat_message(
+                "system", f"Permanent memory: {permanent_memory}")]
+
+        # Add messages from the full message history until we reach the token limit
+        next_message_to_add_index = len(full_message_history) - 1
+        current_tokens_used = 0
+        insertion_index = len(current_context)
+
+        # Count the currently used tokens
+        current_tokens_used = token_counter.count_message_tokens(current_context, model)
+        current_tokens_used += token_counter.count_message_tokens([create_chat_message("user", user_input)], model)  # Account for user input (appended later)
+
+        while next_message_to_add_index >= 0:
+            # print (f"CURRENT TOKENS USED: {current_tokens_used}")
+            message_to_add = full_message_history[next_message_to_add_index]
+
+            tokens_to_add = token_counter.count_message_tokens([message_to_add], model)
+            if current_tokens_used + tokens_to_add > send_token_limit:
+                break
+
+            # Add the most recent message to the start of the current context, after the two system prompts.
+            current_context.insert(insertion_index, full_message_history[next_message_to_add_index])
+
+            # Count the currently used tokens
+            current_tokens_used += tokens_to_add
+
+            # Move to the next most recent message in the full message history
+            next_message_to_add_index -= 1
+
+        # Append user input, the length of this is accounted for above
+        current_context.extend([create_chat_message("user", user_input)])
+
+        # Calculate remaining tokens
+        tokens_remaining = token_limit - current_tokens_used
+        # assert tokens_remaining >= 0, "Tokens remaining is negative. This should never happen, please submit a bug report at https://www.github.com/Torantulino/Auto-GPT"
+
+        # Debug print the current context
         if debug:
             print(f"Token limit: {token_limit}")
             print(f"Send Token Count: {current_tokens_used}")
             print(f"Tokens remaining for response: {tokens_remaining}")
             print("------------ CONTEXT SENT TO AI ---------------")
             for message in current_context:
                 # Skip printing the prompt

@@ -63,15 +103,16 @@ def chat_with_ai(
                     continue
                 print(
                     f"{message['role'].capitalize()}: {message['content']}")
                 print()
             print("----------- END OF CONTEXT ----------------")
 
-        response = openai.ChatCompletion.create(
-            model="gpt-4",
+        # TODO: use a model defined elsewhere, so that model can contain temperature and other settings we care about
+        assistant_reply = create_chat_completion(
+            model=model,
             messages=current_context,
+            max_tokens=tokens_remaining,
         )
 
-        assistant_reply = response.choices[0].message["content"]
-
         # Update full message history
         full_message_history.append(
             create_chat_message(
@@ -82,5 +123,6 @@ def chat_with_ai(
 
         return assistant_reply
     except openai.error.RateLimitError:
+        # TODO: When we switch to langchain, this is built in
         print("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
         time.sleep(10)
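A hypothetical call sketch matching the argument order used in main.py later in this diff. The `mem` module import is an assumption (main.py reads `mem.permanent_memory` but the import is not shown here):

```python
import chat
import memory as mem  # assumption: source of mem.permanent_memory in main.py
from config import Config

cfg = Config()
full_message_history = []  # grows across turns via chat_with_ai itself
assistant_reply = chat.chat_with_ai(
    "You are Entrepreneur-GPT, ...",  # system prompt
    "Determine which next command to use, and respond using the format specified above:",
    full_message_history,
    mem.permanent_memory,
    cfg.fast_token_limit,  # 4000 by default, per config.py below
)
```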
@@ -8,15 +8,30 @@ from config import Config
 import ai_functions as ai
 from file_operations import read_file, write_to_file, append_to_file, delete_file
 from execute_code import execute_python_file
+from json_parser import fix_and_parse_json
 from googlesearch import search
+from googleapiclient.discovery import build
+from googleapiclient.errors import HttpError
 
 cfg = Config()
 
 
 def get_command(response):
     try:
-        response_json = json.loads(response)
+        response_json = fix_and_parse_json(response)
 
         if "command" not in response_json:
             return "Error:", "Missing 'command' object in JSON"
 
         command = response_json["command"]
 
         if "name" not in command:
             return "Error:", "Missing 'name' field in 'command' object"
 
         command_name = command["name"]
-        arguments = command["args"]
 
-        if not arguments:
-            arguments = {}
+        # Use an empty dictionary if 'args' field is not present in 'command' object
+        arguments = command.get("args", {})
@@ -32,9 +47,13 @@ def get_command(response):
 def execute_command(command_name, arguments):
     try:
         if command_name == "google":
-            return google_search(arguments["input"])
-        elif command_name == "check_notifications":
-            return check_notifications(arguments["website"])
+
+            # Check if the Google API key is set and use the official search method
+            # If the API key is not set or has only whitespaces, use the unofficial search method
+            if cfg.google_api_key and (cfg.google_api_key.strip() if cfg.google_api_key else None):
+                return google_official_search(arguments["input"])
+            else:
+                return google_search(arguments["input"])
         elif command_name == "memory_add":
             return commit_memory(arguments["string"])
         elif command_name == "memory_del":
@@ -52,12 +71,6 @@ def execute_command(command_name, arguments):
             return list_agents()
         elif command_name == "delete_agent":
             return delete_agent(arguments["key"])
-        elif command_name == "navigate_website":
-            return navigate_website(arguments["action"], arguments["username"])
-        elif command_name == "register_account":
-            return register_account(
-                arguments["username"],
-                arguments["website"])
         elif command_name == "get_text_summary":
             return get_text_summary(arguments["url"])
         elif command_name == "get_hyperlinks":
@@ -99,11 +112,45 @@ def get_datetime():
 
 def google_search(query, num_results=8):
     search_results = []
-    for j in browse.search(query, num_results=num_results):
+    for j in search(query, num_results=num_results):
         search_results.append(j)
 
     return json.dumps(search_results, ensure_ascii=False, indent=4)
 
+def google_official_search(query, num_results=8):
+    from googleapiclient.discovery import build
+    from googleapiclient.errors import HttpError
+    import json
+
+    try:
+        # Get the Google API key and Custom Search Engine ID from the config file
+        api_key = cfg.google_api_key
+        custom_search_engine_id = cfg.custom_search_engine_id
+
+        # Initialize the Custom Search API service
+        service = build("customsearch", "v1", developerKey=api_key)
+
+        # Send the search query and retrieve the results
+        result = service.cse().list(q=query, cx=custom_search_engine_id, num=num_results).execute()
+
+        # Extract the search result items from the response
+        search_results = result.get("items", [])
+
+        # Create a list of only the URLs from the search results
+        search_results_links = [item["link"] for item in search_results]
+
+    except HttpError as e:
+        # Handle errors in the API call
+        error_details = json.loads(e.content.decode())
+
+        # Check if the error is related to an invalid or missing API key
+        if error_details.get("error", {}).get("code") == 403 and "invalid API key" in error_details.get("error", {}).get("message", ""):
+            return "Error: The provided Google API key is invalid or missing."
+        else:
+            return f"Error: {e}"
+
+    # Return the list of search result URLs
+    return search_results_links
+
 def browse_website(url):
     summary = get_text_summary(url)
@@ -147,7 +194,7 @@ def delete_memory(key):
 
 
 def overwrite_memory(key, string):
-    if key >= 0 and key < len(mem.permanent_memory):
+    if int(key) >= 0 and key < len(mem.permanent_memory):
         _text = "Overwriting memory with key " + \
             str(key) + " and string " + string
         mem.permanent_memory[key] = string
@@ -163,7 +210,7 @@ def shutdown():
     quit()
 
 
-def start_agent(name, task, prompt, model="gpt-3.5-turbo"):
+def start_agent(name, task, prompt, model=cfg.fast_llm_model):
     global cfg
 
     # Remove underscores from name
@@ -205,23 +252,4 @@ def delete_agent(key):
     result = agents.delete_agent(key)
     if not result:
         return f"Agent {key} does not exist."
     return f"Agent {key} deleted."
-
-
-def navigate_website(action, username):
-    _text = "Navigating website with action " + action + " and username " + username
-    print(_text)
-    return "Command not implemented yet."
-
-
-def register_account(username, website):
-    _text = "Registering account with username " + \
-        username + " and website " + website
-    print(_text)
-    return "Command not implemented yet."
-
-
-def check_notifications(website):
-    _text = "Checking notifications from " + website
-    print(_text)
-    return "Command not implemented yet."
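A hypothetical round trip through the dispatch above (the JSON payload is invented; executing the command requires a configured search path):

```python
from commands import get_command, execute_command

response = '{"command": {"name": "google", "args": {"input": "autonomous agents"}}}'
command_name, arguments = get_command(response)
print(execute_command(command_name, arguments))  # JSON list of result URLs
```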
@@ -1,3 +1,9 @@
+import os
+import openai
+from dotenv import load_dotenv
+# Load environment variables from .env file
+load_dotenv()
+
 class Singleton(type):
     """
     Singleton metaclass for ensuring only one instance of a class.
@@ -21,9 +27,47 @@ class Config(metaclass=Singleton):
     def __init__(self):
         self.continuous_mode = False
         self.speak_mode = False
+        # TODO - make these models be self-contained, using langchain, so we can configure them once and call it good
+        self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
+        self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
+        self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
+        self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))
+
+        self.openai_api_key = os.getenv("OPENAI_API_KEY")
+        self.elevenlabs_api_key = os.getenv("ELEVENLABS_API_KEY")
+
+        self.google_api_key = os.getenv("GOOGLE_API_KEY")
+        self.custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID")
+
+        # Initialize the OpenAI API client
+        openai.api_key = self.openai_api_key
 
     def set_continuous_mode(self, value: bool):
         self.continuous_mode = value
 
     def set_speak_mode(self, value: bool):
         self.speak_mode = value
 
+    def set_fast_llm_model(self, value: str):
+        self.fast_llm_model = value
+
+    def set_smart_llm_model(self, value: str):
+        self.smart_llm_model = value
+
+    def set_fast_token_limit(self, value: int):
+        self.fast_token_limit = value
+
+    def set_smart_token_limit(self, value: int):
+        self.smart_token_limit = value
+
+    def set_openai_api_key(self, value: str):
+        self.openai_api_key = value
+
+    def set_elevenlabs_api_key(self, value: str):
+        self.elevenlabs_api_key = value
+
+    def set_google_api_key(self, value: str):
+        self.google_api_key = value
+
+    def set_custom_search_engine_id(self, value: str):
+        self.custom_search_engine_id = value
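Because `Config` uses the `Singleton` metaclass, every module-level `cfg = Config()` in this changeset refers to the same object, so a setter called anywhere is visible everywhere. A minimal sketch of that behaviour:

```python
from config import Config

a = Config()
b = Config()
assert a is b  # the Singleton metaclass hands back one shared instance

a.set_smart_llm_model("gpt-4")
print(b.smart_llm_model)  # "gpt-4" — state is shared process-wide
```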
@@ -1,12 +1,17 @@
 import os
 from pathlib import Path
 
 SRC_DIR = Path(__file__).parent
 
 
 def load_prompt():
     try:
-        # get directory of this file:
-        file_dir = Path(os.path.dirname(os.path.realpath(__file__)))
-        data_dir = file_dir / "data"
-        prompt_file = data_dir / "prompt.txt"
         # Load the prompt from data/prompt.txt
+        with open(SRC_DIR / "data/prompt.txt", "r") as prompt_file:
             prompt = prompt_file.read()
 
         return prompt
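This is the point of the `fix-prompt-path` branch: resolving the prompt file relative to the module rather than the current working directory. A minimal sketch of the idea under that assumption:

```python
from pathlib import Path

# Module-relative resolution: finds data/prompt.txt next to this file even
# when the script is launched from a different working directory.
SRC_DIR = Path(__file__).parent
prompt = (SRC_DIR / "data" / "prompt.txt").read_text()
```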
@@ -1,6 +1,6 @@
 CONSTRAINTS:
 
-1. 6000-word count limit for memory
+1. ~4000 word limit for memory. Your memory is short, so immediately save important information to long term memory and code to files.
 2. No user assistance
 
 COMMANDS:
@@ -18,9 +18,9 @@ COMMANDS:
 11. Read file: "read_file", args: "file": "<file>"
 12. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
 13. Delete file: "delete_file", args: "file": "<file>"
-14. Evaluate Code: "evaluate_code", args: "code": "<code>"
-15. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<string>"
-16. Write Tests: "write_tests", args: "code": "<string>", "focus": "<list_of_focus_areas>"
+14. Evaluate Code: "evaluate_code", args: "code": "<full_code_string>"
+15. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
+16. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
 17. Execute Python File: "execute_python_file", args: "file": "<file>"
 18. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"
@@ -34,26 +34,28 @@ RESOURCES:
 PERFORMANCE EVALUATION:
 
 1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
-2. Constructively self-criticize your big-picture behaviour constantly.
+2. Constructively self-criticize your big-picture behavior constantly.
 3. Reflect on past decisions and strategies to refine your approach.
-4. Every command has a cost, so be smart and efficent. Aim to complete tasks in the least number of steps.
+4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
 
 You should only respond in JSON format as described below
 
 RESPONSE FORMAT:
 {
-    "command":
-    {
-        "name": "command name",
-        "args":
-        {
-            "arg name": "value"
-        }
-    },
-    "thoughts":
-    {
-        "text": "thought",
-        "reasoning": "reasoning",
-        "plan": "short bulleted long-term plan",
-        "criticism": "constructive self-criticism"
-        "speak": "thoughts summary to say to user"
-    }
-}
+    "command": {
+        "name": "command name",
+        "args":{
+            "arg name": "value"
+        }
+    },
+    "thoughts":
+    {
+        "text": "thought",
+        "reasoning": "reasoning",
+        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
+        "criticism": "constructive self-criticism",
+        "speak": "thoughts summary to say to user"
+    }
+}
 
 Ensure the response can be parsed by Python json.loads
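A quick check that a reply in the format above really does survive a plain `json.loads` round trip (the reply string is an invented example):

```python
import json

reply = ('{"command": {"name": "google", "args": {"input": "weather"}},'
         ' "thoughts": {"text": "t", "reasoning": "r", "plan": "- p",'
         ' "criticism": "c", "speak": "s"}}')
parsed = json.loads(reply)
print(parsed["command"]["name"], parsed["thoughts"]["plan"])
```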
@@ -5,6 +5,8 @@ import os
 def execute_python_file(file):
     workspace_folder = "auto_gpt_workspace"
 
+    print(f"Executing file '{file}' in workspace '{workspace_folder}'")
+
     if not file.endswith(".py"):
         return "Error: Invalid file type. Only .py files are allowed."
@@ -20,7 +22,7 @@ def execute_python_file(file):
         # You can find available Python images on Docker Hub:
         # https://hub.docker.com/_/python
         container = client.containers.run(
-            'python:3.8',
+            'python:3.10',
             f'python {file}',
             volumes={
                 os.path.abspath(workspace_folder): {
@@ -36,6 +38,9 @@ def execute_python_file(file):
         logs = container.logs().decode('utf-8')
         container.remove()
 
+        # print(f"Execution complete. Output: {output}")
+        # print(f"Logs: {logs}")
+
         return logs
 
     except Exception as e:
@@ -31,6 +31,9 @@ def read_file(filename):
 def write_to_file(filename, text):
     try:
         filepath = safe_join(working_directory, filename)
+        directory = os.path.dirname(filepath)
+        if not os.path.exists(directory):
+            os.makedirs(directory)
         with open(filepath, "w") as f:
             f.write(text)
         return "File written to successfully."
@@ -0,0 +1,76 @@
+import json
+from call_ai_function import call_ai_function
+from config import Config
+cfg = Config()
+
+def fix_and_parse_json(json_str: str, try_to_fix_with_gpt: bool = True):
+    json_schema = """
+    {
+        "command": {
+            "name": "command name",
+            "args":{
+                "arg name": "value"
+            }
+        },
+        "thoughts":
+        {
+            "text": "thought",
+            "reasoning": "reasoning",
+            "plan": "- short bulleted\n- list that conveys\n- long-term plan",
+            "criticism": "constructive self-criticism",
+            "speak": "thoughts summary to say to user"
+        }
+    }
+    """
+
+    try:
+        return json.loads(json_str)
+    except Exception as e:
+        # Let's do something manually - sometimes GPT responds with something BEFORE the braces:
+        # "I'm sorry, I don't understand. Please try again."{"text": "I'm sorry, I don't understand. Please try again.", "confidence": 0.0}
+        # So let's try to find the first brace and then parse the rest of the string
+        try:
+            brace_index = json_str.index("{")
+            json_str = json_str[brace_index:]
+            last_brace_index = json_str.rindex("}")
+            json_str = json_str[:last_brace_index+1]
+            return json.loads(json_str)
+        except Exception as e:
+            if try_to_fix_with_gpt:
+                print("Warning: Failed to parse AI output, attempting to fix.\n If you see this warning frequently, it's likely that your prompt is confusing the AI. Try changing it up slightly.")
+                # Now try to fix this up using the ai_functions
+                ai_fixed_json = fix_json(json_str, json_schema, False)
+                if ai_fixed_json != "failed":
+                    return json.loads(ai_fixed_json)
+                else:
+                    print("Failed to fix AI output, telling the AI.")  # This allows the AI to react to the error message, which usually results in it correcting its ways.
+                    return json_str
+            else:
+                raise e
+
+def fix_json(json_str: str, schema: str, debug=False) -> str:
+    # Try to fix the JSON using GPT:
+    function_string = "def fix_json(json_str: str, schema:str=None) -> str:"
+    args = [json_str, schema]
+    description_string = """Fixes the provided JSON string to make it parseable and fully compliant with the provided schema.\n If an object or field specified in the schema isn't contained within the correct JSON, it is omitted.\n This function is brilliant at guessing when the format is incorrect."""
+
+    # If it doesn't already start with a "`", add one:
+    if not json_str.startswith("`"):
+        json_str = "```json\n" + json_str + "\n```"
+    result_string = call_ai_function(
+        function_string, args, description_string, model=cfg.fast_llm_model
+    )
+    if debug:
+        print("------------ JSON FIX ATTEMPT ---------------")
+        print(f"Original JSON: {json_str}")
+        print("-----------")
+        print(f"Fixed JSON: {result_string}")
+        print("----------- END OF FIX ATTEMPT ----------------")
+    try:
+        return json.loads(result_string)
+    except:
+        # Get the call stack:
+        # import traceback
+        # call_stack = traceback.format_exc()
+        # print(f"Failed to fix JSON: '{json_str}' "+call_stack)
+        return "failed"
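A usage sketch of the brace-trimming path, which recovers chatter around the JSON without any GPT call (the raw reply is an invented example):

```python
from json_parser import fix_and_parse_json

raw = 'Sure, here is my response: {"command": {"name": "google", "args": {"input": "weather"}}} Hope that helps!'
parsed = fix_and_parse_json(raw, try_to_fix_with_gpt=False)
print(parsed["command"]["name"])  # -> "google"
```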
@@ -0,0 +1,16 @@
+import openai
+from config import Config
+cfg = Config()
+
+openai.api_key = cfg.openai_api_key
+
+# Overly simple abstraction until we create something better
+def create_chat_completion(messages, model=None, temperature=None, max_tokens=None) -> str:
+    response = openai.ChatCompletion.create(
+        model=model,
+        messages=messages,
+        temperature=temperature,
+        max_tokens=max_tokens
+    )
+
+    return response.choices[0].message["content"]
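Every other module in this changeset now funnels through this helper, so the OpenAI response object never leaks out. A minimal usage sketch (requires a configured API key; the prompt is an invented example):

```python
from llm_utils import create_chat_completion

reply = create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in five words."}],
    model="gpt-3.5-turbo",
    max_tokens=50,
)
print(reply)  # a plain string
```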
scripts/main.py
@@ -11,16 +11,11 @@ import speak
-from enum import Enum, auto
 import sys
 from config import Config
-from dotenv import load_dotenv
-
-# Load environment variables from .env file
-load_dotenv()
-
-class Argument(Enum):
-    CONTINUOUS_MODE = "continuous-mode"
-    SPEAK_MODE = "speak-mode"
+from json_parser import fix_and_parse_json
+from ai_config import AIConfig
+import traceback
+import yaml
+import argparse
 
 
 def print_to_console(
@@ -35,6 +30,8 @@ def print_to_console(
         speak.say_text(f"{title}. {content}")
     print(title_color + title + " " + Style.RESET_ALL, end="")
     if content:
+        if isinstance(content, list):
+            content = " ".join(content)
         words = content.split()
         for i, word in enumerate(words):
             print(word, end="", flush=True)
@@ -53,61 +50,144 @@ def print_assistant_thoughts(assistant_reply):
     global cfg
     try:
         # Parse and print Assistant response
-        assistant_reply_json = json.loads(assistant_reply)
+        assistant_reply_json = fix_and_parse_json(assistant_reply)
+
+        # Check if assistant_reply_json is a string and attempt to parse it into a JSON object
+        if isinstance(assistant_reply_json, str):
+            try:
+                assistant_reply_json = json.loads(assistant_reply_json)
+            except json.JSONDecodeError as e:
+                print_to_console("Error: Invalid JSON\n", Fore.RED, assistant_reply)
+                assistant_reply_json = {}
 
-        assistant_thoughts_reasoning = None
-        assistant_thoughts_plan = None
-        assistant_thoughts_speak = None
-        assistant_thoughts_criticism = None
-        assistant_thoughts = assistant_reply_json.get("thoughts", {})
-        assistant_thoughts_text = assistant_thoughts.get("text")
+        assistant_thoughts = assistant_reply_json.get("thoughts")
+        if assistant_thoughts:
+            assistant_thoughts_text = assistant_thoughts.get("text")
+            assistant_thoughts_reasoning = assistant_thoughts.get("reasoning")
+            assistant_thoughts_plan = assistant_thoughts.get("plan")
+            assistant_thoughts_criticism = assistant_thoughts.get("criticism")
+            assistant_thoughts_speak = assistant_thoughts.get("speak")
+        else:
+            assistant_thoughts_text = None
+            assistant_thoughts_reasoning = None
+            assistant_thoughts_plan = None
+            assistant_thoughts_criticism = None
+            assistant_thoughts_speak = None
 
-        print_to_console(
-            f"{ai_name.upper()} THOUGHTS:",
-            Fore.YELLOW,
-            assistant_thoughts_text)
-        print_to_console(
-            "REASONING:",
-            Fore.YELLOW,
-            assistant_thoughts_reasoning)
+        print_to_console(f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, assistant_thoughts_text)
+        print_to_console("REASONING:", Fore.YELLOW, assistant_thoughts_reasoning)
 
         if assistant_thoughts_plan:
             print_to_console("PLAN:", Fore.YELLOW, "")
+            # If it's a list, join it into a string
+            if isinstance(assistant_thoughts_plan, list):
+                assistant_thoughts_plan = "\n".join(assistant_thoughts_plan)
+            elif isinstance(assistant_thoughts_plan, dict):
+                assistant_thoughts_plan = str(assistant_thoughts_plan)
 
-            # Split the input_string using the newline character and dash
+            # Split the input_string using the newline character and dashes
             lines = assistant_thoughts_plan.split('\n')
-
-            # Iterate through the lines and print each one with a bullet
-            # point
             for line in lines:
-                # Remove any "-" characters from the start of the line
                 line = line.lstrip("- ")
                 print_to_console("- ", Fore.GREEN, line.strip())
-        print_to_console(
-            "CRITICISM:",
-            Fore.YELLOW,
-            assistant_thoughts_criticism)
+
+        print_to_console("CRITICISM:", Fore.YELLOW, assistant_thoughts_criticism)
+        # Speak the assistant's thoughts
+        if cfg.speak_mode and assistant_thoughts_speak:
+            speak.say_text(assistant_thoughts_speak)
 
     except json.decoder.JSONDecodeError:
         print_to_console("Error: Invalid JSON\n", Fore.RED, assistant_reply)
 
     # All other errors, return "Error: + error message"
     except Exception as e:
-        print_to_console("Error: \n", Fore.RED, str(e))
+        call_stack = traceback.format_exc()
+        print_to_console("Error: \n", Fore.RED, call_stack)
 
 
 def load_variables(config_file="config.yaml"):
     # Load variables from yaml file if it exists
     try:
         with open(config_file) as file:
             config = yaml.load(file, Loader=yaml.FullLoader)
         ai_name = config.get("ai_name")
         ai_role = config.get("ai_role")
         ai_goals = config.get("ai_goals")
     except FileNotFoundError:
         ai_name = ""
         ai_role = ""
         ai_goals = []
 
     # Prompt the user for input if config file is missing or empty values
     if not ai_name:
         ai_name = input("Name your AI: ")
         if ai_name == "":
             ai_name = "Entrepreneur-GPT"
 
     if not ai_role:
         ai_role = input(f"{ai_name} is: ")
         if ai_role == "":
             ai_role = "an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth."
 
     if not ai_goals:
         print("Enter up to 5 goals for your AI: ")
         print("For example: \nIncrease net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'")
         print("Enter nothing to load defaults, enter nothing when finished.")
         ai_goals = []
         for i in range(5):
             ai_goal = input(f"Goal {i+1}: ")
             if ai_goal == "":
                 break
             ai_goals.append(ai_goal)
         if len(ai_goals) == 0:
             ai_goals = ["Increase net worth", "Grow Twitter Account", "Develop and manage multiple businesses autonomously"]
 
     # Save variables to yaml file
     config = {"ai_name": ai_name, "ai_role": ai_role, "ai_goals": ai_goals}
     with open(config_file, "w") as file:
         documents = yaml.dump(config, file)
 
     prompt = data.load_prompt()
     prompt_start = """Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications."""
 
     # Construct full prompt
     full_prompt = f"You are {ai_name}, {ai_role}\n{prompt_start}\n\nGOALS:\n\n"
     for i, goal in enumerate(ai_goals):
         full_prompt += f"{i+1}. {goal}\n"
 
     full_prompt += f"\n\n{prompt}"
     return full_prompt
 
 
+def construct_prompt():
+    config = AIConfig.load()
+    if config.ai_name:
+        print_to_console(
+            f"Welcome back! ",
+            Fore.GREEN,
+            f"Would you like me to return to being {config.ai_name}?",
+            speak_text=True)
+        should_continue = input(f"""Continue with the last settings?
+Name: {config.ai_name}
+Role: {config.ai_role}
+Goals: {config.ai_goals}
+Continue (y/n): """)
+        if should_continue.lower() == "n":
+            config = AIConfig()
+
+    if not config.ai_name:
+        config = prompt_user()
+        config.save()
+
+    # Get rid of this global:
+    global ai_name
+    ai_name = config.ai_name
+
+    full_prompt = config.construct_full_prompt()
+    return full_prompt
+
+
+def prompt_user():
+    ai_name = ""
+    # Construct the prompt
+    print_to_console(
+        "Welcome to Auto-GPT! ",
@@ -143,7 +223,7 @@ def construct_prompt():
     print_to_console(
         "Enter up to 5 goals for your AI: ",
         Fore.GREEN,
-        "For example: \nIncrease net worth \nGrow Twitter Account \nDevelop and manage multiple businesses autonomously'")
+        "For example: \nIncrease net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'")
     print("Enter nothing to load defaults, enter nothing when finished.", flush=True)
     ai_goals = []
     for i in range(5):
@@ -155,47 +235,50 @@ def construct_prompt():
         ai_goals = ["Increase net worth", "Grow Twitter Account",
                     "Develop and manage multiple businesses autonomously"]
 
-    prompt = data.load_prompt()
-    prompt_start = """Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications."""
-
-    # Construct full prompt
-    full_prompt = f"You are {ai_name}, {ai_role}\n{prompt_start}\n\nGOALS:\n\n"
-    for i, goal in enumerate(ai_goals):
-        full_prompt += f"{i+1}. {goal}\n"
-
-    full_prompt += f"\n\n{prompt}"
-    return full_prompt
-
-# Check if the python script was executed with arguments, get those arguments
+    config = AIConfig(ai_name, ai_role, ai_goals)
+    return config
 
 
-for arg in sys.argv[1:]:
-    if arg == Argument.CONTINUOUS_MODE.value:
-        print_to_console("Continuous Mode: ", Fore.RED, "ENABLED")
-        print_to_console(
-            "WARNING: ",
-            Fore.RED,
-            "Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.")
-        cfg.set_continuous_mode(True)
-    elif arg == Argument.SPEAK_MODE.value:
-        print_to_console("Speak Mode: ", Fore.GREEN, "ENABLED")
-        cfg.set_speak_mode(True)
+def parse_arguments():
+    global cfg
+    cfg.set_continuous_mode(False)
+    cfg.set_speak_mode(False)
+
+    parser = argparse.ArgumentParser(description='Process arguments.')
+    parser.add_argument('--continuous', action='store_true', help='Enable Continuous Mode')
+    parser.add_argument('--speak', action='store_true', help='Enable Speak Mode')
+    parser.add_argument('--debug', action='store_true', help='Enable Debug Mode')
+    parser.add_argument('--gpt3only', action='store_true', help='Enable GPT3.5 Only Mode')
+    args = parser.parse_args()
+
+    if args.continuous:
+        print_to_console("Continuous Mode: ", Fore.RED, "ENABLED")
+        print_to_console(
+            "WARNING: ",
+            Fore.RED,
+            "Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.")
+        cfg.set_continuous_mode(True)
+
+    if args.speak:
+        print_to_console("Speak Mode: ", Fore.GREEN, "ENABLED")
+        cfg.set_speak_mode(True)
+
+    if args.gpt3only:
+        print_to_console("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED")
+        cfg.set_smart_llm_model(cfg.fast_llm_model)
 
 
+# TODO: fill in llm values here
 cfg = Config()
+parse_arguments()
 ai_name = ""
 prompt = construct_prompt()
+# print(prompt)
 # Initialize variables
 full_message_history = []
-token_limit = 6000  # The maximum number of tokens allowed in the API call
 result = None
-user_input = "NEXT COMMAND"
+# Make a constant:
+user_input = "Determine which next command to use, and respond using the format specified above:"
 
 # Interaction Loop
 while True:
@@ -206,8 +289,9 @@ while True:
         user_input,
         full_message_history,
         mem.permanent_memory,
-        token_limit)
+        cfg.fast_token_limit)  # TODO: This hardcodes the model to use GPT3.5. Make this an argument
 
+    # print("assistant reply: "+assistant_reply)
     # Print Assistant thoughts
     print_assistant_thoughts(assistant_reply)
|
|||
Fore.CYAN,
|
||||
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}")
|
||||
print(
|
||||
"Enter 'y' to authorise command or 'n' to exit program...",
|
||||
f"Enter 'y' to authorise command or 'n' to exit program, or enter feedback for {ai_name}...",
|
||||
flush=True)
|
||||
while True:
|
||||
console_input = input(Fore.MAGENTA + "Input:" + Style.RESET_ALL)
|
||||
if console_input.lower() == "y":
|
||||
user_input = "NEXT COMMAND"
|
||||
user_input = "GENERATE NEXT COMMAND JSON"
|
||||
break
|
||||
elif console_input.lower() == "n":
|
||||
user_input = "EXIT"
|
||||
break
|
||||
else:
|
||||
continue
|
||||
user_input = console_input
|
||||
command_name = "human_feedback"
|
||||
break
|
||||
|
||||
if user_input != "NEXT COMMAND":
|
||||
print("Exiting...", flush=True)
|
||||
break
|
||||
|
||||
print_to_console(
|
||||
if user_input == "GENERATE NEXT COMMAND JSON":
|
||||
print_to_console(
|
||||
"-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
|
||||
Fore.MAGENTA,
|
||||
"")
|
||||
elif user_input == "EXIT":
|
||||
print("Exiting...", flush=True)
|
||||
break
|
||||
else:
|
||||
# Print command
|
||||
print_to_console(
|
||||
|
@@ -255,11 +341,13 @@ while True:
             Fore.CYAN,
             f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}  ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}")
 
-    # Exectute command
-    if command_name.lower() != "error":
-        result = f"Command {command_name} returned: {cmd.execute_command(command_name, arguments)}"
-    else:
+    # Execute command
+    if command_name.lower() == "error":
         result = f"Command {command_name} threw the following error: " + arguments
+    elif command_name == "human_feedback":
+        result = f"Human feedback: {user_input}"
+    else:
+        result = f"Command {command_name} returned: {cmd.execute_command(command_name, arguments)}"
 
     # Check if there's a result from the command append it to the message
     # history
@@ -271,3 +359,4 @@ while True:
             chat.create_chat_message(
                 "system", "Unable to execute command"))
+        print_to_console("SYSTEM: ", Fore.YELLOW, "Unable to execute command")
@@ -1,20 +1,17 @@
 import os
 from playsound import playsound
 import requests
-from dotenv import load_dotenv
-
-# Load environment variables from .env file
-load_dotenv()
+from config import Config
+cfg = Config()
 
 # TODO: Nicer names for these ids
 voices = ["ErXwobaYiN019PkySvjV", "EXAVITQu4vr4xnSDxMaL"]
 
 tts_headers = {
     "Content-Type": "application/json",
-    "xi-api-key": os.getenv("ELEVENLABS_API_KEY")
+    "xi-api-key": cfg.elevenlabs_api_key
 }
 
 def say_text(text, voice_index=0):
     tts_url = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}".format(
         voice_id=voices[voice_index])
@@ -0,0 +1,57 @@
+import tiktoken
+from typing import List, Dict
+
+
+def count_message_tokens(messages: List[Dict[str, str]], model: str = "gpt-3.5-turbo-0301") -> int:
+    """
+    Returns the number of tokens used by a list of messages.
+
+    Args:
+        messages (list): A list of messages, each of which is a dictionary containing the role and content of the message.
+        model (str): The name of the model to use for tokenization. Defaults to "gpt-3.5-turbo-0301".
+
+    Returns:
+        int: The number of tokens used by the list of messages.
+    """
+    try:
+        encoding = tiktoken.encoding_for_model(model)
+    except KeyError:
+        print("Warning: model not found. Using cl100k_base encoding.")
+        encoding = tiktoken.get_encoding("cl100k_base")
+    if model == "gpt-3.5-turbo":
+        # !Note: gpt-3.5-turbo may change over time. Returning num tokens assuming gpt-3.5-turbo-0301.
+        return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
+    elif model == "gpt-4":
+        # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.
+        return count_message_tokens(messages, model="gpt-4-0314")
+    elif model == "gpt-3.5-turbo-0301":
+        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
+        tokens_per_name = -1  # if there's a name, the role is omitted
+    elif model == "gpt-4-0314":
+        tokens_per_message = 3
+        tokens_per_name = 1
+    else:
+        raise NotImplementedError(f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.""")
+    num_tokens = 0
+    for message in messages:
+        num_tokens += tokens_per_message
+        for key, value in message.items():
+            num_tokens += len(encoding.encode(value))
+            if key == "name":
+                num_tokens += tokens_per_name
+    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
+    return num_tokens
+
+
+def count_string_tokens(string: str, model_name: str) -> int:
+    """
+    Returns the number of tokens in a text string.
+
+    Args:
+        string (str): The text string.
+        model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
+
+    Returns:
+        int: The number of tokens in the text string.
+    """
+    encoding = tiktoken.encoding_for_model(model_name)
+    num_tokens = len(encoding.encode(string))
+    return num_tokens
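A minimal usage sketch of the two counters above, which chat.py uses to budget the context window (the messages are an invented example):

```python
from token_counter import count_message_tokens, count_string_tokens

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Per-message overhead plus encoded content, exactly as computed above.
print(count_message_tokens(messages, model="gpt-3.5-turbo"))
print(count_string_tokens("Hello world", model_name="gpt-3.5-turbo"))
```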
@@ -0,0 +1,115 @@
+import unittest
+import os
+import sys
+# Probably a better way:
+sys.path.append(os.path.abspath('../scripts'))
+from json_parser import fix_and_parse_json
+
+class TestParseJson(unittest.TestCase):
+    def test_valid_json(self):
+        # Test that a valid JSON string is parsed correctly
+        json_str = '{"name": "John", "age": 30, "city": "New York"}'
+        obj = fix_and_parse_json(json_str)
+        self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"})
+
+    def test_invalid_json_minor(self):
+        # Test that a mildly invalid JSON string is fixed without GPT
+        json_str = '{"name": "John", "age": 30, "city": "New York",}'
+        self.assertEqual(fix_and_parse_json(json_str, try_to_fix_with_gpt=False), {"name": "John", "age": 30, "city": "New York"})
+
+    def test_invalid_json_major_with_gpt(self):
+        # Test that a badly broken JSON string is fixed when try_to_fix_with_gpt is True
+        json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
+        self.assertEqual(fix_and_parse_json(json_str, try_to_fix_with_gpt=True), {"name": "John", "age": 30, "city": "New York"})
+
+    def test_invalid_json_major_without_gpt(self):
+        # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
+        json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
+        # Assert that this raises an exception:
+        with self.assertRaises(Exception):
+            fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
+
+    def test_invalid_json_leading_sentence(self):
+        # Test that a JSON string with a leading sentence is recovered without GPT
+        json_str = """I suggest we start by browsing the repository to find any issues that we can fix.
+
+{
+    "command": {
+        "name": "browse_website",
+        "args":{
+            "url": "https://github.com/Torantulino/Auto-GPT"
+        }
+    },
+    "thoughts":
+    {
+        "text": "I suggest we start browsing the repository to find any issues that we can fix.",
+        "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
+        "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
+        "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
+        "speak": "I will start browsing the repository to find any issues we can fix."
+    }
+}"""
+        good_obj = {
+            "command": {
+                "name": "browse_website",
+                "args": {
+                    "url": "https://github.com/Torantulino/Auto-GPT"
+                }
+            },
+            "thoughts":
+            {
+                "text": "I suggest we start browsing the repository to find any issues that we can fix.",
+                "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
+                "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
+                "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
+                "speak": "I will start browsing the repository to find any issues we can fix."
+            }
+        }
+        self.assertEqual(fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj)
+
+    def test_invalid_json_leading_sentence_2(self):
+        # Same as above with different content; renamed so it no longer shadows the previous test
+        json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
+
+{
+    "command": {
+        "name": "browse_website",
+        "args":{
+            "url": "https://github.com/Torantulino/Auto-GPT"
+        }
+    },
+    "thoughts":
+    {
+        "text": "Browsing the repository to identify potential bugs",
+        "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
+        "plan": "- Analyze the repository for potential bugs and areas of improvement",
+        "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
+        "speak": "I am browsing the repository to identify potential bugs."
+    }
+}"""
+        good_obj = {
+            "command": {
+                "name": "browse_website",
+                "args": {
+                    "url": "https://github.com/Torantulino/Auto-GPT"
+                }
+            },
+            "thoughts":
+            {
+                "text": "Browsing the repository to identify potential bugs",
+                "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
+                "plan": "- Analyze the repository for potential bugs and areas of improvement",
+                "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
+                "speak": "I am browsing the repository to identify potential bugs."
+            }
+        }
+        self.assertEqual(fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj)
+
+
+if __name__ == '__main__':
+    unittest.main()