Merge branch 'master' into master

pull/242/head
Dylan N 2023-04-06 14:04:26 +02:00 committed by GitHub
commit 89853cf3b8
21 changed files with 416 additions and 72 deletions

.env.template

@@ -1,6 +1,12 @@
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=your-pinecone-region
OPENAI_API_KEY=your-openai-api-key
ELEVENLABS_API_KEY=your-elevenlabs-api-key
SMART_LLM_MODEL="gpt-4"
FAST_LLM_MODEL="gpt-3.5-turbo"
GOOGLE_API_KEY=
CUSTOM_SEARCH_ENGINE_ID=
USE_AZURE=False
OPENAI_API_BASE=your-base-url-for-azure
OPENAI_API_VERSION=api-version-for-azure
OPENAI_DEPLOYMENT_ID=deployment-id-for-azure
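The Azure values are read only when `USE_AZURE=True` (see the `config.py` changes further down in this diff). A filled-in sketch, with a placeholder resource, deployment name, and an API version that was current at the time of writing:
```
USE_AZURE=True
OPENAI_API_BASE=https://my-resource.openai.azure.com
OPENAI_API_VERSION=2023-03-15-preview
OPENAI_DEPLOYMENT_ID=my-gpt4-deployment
```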

.github/ISSUE_TEMPLATE/1.bug.yml vendored Normal file

@@ -0,0 +1,39 @@
name: Bug report 🐛
description: Create a bug report for Auto-GPT.
labels: ['status: needs triage']
body:
  - type: markdown
    attributes:
      value: |
        Please provide a searchable summary of the issue in the title above ⬆️.

        Thanks for contributing by creating an issue! ❤️
  - type: checkboxes
    attributes:
      label: Duplicates
      description: Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) to see if an issue already exists for the same problem.
      options:
        - label: I have searched the existing issues
          required: true
  - type: textarea
    attributes:
      label: Steps to reproduce 🕹
      description: |
        **⚠️ Issues that we can't reproduce will be closed.**
  - type: textarea
    attributes:
      label: Current behavior 😯
      description: Describe what happens instead of the expected behavior.
  - type: textarea
    attributes:
      label: Expected behavior 🤔
      description: Describe what should happen.
  - type: textarea
    attributes:
      label: Your prompt 📝
      description: |
        Please provide the prompt you are using. You can find your last-used prompt in last_run_ai_settings.yaml.
      value: |
        ```yaml
        # Paste your prompt here
        ```

.github/ISSUE_TEMPLATE/2.feature.yml vendored Normal file

@@ -0,0 +1,29 @@
name: Feature request 🚀
description: Suggest a new idea for Auto-GPT.
labels: ['status: needs triage']
body:
  - type: markdown
    attributes:
      value: |
        Please provide a searchable summary of the issue in the title above ⬆️.

        Thanks for contributing by creating an issue! ❤️
  - type: checkboxes
    attributes:
      label: Duplicates
      description: Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) to see if an issue already exists for the same problem.
      options:
        - label: I have searched the existing issues
          required: true
  - type: textarea
    attributes:
      label: Summary 💡
      description: Describe how it should work.
  - type: textarea
    attributes:
      label: Examples 🌈
      description: Provide a link to other implementations, or screenshots of the expected behavior.
  - type: textarea
    attributes:
      label: Motivation 🔦
      description: What are you trying to accomplish? How has the lack of this feature affected you? Providing context helps us come up with a solution that is more useful in the real world.

.gitignore vendored

@@ -4,8 +4,8 @@ scripts/node_modules/
scripts/__pycache__/keys.cpython-310.pyc
package-lock.json
*.pyc
-scripts/auto_gpt_workspace/*
auto_gpt_workspace/*
*.mpeg
.env
-last_run_ai_settings.yaml
ai_settings.yaml
outputs/*

CONTRIBUTING.md Normal file

@@ -0,0 +1,56 @@
To contribute to this GitHub project, you can follow these steps:
1. Fork the repository you want to contribute to by clicking the "Fork" button on the project page.
2. Clone the repository to your local machine using the following command:
```
git clone https://github.com/Torantulino/Auto-GPT
```
3. Create a new branch for your changes using the following command:
```
git checkout -b "branch-name"
```
4. Make your changes to the code or documentation.
- Example: Improve User Interface or Add Documentation.
5. Add the changes to the staging area using the following command:
```
git add .
```
6. Commit the changes with a meaningful commit message using the following command:
```
git commit -m "your commit message"
```
7. Push the changes to your forked repository using the following command:
```
git push origin branch-name
```
8. Go to the GitHub website and navigate to your forked repository.
9. Click the "New pull request" button.
10. Select the branch you just pushed to and the branch you want to merge into on the original repository.
11. Add a description of your changes and click the "Create pull request" button.
12. Wait for the project maintainer to review your changes and provide feedback.
13. Make any necessary changes based on feedback and repeat steps 5-12 until your changes are accepted and merged into the main project.
14. Once your changes are merged, you can update your forked repository and local copy of the repository with the following commands:
```
git fetch upstream
git checkout master
git merge upstream/master
```
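Note that step 14 assumes your clone has an `upstream` remote pointing at the original repository. If you haven't added one yet, set it up first:
```
git remote add upstream https://github.com/Torantulino/Auto-GPT.git
```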
Finally, delete the branch you created with the following command:
```
git branch -d branch-name
```
That's it, you made it! 🐣⭐⭐

README.md

@@ -92,6 +92,7 @@ pip install -r requirements.txt
4. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVENLABS_API_KEY` as well.
- Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and provide the `OPENAI_API_BASE`, `OPENAI_API_VERSION` and `OPENAI_DEPLOYMENT_ID` values as explained in the `Microsoft Azure Endpoints` section of https://pypi.org/project/openai/.
## 🔧 Usage
@@ -139,6 +140,35 @@ export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
## 🌲 Pinecone API Key Setup
Pinecone enables vector-based memory, so a vast memory can be stored and only relevant memories
are loaded for the agent at any given time.
1. Go to app.pinecone.io and make an account if you don't already have one.
2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.
### Setting up environment variables
For Windows Users:
```
setx PINECONE_API_KEY "YOUR_PINECONE_API_KEY"
setx PINECONE_ENV "Your region" # something like: us-east4-gcp
```
For macOS and Linux users:
```
export PINECONE_API_KEY="YOUR_PINECONE_API_KEY"
export PINECONE_ENV="Your region" # something like: us-east4-gcp
```
Or you can set them in the `.env` file.
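For example, the relevant `.env` lines mirror the template at the top of this diff (placeholder values):
```
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=us-east4-gcp
```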
## View Memory Usage
1. View memory usage by using the `--debug` flag :)
## 💀 Continuous Mode ⚠️
Run the AI **without** user authorisation, 100% automated.
Continuous mode is not recommended.

ai_settings.yaml Normal file

@@ -0,0 +1,7 @@
ai_goals:
- Increase net worth.
- Develop and manage multiple businesses autonomously.
- Play to your strengths as a Large Language Model.
ai_name: Entrepreneur-GPT
ai_role: an AI designed to autonomously develop and run businesses with the sole goal
  of increasing your net worth.

requirements.txt

@@ -11,3 +11,4 @@ gTTS==2.3.1
docker
duckduckgo-search
google-api-python-client #(https://developers.google.com/custom-search/v1/overview)
pinecone-client==2.2.1

scripts/ai_config.py

@@ -9,7 +9,7 @@ class AIConfig:
        self.ai_goals = ai_goals

    # Soon this will go in a folder where it remembers more stuff about the run(s)
-    SAVE_FILE = "last_run_ai_settings.yaml"
    SAVE_FILE = "../ai_settings.yaml"

    @classmethod
    def load(cls, config_file=SAVE_FILE):

scripts/browse.py

@@ -6,7 +6,7 @@ from llm_utils import create_chat_completion
cfg = Config()

def scrape_text(url):
-    response = requests.get(url)
    response = requests.get(url, headers=cfg.user_agent_header)

    # Check if the response contains an HTTP error
    if response.status_code >= 400:
@@ -40,7 +40,7 @@ def format_hyperlinks(hyperlinks):

def scrape_links(url):
-    response = requests.get(url)
    response = requests.get(url, headers=cfg.user_agent_header)

    # Check if the response contains an HTTP error
    if response.status_code >= 400:

scripts/chat.py

@@ -23,6 +23,19 @@ def create_chat_message(role, content):
    return {"role": role, "content": content}


def generate_context(prompt, relevant_memory, full_message_history, model):
    current_context = [
        create_chat_message(
            "system", prompt), create_chat_message(
            "system", f"Permanent memory: {relevant_memory}")]

    # Add messages from the full message history until we reach the token limit
    next_message_to_add_index = len(full_message_history) - 1
    insertion_index = len(current_context)
    # Count the currently used tokens
    current_tokens_used = token_counter.count_message_tokens(current_context, model)
    return next_message_to_add_index, current_tokens_used, insertion_index, current_context
# TODO: Change debug from hardcode to argument
def chat_with_ai(
@@ -41,7 +54,7 @@ def chat_with_ai(
        prompt (str): The prompt explaining the rules to the AI.
        user_input (str): The input from the user.
        full_message_history (list): The list of all messages sent between the user and the AI.
-        permanent_memory (list): The list of items in the AI's permanent memory.
        permanent_memory (Obj): The memory object containing the permanent memory.
        token_limit (int): The maximum number of tokens allowed in the API call.

    Returns:
@@ -53,18 +66,20 @@ def chat_with_ai(
            print(f"Token limit: {token_limit}")
        send_token_limit = token_limit - 1000

-        current_context = [
-            create_chat_message(
-                "system", prompt), create_chat_message(
-                "system", f"Permanent memory: {permanent_memory}")]
        relevant_memory = permanent_memory.get_relevant(str(full_message_history[-5:]), 10)

-        # Add messages from the full message history until we reach the token limit
-        next_message_to_add_index = len(full_message_history) - 1
-        current_tokens_used = 0
-        insertion_index = len(current_context)
        if debug:
            print('Memory Stats: ', permanent_memory.get_stats())

        next_message_to_add_index, current_tokens_used, insertion_index, current_context = generate_context(
            prompt, relevant_memory, full_message_history, model)

        while current_tokens_used > 2500:
            # remove memories until we are under 2500 tokens
            relevant_memory = relevant_memory[1:]
            next_message_to_add_index, current_tokens_used, insertion_index, current_context = generate_context(
                prompt, relevant_memory, full_message_history, model)

-        # Count the currently used tokens
-        current_tokens_used = token_counter.count_message_tokens(current_context, model)
        current_tokens_used += token_counter.count_message_tokens([create_chat_message("user", user_input)], model)  # Account for user input (appended later)

        while next_message_to_add_index >= 0:

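The shape of that trimming loop is worth spelling out: rebuild the context, and while it still costs more than 2500 tokens, drop the oldest relevant memory and rebuild. A self-contained sketch of the same pattern (with a fake token counter standing in for `token_counter`, and a smaller budget so it runs instantly):
```python
def build_context(memories):
    # Stand-in for generate_context(): cost grows with the memories included.
    context = ["system prompt"] + memories
    tokens_used = sum(len(m.split()) for m in context)  # fake token count
    return tokens_used, context

relevant_memory = [f"memory {i} " * 100 for i in range(10)]
tokens_used, context = build_context(relevant_memory)
while tokens_used > 250:  # chat.py uses a 2500-token budget
    relevant_memory = relevant_memory[1:]  # drop the oldest memory first
    tokens_used, context = build_context(relevant_memory)
```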
scripts/commands.py

@@ -1,12 +1,12 @@
import browse
import json
-import memory as mem
from memory import PineconeMemory
import datetime
import agent_manager as agents
import speak
from config import Config
import ai_functions as ai
-from file_operations import read_file, write_to_file, append_to_file, delete_file
from file_operations import read_file, write_to_file, append_to_file, delete_file, search_files
from execute_code import execute_python_file
from json_parser import fix_and_parse_json
from duckduckgo_search import ddg
@@ -16,6 +16,13 @@ from googleapiclient.errors import HttpError
cfg = Config()


def is_valid_int(value):
    try:
        int(value)
        return True
    except ValueError:
        return False


def get_command(response):
    try:
        response_json = fix_and_parse_json(response)
@@ -45,6 +52,7 @@ def get_command(response):

def execute_command(command_name, arguments):
    memory = PineconeMemory()

    try:
        if command_name == "google":
@@ -55,11 +63,7 @@ def execute_command(command_name, arguments):
            else:
                return google_search(arguments["input"])
        elif command_name == "memory_add":
-            return commit_memory(arguments["string"])
-        elif command_name == "memory_del":
-            return delete_memory(arguments["key"])
-        elif command_name == "memory_ovr":
-            return overwrite_memory(arguments["key"], arguments["string"])
            return memory.add(arguments["string"])
        elif command_name == "start_agent":
            return start_agent(
                arguments["name"],
@@ -83,6 +87,8 @@ def execute_command(command_name, arguments):
            return append_to_file(arguments["file"], arguments["text"])
        elif command_name == "delete_file":
            return delete_file(arguments["file"])
        elif command_name == "search_files":
            return search_files(arguments["directory"])
        elif command_name == "browse_website":
            return browse_website(arguments["url"], arguments["question"])
        # TODO: Change these to take in a file rather than pasted code, if
@@ -194,14 +200,28 @@ def delete_memory(key):

def overwrite_memory(key, string):
-    if int(key) >= 0 and key < len(mem.permanent_memory):
-        _text = "Overwriting memory with key " + \
-            str(key) + " and string " + string
    # Check if the key is a valid integer
    if is_valid_int(key):
        key_int = int(key)
        # Check if the integer key is within the range of the permanent_memory list
        if 0 <= key_int < len(mem.permanent_memory):
            _text = "Overwriting memory with key " + str(key) + " and string " + string
            # Overwrite the memory slot with the given integer key and string
            mem.permanent_memory[key_int] = string
            print(_text)
            return _text
        else:
            print(f"Invalid key '{key}', out of range.")
            return None
    # Check if the key is a valid string
    elif isinstance(key, str):
        _text = "Overwriting memory with key " + key + " and string " + string
        # Overwrite the memory slot with the given string key and string
        mem.permanent_memory[key] = string
        print(_text)
        return _text
    else:
-        print("Invalid key, cannot overwrite memory.")
        print(f"Invalid key '{key}', must be an integer or a string.")
        return None
@@ -235,13 +255,20 @@ def start_agent(name, task, prompt, model=cfg.fast_llm_model):

def message_agent(key, message):
    global cfg

    # Check if the key is a valid integer
    if is_valid_int(key):
        agent_response = agents.message_agent(int(key), message)
    # Check if the key is a valid string
    elif isinstance(key, str):
        agent_response = agents.message_agent(key, message)
    else:
        return "Invalid key, must be an integer or a string."

    # Speak response
    if cfg.speak_mode:
        speak.say_text(agent_response, 1)
-    return f"Agent {key} responded: {agent_response}"
    return agent_response


def list_agents():
scripts/config.py

@@ -4,6 +4,7 @@ from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()


class Singleton(type):
    """
    Singleton metaclass for ensuring only one instance of a class.
@@ -34,11 +35,28 @@ class Config(metaclass=Singleton):
        self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))

        self.openai_api_key = os.getenv("OPENAI_API_KEY")
-        self.use_azure = False
        self.use_azure = os.getenv("USE_AZURE") == 'True'
        if self.use_azure:
            self.openai_api_base = os.getenv("OPENAI_API_BASE")
            self.openai_api_version = os.getenv("OPENAI_API_VERSION")
            self.openai_deployment_id = os.getenv("OPENAI_DEPLOYMENT_ID")
            openai.api_type = "azure"
            openai.api_base = self.openai_api_base
            openai.api_version = self.openai_api_version

        self.elevenlabs_api_key = os.getenv("ELEVENLABS_API_KEY")
        self.google_api_key = os.getenv("GOOGLE_API_KEY")
        self.custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID")

        self.pinecone_api_key = os.getenv("PINECONE_API_KEY")
        self.pinecone_region = os.getenv("PINECONE_ENV")

        # User-Agent header to use when browsing the web.
        # Some websites deny the request outright with an error code if no user agent is provided.
        self.user_agent_header = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"}

        # Initialize the OpenAI API client
        openai.api_key = self.openai_api_key
@@ -71,3 +89,9 @@ class Config(metaclass=Singleton):
    def set_custom_search_engine_id(self, value: str):
        self.custom_search_engine_id = value

    def set_pinecone_api_key(self, value: str):
        self.pinecone_api_key = value

    def set_pinecone_region(self, value: str):
        self.pinecone_region = value
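A small sketch of what the `Singleton` metaclass buys here: every module that constructs a `Config` gets the same instance, so values such as the new Pinecone settings are shared process-wide (assumes a populated `.env`):
```python
from config import Config

a = Config()
b = Config()
assert a is b  # Singleton metaclass: both names point at one shared instance

# Values loaded from .env by load_dotenv():
print(a.pinecone_region)  # e.g. us-east4-gcp
```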

scripts/data.py

@@ -1,15 +1,14 @@
import os
from pathlib import Path

-SRC_DIR = Path(__file__).parent

def load_prompt():
    try:
-        # get directory of this file:
-        file_dir = Path(os.path.dirname(os.path.realpath(__file__)))
-        data_dir = file_dir / "data"
-        prompt_file = data_dir / "prompt.txt"
-        # Load the promt from data/prompt.txt
-        with open(SRC_DIR/ "data/prompt.txt", "r") as prompt_file:
        file_dir = Path(__file__).parent
        prompt_file_path = file_dir / "data" / "prompt.txt"

        # Load the prompt from data/prompt.txt
        with open(prompt_file_path, "r") as prompt_file:
            prompt = prompt_file.read()

        return prompt

scripts/data/prompt.txt

@@ -1,17 +1,15 @@
CONSTRAINTS:

-1. ~4000 word limit for memory. Your memory is short, so immediately save important information to long term memory and code to files.
-2. No user assistance
-3. Exclusively use the commands listed in double quotes e.g. "command name"
1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"

COMMANDS:

1. Google Search: "google", args: "input": "<search>"
2. Memory Add: "memory_add", args: "string": "<string>"
-3. Memory Delete: "memory_del", args: "key": "<key>"
-4. Memory Overwrite: "memory_ovr", args: "key": "<key>", "string": "<string>"
5. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
-6. Start GPT Agent: "start_agent", args: "name": <name>, "task": "<short_task_desc>", "prompt": "<prompt>"
6. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
7. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"
8. List GPT Agents: "list_agents", args: ""
9. Delete GPT Agent: "delete_agent", args: "key": "<key>"
@@ -19,11 +17,12 @@ COMMANDS:
11. Read file: "read_file", args: "file": "<file>"
12. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
13. Delete file: "delete_file", args: "file": "<file>"
-14. Evaluate Code: "evaluate_code", args: "code": "<full _code_string>"
-15. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
-16. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
-17. Execute Python File: "execute_python_file", args: "file": "<file>"
-18. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"
14. Search Files: "search_files", args: "directory": "<directory>"
15. Evaluate Code: "evaluate_code", args: "code": "<full_code_string>"
16. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
17. Write Tests: "write_tests", args: "code": "<full_code_string>", "focus": "<list_of_focus_areas>"
18. Execute Python File: "execute_python_file", args: "file": "<file>"
19. Task Complete (Shutdown): "task_complete", args: "reason": "<reason>"
RESOURCES:
@@ -43,12 +42,6 @@ You should only respond in JSON format as described below

RESPONSE FORMAT:
{
-    "command": {
-        "name": "command name",
-        "args":{
-            "arg name": "value"
-        }
-    },
    "thoughts":
    {
        "text": "thought",
@@ -56,6 +49,12 @@ RESPONSE FORMAT:
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    },
    "command": {
        "name": "command name",
        "args":{
            "arg name": "value"
        }
    }
}
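Under the new ordering, a well-formed reply puts the agent's reasoning first and the command last. For example (all values are illustrative; only keys visible in this diff are shown):
```
{
    "thoughts":
    {
        "text": "I should save my findings before they leave short term memory.",
        "plan": "- write notes to a file\n- continue research",
        "criticism": "I could batch my writes to use fewer commands.",
        "speak": "I am saving my notes to a file."
    },
    "command": {
        "name": "write_to_file",
        "args":{
            "file": "notes.txt",
            "text": "Key findings so far..."
        }
    }
}
```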

scripts/file_operations.py

@@ -58,3 +58,20 @@ def delete_file(filename):
        return "File deleted successfully."
    except Exception as e:
        return "Error: " + str(e)


def search_files(directory):
    found_files = []

    if directory == "" or directory == "/":
        search_directory = working_directory
    else:
        search_directory = safe_join(working_directory, directory)

    for root, _, files in os.walk(search_directory):
        for file in files:
            if file.startswith('.'):
                continue
            relative_path = os.path.relpath(os.path.join(root, file), working_directory)
            found_files.append(relative_path)

    return found_files
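A minimal usage sketch for the new helper (assumes it runs against the workspace that `working_directory` points at):
```python
from file_operations import search_files

print(search_files(""))       # every non-hidden file under the workspace, as relative paths
print(search_files("notes"))  # limit the walk to the notes/ subdirectory
```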

scripts/json_parser.py

@@ -24,6 +24,7 @@ def fix_and_parse_json(json_str: str, try_to_fix_with_gpt: bool = True):
    """
    try:
        json_str = json_str.replace('\t', '')
        return json.loads(json_str)
    except Exception as e:
        # Let's do something manually - sometimes GPT responds with something BEFORE the braces:
@@ -51,7 +52,7 @@ def fix_and_parse_json(json_str: str, try_to_fix_with_gpt: bool = True):
def fix_json(json_str: str, schema: str, debug=False) -> str:
    # Try to fix the JSON using gpt:
    function_string = "def fix_json(json_str: str, schema:str=None) -> str:"
-    args = [json_str, schema]
    args = [f"'''{json_str}'''", f"'''{schema}'''"]
    description_string = """Fixes the provided JSON string to make it parseable and fully compliant with the provided schema.\n If an object or field specified in the schema isn't contained within the correct JSON, it is omitted.\n This function is brilliant at guessing when the format is incorrect."""

    # If it doesn't already start with a "`", add one:
@@ -67,7 +68,8 @@ def fix_json(json_str: str, schema: str, debug=False) -> str:
        print(f"Fixed JSON: {result_string}")
        print("----------- END OF FIX ATTEMPT ----------------")
    try:
-        return json.loads(result_string)
        json.loads(result_string)  # just check the validity
        return result_string
    except:
        # Get the call stack:
        # import traceback
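The tab-stripping line matters because a raw tab inside a JSON string is an invalid control character. With it, replies like the one below parse without invoking the GPT fixer (a sketch; `try_to_fix_with_gpt=False` keeps it offline):
```python
from json_parser import fix_and_parse_json

reply = '{"command": {"name": "list_agents", "args": {}}, "thoughts": {"text": "a\tb"}}'
parsed = fix_and_parse_json(reply, try_to_fix_with_gpt=False)
print(parsed["thoughts"]["text"])  # -> ab
```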

scripts/llm_utils.py

@@ -6,6 +6,15 @@ openai.api_key = cfg.openai_api_key

# Overly simple abstraction until we create something better
def create_chat_completion(messages, model=None, temperature=None, max_tokens=None)->str:
    if cfg.use_azure:
        response = openai.ChatCompletion.create(
            deployment_id=cfg.openai_deployment_id,
            model=model,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens
        )
    else:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,

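A call site looks the same whether or not Azure is enabled; the branch above only adds the `deployment_id` parameter. For instance (a sketch, assuming valid API credentials in the config):
```python
from llm_utils import create_chat_completion

reply = create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    model="gpt-3.5-turbo",
    temperature=0.7,
    max_tokens=100,
)
print(reply)  # the assistant's message content, per the ->str annotation
```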
scripts/main.py

@@ -1,7 +1,7 @@
import json
import random
import commands as cmd
-import memory as mem
from memory import PineconeMemory
import data
import chat
from colorama import Fore, Style
@@ -277,9 +277,17 @@ prompt = construct_prompt()
# Initialize variables
full_message_history = []
result = None
next_action_count = 0
# Make a constant:
user_input = "Determine which next command to use, and respond using the format specified above:"

# Initialize memory and make sure it is empty.
# this is particularly important for indexing and referencing pinecone memory
memory = PineconeMemory()
memory.clear()

print('Using memory of type: ' + memory.__class__.__name__)

# Interaction Loop
while True:
@@ -288,10 +296,9 @@ while True:
        prompt,
        user_input,
        full_message_history,
-        mem.permanent_memory,
        memory,
        cfg.fast_token_limit)  # TODO: This hardcodes the model to use GPT3.5. Make this an argument

-    # print("assistant reply: "+assistant_reply)
    # Print Assistant thoughts
    print_assistant_thoughts(assistant_reply)
@@ -301,7 +308,7 @@ while True:
    except Exception as e:
        print_to_console("Error: \n", Fore.RED, str(e))

-    if not cfg.continuous_mode:
    if not cfg.continuous_mode and next_action_count == 0:
        ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
        # Get key press: Prompt the user to press enter to continue or escape
        # to exit
@@ -311,13 +318,21 @@ while True:
            Fore.CYAN,
            f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}")
        print(
-            f"Enter 'y' to authorise command or 'n' to exit program, or enter feedback for {ai_name}...",
            f"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for {ai_name}...",
            flush=True)
        while True:
            console_input = input(Fore.MAGENTA + "Input:" + Style.RESET_ALL)
            if console_input.lower() == "y":
                user_input = "GENERATE NEXT COMMAND JSON"
                break
            elif console_input.lower().startswith("y -"):
                try:
                    next_action_count = abs(int(console_input.split(" ")[1]))
                    user_input = "GENERATE NEXT COMMAND JSON"
                except ValueError:
                    print("Invalid input format. Please enter 'y -n' where n is the number of continuous tasks.")
                    continue
                break
            elif console_input.lower() == "n":
                user_input = "EXIT"
                break
@@ -348,6 +363,14 @@ while True:
        result = f"Human feedback: {user_input}"
    else:
        result = f"Command {command_name} returned: {cmd.execute_command(command_name, arguments)}"
        if next_action_count > 0:
            next_action_count -= 1

    memory_to_add = f"Assistant Reply: {assistant_reply} " \
                    f"\nResult: {result} " \
                    f"\nHuman Feedback: {user_input} "

    memory.add(memory_to_add)

    # Check if there's a result from the command; append it to the message
    # history

scripts/memory.py

@@ -1 +1,61 @@
-permanent_memory = []
from config import Config, Singleton
import pinecone
import openai

cfg = Config()


def get_ada_embedding(text):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")["data"][0]["embedding"]


def get_text_from_embedding(embedding):
    return openai.Embedding.retrieve(embedding, model="text-embedding-ada-002")["data"][0]["text"]


class PineconeMemory(metaclass=Singleton):
    def __init__(self):
        pinecone_api_key = cfg.pinecone_api_key
        pinecone_region = cfg.pinecone_region
        pinecone.init(api_key=pinecone_api_key, environment=pinecone_region)
        dimension = 1536
        metric = "cosine"
        pod_type = "p1"
        table_name = "auto-gpt"
        # this assumes we don't start with memory.
        # for now this works.
        # we'll need a more complicated and robust system if we want to start with memory.
        self.vec_num = 0
        if table_name not in pinecone.list_indexes():
            pinecone.create_index(table_name, dimension=dimension, metric=metric, pod_type=pod_type)
        self.index = pinecone.Index(table_name)

    def add(self, data):
        vector = get_ada_embedding(data)
        # no metadata here. We may wish to change that long term.
        resp = self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
        _text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
        self.vec_num += 1
        return _text

    def get(self, data):
        return self.get_relevant(data, 1)

    def clear(self):
        self.index.delete(deleteAll=True)
        return "Obliviated"

    def get_relevant(self, data, num_relevant=5):
        """
        Returns all the data in the memory that is relevant to the given data.
        :param data: The data to compare to.
        :param num_relevant: The number of relevant data to return. Defaults to 5
        """
        query_embedding = get_ada_embedding(data)
        results = self.index.query(query_embedding, top_k=num_relevant, include_metadata=True)
        sorted_results = sorted(results.matches, key=lambda x: x.score)
        return [str(item['metadata']["raw_text"]) for item in sorted_results]

    def get_stats(self):
        return self.index.describe_index_stats()
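Putting the pieces together, the memory object is used the same way in `main.py` and `commands.py`. A usage sketch (assumes `PINECONE_API_KEY`, `PINECONE_ENV`, and `OPENAI_API_KEY` are set in the environment):
```python
from memory import PineconeMemory

memory = PineconeMemory()  # Singleton: repeated construction returns the same instance
memory.clear()             # start from an empty index, as main.py does

memory.add("Saved the business plan to plan.txt")
# Up to num_relevant stored texts, ranked by embedding similarity to the query:
print(memory.get_relevant("Where is the business plan?", num_relevant=1))
```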

scripts/speak.py

@@ -46,6 +46,7 @@ def gtts_speech(text):
    os.remove("speech.mp3")

def say_text(text, voice_index=0):
    def speak():
        if not cfg.elevenlabs_api_key:
            gtts_speech(text)