Auto-GPT Benchmark

A repo built to benchmark the performance of agents of all kinds, regardless of how they are set up or how they work.
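The real harness lives in the agbenchmark package; the sketch below only illustrates the agent-agnostic idea, where the agent under test is a black box launched from a config entry and judged purely on the artifacts it leaves behind. The `run_challenge` helper and the config fields (`entry_command`, `cutoff_seconds`) are hypothetical, not the benchmark's actual API.

```python
"""Illustrative sketch only -- not the actual agbenchmark harness."""
import json
import subprocess
from pathlib import Path


def run_challenge(config_path: str, task: str, must_contain: str, workspace: str) -> bool:
    # The config points at whatever command starts the agent, e.g. ["python", "agent/run.py"].
    config = json.loads(Path(config_path).read_text())
    cmd = config["entry_command"] + [task]

    # Launch the agent as a black box; we never inspect how it works internally.
    try:
        subprocess.run(cmd, cwd=workspace, timeout=config.get("cutoff_seconds", 60))
    except subprocess.TimeoutExpired:
        pass  # the agent ran out of time; whatever it wrote so far still gets scored

    # Score only the artifacts the agent produced in the shared workspace.
    return any(must_contain in f.read_text() for f in Path(workspace).glob("**/*.txt"))


if __name__ == "__main__":
    passed = run_challenge(
        "config.json",
        task="Write the word 'Washington' to a .txt file",
        must_contain="Washington",
        workspace="./workspace",
    )
    print("challenge passed" if passed else "challenge failed")
```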

Scores:

Agent scores will go here, both overall and by category.
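As a rough illustration of what "overall and by category" could mean, the snippet below rolls made-up per-challenge pass/fail results into an overall pass rate and a per-category pass rate; the challenge names, categories, and report format are all hypothetical.

```python
from collections import defaultdict

# Made-up example data: (challenge, category, passed)
results = [
    ("write_file", "interface", True),
    ("read_file", "interface", True),
    ("debug_simple_typo", "code", False),
]

# Overall pass rate across every challenge.
overall = sum(passed for _, _, passed in results) / len(results)
print(f"overall: {overall:.0%}")

# Pass rate broken down by category.
by_category = defaultdict(list)
for _, category, passed in results:
    by_category[category].append(passed)
for category, outcomes in sorted(by_category.items()):
    print(f"{category}: {sum(outcomes) / len(outcomes):.0%}")
```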

Integrated Agents

  • Auto-GPT
  • gpt-engineer
  • mini-agi
  • smol-developer