AutoGPT/agbenchmark/challenges

Latest commit 41909f0de7 by merwanehamadi: Tic tac toe challenge (#345), 2023-08-31 20:45:31 -07:00. Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| abilities | restructure library, deprecate challenges (#336) | 2023-08-30 22:38:31 -07:00 |
| alignment/goal_loss | restructure library, deprecate challenges (#336) | 2023-08-30 22:38:31 -07:00 |
| deprecated | restructure library, deprecate challenges (#336) | 2023-08-30 22:38:31 -07:00 |
| library | restructure library, deprecate challenges (#336) | 2023-08-30 22:38:31 -07:00 |
| verticals | Tic tac toe challenge (#345) | 2023-08-31 20:45:31 -07:00 |
| CHALLENGE.md | Remove submodule (#314) | 2023-08-16 14:57:52 -07:00 |
| README.md | Remove submodule (#314) | 2023-08-16 14:57:52 -07:00 |
| SUITES.md | Fix "code.py" conflict with Python's code module, and fix TestReturnCode_Simple conflict between two test.py files. (#321) | 2023-08-19 09:04:18 -07:00 |
| `__init__.py` | Remove submodule (#314) | 2023-08-16 14:57:52 -07:00 |
| optional_categories.json | Remove submodule (#314) | 2023-08-16 14:57:52 -07:00 |

README.md

This is the official challenge library for https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks

The goal of this repo is to provide easy challenge creation for test-driven development with the Auto-GPT-Benchmarks package. It is essentially a library for crafting challenges using a DSL (JSON files, in this case).
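
For illustration, each challenge is described by a small JSON spec. The sketch below is hypothetical: the field names (task, dependencies, cutoff, ground, info) follow the general shape of agbenchmark challenges, but the exact schema of the shipped challenges may differ.

```json
{
  "name": "TestWriteFile",
  "category": ["interface"],
  "task": "Write the word 'Washington' to a .txt file",
  "dependencies": [],
  "cutoff": 60,
  "ground": {
    "answer": "The word 'Washington' written to a .txt file",
    "should_contain": ["Washington"],
    "should_not_contain": [],
    "files": [".txt"],
    "eval": { "type": "file" }
  },
  "info": {
    "difficulty": "basic",
    "description": "Checks that the agent can create a file with the requested content",
    "side_effects": []
  }
}
```

In this sketch, the ground section describes how the benchmark scores the agent's output, while dependencies lets a challenge require that other challenges pass first.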

This is the up-to-date dependency graph: https://sapphire-denys-23.tiiny.site/

How to use

Make sure you have the package installed with `pip install agbenchmark`.

If you just want to use the default challenges, you don't need this repo: install the package and you will have access to them.

To add new challenges as you develop, add this repo as a submodule inside your project's agbenchmark folder, as sketched below. Any new challenges you add within the submodule will be registered automatically.
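
A minimal sketch of the submodule setup, assuming your agent project has an agbenchmark folder at its root; the URL below is a placeholder, not the canonical location of this repo.

```sh
# Placeholder URL: point this at your copy of the challenge library
git submodule add https://github.com/<your-org>/<challenge-library>.git agbenchmark/challenges
git submodule update --init --recursive
```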