Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric.
Install:

```
npx skill4agent add github/awesome-copilot autoresearch
```

**What are you trying to improve or optimize?**

Examples: execution time, memory usage, binary size, test pass rate, code coverage, API response latency, throughput, error rate, benchmark score, build time, bundle size, lines of code, cyclomatic complexity, etc.
**How do we measure success? What exact command produces the metric?**

I need:

- The command to run (e.g., `pytest --tb=short`, `dotnet test`, `npm run benchmark`, `time ./build.sh`)
- How to extract the metric from the output (e.g., a regex pattern, a specific line, a JSON field)
- Direction: Is lower better or higher better?

Example: "Run `dotnet test --logger trx`, count passing tests. Higher is better." Example: "Run `hyperfine './my-program'`, extract mean time. Lower is better."
Store the answers as `METRIC_COMMAND`, `METRIC_EXTRACTION`, and `METRIC_DIRECTION` (`lower_is_better` or `higher_is_better`).

**Which files or directories am I allowed to modify? And which files are OFF LIMITS (read-only)?**
Store the answers as `IN_SCOPE_FILES` and `OUT_OF_SCOPE_FILES`.

**Are there any constraints I should respect?**

Examples:
- Time budget per experiment (e.g., "each run should take < 2 minutes")
- No new dependencies
- Must keep all existing tests passing
- Must not change the public API
- Must maintain backward compatibility
- VRAM/memory limit
- Code complexity limits (prefer simpler solutions)
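The "must keep all existing tests passing" constraint can be enforced mechanically before an experiment is accepted. A minimal sketch, where `TEST_COMMAND` is a placeholder for your project's real test invocation:

```shell
#!/bin/sh
# Gate an experiment on the test suite. TEST_COMMAND is a placeholder;
# "true" stands in for something like "pytest -q" or "dotnet test".
TEST_COMMAND="true"

check_constraints() {
  # Reject the experiment outright if the suite fails, regardless of metric.
  if ! sh -c "$TEST_COMMAND" > /dev/null 2>&1; then
    echo "constraint violated: tests failing"
    return 1
  fi
  return 0
}

check_constraints && echo "constraints satisfied"
```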
Store the answer as `CONSTRAINTS`.

**How many experiments should I run, or should I just keep going until you stop me?**

You can say a number (e.g., "try 20 experiments") or "unlimited" (I'll run until you interrupt).
Store the answer as `MAX_EXPERIMENTS` (a number or `unlimited`).

**Simplicity policy (default):** All else being equal, simpler is better. A small improvement that adds ugly complexity is not worth it. Removing code while maintaining or improving the metric is a great outcome. I'll weigh the complexity cost against the improvement magnitude. Does this policy work for you, or do you want to adjust it?
Store the answer as `SIMPLICITY_POLICY`.

Summarize the configuration back to the user:

| Parameter | Value |
|---|---|
| Goal | ... |
| Metric command | ... |
| Metric extraction | ... |
| Direction | lower is better / higher is better |
| In-scope files | ... |
| Out-of-scope files | ... |
| Constraints | ... |
| Max experiments | ... |
| Simplicity policy | ... |
Setup:

1. Create an experiment branch: `git checkout -b autoresearch/<tag>` (e.g., `autoresearch/mar17`).
2. Create `results.tsv` with the tab-separated header `experiment commit metric status description`.
3. Add `results.tsv` and `run.log` to `.git/info/exclude` so they never enter experiment commits.
4. Run the metric command on the unmodified code and log it as experiment `0`, status `baseline`, in `results.tsv`.
5. Announce: "Baseline established: [metric_name] = [value]. Starting autonomous experimentation loop."
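The setup can be sketched as follows, run here in a throwaway repository so the fragment is self-contained; the `mar17` tag is just this document's example:

```shell
#!/bin/sh
# Self-contained sketch of the setup steps, using a scratch repository.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .

# 1. Experiment branch (git checkout -b works on a fresh, unborn repo too).
git checkout -q -b autoresearch/mar17

# 2. results.tsv header matching the rows the loop appends later.
printf 'experiment\tcommit\tmetric\tstatus\tdescription\n' > results.tsv

# 3. Local-only ignore so bookkeeping files never enter experiment commits.
printf 'results.tsv\nrun.log\n' >> .git/info/exclude

git branch --show-current
```

Using `.git/info/exclude` rather than `.gitignore` keeps the ignore rule local, so the experiment branch itself never carries bookkeeping changes.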
Then iterate, up to `MAX_EXPERIMENTS` times:

LOOP:
1. THINK - Analyze previous results and the current code.
Generate an experiment hypothesis.
Consider: what worked, what didn't, what hasn't been tried.
2. EDIT - Modify the in-scope file(s) to implement the idea.
Keep changes focused and minimal per experiment.
3. COMMIT - git add + git commit with a short descriptive message.
Format: "experiment: <short description of what changed>"
4. RUN - Execute the metric command.
Redirect output to run.log so it does not flood the context window.
Use shell-appropriate redirection:
- Bash/Zsh: `<command> > run.log 2>&1`
- PowerShell: `<command> *> run.log`
5. MEASURE - Extract the metric from run.log.
If extraction fails (crash/error), read the last 50 lines
of run.log for the error.
6. DECIDE - Compare metric to the current best:
- IMPROVED: Keep the commit. Update the "best" baseline.
Log status = "keep".
- SAME OR WORSE: Revert. `git reset --hard HEAD~1`.
Log status = "discard".
- CRASH: Attempt a quick fix (typo, import, simple error).
Amend the experiment commit (`git commit --amend`) with the fix
and rerun. The experiment keeps its original number.
If unfixable after 2 attempts, revert the entire experiment
(`git reset --hard HEAD~1`) and log status = "crash".
7. LOG - Append a row to results.tsv:
experiment_number commit_hash metric_value status description
8. CONTINUE - Go to step 1.

When the loop ends, summarize the session: show `git log --oneline <start_commit>..HEAD` and the final `results.tsv`, for example:

experiment commit metric status description
0 a1b2c3d 0.997900 baseline unmodified code
1 b2c3d4e 0.993200 keep increase learning rate to 0.04
2 c3d4e5f 1.005000 discard switch to GeLU activation
3 d4e5f60 0.000000 crash double model width (OOM)

When finished (or interrupted), all work remains on the `autoresearch/<tag>` branch: kept experiments survive as commits, discarded ones were removed with `git reset --hard HEAD~1`, and `results.tsv` / `run.log` stay untracked via `.git/info/exclude`.
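The MEASURE/DECIDE/LOG core of the loop (steps 5-7) can be sketched as a shell fragment. The metric values and description mirror the example table above; the `improved` helper is one assumed way the direction-aware comparison might be implemented, and the git revert is shown as a comment so the fragment runs standalone:

```shell
#!/bin/sh
# Sketch of loop steps 5-7; values mirror the example results.tsv above.
METRIC_DIRECTION=lower_is_better
best=0.997900        # current best (the baseline)
metric=0.993200      # value extracted from run.log for this experiment

# Success when $1 beats $2 under the configured direction.
improved() {
  if [ "$METRIC_DIRECTION" = lower_is_better ]; then
    awk -v a="$1" -v b="$2" 'BEGIN { exit !(a < b) }'
  else
    awk -v a="$1" -v b="$2" 'BEGIN { exit !(a > b) }'
  fi
}

if improved "$metric" "$best"; then
  status=keep        # keep the commit; this metric becomes the new best
  best=$metric
else
  status=discard     # would run: git reset --hard HEAD~1
fi

# Append one tab-separated row in the results.tsv format.
printf '%s\t%s\t%s\t%s\t%s\n' 1 b2c3d4e "$metric" "$status" \
  'increase learning rate to 0.04' >> results.tsv
echo "$status"
```

Delegating the float comparison to awk avoids the usual pitfall that `[ "$a" -lt "$b" ]` only handles integers.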