*What I did*:
- Created a Claude Code-inspired agent (system msg + tools)
- Built Docker-isolated GRPO training where each rollout gets its own container
- Developed a multi-agent synthetic data pipeline to generate & validate training data with Opus-4
- Implemented a hybrid reward signal combining unit-test verifiers with a behavioural LLM judge
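To make the Docker isolation concrete, here is a minimal sketch of how one rollout could get its own throwaway container. The image name, mount paths, and the `sleep` keep-alive trick are illustrative assumptions, not the actual setup from the repo:

```python
import subprocess
import uuid

def build_rollout_container_cmd(image: str, task_dir: str, timeout_s: int = 600) -> list[str]:
    """Build a `docker run` command giving one rollout its own container.
    Image name and mount layout are hypothetical."""
    name = f"rollout-{uuid.uuid4().hex[:8]}"
    return [
        "docker", "run",
        "--rm",                       # container is deleted when the rollout ends
        "--name", name,
        "--network", "none",          # no network, so rollouts can't interfere
        "-v", f"{task_dir}:/app:ro",  # mount the task files read-only
        image,
        "sleep", str(timeout_s),      # keep the container alive for the agent session
    ]

cmd = build_rollout_container_cmd("terminal-agent-env:latest", "/tmp/task_001")
# The trainer would then launch it with subprocess.run(cmd) and exec the
# agent's shell commands inside via `docker exec`.
```

Each rollout being `--rm`'d at the end means a bad agent action (deleting files, filling the disk) can never leak into a sibling rollout.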
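The hybrid reward can be sketched as a weighted blend of the two signals. The 0.7/0.3 weighting and the fallback behaviour are my own illustrative assumptions, not the values used in training:

```python
def hybrid_reward(tests_passed: int, tests_total: int,
                  judge_score: float, w_tests: float = 0.7) -> float:
    """Blend a unit-test verifier with a behavioural LLM-judge score.
    Weights and ranges are illustrative only."""
    if tests_total == 0:
        return judge_score                     # no verifier: fall back to the judge
    test_frac = tests_passed / tests_total     # fraction of unit tests passed
    judge = min(max(judge_score, 0.0), 1.0)    # clamp judge output to [0, 1]
    return w_tests * test_frac + (1.0 - w_tests) * judge

# e.g. half the tests pass, judge fully approves the behaviour:
r = hybrid_reward(tests_passed=2, tests_total=4, judge_score=1.0)  # 0.65
```

The verifier keeps the reward grounded in ground truth while the judge catches reward hacking and rates process quality on tasks where tests alone are too coarse.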
*Key results*:
- My untrained Qwen3-32B agent achieved 13.75% on Terminal-Bench (#19, beating Stanford's Qwen3-235B MoE)
- I verified that training runs stably on 32x H100s distributed across 4 bare-metal nodes
- I created a mini-eval framework for LLM-judge performance. Sonnet-4 won.
- ~£30-50k needed for a full 1,000-epoch training run (I could only afford testing)
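For the judge mini-eval, the core idea can be sketched as scoring each candidate judge against hand-labelled transcripts. The toy judges and data below are placeholders; a real run would call Sonnet-4, Opus-4, etc.:

```python
def judge_accuracy(judge_fn, labelled_transcripts):
    """Score a candidate LLM judge against gold-labelled transcripts.
    `judge_fn` maps a transcript string to a True/False verdict."""
    correct = sum(judge_fn(t) == gold for t, gold in labelled_transcripts)
    return correct / len(labelled_transcripts)

# Toy stand-in judges (a real judge_fn would wrap an LLM API call):
always_pass = lambda t: True
keyword_judge = lambda t: "task complete" in t.lower()

data = [("Task complete, all tests green", True),
        ("Gave up after 3 attempts", False)]
```

Running every candidate judge over the same labelled set gives a single agreement number per model, which is how a winner like Sonnet-4 can be picked objectively.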
*Technical details*:
- The synthetic dataset ranges from easy to extremely hard tasks. An example hard task's prompt:
"I found this mystery program at `/app/program` and I'm completely stumped. It's a stripped binary, so I have no idea what it does or how to run it properly. The program seems to expect some specific input and then produces an output, but I can't figure out what kind of input it needs. Could you help me figure out what this program requires?"
- Simple config presets allow training to run on multiple hardware setups with minimal effort.
- GRPO used with 16 rollouts per task, up to 32k tokens per rollout.
- Agent uses XML/YAML format to structure tool calls
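With 16 rollouts per task, GRPO computes each rollout's advantage relative to its own group rather than a learned value baseline. A minimal sketch of that group normalisation:

```python
from statistics import mean, pstdev

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalise each rollout's reward by the
    mean and std of its group (e.g. the 16 rollouts of one task)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:
        return [0.0] * len(rewards)  # identical rewards carry no learning signal
    return [(r - mu) / sigma for r in rewards]

# Two successes and two failures in a group of four:
advs = grpo_advantages([1.0, 0.0, 1.0, 0.0])  # [1.0, -1.0, 1.0, -1.0]
```

This is the standard GRPO baseline; the zero-variance guard matters in practice because groups where every rollout fails (or every one succeeds) are common on very hard or very easy tasks.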
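To illustrate the XML/YAML tool-call format, here is a simplified parser. The `<tool_call>` tag name and the flat `key: value` body are my own assumptions for the sketch, not necessarily the exact schema the agent uses:

```python
import re

def parse_tool_call(message: str):
    """Extract a <tool_call>...</tool_call> block from the model's output
    and parse its flat YAML-style body (tag and schema are illustrative)."""
    m = re.search(r"<tool_call>\s*(.*?)\s*</tool_call>", message, re.DOTALL)
    if not m:
        return None  # model produced plain text, no tool use this turn
    call = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            call[key.strip()] = value.strip()
    return call

msg = """I'll list the directory first.
<tool_call>
tool: bash
command: ls -la /app
</tool_call>"""
# parse_tool_call(msg) -> {"tool": "bash", "command": "ls -la /app"}
```

Mixing XML for the outer envelope with YAML-ish fields inside keeps the format easy for a model to emit reliably while staying trivial to parse with a regex.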
*More details*:
My GitHub repos open-source it all (agent, data, code) and contain many more technical details if you're interested:
- Terminal Agent RL repo
- Multi-agent synthetic data pipeline repo
I thought I would share this because I believe long-horizon RL is going to change everybody's lives, so I feel it is important (and super fun!) for us all to share knowledge in this area and enjoy exploring what is possible.
Thanks for reading!
Dan
(Built using rLLM RL framework which was brilliant to work with, and evaluated and inspired by the great Terminal Bench benchmark)
rboyd•9h ago
What are the best papers/resources on sota long-horizon RL?
Thanks.