We’ve been frustrated by the "Scaling Hypothesis" (just make the model bigger). We believe the issue with current AI isn’t size; it’s architecture.
We just open-sourced TOPAS-DSPL. It’s a tiny model (~15M params) that uses a Dual-Stream architecture (separating Logic from Execution) to tackle the ARC-AGI-2 benchmark.
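To make the dual-stream idea concrete, here is a minimal sketch of one refinement step in which a "logic" stream updates a plan and gates how an "execution" stream updates the working state. All names and weights here (`W_logic`, `W_exec`, `W_gate`, the gating scheme itself) are illustrative assumptions, not the actual TOPAS-DSPL internals:

```python
import numpy as np

# Hypothetical dual-stream step: the logic stream plans, the execution
# stream applies the plan to the state, and a sigmoid gate lets the plan
# modulate execution. Dimensions and weights are illustrative only.
rng = np.random.default_rng(0)
d = 16
W_logic = rng.normal(scale=0.1, size=(d, d))  # updates the plan
W_exec = rng.normal(scale=0.1, size=(d, d))   # updates the grid state
W_gate = rng.normal(scale=0.1, size=(d, d))   # logic gates execution

def step(logic, state):
    logic = np.tanh(logic @ W_logic)              # refine the plan
    gate = 1.0 / (1.0 + np.exp(-(logic @ W_gate)))  # plan -> gate in (0, 1)
    state = state + gate * np.tanh(state @ W_exec)  # gated state update
    return logic, state

logic, state = rng.normal(size=d), rng.normal(size=d)
for _ in range(4):  # recursive refinement over several steps
    logic, state = step(logic, state)
print(state.shape)  # (16,)
```

The point of the separation is that the recurrence can iterate the plan several times while the state update stays bounded (tanh plus a gate in (0, 1)), which is one plausible way to keep recursion stable.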
It achieves 24% accuracy on the hard evaluation set, which beats many models 1000x its size.
The repo includes the full training code, data augmentation pipeline, and the MuonClip optimizer we used to stabilize the recursion. It runs on a single consumer GPU.
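For readers unfamiliar with MuonClip: the "clip" part (QK-clip, as described for Kimi K2) rescales the query/key projection weights whenever the maximum pre-softmax attention logit exceeds a threshold. The sketch below shows only that clipping step on toy weights; `tau`, the shapes, and all variable names are illustrative assumptions, not the repo's implementation:

```python
import numpy as np

# Hedged sketch of the QK-clip idea used by MuonClip: if the max
# attention logit exceeds tau, scale W_q and W_k by sqrt(tau / max),
# which rescales the logits by exactly tau / max. Toy values only.
rng = np.random.default_rng(1)
d, tau = 8, 10.0
W_q = rng.normal(scale=2.0, size=(d, d))
W_k = rng.normal(scale=2.0, size=(d, d))
x = rng.normal(size=(32, d))  # a batch of token states

def max_logit(W_q, W_k, x):
    q, k = x @ W_q, x @ W_k
    return np.max(q @ k.T / np.sqrt(d))  # pre-softmax attention logits

m = max_logit(W_q, W_k, x)
if m > tau:  # clip only when logits are exploding
    gamma = np.sqrt(tau / m)
    W_q, W_k = W_q * gamma, W_k * gamma

print(max_logit(W_q, W_k, x) <= tau + 1e-9)  # True
```

Because both projections are scaled by the same factor, the logits shrink quadratically in `gamma`, so one rescale brings the maximum back to the threshold without touching any other weights.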
Doug_Bitterbot•2h ago
Here’s a link to the corresponding paper as well: https://zenodo.org/records/17683673
Our AI agent: https://bitterbot.ai/