Not sure I really understand the comparisons though. They emphasize the cost savings relative to Haiku, but Haiku kinda sucks at this task, and Leanstral is worse? If you're optimizing for correctness, why would "yeah it sucks but it's 10 times cheaper" be relevant? Or am I misunderstanding something?
On the promising side, Opus doesn't look great at this benchmark either — maybe we can get better than Opus results by scaling this up. I guess that's the takeaway here.
I actively use gemini-3.1-pro-preview, claude-4.6-opus-high, and gpt-5.3-codex as well. I prefer them all for different reasons, but I usually _start_ with Mistral if it's an option.
It does actually significantly boost performance. There was an article on here about it recently, I'll see if I can find it.
Edit: https://news.ycombinator.com/item?id=44630724
They found that the more different the models were (the less overlap in correctly solved problems), the more it boosted the score.
This model is specifically trained on this task and significantly[1] underperforms Opus.
Opus costs about 6x more.
Which seems... totally worth it based on the task at hand.
[1]: based on the total spread of tested models
Still, the more interesting comparison would be against something like Codex.
That would also help reduce our dependency on American hyperscalers, which is much needed given how untrustworthy the US is right now (and also hostile towards Europe, as its new security strategy lays out).
Could definitely be interesting to have another model run over the codebase when looking for improvements.
> Instead of taking a stab in the dark, Leanstral rolled up its sleeves. It successfully built test code to recreate the failing environment and diagnosed the underlying issue with definitional equality. The model correctly identified that because def creates a rigid definition requiring explicit unfolding, it was actively blocking the rw tactic from seeing the underlying structure it needed to match.
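To make that concrete, here's a minimal Lean 4 sketch of the failure mode the article describes (my own toy example with a made-up `incr` definition, not taken from the article): `rw` matches the goal syntactically, so it won't look through a plain `def` until you explicitly unfold it.

```lean
-- Hypothetical toy definition, purely to illustrate the rw/def interaction.
def incr (n : Nat) : Nat := n.succ

example (n : Nat) : incr n = n + 1 := by
  -- `rw [Nat.succ_eq_add_one]` fails at this point: the goal reads
  -- `incr n = n + 1`, and rw matches syntactically, so it never sees
  -- the `Nat.succ n` hidden inside the definition.
  unfold incr                -- goal becomes `Nat.succ n = n + 1`
  rw [Nat.succ_eq_add_one]   -- now the pattern matches and rfl closes it
```

(`simp only [incr]` or restating the goal with `show` would also work; the point is just that `rw` on its own won't unfold a `def`.)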