Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.
https://www.oracle.com/news/announcement/blog/oracle-cloud-c...
It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".
Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969
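A minimal sketch of what "interrogating the traces" can look like in practice. Everything here is an assumption for illustration: the traces directory, the one-JSON-file-per-run layout, the "messages" field, and the list of suspicious strings are all hypothetical, not the actual SWE-bench harness format.

    # Scan agent trace files for signs that the agent saw the real fix
    # rather than deriving one itself. Hypothetical trace layout:
    # one JSON file per run with a "messages" list of {"role", "content"} dicts.
    import json
    import pathlib

    SUSPICIOUS = [
        "git log --all",           # browsing history that may expose the gold fix
        "git show origin/",        # peeking at remote branches
        "This issue was fixed in"  # the agent citing the upstream resolution
    ]

    def flag_run(path: pathlib.Path) -> list[str]:
        """Return the suspicious snippets found in one trace file."""
        trace = json.loads(path.read_text())
        hits = []
        for msg in trace.get("messages", []):
            content = msg.get("content", "")
            hits.extend(s for s in SUSPICIOUS if s in content)
        return hits

    if __name__ == "__main__":
        for trace_file in sorted(pathlib.Path("traces").glob("*.json")):
            hits = flag_run(trace_file)
            if hits:
                print(f"{trace_file.name}: {hits}")  # review these by hand

The point isn't the exact string list; it's that a cheap automated pass can surface runs worth reading manually.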
This issue affected a tiny fraction of existing agents in a tiny fraction of their runs, and we've now issued a fix.
This is a natural part of running a benchmark; I'm sure small things like this will keep being discovered, and we'll keep fixing them. It doesn't change the overall picture or the trends at all.
piskov•1h ago
https://arxiv.org/html/2506.12286v3
stefan_•1h ago
I don't get it: who is so opposed to doing the bare minimum of manual work and checking what these models are doing? At least back in the day, grad students doing an easy meta-paper understood it meant some repetitive manual work. Now we get benchmarks from hype vendors who think they can use the thing they're benchmarking to... mark the bench.
jsheard•56m ago
Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.
yorwba•9m ago
Data contamination, stemming from the fact that the benchmark is built from already-solved problems in public repositories, is a different issue. It can't be addressed by verifying the benchmark questions harder, only by putting stricter limits on the model under test.
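One way to read "stricter limits on the model under test" is per-model filtering: only score a model on tasks whose upstream fix landed after that model's training cutoff, since anything earlier may simply be memorized. A rough sketch, where the field name "fix_merged_at", the model name, and the cutoff date are all hypothetical:

    # Keep only benchmark tasks whose fix was merged after the model's
    # training cutoff (all names and dates below are made up for illustration).
    from datetime import date, datetime

    TRAINING_CUTOFFS = {
        "some-model-2025-03": date(2025, 3, 1),  # hypothetical cutoff
    }

    def uncontaminated(tasks: list[dict], model: str) -> list[dict]:
        """Drop tasks the model could have seen solved in its training data."""
        cutoff = TRAINING_CUTOFFS[model]
        return [
            t for t in tasks
            if datetime.fromisoformat(t["fix_merged_at"]).date() > cutoff
        ]

    # Example usage:
    tasks = [
        {"id": "repo__123", "fix_merged_at": "2024-11-02T10:00:00"},
        {"id": "repo__456", "fix_merged_at": "2025-06-20T09:30:00"},
    ]
    print([t["id"] for t in uncontaminated(tasks, "some-model-2025-03")])
    # -> ['repo__456']

That shrinks the usable task pool per model, which is exactly the trade-off: less contamination, fewer data points.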
fine_tune•1h ago
So kinda neat to see this paper!
[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...
yieldcrv•30m ago
I don't see that contradicting your assumption.
teaearlgraycold•50m ago