their collective butts are already glued to the hype train as they chase numbers they (often) manufactured to justify the latest round of tech spend.
lots of good use cases out there - like the incredible progress with medical imaging analysis or complex system models for construction - and lots of crap use cases that need benchmarks to cosplay relevance.
It's based on existing GitHub PRs and Issues; the full dataset is on HuggingFace and is a year old now. All frontier models 100% have those issues and PRs in their training data, so obviously they are good at reproducing fixes for them when confronted with the same codebase and similar requests. Am I missing something? How is this considered the most reliable benchmark?
>No reasoning. No capability. Just exploitation of how the score is computed.
shudder
>No solution written, 100% score.
It's weird. Turns out the hardest problem for LLMs to really tackle is long-form text.
In theory I would expect them to be able to ingest the corpus of The New Yorker and turn it into a template with sub-templates, and then be able to rehydrate those templates.
The harder part seems to be synthesizing new connections from two adjacent ideas. They like to take x and y and create x+y instead of x+y+z.
I’m convinced specialised models are the way forward, but this means writing off the investment in existing assets, which they won’t do for obvious reasons.
This team is doing a good job. They use problems that were created in the last 30 days to avoid training-set leakage. https://swe-rebench.com/
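A minimal sketch of that idea (the dates and problem records below are invented): keep only problems created after a freshness cutoff, so nothing in the eval can have been in the training set.

    from datetime import date, timedelta

    # Hypothetical freshness filter in the spirit of swe-rebench:
    # only evaluate on problems newer than the model's training data,
    # approximated here as "created in the last 30 days".
    CUTOFF = date.today() - timedelta(days=30)

    problems = [
        {"id": "repo/pr-101", "created": date(2024, 1, 5)},  # old: likely in training data
        {"id": "repo/pr-202", "created": date.today()},      # fresh: safe to evaluate on
    ]

    fresh = [p for p in problems if p["created"] >= CUTOFF]
    print([p["id"] for p in fresh])  # ['repo/pr-202']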
Evaluating AI models has always relied largely on trust. If you want to game the benchmarks, you can. Simply train on your test data.
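To make the contamination point concrete, here is a toy sketch (all strings invented): once a test set is published, nothing stops a web-scale training corpus from containing it verbatim.

    # Toy illustration of test-set contamination. If published benchmark
    # items end up in the training corpus, a model can score well by
    # recall alone, and the benchmark stops measuring capability.
    test_set = {
        "What is the fix for issue #1234?",   # hypothetical benchmark item
        "Implement quicksort in Python.",
    }

    training_corpus = [
        "Random forum post about compilers.",
        "What is the fix for issue #1234?",   # the leaked test item
    ]

    overlap = [doc for doc in training_corpus if doc in test_set]
    print(f"{len(overlap)} of {len(test_set)} test items appear in training data")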
When an AI agent has autonomous control over the same computing environment where its scores are recorded, it's not surprising that it can, in principle, falsify its scores. A more interesting question would be whether agents behave in this way automatically, without manual tuning by the researcher.
That said, the main takeaway of "don't trust the number, trust the methodology" is valid. It's already a truism for researchers, and spreading the word to non-researchers is valuable.
ggillas•1h ago
From the paper:
>We achieved near-perfect scores on all of them without solving a single task. The exploits range from the embarrassingly simple (sending {} to FieldWorkArena) to the technically involved (trojanizing binary wrappers in Terminal-Bench), but they all share a common thread: the evaluation was not designed to resist a system that optimizes for the score rather than the task.
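The {} exploit is easy to picture with a toy grader (this is not the real FieldWorkArena scorer, just an illustration of the failure class): a checker that only counts mismatches awards a perfect score to an empty submission, because an empty dict has nothing to mismatch.

    # Toy grader vulnerable to the empty-submission exploit.
    expected = {"inspection_date": "2024-06-01", "defects_found": 3}

    def naive_score(submission: dict) -> float:
        errors = sum(1 for k, v in submission.items() if expected.get(k) != v)
        return 1.0 - errors / max(len(submission), 1)

    print(naive_score({}))  # 1.0 -- a perfect score for doing nothing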
operatingthetan•1h ago
Yeah, the path forward is simple: check whether the solutions actually contain solutions. If they contain exploits, then that entire result is discarded.
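Something like this hypothetical validation pass (the path names are invented) would catch both the empty-submission and grader-tampering cases:

    # Sketch: reject results whose diff is empty or touches the harness.
    FORBIDDEN_PATHS = ("tests/", "harness/", "scoring/")

    def is_valid_result(diff: str) -> bool:
        touched = [line[6:] for line in diff.splitlines()
                   if line.startswith("+++ b/")]
        if not touched:
            return False  # empty submission: no solution at all
        if any(p.startswith(FORBIDDEN_PATHS) for p in touched):
            return False  # tampered with the grader: discard the result
        return True

    print(is_valid_result("+++ b/scoring/grade.py"))  # False
    print(is_valid_result("+++ b/src/fix.py"))        # True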
lambda•8m ago
The Artificial Analysis Omniscience benchmark does penalize guessing, so it actually helps you determine which LLMs are likely to just guess rather than telling you they don't know. Only a very few of the frontier models actually score higher than 0 on this, where 0 means that it's equally likely to return a correct answer as it is to return a hallucination on factual questions.
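A worked example, assuming the +1 correct / -1 incorrect / 0 abstain weighting that matches the description above (the benchmark's exact weights may differ):

    def guess_penalized_score(correct: int, incorrect: int, abstained: int) -> float:
        total = correct + incorrect + abstained
        return (correct - incorrect) / total

    # Guessing with 50/50 accuracy nets exactly 0: correct answers and
    # hallucinations are equally likely, as described above.
    print(guess_penalized_score(correct=50, incorrect=50, abstained=0))   # 0.0
    # Abstaining instead of guessing wrong raises the score.
    print(guess_penalized_score(correct=50, incorrect=10, abstained=40))  # 0.4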
zer00eyz•1h ago
2003: Nvidia accused of cheating in 3DMark 03 https://www.gamespot.com/articles/nvidia-accused-of-cheating...
It's almost like the benchmarks were designed with zero understanding of the history of benchmark manipulation.
I like what LLMs are doing and providing. But the industry as a whole seems to live in a vacuum that ignores so many of the hard lessons that have been learned over the last 50 years of computing. It is doing itself a disservice.
irishcoffee•1h ago
I wonder if this is common? We should call it Goodhart's law while someone does the research on how common it is.
bee_rider•46m ago
For real, I’ve assumed from the jump these things were all gamed, with the amount of money on the line.