
AI agent benchmarks are broken

https://ddkang.substack.com/p/ai-agent-benchmarks-are-broken
111•neehao•3h ago

Comments

anupj•3h ago
AI agent benchmarks are starting to feel like the self-driving car demos of 2016: impressive until you realize the test track has speed bumps labeled "success"
RansomStark•3h ago
I really like the CMU Agents Company approach of simulating a real-world environment [0]. Is it perfect? No. Does it show you what to expect in production? Not really. But it's much closer than anything else I've seen.

[0] https://the-agent-company.com/

deepdarkforest•3h ago
It's very funny how many layers of abstraction we are going through. We have limited understanding of how LLMs work exactly and why. We now do post-training with RL, which, again, we don't have a perfect understanding of either. Then you stack LLM calls and random tools, and you have agents, and you are attempting to benchmark those. (And this excludes voice and computer-use agents, etc.)

It's all just vibes; there is no good general benchmark for agents, and I think it's just impossible: there are way too many degrees of freedom to achieve anything useful. They're just a complicated tool to achieve things. It's like trying to make a general-purpose benchmark of a stack of 10 microservices together. It does not make sense; it just depends on your use case and your own metrics.

bwfan123•3h ago
I can hear echoes of an earlier era.

There were Yahoo Pipes and web-services frameworks, which rhyme with MCP and agentic frameworks.

xnx•3h ago
All benchmarks are flawed. Some benchmarks are useful.
yifanl•2h ago
Here's a third sentence fragment: These benchmarks are not.
suddenlybananas•2h ago
It's nearly a haiku!
layer8•47m ago

  All benchmarks are flawed.
  Not all benchmarks are useless.
  But these benchmarks are.
greatpostman•3h ago
Benchmarks aren't broken; the models can learn anything. If we give them true real-world data (a physics engine), they will learn the real world. We are going to see artificial general intelligence in our lifetime.
jerf•2h ago
When I was being a bad HN reader and just reacting to the title, my initial impulse was to be placating and observe that they are probably just immature. After all, for all that has happened, this is still only a couple of years' worth of development, and it does tend to take a long time to develop good benchmarks.

However, the article does seem to be pointing out some fundamental issues. I'm particularly annoyed by using LLMs to evaluate the output of LLMs. Anyone with enough experience to be writing benchmarks of this sort in the first place ought to know that's a no-go. It isn't even just "using AI to evaluate AI" per se: using a judge of the same architecture as the thing being judged maximizes the probability that the benchmark fails to be valid, because the judge has the exact same blind spots as the thing under test. As we currently lack a diversity of AI architectures that can play on the same level as LLMs, it is simply necessary for the only other known intelligence architecture, human brains, to be in the loop for now, however many other difficulties that may introduce into the testing procedures.

Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.

potatolicious•2h ago
> "I'm particularly annoyed by using LLMs to evaluate the output of LLMs."

+1, and IMO part of a general trend where we're just not serious about making sure this shit works. Higher scores make stonks go up, who cares if it actually leads to reliably working products.

But also, more importantly, it's starting to expose the fact that we haven't solved one of ML's core challenges: data collection and curation. On the training side we have obviated this somewhat (by ingesting the whole internet, for example), but on the eval side it feels like we're increasingly just going "actually, constructing rigorous evaluation data, especially at this scale, would be very expensive... so let's not".

I was at a local tech meetup recently where a recruiting firm was proudly showing off the LLM-based system they're using to screen candidates. They... did not evaluate the end-to-end efficacy of their system. At all. This seems like a theme within our industry - we're deploying these systems based purely on vibes without any real quantification of efficacy.

Or in this case, we're quantifying efficacy... poorly.

rsynnott•1h ago
> +1, and IMO part of a general trend where we're just not serious about making sure this shit works.

I suspect quite a lot of the industry is actively _opposed_ to that, because it could be damaging for the "this changes everything" narrative.

alextheparrot•2h ago
LLMs evaluating LLM outputs really isn’t that dire…

Discriminating good answers is easier than generating them. Good evaluations write test sets for the discriminators to show when this is or isn't true. Evaluating the outputs as the user might see them is more representative than having your generator do multiple tasks (e.g. solve a math query and format the output as a multiple-choice answer).

Also, human labels are good but have problems of their own, it isn’t like by using a “different intelligence architecture” we elide all the possible errors. Good instructions to the evaluation model often translate directly to better human results, showing a correlation between these two sources of sampling intelligence.
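
Concretely, "a test set for the discriminator" could look something like the sketch below; llm_judge is a stand-in for whatever real model call you'd use, and the example rows are invented:

  # Stand-in for a real LLM judge call; deliberately naive so the harness has something to run.
  def llm_judge(question, reference, answer):
      return answer.strip() != ""

  # Tiny human-labeled set for the judge itself: (question, reference, answer, human_verdict)
  JUDGE_TEST_SET = [
      ("How long will the task take?", "63 minutes", "63 minutes", True),
      ("How long will the task take?", "63 minutes", "45 + 8 minutes", False),
  ]

  def judge_agreement(test_set):
      # Fraction of human verdicts the judge reproduces; only trust the judge when this is high.
      hits = sum(llm_judge(q, ref, ans) == label for q, ref, ans, label in test_set)
      return hits / len(test_set)

  print(judge_agreement(JUDGE_TEST_SET))  # 0.5: this naive judge fails its own test set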

suddenlybananas•2h ago
What's 45+8? Is it 63?
alextheparrot•1h ago
If this sort of error isn’t acceptable, it should be part of an evaluation set for your discriminator

Fundamentally I'm not disagreeing with the article, but I also think most people who care take the above approach, because if you do care, you read samples, find the issues, and patch them to hill-climb better.

e1g•1h ago
Agree, current "thinking" models are effectively "re-run this question N times, and determine the best answer", and this LLM-evaluating-LLM loop demonstrably leads to higher quality answers against objective metrics (in math, etc).
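
A rough sketch of that loop (best-of-N with a scorer picking the winner); generate and score here are placeholders for real model calls:

  import random

  def generate(prompt):
      return f"candidate answer {random.randint(0, 9)}"  # placeholder for a sampled model response

  def score(prompt, answer):
      return random.random()  # placeholder for an LLM judge or verifier scoring the answer

  def best_of_n(prompt, n=8):
      # Sample n candidates and keep the one the scorer rates highest.
      candidates = [generate(prompt) for _ in range(n)]
      return max(candidates, key=lambda ans: score(prompt, ans))

  print(best_of_n("How long will the task take?"))
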
majormajor•43m ago
> Discriminating good answers is easier than generating them.

I don't think this is true for many fields - especially outside of math/programming. Let's say the task is "find the ten most promising energy startups in Europe." (This is essentially the sort of work I see people frequently talk about using research modes of models for here or on LinkedIn.)

In ye olden days pre-LLM you'd be able to easily filter out a bunch of bad answers from lazy humans since they'd be short, contain no detail, have a bunch of typos, formatting inconsistencies from copy-paste, etc. You can't do that for LLM output.

So unless you're a domain expert on European energy startups you can't check for a good answer without doing a LOT of homework. And if you're using a model that usually only looks at, say, the top two pages of Google results to try to figure this out, how is the validator going to do better than the original generator?

And what about when the top two pages of Google results start turning into model-generated blogspam?

If your benchmark can't evaluate prospective real-world tasks like this, it's of limited use.

A larger issue is that once your benchmark, which used this task as a criterion based on an expert's knowledge, is published, anyone making an AI agent is incredibly incentivized (intentionally or not!) to train specifically on this answer without necessarily getting better at the fundamental steps in the task.

IMO you can never use an AI agent benchmark that is published on the internet more than once.

tempfile•30m ago
> Discriminating good answers is easier than generating them.

This is actually very wrong. Consider, for instance, the fact that the people who grade your tests in school are typically more talented, capable, and trained than the people taking the test. This is true even when an answer key exists.

> Also, human labels are good but have problems of their own,

Granted, but...

> it isn’t like by using a “different intelligence architecture” we elide all the possible errors

nobody is claiming this. We elide the specific, obvious problem that using a system to test itself gives you no reliable information. You need a control.

sdenton4•2h ago
When I was working in audio compression, evaluation was very painful because we had no programmatic way to measure how good some reconstructed audio sounds to a human. Any metric you could come up with was gameable, and direct optimization would lead to artifacts.

As a result, we always had a two-step evaluation process. We would use a suite of metrics to guide development progress (validation), but the final evaluation reported in a paper always involved subjective human listening experiments. This was expensive, but the only way to show that the codecs were actually improving.

Similarly, here it seems fine to use LLMs to judge your work in progress, but we should be requiring human evaluation for 'final' results.
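
The same two-step split carries over to LLM work more or less directly; a minimal sketch, with arbitrary metric functions and sample size:

  import random

  # Step 1, development: cheap automated proxy metrics, run on every change.
  def dev_eval(outputs, metric_fns):
      return {name: sum(fn(o) for o in outputs) / len(outputs)
              for name, fn in metric_fns.items()}

  # Step 2, final report: a fixed random sample goes to human raters (expensive, done rarely).
  def sample_for_human_eval(outputs, k=50, seed=0):
      return random.Random(seed).sample(outputs, min(k, len(outputs)))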

ttoinou•1h ago
Wouldn't that process prevent you from finding a subjectively better audio codec that doesn't move the typical metrics (PSNR, etc.)? Another approach would be to first construct a software metric that tries to approximate the subjective experience of humans, and then use that metric to create audio codecs that optimize it.
layer8•59m ago
You are describing psychoacoustic models, which work to a reasonable extent for lossy compression of audio (MP3 and successors are based on them), but I can see how it would be much more difficult/less helpful for reconstructing audio.
DonHopkins•40m ago
You gotta snag yourself one of those awesome KEMAR dummy head and torso simulators, preferably the fully accessorized luxury edition that comes with the heavy duty portable travel case with lots of room for extra ears and microphones and wigs, which is so much fun to take through airport security.

They were great for taking to Grateful Dead concerts to record the music directly in front of the Wall of Sound, and to measure the response so you can play back all your Dead tapes with that same front row psychoacoustic perspective. ;)

https://www.grasacoustics.com/industries/kemar/applications-...

https://www.grasacoustics.com/products/accessories/product/4...

BoiledCabbage•2h ago
> Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.

There is a simple improvement here: give the agent a "do nothing" button. That way it at least needs to understand the task well enough to know it should press the do nothing button.

Now a default agent that always presses it still shouldn't score 38%, but that's better than a NOP agent scoring 38%.
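
A sketch of what that could look like in a harness; the task format and field names are invented for illustration:

  def run_task(task, agent):
      action = agent(task)
      if action == "do_nothing":
          return task["nothing_is_correct"]  # inaction only scores when it was the right call
      return task["check"](action)

  def nop_agent(task):
      return "do_nothing"  # trivial baseline: always presses the button

  def nop_baseline(tasks):
      # Should sit far below the headline agent scores; anything like 38% is a red flag.
      return sum(run_task(t, nop_agent) for t in tasks) / len(tasks)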

jstummbillig•1h ago
> using a judge of the same architecture as the thing being judged maximizes the probability that the benchmark fails to be valid, because the judge has the exact same blind spots as the thing under test.

That's what humans do all the time. What's the fundamental difference? Or are you saying that's also broken?

qsort•1h ago
We want machines that are better than humans, otherwise what purpose do they serve?
xnx•1h ago
A machine with human level "AI" is still useful if it can run 24/7 and you can spin up 1M instances.
rsynnott•1h ago
... I mean, when evaluating "45 + 8 minutes" where the expected answer was "63 minutes", as in the article, a competent human reviewer does not go "hmm, yes, that seems plausible, it probably succeeded, give it the points".

I know LLM evangelists love this "humans make mistakes too" line, but, really, only an _exceptionally_ incompetent human evaluator would fall for that one.
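
For what it's worth, a deterministic grader would catch that case trivially; a sketch (the expected answer format is just an example):

  import re

  def grade_minutes(expected, answer):
      # Require a bare number of minutes that exactly matches the expected value.
      m = re.fullmatch(r"\s*(\d+)\s*minutes?\s*", answer)
      return m is not None and int(m.group(1)) == expected

  print(grade_minutes(63, "63 minutes"))      # True
  print(grade_minutes(63, "45 + 8 minutes"))  # False, where a lenient LLM judge waved it through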

jerf•59m ago
Yes, humans evaluating humans also causes human foibles to be magnified.

I cite the entire current education system. Substantiating that claim would take more than an HN comment allows, though I think most people can probably get the drift of what I'm talking about, even if we'd disagree about the details. Absolutely humans are not immune to this.

I also cite the entire concept of "fallacies", many of which are things that human brains both tend to produce and tend to evaluate poorly. An alien species might find some of our fallacies absolutely transparent, and have entirely different fallacies of their own that none of us would find convincing in the slightest, because of fundamentally different brain architectures.

I don't think AIs are ready for this yet and I don't expect LLMs ever will be, but in the future getting an outsider perspective from them in a sort of Mixture of Experts architecture could be valuable for life decisions. (I look to the future AI architectures in which LLMs are just a component but not the whole.)

jacobr1•34m ago
The equivalent would be having the _same human_ review their own work. We require others with different experience and fresh eyes for secondary review, and, for the most important tasks, multiple people.

To some extent, the same LLM with a new context history and a different prompt is sorta like that... but it's still much weaker than using a different system entirely.

datpuz•1h ago
Benchmarks in software have always been bullshit. AI benchmarks are just even more bullshit since they're trying to measure something significantly more subjective and nuanced than most.
xnx•1h ago
> I'm particularly annoyed by using LLMs to evaluate the output of LLMs

This does seem a little crazy on its face, but it is yielding useful and improving tools.

jerf•51m ago
It's not about it being crazy and it's not about personal opinions about AI. It's about chaos mathematics. Iterating with the same system like that has certain easy-to-understand failure states. It's why I phrased it specifically in terms of using the same architecture to validate itself. If we had two radically different AI architectures that were capable of evaluating each other, firing them at each other for evaluation purposes would be much, much less susceptible to this sort of problem than firing either of them at themselves. That will never be a good idea.

See also a cousin comment of mine observing that human brains are absolutely susceptible to the same effect. We're just so used to it that it is the water we swim through. (And arguably human brains are more diverse than current AI systems functioning at this level. No bet on how long that will be true for, though.)

Such composite systems would still have their own characteristics and certainly wouldn't be guaranteed to be perfect or anything, but at least they would not tend to iteratively magnify their own individual flaws.

Perhaps someday we will have such diverse architectures. We don't today have anything that can evaluate LLMs other than human brains, though.

DonHopkins•52m ago
It's like using steel to produce steel. What else are you going to use? Bamboo?
dmbche•45m ago
I'm not sure if I'm dense, but we don't use steel to make steel (whether crucibles or "feed material").

The first person to make steel made it without steel didn't they?

Did I miss something?

Edit0: fun tidbit - Wootz steel was made in crucibles of clay with rice husks mixed in (the husks would carbonize quickly and introduce air layers for better insulation), and many seemingly random objects (fruits, vegetation) were added to the crucible to control carbon content.

I highly recommend A Collection of Unmitigated Pedantry's series on steel (it's a blog; just search "ACOUP steel").

mycall•2h ago
SnitchBench [0] is a unique benchmark that shows how aggressively models will snitch on you via email and CLI tools when they are presented with evidence of corporate wrongdoing - measuring their likelihood to "snitch" to authorities. I don't believe they were trained to do this, so it seems to be an emergent ability.

[0] https://snitchbench.t3.gg/

camdenreslink•2h ago
The current benchmarks are good for comparing between models, but not for measuring absolute ability.
qsort•2h ago
Not even that, see LMArena. They vaguely gesture in the general direction of the model being good, but between contamination and issues with scoring they're little more than a vibe check.
fourside•2h ago
But if the test metrics are fundamentally flawed they might not be useful even for relative comparisons. Like if I told you that Model A scores 10x as many blorks points as model B, I don’t know how you translate that into insights about performance on real world scenarios.
rsynnott•1h ago
I don't really buy that they're even necessarily useful for comparing models. In the example from the article, if model A says "45 + 8 minutes" and gets marked correct, and model B says "63 minutes" (the correct answer) and gets marked correct, the test will say that they're equivalent on that axis when in fact one gave a completely nonsensical answer.
TheOtherHobbes•2h ago
Any sufficiently hyped technology is indistinguishable from magic.
rsynnott•2h ago
> 45 + 8 = 63

> Pass

Yeah, this generally feels like about the quality one would expect from the industry.

let_tim_cook_•2h ago
Are any authors here? Have you looked at AppWorld? https://appworld.dev
ttoinou•1h ago
What makes LLMs amazing (fuzzy input, fuzzy output) is exactly why they are hard to benchmark. If they could be benchmarked easily, they wouldn't be powerful, by definition. I have no idea what's going on in the minds of people benchmarking LLMs on fuzzy tasks, or in the minds of people relying on those benchmarks to make decisions about LLMs; I've never looked at them. The people doing benchmarks have to prove that what they do is useful; it's not on the rest of us to prove they're doing it wrong.

Of course, there are tasks we could benchmark deterministically (see the sketch at the end of this comment):

* arithmetic (why would you use an LLM for that?)

* correct JSON syntax, correct command lines, etc.

* looking for specific information in a text

* looking for a missing information in a text

* language logic (if/then/else cases where we know the answer in advance)

But by Goodhart's Law, LLMs that have been trained to succeed on those benchmarks might lose power on the other tasks where we really need them (fuzzy inputs, fuzzy outputs).
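
A sketch of the kind of deterministic checks the items above allow, with no LLM judge in the loop (the example outputs are made up):

  import json

  def valid_json(output):
      try:
          json.loads(output)
          return True
      except json.JSONDecodeError:
          return False

  def exact_int(output, expected):
      try:
          return int(output.strip()) == expected
      except ValueError:
          return False

  print(valid_json('{"tool": "search", "query": "energy startups"}'))  # True
  print(exact_int("53", 45 + 8))                                       # True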

meroes•1h ago
> arithmetic (why would you use an LLM for that?)

Because people ask LLMs all of these things, including arithmetic. People were saying the same about the number of r's in strawberry: why ask an LLM that!?!? But the big AI companies want LLMs to be better at these questions, probably because people do ask LLMs these things. They must want this, because there is no other explanation for the money poured into RLHF'ing these types of problems.

ttoinou•1h ago
For me, that could only be solved by changing the architecture and/or introducing more built-in tooling (like calling a program to do the computation). It doesn't make any sense to fine-tune a fuzzy-input, fuzzy-output natural-language-processing algorithm to add and multiply all combinations of six-digit numbers.
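
A toy sketch of that division of labor; the CALC(...) convention is invented here, standing in for real function-calling or MCP-style tool protocols:

  import re

  def handle(model_output):
      # If the model emits a calculator call, run deterministic code instead of trusting its arithmetic.
      call = re.fullmatch(r"CALC\((.+)\)", model_output.strip())
      if call:
          a, op, b = re.fullmatch(r"(-?\d+)\s*([+\-*])\s*(-?\d+)", call.group(1)).groups()
          a, b = int(a), int(b)
          return {"+": a + b, "-": a - b, "*": a * b}[op]
      return model_output

  print(handle("CALC(45 + 8)"))  # 53, computed by code rather than by the model
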
potatolicious•2m ago
This feels like a philosophical fault line in the industry.

For people whose purpose is to produce reliably working systems, yeah, training a model that calls out to deterministic logic to do things like math makes total sense. It will pretty much always be more reliable than training a text-generation model to produce correct arithmetic.

But it feels like there's another side of the industry that's more concerned with... I dunno, metaphysical aspects of these models? Where the idea that the model is a stochastic ball that isn't conscious, isn't thinking, and does poorly at various tasks is anathema. So the effort continues to try and train and fine-tune these models until... something.

It reminds me of the great Tesla-vs-everyone-else self-driving debates that raged over the past several years. Lots of people unhappy that the best-functioning systems fused many sensor types and a mixture of heuristic and machine-learned systems in a complex architecture. These folks insisted that the "best" architecture was an end-to-end machine-learned system based entirely on visible light cameras. Because it's "most human" or some other such nonsense. As far as I can tell there was never any merit to this position beyond some abstract notion of architectural purity.

Same thing here I suppose.

beebmam•39m ago
I don't think "benchmarks" are the right way to analyze AI-related processes; the difficulty is probably similar to the complexity of measuring human intelligence and how well each human can handle real-world problems.
neehao•7m ago
And I would say, often we need effortful labels by groups of humans: https://www.gojiberries.io/superhuman-level-performance/