Hey HN, Anders and Tom from Magnitude (YC S25) here. On our last Show HN post about our open-source browser agent, someone left a comment - "there are multiple similar projects like this posted here daily, and this one likely isn't the best". So we asked ourselves: are they right? We decided to run Magnitude on WebVoyager (a well-known benchmark for browser agents) to find out. We scored 94%, beating all other browser agents and making Magnitude state-of-the-art.
The original WebVoyager benchmark was meant to demonstrate a new technique for interacting with the browser by annotating the DOM. Since then, vision models have come a long way in accuracy and visual understanding. Our pure-vision approach, using our framework with today's models, surpasses the hybrid DOM strategies used by the original WebVoyager paper and by other agents like browser-use.
So why does pure-vision beat hybrid DOM approaches?
- Generalizes far better - it handles canvas elements, iframes, drag-and-drop, precise text selection, and many other scenarios elegantly, where hybrid DOM approaches struggle and need case-by-case hacks to make them work
- Easier for the LLM - we think LLM performance is roughly proportional to prompt clarity. A crowded screenshot covered in colored boxes plus a long list of element labels, with the model asked to pick one, versus a clean screenshot and the question "where do you want to click?" - the latter seems far easier (a rough sketch of the two styles follows below)
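To make the contrast concrete, here is a rough sketch of the two prompt styles (simplified and illustrative - not the exact prompts Magnitude or the DOM-based agents use):

    // Hybrid DOM / set-of-marks style: a screenshot overlaid with colored boxes,
    // plus a long element list the model must pick from.
    const hybridDomPrompt = {
      image: "screenshot_with_numbered_colored_boxes.png",
      text: [
        "Interactive elements:",
        "[1] <a> 'Sign in'",
        "[2] <button> 'Search'",
        "[3] <input> placeholder='Where are you going?'",
        "... (often dozens more entries)",
        "Task: search for hotels in Paris. Which element do you act on?",
      ].join("\n"),
    };

    // Pure-vision style: a clean screenshot and a direct question.
    const pureVisionPrompt = {
      image: "clean_screenshot.png",
      text: "Task: search for hotels in Paris. Where do you click? Reply with (x, y) pixel coordinates.",
    };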
We believe another reason for our success is that we can still hook into the browser as needed: we can use browser-native actions like tab switching, look at network traffic to know when a page is ready, or use the DOM for other purposes like data extraction. Computer-use agents like Operator or Claude Computer Use, on the other hand, are limited to generic mouse and keyboard controls.
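As a concrete sketch of how vision-driven actions can be combined with browser hooks (simplified Playwright code, not the Magnitude API; locateTarget is a hypothetical stand-in for whatever grounded vision model resolves an instruction to pixel coordinates):

    import { chromium } from "playwright";

    // Hypothetical stand-in for a grounded vision model call: screenshot +
    // instruction in, pixel coordinates out. Not a real API - wire up a model here.
    async function locateTarget(
      screenshot: Buffer,
      instruction: string
    ): Promise<{ x: number; y: number }> {
      return { x: 640, y: 360 }; // placeholder coordinates
    }

    async function run() {
      const browser = await chromium.launch();
      const context = await browser.newContext();
      const page = await context.newPage();
      await page.goto("https://example.com");

      // Pure-vision interaction: clean screenshot in, pixel coordinates out.
      const shot = await page.screenshot();
      const { x, y } = await locateTarget(shot, "Open the pricing page");
      await page.mouse.click(x, y);

      // Browser-native hooks a screenshot-only agent can't use:
      await page.waitForLoadState("networkidle");               // know when the page has settled
      const openTabs = context.pages();                         // handle tabs the click may have opened
      const title = await page.evaluate(() => document.title);  // DOM access for data extraction

      console.log(title, openTabs.length);
      await browser.close();
    }

    run();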
It's worth mentioning that WebVoyager is a strange and flawed benchmark. It contains many tasks that depend on the current date (and need their dates updated), tasks that depend on the time of day, and some tasks that are impossible or too ambiguous to evaluate properly. In the repo we detail exactly which patches we made to the original WebVoyager benchmark so that each task is at least theoretically possible.
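For example, a task that hard-codes a date that has since passed will always fail; one way to patch it (illustrative only, not necessarily the exact format of the patches in the repo) is to template the date relative to when the benchmark runs:

    // Fill a {{date}} placeholder with a date guaranteed to be in the future
    // at the time the benchmark actually runs.
    function patchTaskDate(template: string, daysAhead = 30): string {
      const target = new Date();
      target.setDate(target.getDate() + daysAhead);
      const formatted = target.toISOString().slice(0, 10); // YYYY-MM-DD
      return template.replace("{{date}}", formatted);
    }

    const task = patchTaskDate(
      "Find a hotel in Paris with a check-in date of {{date}} for 2 adults."
    );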
Why does this all matter? People are trying to adopt agents for real use cases, but they often fail to make it to production. We want to enable developers to build with production-ready browser agents - which is why it's important to get the fundamental interaction paradigm right. We think this benchmark is a step in the right direction, showing that pure-vision has best-in-class performance in the browser domain. Curious to hear what others think about this, would love to get your feedback!
You can view the entire run here: https://magnitude-webvoyager.vercel.app/