We are Winston, Edward, and James, and we built Meka Agent, an open-source framework that lets vision-based LLMs execute tasks directly on a computer, just like a person would.
Backstory:
Over the last few months, we've been building computer-use agents that various teams have used for QA testing, but we realized the underlying browsing frameworks aren't quite good enough yet. So we set out to build our own browsing agent.
We achieved 72.7% on WebArena compared to the previous state of the art set by OpenAI's new ChatGPT agent at 65.4%. You can read more about it here: https://github.com/trymeka/webarena_evals.
Today, we are open-sourcing Meka, our state-of-the-art agent, to let anyone build their own powerful, vision-based agents from scratch. We provide the groundwork for the hard parts, so you don't have to:
* True vision-based control: Meka doesn't just read HTML. It looks at the screen, identifies interactive elements, and decides where to click, type, and scroll.
* Full computer access: It's not sandboxed in a browser. Meka operates with OS-level controls, letting it handle system dialogs, file uploads, and other interactions that browser-only automation tools can't.
* Extensible by design: We've made it easy to plug in your own LLMs and computer providers.
* State-of-the-art performance: 72.7% on WebArena.
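To make the extensibility point concrete, here is a minimal sketch of what pluggable LLM and computer providers could look like. This is an illustration only, not Meka's actual API: the `LLMProvider`, `ComputerProvider`, and `Agent` names and their method signatures are hypothetical.

```python
from dataclasses import dataclass
from typing import Protocol


class LLMProvider(Protocol):
    """Hypothetical interface for a pluggable vision model."""
    def complete(self, screenshot: bytes, instruction: str) -> str: ...


class ComputerProvider(Protocol):
    """Hypothetical interface for OS-level control of a machine."""
    def screenshot(self) -> bytes: ...
    def click(self, x: int, y: int) -> None: ...
    def type_text(self, text: str) -> None: ...


@dataclass
class Agent:
    llm: LLMProvider
    computer: ComputerProvider

    def step(self, instruction: str) -> str:
        # One perceive-decide step: capture the screen, then ask the
        # model what to do next given the current instruction.
        return self.llm.complete(self.computer.screenshot(), instruction)
```

Because both dependencies are plain structural interfaces, swapping in a different model vendor or a different machine backend means implementing two small classes rather than forking the agent loop.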
Our goal is to enable developers to create repeatable, robust tasks on any computer just by prompting an agent, without worrying about the implementation details.
We’d love to get your feedback on how this tool could fit into your automation workflows. Try it out and let us know what you think.
You can find the repo on GitHub and get started quickly with our hosted platform, https://app.withmeka.com/.
Thanks, Winston, Edward, and James
cahoodle•13h ago
I did YC back in S16 and was just reminiscing with a friend about how startups felt so different back then.
phsource•12h ago
Out of curiosity, what do you think contributed to this working better than even OpenAI agent or some of the other tools out there?
I'm not that familiar with how OpenAI and other agents like Browser Use currently work, but is this, in your opinion, the most important factor?
> An infrastructure provider that exposes OS-level controls, not just a browser layer with Playwright screenshots. This is important for performance as a number of common web elements are rendered at the system level, invisible to the browser page
tcwd•12h ago
IMO, the combination of an "evaluator model" at the end that verifies whether the intent of the task was actually met, and multiple models checking each other's work at every step, was what helped most. There are lots of human-organization analogies here, like "trust but verify" and pair programming. Memory management was also key.