Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build

https://proofshot.argil.io/
32•jberthom•2h ago
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.

So I built a CLI that lets the agent open a browser, interact with the page, record what happens, and collect any errors. Then it bundles everything — video, screenshots, logs — into a self-contained HTML file I can review in seconds.

proofshot start --run "npm run dev" --port 3000
# agent navigates, clicks, takes screenshots
proofshot stop

It works with whatever agent you use (Claude Code, Cursor, Codex, etc.) — it’s just shell commands. It's packaged as a skill, so your AI coding agent knows exactly how it works. It's built on agent-browser from Vercel Labs, which is far better and faster than Playwright MCP.

It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.

Open source and completely free.

https://github.com/AmElmo/proofshot

Comments

Imustaskforhelp•1h ago
Great to see this but exe.dev (not sponsored but they are pretty cool and I use them quite often, if they wish to sponsor me that would be awesome haha :-]) actually has this functionality natively built in.

But it's great to see some other open source alternatives in this space as well.

Horos•1h ago
What about MCP + CDP?

My Claude drives its own Brave browser autonomously, even for UI work.

VadimPR•1h ago
Looks nice! Does it work for desktop applications as well, or is this only web dev?
zkmon•1h ago
Taking screenshots and recording is not quite the same as "seeing". A camera doesn't see things. If the tool can identify issues and improvements to make by analyzing the screenshot, that, I think, is useful.
jofzar•1h ago
> It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.

From the OP, I don't think it's meant to do what you're describing.

falcor84•1h ago
I read it in the same vein as saying that a sub's sonar enables "seeing" its surroundings. The focus is on having a spatial sensor rather than on the qualia of how that sensation is afterwards processed/felt.
philipp-gayret•58m ago
> If the tool can identify issues and improvements (...)

Tools like Claude can, and do. This is just a utility to make the process easier.

theshrike79•1h ago
What does this do that playwright-cli doesn't?

https://github.com/microsoft/playwright-cli
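
For reference, Playwright's bundled CLI can already do one-off captures; a minimal sketch (assumes Playwright and its browsers are installed, and a dev server on port 3000):

```shell
# One-off full-page screenshot of a local dev server
npx playwright screenshot --full-page http://localhost:3000 homepage.png

# Interactive session that records your actions and generates test code
npx playwright codegen http://localhost:3000
```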

jofzar•1h ago
These aren't really comparable; OP's tool records, captures, and reproduces sessions with steps.
mohsen1•1h ago
Playwright can do all of that too. I'm confused why this is necessary.

If coding agents are given Playwright access, they can actually do it better, because through the Chrome DevTools Protocol they can interact with the browser and experiment with things without having to wait for all of this to complete before making moves. For instance, I've seen Claude Code capture console messages from a running Chrome instance and use them to debug things...

theshrike79•25m ago
I've also had Claude run javascript code on a page using playwright-cli to figure out why a button wasn't working as it should.
onion2k•46m ago
That's exactly what Playwright does, but also something you don't really need in order to debug a problem.
lastdong•1h ago
This is basically what Antigravity (Google’s Windsurf) ships with. Having more options to add this functionality to OpenCode / Claude Code for local models is really awesome. MIT license too!
can16358p•1h ago
How would this play with mobile apps?

I'd love to see an agent doing work, then launching app on iOS sim or Android emu to visually "use" the app to inspect whether things work as expected or not.
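
On the iOS side, the simulator already exposes the primitives an agent would need via simctl; a sketch, assuming macOS with Xcode command-line tools and a booted simulator:

```shell
# Screenshot the currently booted iOS Simulator
xcrun simctl io booted screenshot app.png

# Record a video of an interaction (stop with Ctrl+C)
xcrun simctl io booted recordVideo demo.mov
```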

jillesvangurp•45m ago
Something like OpenAI's agent mode, where it drives a mouse and keyboard but against an emulator, should be doable. That agent mode is, BTW, super useful for doing QA, executing elaborate test plans, and reporting issues and UX problems. I've been meaning to do more with that after an impressive report I got with minimal prompting when I tried it a few months ago.

That's very different from scripting together what is effectively a whitebox test against document ids which is what people do with things like playwright. Replacing manual QA like that could be valuable.

m00dy•43m ago
try deepwalker, https://deepwalker.xyz
jofzar•1h ago
I'm going the opposite direction of everyone else here.

This is sick, OP. Based on what's in the docs, it looks really useful when you need to quickly fix something and validate that nothing in the UI/workflow has changed except what you asked for.

Also looks useful for PRs: attach a before and after of what changed.

jillesvangurp•50m ago
Exactly. We need more tools like this. With the right model, picking apart images and videos isn't that hard. Adding vision to your testing removes a lot of guesswork from AI coding when it comes to fixing layout bugs.

A few days ago I had an interaction with Codex that roughly went as follows: "this chat window is scrolling off screen, fix", "I've fixed it", "No you didn't", "You are totally right, I'm fixing it now", "still broken", "please use a headless browser to look at the thing and then fix it", "....", "I see the problem now, I'm implementing a fix and verifying the fix with the browser", etc. This took a few tries, but it eventually nailed it. And it added the e2e test, of course.

I usually prompt Codex with screenshots for layout issues as well. One of the nice things about their desktop app relative to the CLI is that pasting screenshots works.

A lot of our QA practices are still rooted in us checking stuff manually. We need to get ourselves out of the loop as much as possible. Tools like this make that easier.

I think I recall Mozilla pioneering regression testing of their layout engine using screenshots about a quarter century ago. They had a lot of stuff landing in their browser that could trigger all sorts of weird regressions. If screenshots changed without good reason, that was a bug. Very simple mechanism and very effective. We can do better these days.

z3t4•1h ago
I'm currently experimenting with running a web app "headless" in Node.js by implementing some of the DOM JS functions myself, then writing mocks for keyboard input, etc., and having the coding agent run the headless client, which also starts the tests. In my experience the coding agents are very bad at detecting UX issues; they can, however, write the tests for me if I explain what's wrong. So I'm the eyes and it's my taste; the agent writes the tests and the code.
onion2k•47m ago
> I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.

I give the agent either a simple browser or Playwright access to proper browsers to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.

m00dy•44m ago
Gemini on Antigravity is already doing this.
boomskats•42m ago
I find the official Chrome DevTools MCP excellent for this. Lighter than Playwright, the loop is shorter, and easy to jam into Electron too.
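
Wiring that up in Claude Code is a one-liner; a sketch assuming Node.js is installed (`chrome-devtools-mcp` is Google's published package):

```shell
# Register the Chrome DevTools MCP server with Claude Code
claude mcp add chrome-devtools -- npx chrome-devtools-mcp@latest
```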
alkonaut•41m ago
This would be _extremely_ valuable for desktop dev, where you don't have a DOM or an "accessibility" layer to interrogate. Think e.g. a drawing application. You want to test that after the user starts the "draw circle" command and clicks two points, there is actually a circle on the screen. No matter how many abstractions you make over your domain model and rendering, you can't actually test that "the user sees a circle". You can verify your drawing contains a circle object. You can verify your renderer was told to draw a circle. But fifty things can go wrong before the user actually agrees he saw a circle (the color was set to transparent, the layer was hidden, the transform was incorrect, the renderer didn't swap buffers, ...).
bartwaardenburg•26m ago
This is a good point. For anything without a DOM, screenshot diffing is basically your only option. Mozilla did this for Gecko layout regression testing 20+ years ago and it was remarkably effective. The interesting part now is that you can feed those screenshots to a vision model and get semantic analysis instead of just pixel diffing.
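
The pixel-diffing half of that is a one-liner with ImageMagick; a sketch, assuming `compare` is installed and `before.png`/`after.png` are placeholder names for the two screenshots:

```shell
# Prints the number of differing pixels (AE = absolute error count) to stderr
# and writes a visual diff image; exit status is non-zero when the images differ
compare -metric AE before.png after.png diff.png
```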
EruditeCoder108•25m ago
I see
dbdoskey•5m ago
This is really cool. Have you thought of maybe accessing the screen through accessibility APIs? For Android devices I have a skill I created that accesses the screen's XML dump as part of feature development, and it seems to work much better than screenshots/videos. Is this scalable to other OSes?
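
The Android dump described above is standard tooling; a sketch assuming adb and a connected device or emulator:

```shell
# Dump the current screen's view hierarchy (what accessibility services see) as XML
adb shell uiautomator dump /sdcard/window_dump.xml
adb pull /sdcard/window_dump.xml .
```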

Box of Secrets: Discreetly modding an apartment intercom to work with Apple Home

https://www.jackhogan.me/blog/box-of-secrets/
136•jackhogan11•21h ago•42 comments

Log File Viewer for the Terminal

https://lnav.org/
127•wiradikusuma•4h ago•18 comments

Opera: Rewind The Web to 1996 (Opera at 30)

https://www.web-rewind.com
28•thushanfernando•2h ago•26 comments

BIO – The Bao I/O Co-Processor

https://www.crowdsupply.com/baochip/dabao/updates/bio-the-bao-i-o-co-processor
41•hasheddan•2d ago•8 comments

Autoresearch on an old research idea

https://ykumar.me/blog/eclip-autoresearch/
362•ykumards•15h ago•79 comments

iPhone 17 Pro Demonstrated Running a 400B LLM

https://twitter.com/anemll/status/2035901335984611412
617•anemll•19h ago•277 comments

No-build, no-NPM, SSR-first JavaScript framework if you hate React, love HTML

https://qitejs.qount25.dev
17•usrbinenv•4d ago•2 comments

FCC updates covered list to include foreign-made consumer routers

https://www.fcc.gov/document/fcc-updates-covered-list-include-foreign-made-consumer-routers
320•moonka•12h ago•218 comments

Ripgrep is faster than {grep, ag, Git grep, ucg, pt, sift}

https://burntsushi.net/ripgrep/
19•jxmorris12•3h ago•10 comments

Show HN: Cq – Stack Overflow for AI coding agents

https://blog.mozilla.ai/cq-stack-overflow-for-agents/
145•peteski22•18h ago•55 comments

Gerd Faltings, who proved the Mordell conjecture, wins the Abel Prize

https://www.scientificamerican.com/article/gerd-faltings-mathematician-who-proved-the-mordell-con...
38•digital55•4d ago•4 comments

A 6502 disassembler with a TUI: A modern take on Regenerator

https://github.com/ricardoquesada/regenerator2000
36•wslh•3d ago•3 comments

Claude Code Cheat Sheet

https://cc.storyfox.cz
407•phasE89•12h ago•120 comments

Dune3d: A parametric 3D CAD application

https://github.com/dune3d/dune3d
164•luu•2d ago•59 comments

Abusing Customizable Selects

https://css-tricks.com/abusing-customizable-selects/
119•speckx•5d ago•5 comments

The Resolv hack: How one compromised key printed $23M

https://www.chainalysis.com/blog/lessons-from-the-resolv-hack/
94•timbowhite•12h ago•127 comments

Microservices and the First Law of Distributed Objects (2014)

https://martinfowler.com/articles/distributed-objects-microservices.html
19•pjmlp•3d ago•15 comments

Finding all regex matches has always been O(n²)

https://iev.ee/blog/the-quadratic-problem-nobody-fixed/
210•lalitmaganti•4d ago•56 comments

Pompeii's battle scars linked to an ancient 'machine gun'

https://phys.org/news/2026-03-pompeii-scars-linked-ancient-machine.html
77•pseudolus•3d ago•19 comments

IRIX 3dfx Voodoo driver and glide2x IRIX port

https://sdz-mods.com/index.php/2026/03/23/irix-3dfx-voodoo-driver-glide2x-irix-port/
70•zdw•11h ago•8 comments

An incoherent Rust

https://www.boxyuwu.blog/posts/an-incoherent-rust/
192•emschwartz•19h ago•98 comments

Trivy under attack again: Widespread GitHub Actions tag compromise secrets

https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
206•jicea•2d ago•70 comments

Ju Ci: The Art of Repairing Porcelain

https://thesublimeblog.org/2025/03/13/ju-ci-the-ancient-art-of-repairing-porcelain/
90•lawrenceyan•2d ago•10 comments

I built an AI receptionist for a mechanic shop

https://www.itsthatlady.dev/blog/building-an-ai-receptionist-for-my-brother/
283•mooreds•23h ago•282 comments

Epoch confirms GPT5.4 Pro solved a frontier math open problem

https://epoch.ai/frontiermath/open-problems/ramsey-hypergraphs
340•in-silico•8h ago•392 comments

A retro terminal music player inspired by Winamp

https://github.com/bjarneo/cliamp
94•mkagenius•13h ago•27 comments

Sunsetting the Techempower Framework Benchmarks

https://github.com/TechEmpower/FrameworkBenchmarks/issues/10932
40•nbrady•8h ago•8 comments

Local Stack Archived their GitHub repo and requires an account to run

https://github.com/localstack/localstack
196•ecshafer•15h ago•113 comments

BIO: The Bao I/O Coprocessor

https://www.bunniestudios.com/blog/2026/bio-the-bao-i-o-coprocessor/
154•zdw•3d ago•36 comments