
Show HN: Record manual QA flows, get E2E test code that fits your repo

https://quretests.com
12•mihau•1h ago
TLDR: Desktop app for E2E web test generation, built at JetBrains (closed beta). Record the flow in a built-in browser - the agent matches it with your existing codebase, then writes a test that passes, not a draft to debug.

Devs use AI to ship more code. That code still needs testing. If your team writes E2E tests by hand, you have a problem - same QA capacity, way more surface to cover.

AI agents can write E2E test code, but you're stuck describing flows in text - the agent clicks around via Playwright MCP, takes wrong turns, you re-prompt, retry. 30 minutes for a flow you could click through in 30 seconds.

Qure works differently. You record the scenario in Qure's built-in browser by just using your product. The AI turns that recording into code. No prompt engineering, no MCP setup, no explaining your repo in chat - point it at your project and go. Beyond recording, you can also refactor tests, update them, or write new ones from a description.

What keeps the AI output grounded:

- We match the recording against your codebase - find your page objects, helpers, and constants, and feed them to the agent instead of hoping it figures out your repo

- When the agent runs the test, it reads the real failure output and fixes the code with the actual error and app context
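
To make the second point concrete: a real test failure carries structured signal an agent can act on. Here's a toy sketch (not Qure's actual internals) that pulls the failing locator out of a typical-looking Playwright timeout message - the error text and function name are illustrative assumptions:

```typescript
// Toy example: a Playwright-style timeout failure names the locator the
// runner was stuck waiting on, which is exactly what a repair loop needs.
const sampleFailure = `TimeoutError: locator.click: Timeout 30000ms exceeded.
Call log:
  - waiting for locator('#checkout-btn')`;

function extractFailingLocator(stderr: string): string | null {
  // The call log line names the locator; capture it for the fix attempt.
  const match = stderr.match(/waiting for locator\('([^']+)'\)/);
  return match ? match[1] : null;
}

console.log(extractFailingLocator(sampleFailure)); // "#checkout-btn"
```

A real agent would feed that locator plus the surrounding app context back into the next edit, instead of re-prompting from scratch.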

This is a closed beta of an experimental product. Web only, works best with Playwright. If your project has a few dozen tests - Claude Code will honestly get you there. Qure makes a difference on larger codebases with existing test infrastructure.

5-min demo: https://www.youtube.com/watch?v=4CZw4bSSDCE

Try the beta: https://quretests.com

Happy to answer any questions about the approach, product, or where it breaks - I'm the dev on the Qure team. Egor (@250xp), who leads the project, is in the thread too.

Comments

marksully•1h ago
interesting approach with the recording, wondering how well it does in comparison with Selenium recorder / Playwright test generator
250xp•57m ago
Basically it's about the quality of the code output and the flexibility of scenarios. Recorders without any AI layer on top:

- Produce brittle selectors

- Dump the entire scenario into a single file

- Don't allow any logic besides replaying your steps

Qure writes clean code with properly reusable abstractions like Page Objects (or whatever you have in your existing automation repository). For the team that means less flakiness and less time spent on maintenance
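
For anyone unfamiliar with the contrast: a minimal sketch of the Page Object pattern, with hypothetical names and selectors (not Qure's actual output). A tiny stand-in interface replaces Playwright's `Page` so the example runs without a browser:

```typescript
// Stand-in for Playwright's Page so the sketch runs without a browser.
interface PageLike {
  click(selector: string): void;
  fill(selector: string, value: string): void;
}

// Raw recorder output inlines brittle, position-based selectors at every step:
//   page.click("#root > div:nth-child(3) > form > button");
// A page object centralizes stable, semantic selectors instead:
class LoginPage {
  // One place to update when the UI changes.
  private readonly email = '[data-testid="login-email"]';
  private readonly password = '[data-testid="login-password"]';
  private readonly submit = '[data-testid="login-submit"]';

  constructor(private readonly page: PageLike) {}

  logIn(email: string, password: string): void {
    this.page.fill(this.email, email);
    this.page.fill(this.password, password);
    this.page.click(this.submit);
  }
}

// Demo: record the calls the page object makes against a fake page.
const actions: string[] = [];
const fakePage: PageLike = {
  click: (s) => actions.push(`click ${s}`),
  fill: (s, v) => actions.push(`fill ${s} <- ${v}`),
};
new LoginPage(fakePage).logIn("user@example.test", "hunter2");
console.log(actions.join("\n"));
```

Tests then call `logIn()` instead of repeating three selectors per scenario, which is where the maintenance savings come from.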

As for AI-powered tools, they require much more steering from the human: describing the test in text, forcing the agent to follow specific guidelines (e.g. for selectors), asking the agent to refactor things after the main scenario is working. The Qure agent handles that out of the box