I hated writing tests, my end-to-ends never worked, and dealing with browsers was brutal. AI made this 10x worse because it'll change random stuff across your app that passes unit tests but breaks something obvious. So I wanted, and then built, a system that handles all of it for you and updates you with results after each commit or PR (configurable). LMK what you all think!
The Long:
I was writing a ton of code, particularly with AI for a previous startup idea, and kept feeling like things were moving really quickly until I actually tried to use pieces of it. That isn't uncommon in general, but I think the new problem with AI is that stuff you didn't think you'd touched would start breaking too, because I wasn't watching or making every single edit myself.
Granted, AI has gotten much better since then, but my view is that everyone (even AI) needs a second set of eyes on their work and someone to send back the results. That's what debugg.ai attempts to do (cough cough, to be seen whether you think we pull it off). Currently that feedback takes the form of PR review comments, email updates, and our app, but our near-term plan is to offer the ability to pull it right back into whatever AI you use, so the AI can get its own feedback and iterate until it's done.
I'm tired of opening an app that Claude Code said was 'working perfectly' only to find the main page won't even load or has some React hydration problem. The cool thing here is that even though it may not be best (yet) for super complex, detailed test flows, most people wouldn't write an E2E just to make sure the main page loads, because that's a 'manual test' thing. Since this removes all of the browser handling, building, and CI/CD setup, you can have a lot of really simple, quick tests that rein in your AI and also just give you peace of mind as you're making changes.
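For a sense of scale, the kind of test I mean is as trivial as this Playwright-style smoke check (just an illustration of the category, not our internal code):

```typescript
import { test, expect } from '@playwright/test';

// The whole point: catch the "page won't even load" / hydration-crash case.
test('main page actually loads', async ({ page }) => {
  await page.goto('/');                               // baseURL from playwright config
  await expect(page.locator('body')).not.toBeEmpty(); // something real rendered
  await expect(page).toHaveTitle(/.+/);               // document didn't blank out
});
```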
On the tech side I def had some fun:
Built a use-case-specific crawler agent that sequences and learns your application from the top down. Think of it like a sitemap, but actually useful – it knows "login button on homepage → takes you to /login → which has a form → which posts to /api/auth" and ties in the source files from /auth/components/... etc.
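In case it helps to picture it, here's a rough sketch of the kind of graph that crawl produces; the names and shapes here are hypothetical, not our actual schema:

```typescript
// Hypothetical node types for the crawl graph (illustrative only).
interface InteractiveElement {
  selector: string;        // e.g. "a#login"
  action: 'click' | 'submit' | 'navigate';
  leadsTo?: string;        // destination route, e.g. "/login"
  apiCalls?: string[];     // e.g. ["POST /api/auth"]
}

interface PageNode {
  url: string;
  elements: InteractiveElement[];
  sourceFiles: string[];   // code that renders this page
}

// The homepage → /login → /api/auth chain from above, as data:
const pages: PageNode[] = [
  {
    url: '/',
    elements: [{ selector: 'a#login', action: 'navigate', leadsTo: '/login' }],
    sourceFiles: ['src/pages/Home.tsx'],
  },
  {
    url: '/login',
    elements: [{ selector: 'form#login', action: 'submit', apiCalls: ['POST /api/auth'] }],
    sourceFiles: ['src/auth/components/LoginForm.tsx'],
  },
];
```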
That graph improved our ability to track GitHub code changes, associate them with the tests they could impact, and create new tests for stuff that hasn't been seen before.
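Continuing the hypothetical sketch above, the change-to-test mapping is conceptually just a reverse lookup from a PR's changed files into that graph:

```typescript
// Given a PR diff, find the pages (and therefore flows/tests) it might affect.
function impactedPages(changedFiles: string[], graph: PageNode[]): PageNode[] {
  return graph.filter((page) =>
    page.sourceFiles.some((file) => changedFiles.includes(file))
  );
}

// A PR touching LoginForm.tsx flags the /login flow for re-testing:
const affected = impactedPages(['src/auth/components/LoginForm.tsx'], pages);
console.log(affected.map((p) => p.url)); // ["/login"]
```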
Ultimately my goal is to build myself out of a job a bit, so I can just prompt Claude to make changes, then have a hook that sends debugg's test results (failures) back to Claude so it keeps making changes until it actually works :).
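If you're curious what that loop might look like, here's a hypothetical glue script; the endpoint and response shape are made up for illustration, but the pattern (non-zero exit plus a failure summary that an agent hook feeds back as context) is the whole trick:

```typescript
// Hypothetical: fetch the latest E2E run and surface failures for an agent hook.
// The URL and JSON shape below are invented for illustration.
const API = 'https://api.debugg.ai/v1/runs/latest';

interface TestRun {
  status: 'passed' | 'failed';
  failures: { test: string; error: string }[];
}

async function main(): Promise<void> {
  const res = await fetch(API, {
    headers: { Authorization: `Bearer ${process.env.DEBUGG_API_KEY}` },
  });
  const run: TestRun = await res.json();

  if (run.status === 'passed') {
    console.log('All E2E tests green.');
    return;
  }

  // Print failures to stdout and exit non-zero; most agent hooks treat that
  // as feedback and keep iterating on the fix.
  for (const f of run.failures) {
    console.log(`FAIL ${f.test}: ${f.error}`);
  }
  process.exit(1);
}

main();
```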
Open to all feedback & thoughts on whether you've felt this pain as well!