Hey HN! I'm Huy, leading product at Katalon in Vietnam. We built Scout QA because the testing tools we had weren't keeping up with how fast we (and our users) are shipping with AI coding tools.
The problem: When you're using Cursor, Replit, or Lovable to generate entire features in minutes, traditional test automation feels backwards. You'd spend more time writing Selenium scripts than it took to build the feature.
Scout takes a different approach. You give it a URL, and it uses Amazon Bedrock AgentCore with Amazon Nova Act to figure out what to test. It explores your app, generates checks automatically, and gives you Traffic Light Reports (red/yellow/green) showing what works and what doesn't. When something breaks, it suggests prompts you can feed back to your AI coding tool to fix it.
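To make that concrete, here's a minimal sketch of what an explore-and-check loop can look like on top of the public Nova Act Python SDK. This is not Scout's actual implementation: the check prompts, the traffic-light mapping, and the URL are all illustrative assumptions.

    # Sketch only, not Scout's real code. Assumes the public
    # Amazon Nova Act Python SDK (pip install nova-act); the check
    # prompts and traffic-light mapping are illustrative.
    from nova_act import NovaAct, BOOL_SCHEMA

    CHECKS = [
        "Does the page load without visible error messages?",
        "Can you open the login form from the landing page?",
        "Does submitting the search box return results?",
    ]

    def run_checks(url: str) -> dict[str, str]:
        """Run natural-language checks against the app; map each to red/yellow/green."""
        report = {}
        with NovaAct(starting_page=url) as nova:
            for check in CHECKS:
                # Ask for a structured yes/no answer instead of free text.
                result = nova.act(check, schema=BOOL_SCHEMA)
                if not result.matches_schema:
                    report[check] = "yellow"  # agent got confused; needs a human look
                elif result.parsed_response:
                    report[check] = "green"   # check passed
                else:
                    report[check] = "red"     # check failed; candidate for a fix prompt
        return report

    if __name__ == "__main__":
        for check, light in run_checks("https://your-app.example.com").items():
            print(f"[{light}] {check}")

One design note on the sketch: treating "agent couldn't produce a confident yes/no" as yellow rather than red keeps flaky exploration from drowning out real regressions.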
We're using it internally on our own AI-generated features and it's caught several regressions we would have missed. Still early—there are definitely edge cases where it gets confused, and we're working on better context awareness.
We launched with a freemium model. Magic link auth gets you started in about 30 seconds.
Technical folks: We're especially interested in feedback on our approach to autonomous test generation. Are we thinking about this problem the right way?
Happy to answer questions about the architecture, our reasoning model, or why we think testing needs to evolve for the AI-native development era.
Comments
davydm•2h ago
fix the slop with more slop?
no thanks, i'll keep crafting code by hand, test-first.