Here's a demo: https://www.loom.com/share/e56faa07d90b417db09bb4454dce8d5a
Security testing today is increasingly challenging. Traditional scanners generate 30-50% false positives, drowning engineering teams in noise. Manual penetration testing happens quarterly at best, costs tens of thousands of dollars per assessment, and takes weeks to complete. Meanwhile, teams are shipping code faster than ever with AI assistance, but security reviews have become an even bigger bottleneck.
All three of us encountered this problem from different angles. Brandon worked at ProjectDiscovery building the Nuclei scanner, then at NetSPI (one of the largest pen testing firms) building AI tools for testers. Sam was a senior engineer at Salesforce leading security for Tableau, where he dealt firsthand with juggling security findings and managing remediations. Akul did his master's on AI and security, co-authored papers on using LLMs for security attacks, and participated in red-team exercises at OpenAI and Anthropic.
We all realized that AI agents were going to fundamentally change security testing, and that the wave of AI-generated code would need an equally powerful solution to keep it secure.
We've built AI agents that perform reconnaissance, exploit vulnerabilities, and suggest patches—similar to how a human penetration tester works. The key difference from traditional scanners is that our agents validate exploits in runtime environments before reporting them, reducing false positives.
We use multiple foundation models orchestrated together. The agents perform recon to understand the attack surface, then use that context to inform testing strategies. When they find potential vulnerabilities, they spin up isolated environments to validate exploitation. If successful, they analyze the codebase to generate contextual patches.
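To make that concrete, here's a rough Python sketch of the loop. Every name in it (run_recon, plan_tests, Sandbox, suggest_patch) is an illustrative placeholder, not our actual implementation:

```python
# Illustrative sketch of the recon -> test -> validate -> patch loop.
# All names here are made-up placeholders, not a real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Finding:
    name: str
    target: str
    validated: bool = False
    patch: Optional[str] = None


def run_recon(target: str) -> list[str]:
    # Map the attack surface: endpoints, parameters, auth flows, etc.
    return [f"{target}/login", f"{target}/api/export"]


def plan_tests(surface: list[str]) -> list[Finding]:
    # In the real system, models use recon context to pick testing strategies;
    # here we just emit one candidate per endpoint.
    return [Finding(name="broken object-level auth", target=s) for s in surface]


class Sandbox:
    """Stand-in for an isolated runtime environment used to prove exploits."""

    def exploit(self, finding: Finding) -> bool:
        # Attempt the exploit for real; only successes get reported.
        return finding.target.endswith("/api/export")


def suggest_patch(finding: Finding) -> str:
    # With codebase context, a model drafts a fix for the validated issue.
    return f"Add an authorization check before serving {finding.target}"


def pipeline(target: str) -> list[Finding]:
    sandbox = Sandbox()
    report = []
    for finding in plan_tests(run_recon(target)):
        if sandbox.exploit(finding):  # validate before reporting
            finding.validated = True
            finding.patch = suggest_patch(finding)
            report.append(finding)
    return report


if __name__ == "__main__":
    for f in pipeline("https://staging.example.com"):
        print(f"{f.name} at {f.target}: {f.patch}")
```

The point is the ordering: nothing reaches the report until the sandboxed exploit actually succeeds.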
What makes this different from existing tools?

- Validation through exploitation: we don't just pattern-match, we exploit vulnerabilities to prove they're real.
- Codebase integration: the agents understand your code structure to find complex logic bugs and suggest appropriate fixes.
- Continuous operation: instead of point-in-time assessments, we're constantly testing as your code evolves.
- Attack chain discovery: the agents can find multi-step vulnerabilities that require chaining different issues together (sketched below).
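As a toy illustration of the chaining idea, you can think of validated findings as primitives with preconditions and effects, and chains as paths through them. The primitives and the search below are simplified stand-ins, not how our agents actually reason:

```python
# Toy illustration of attack-chain discovery: compose findings whose
# preconditions are satisfied by earlier findings' outcomes.
from dataclasses import dataclass


@dataclass(frozen=True)
class Primitive:
    name: str
    requires: frozenset  # capabilities needed before this step
    grants: frozenset    # capabilities gained after this step


def find_chains(primitives, have, goal, chain=()):
    """Depth-first search for sequences of primitives that reach the goal."""
    if goal in have:
        yield chain
        return
    for p in primitives:
        if p in chain or not p.requires <= have:
            continue
        yield from find_chains(primitives, have | p.grants, goal, chain + (p,))


primitives = [
    Primitive("open redirect", frozenset(), frozenset({"attacker URL"})),
    Primitive("SSRF via URL fetch", frozenset({"attacker URL"}),
              frozenset({"internal network access"})),
    Primitive("metadata credential theft", frozenset({"internal network access"}),
              frozenset({"cloud credentials"})),
]

for chain in find_chains(primitives, frozenset(), "cloud credentials"):
    print(" -> ".join(p.name for p in chain))
# open redirect -> SSRF via URL fetch -> metadata credential theft
```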
We're currently in early access, working with initial partners to refine the platform. Our agents are already finding vulnerabilities that other tools miss and scoring well on penetration testing benchmarks.
Looking forward to your thoughts and comments!
blibble•1d ago
even the booters market themselves as "legitimate stress testing tools for enterprise"
Sohcahtoa82•9h ago
MindFort is not the first and won't be the last. There are plenty of DAST tools offered as a SaaS that are the same thing.
chatmasta•4h ago
Your argument is straight out of the 1990s. We’ve moved beyond this as an industry, as you can see from the proliferation of bug bounty programs, responsible disclosure policies, CVE transparency, etc…