The machine was hosting a web service built with Next.js. The first sign of trouble was unusually high CPU usage: even during low-traffic periods, the server was consistently running near 100% utilization. After inspecting running processes and network activity, we found a background process downloading and executing a cryptocurrency-mining binary.
ROOT CAUSE
The entry point was CVE-2025-29927, a Next.js vulnerability that lets an attacker bypass middleware protections by sending a crafted x-middleware-subrequest header. This exposed internal endpoints that were assumed to be protected; once the attacker reached one, they executed a script that pulled down the miner.
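The flaw is easy to demonstrate. The sketch below is a simplified recreation of the broken check, not Next.js's actual source: pre-patch versions used the internal x-middleware-subrequest header to stop middleware from recursing into itself, but trusted that header from any client, so a forged value could make the framework skip middleware entirely.

```typescript
// Simplified sketch of the flawed logic (NOT Next.js's real code).
// Next.js counted middleware names in x-middleware-subrequest to cap
// recursion; pre-patch versions accepted the header from external clients.
const MAX_RECURSION_DEPTH = 5;

function shouldRunMiddleware(headers: Record<string, string>): boolean {
  const subrequest = headers["x-middleware-subrequest"] ?? "";
  const depth = subrequest
    .split(":")
    .filter((name) => name === "middleware").length;
  // If the header claims we've already recursed to the limit, middleware
  // is skipped -- and with it, any auth check it was supposed to enforce.
  return depth < MAX_RECURSION_DEPTH;
}

// A normal request runs the middleware:
shouldRunMiddleware({});
// A forged header skips it:
shouldRunMiddleware({
  "x-middleware-subrequest":
    "middleware:middleware:middleware:middleware:middleware",
});
```

The real fix (in patched Next.js releases) is to strip and regenerate this internal header at the framework boundary instead of trusting the client's value.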
HOW "VIBE CODING" FAILED US
This application was largely generated using AI-assisted tools (Claude Code and OpenAI Codex). This workflow—often called "vibe coding"—involved describing the desired functionality and letting the AI assemble the codebase.
The project worked, but the AI pinned a vulnerable version of Next.js in package.json. Because the app ran normally and passed functional tests, we skipped the dependency audit. Attackers' automated scanners found the exposed vulnerability within hours.
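A lightweight gate could have caught the bad pin before deployment. This is a hedged sketch, not any particular tool's API: the helper names are ours, and the patched-version floors below are taken from the CVE-2025-29927 advisory (e.g. 14.2.25 and 15.2.3), which you should verify against the current advisory for your major line.

```typescript
// Hypothetical CI check: fail the build if the pinned Next.js release is
// older than the patched floor for its major line (per the CVE-2025-29927
// advisory -- confirm these floors yourself before relying on them).
const PATCHED_FLOORS: Record<number, string> = {
  12: "12.3.5",
  13: "13.5.9",
  14: "14.2.25",
  15: "15.2.3",
};

// Minimal numeric semver comparison (ignores prerelease tags).
function lessThan(a: string, b: string): boolean {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) < (pb[i] ?? 0);
  }
  return false;
}

function nextIsVulnerable(pinned: string): boolean {
  const version = pinned.replace(/^[~^]/, ""); // strip range prefix
  const floor = PATCHED_FLOORS[Number(version.split(".")[0])];
  return floor !== undefined && lessThan(version, floor);
}
```

Wired into CI (reading `dependencies.next` out of package.json and exiting non-zero when `nextIsVulnerable` returns true), this turns a silently bad pin into a failed build instead of a deployed target.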
THE BROADER LESSON
AI increases development speed, but it also increases the "security debt" of every deployment. In traditional development, a human reviews dependency versions before pinning them; with AI-generated scaffolding, that step is easy to overlook.
The attack chain:
AI-generated project -> Vulnerable dependency -> Middleware bypass -> Automated scan -> Cryptominer
HOW WE FIXED IT
We realized that if we are using AI to speed up development, we need automated "brakes" to match that speed. We moved our apps onto Containarium ( https://github.com/FootprintAI/Containarium ), an open-source platform that uses ZFS-backed, unprivileged LXC containers to consolidate 100+ isolated environments onto a single VM with integrated security monitoring and vulnerability scanning.
This ensures that even if a developer accidentally deploys a vulnerable dependency, the breach is isolated from the host and flagged by runtime monitoring.
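As a rough illustration of the isolation mechanism (this is a generic unprivileged-LXC id-mapping fragment, not Containarium's actual configuration): root inside the container is mapped to a high, unprivileged uid on the host, so a process that compromises the app still lands on the host as a nobody-equivalent user.

```
# Generic unprivileged LXC config fragment (illustrative only).
# Container uids/gids 0-65535 map to host ids 100000-165535, so "root"
# inside the container holds no privileges on the host itself.
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
```

Combined with runtime monitoring, this means a cryptominer dropped into one container cannot touch the host or its sibling environments.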
OPEN QUESTION
I’m curious how others are handling the "AI audit" problem. Are you adding automated security gates to your dev environments, or relying solely on traditional dependency scanning and hoping the "vibe" doesn't miss anything?
hsin003•1h ago
This incident showed how AI-generated code can inadvertently introduce vulnerabilities. The cryptominer ran because a dependency version chosen by an AI coding agent had a known CVE.
Containarium now runs centralized pentests and vulnerability checks for all applications on the platform to prevent similar attacks.
Curious if others have similar workflows or lessons learned with AI-generated projects.