Over the span of a week — mostly while I was doing my day job — I let a custom AI agent (built with Dragonscale) design, write, and document a full static analysis engine from scratch.
No human-written code. No human-designed features. Just the AI, asking itself questions like:
"What tools would help me understand a repo?"
"How should I explain a function’s purpose?"
"What format should I use to talk to another AI?"
It came up with 18+ tools for explaining symbols, tracing data flows, detecting patterns, and analyzing complexity. It writes natural-language summaries and exposes a full JSON-RPC 2.0 interface via the Model Context Protocol (MCP).
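To make the MCP interface concrete, here's a minimal sketch of what a JSON-RPC 2.0 `tools/call` request to a server like this might look like. The tool name `explain_symbol` and its arguments are hypothetical placeholders, not CodePrism's actual schema:

```python
import json

# Hypothetical MCP "tools/call" request over JSON-RPC 2.0.
# The tool name and arguments below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "explain_symbol",
        "arguments": {
            "symbol": "parse_config",
            "file": "app/config.py",
        },
    },
}

# MCP messages are plain JSON-RPC, so any client that can serialize
# JSON and speak the transport (stdio or HTTP) can drive the server.
payload = json.dumps(request)
print(payload)
```

In practice an MCP-aware client (Cursor, VS Code, etc.) builds and sends these messages for you; this is just what's on the wire.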
The result: CodePrism — a fully AI-generated, LLM-integrated static analysis server.
I’ve been using it inside Cursor, Copilot, and VS Code, and, surprisingly, it works. It gives me roughly 10x faster insight into unfamiliar Python codebases, and it often surfaces subtle structure I would have missed.
Links:
Homepage + blog: https://rustic-ai.github.io/codeprism
GitHub repo: https://github.com/rustic-ai/codeprism
This is still an experiment. No guarantees. It might break. But it’s also kind of fun. If you're curious about the boundaries of AI-autonomous tooling, check it out, and feel free to get involved (no code PRs, please).
Happy to answer questions and share more about the setup, agents, or architecture.