I've been experimenting with turning principles from classic software engineering books (Clean Code, DDIA, etc.) into structured "skill" files that AI agents can use during code review. Each skill is an opinionated instruction set grounded in known engineering wisdom — not a summary or excerpt.
Repo: https://github.com/ZLStas/skills
I'm trying to figure out the best way to wire this into a practical workflow — whether as a review layer or as a tool to iteratively refactor a legacy codebase into something clean and well-structured. A few open questions I'd love input on:
Does it make sense to use book-based principles as a structured lens for AI-driven code review?
How would you set up sub-agents to iteratively review LLM output — one agent creates, another evaluates — without the review becoming shallow or repetitive? Has anyone tried a different approach that worked better?
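To make that second question concrete, here's the sort of loop I'm imagining, as a minimal Python sketch. `generate` and `review` are placeholders for whatever agent calls you'd actually make, and the `skills` directory of markdown files is an assumption for illustration, not something taken from the repo as-is:

from pathlib import Path

MAX_PASSES = 3

def load_skills(skill_dir: str = "skills") -> str:
    """Concatenate the skill files so they can be injected into the reviewer's prompt."""
    return "\n\n".join(p.read_text(encoding="utf-8") for p in sorted(Path(skill_dir).glob("*.md")))

def create_and_review(task: str, generate, review, skill_dir: str = "skills") -> str:
    """One agent drafts, a second evaluates the draft against the book-based skills,
    and the draft is revised until the reviewer has nothing left to flag."""
    criteria = load_skills(skill_dir)
    code = generate(task)
    for _ in range(MAX_PASSES):
        # the reviewer only ever sees the skills and the current draft, not the creator's chat history
        findings = review(code, criteria=criteria)
        if not findings:
            break
        code = generate(task, feedback=findings)  # creator revises against concrete findings
    return code

Keeping the reviewer stateless like this is my current guess at avoiding repetitive feedback, but I don't know if that holds up in practice.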
How do you maintain project context across multiple review passes so the agent doesn't lose sight of the bigger picture?
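The only mechanism I've come up with for that so far is carrying a compact project digest between passes instead of raw history; again a rough sketch, with `review` and `summarize` as placeholder agent calls:

def review_with_memory(files: list[str], review, summarize) -> tuple[dict, str]:
    """Each pass sees a compact digest of everything reviewed so far,
    so later passes keep the big picture instead of judging files in isolation."""
    digest = ""          # running summary: architecture notes, conventions, earlier findings
    findings_by_file = {}
    for path in files:
        source = open(path, encoding="utf-8").read()
        findings = review(source, project_context=digest)
        findings_by_file[path] = findings
        # fold new findings into the digest rather than appending raw transcripts,
        # so the context stays small enough to fit in every prompt
        digest = summarize(previous=digest, path=path, findings=findings)
    return findings_by_file, digest

Whether a summarized digest actually preserves enough of the bigger picture is exactly what I'd like to hear about from people who've tried it.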