I’m working on a draft about AI-native software engineering.
The core idea is simple but uncomfortable: most failures in AI-assisted coding are not capability problems, but governance problems. We routinely allow AI systems to make high-impact architectural decisions even though they cannot bear responsibility for the consequences.
The VPC Principle proposes a governance model (a minimal sketch follows the list):

- Verdict: high-responsibility decisions that only humans can make
- Permission: explicit decision boundaries delegated to AI
- Boundary Control: mechanisms to audit whether AI execution stayed within its authority
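
To make the three components concrete, here is a minimal sketch in Python. This is not the book's formalization; every name here (`Authority`, `PermissionGrant`, `AuditLog`, `ai_may_decide`, the example decision names) is invented for illustration, under the assumption that decisions can be classified up front and checked against an explicit grant.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical sketch of the VPC Principle. All names are illustrative.

class Authority(Enum):
    HUMAN_VERDICT = auto()  # Verdict: only a human may make this decision
    AI_PERMITTED = auto()   # Permission: eligible for delegation to AI

@dataclass
class Decision:
    name: str
    authority: Authority

@dataclass
class PermissionGrant:
    """Permission: the explicit decision boundary delegated to the AI."""
    allowed_decisions: set[str]

@dataclass
class AuditLog:
    """Boundary Control: records whether each AI action stayed within its authority."""
    entries: list[tuple[str, bool]] = field(default_factory=list)

    def record(self, decision: Decision, within_bounds: bool) -> None:
        self.entries.append((decision.name, within_bounds))

def ai_may_decide(decision: Decision, grant: PermissionGrant, log: AuditLog) -> bool:
    """Allow the AI to act only on explicitly delegated decisions; log every check."""
    within = (
        decision.authority is Authority.AI_PERMITTED
        and decision.name in grant.allowed_decisions
    )
    log.record(decision, within)
    return within

if __name__ == "__main__":
    grant = PermissionGrant(allowed_decisions={"rename_local_variable"})
    log = AuditLog()

    # A high-impact decision reserved for a human verdict: the AI is refused.
    risky = Decision("choose_database_engine", Authority.HUMAN_VERDICT)
    assert not ai_may_decide(risky, grant, log)

    # A low-risk decision inside the delegated boundary: the AI may proceed.
    safe = Decision("rename_local_variable", Authority.AI_PERMITTED)
    assert ai_may_decide(safe, grant, log)

    # The audit trail is the Boundary Control artifact.
    print(log.entries)  # [('choose_database_engine', False), ('rename_local_variable', True)]
```

The design choice the sketch tries to surface: the audit log is written on every check, not only on violations, so Boundary Control can answer "did the AI stay within its authority" rather than merely "did something break."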
This is not a finished book. The principles are stated, but the formalization and examples are still evolving. I’m sharing this early because I’m explicitly looking for critique, counterexamples, and failure cases—especially from people who disagree.