This is not a wrapper around models. Not a prompt injection trick. Not a framework that hides complexity.
It is a structured workflow enforced through repository-level instructions.
The goal is behavioral discipline, not magic.
It works because most AI coding failures are not model failures. They are workflow failures: jumping to implementation before shared understanding exists.
The tradeoff is real:
You add friction upfront. You reduce rework later.
It will not make weak engineers strong. It will make strong engineers more consistent.
That is the intent.
miguelaxcar•1h ago
AI coding assistants move fast. Sometimes faster than your understanding.
They jump to implementation before reading the codebase. They assume things. They produce confident drafts that look right.
Then you pay in rewrites.
If you are the person who gets pinged when something breaks, you know this pattern.
So I built a small open source protocol. It is not a plugin. Not a wrapper. Just structured instructions you drop into your repo.
The idea is simple, and it follows the RPI (Research, Plan, Implement) approach from the HumanLayer folks:
First, the assistant explores the codebase. Then it surfaces assumptions and tradeoffs. Only after explicit approval does it implement.
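To make that concrete, here is a minimal sketch of what repo-level instructions in this style might look like. The phase names and wording are hypothetical, not the actual protocol file; the linked repo has the real version.

    # RPI-style assistant instructions (hypothetical sketch)

    ## Phase 1: Research
    - Read the relevant files before proposing anything.
    - Do not write or modify code in this phase.

    ## Phase 2: Plan
    - State your assumptions, tradeoffs, and the files you intend to touch.
    - Stop and wait for explicit approval before continuing.

    ## Phase 3: Implement
    - Only after approval, make the change described in the plan.
    - If reality diverges from the plan, go back to Phase 2.

The hard stop between Plan and Implement is the point: approval becomes an explicit event instead of an implied one.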
The goal is not to slow AI down. The goal is to improve decision quality before code lands.
Speed without checkpoints is credit card speed. You ship quickly. You pay later.
This protocol acts like an exoskeleton. It amplifies discipline. It does not replace judgment.
I have been using it to reduce rewrites, clarify intent, and make AI assisted work more predictable.
Curious how others who build with AI daily think about guardrails like this.
https://github.com/MiguelAxcar/ai-rpi-protocol