Hi HN! I built this after watching AI assistants confidently ship mocked data, break working contracts, and create the illusion of progress for hours.
The core insight: AI sessions fail not from bad models, but from missing structure. The DRS (Deployability Rating Score) gives you a single number (0-100) that answers "can I actually ship this?"
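To make the 0-100 idea concrete, here's a toy sketch of how a score like this could be composed from session signals. The signal names and weights here are illustrative only, not the actual DRS formula:

    # Toy sketch only: signal names and weights are illustrative,
    # not the real DRS formula.
    from dataclasses import dataclass

    @dataclass
    class SessionSignals:
        tests_passing: float    # fraction of tests green, 0.0-1.0
        mocks_remaining: int    # live mocks still in the codebase
        contracts_changed: int  # interfaces silently modified
        files_touched: int      # scope of the change

    def drs(s: SessionSignals) -> int:
        score = 100.0 * s.tests_passing        # no green tests, no score
        score -= 15 * s.mocks_remaining        # lingering mocks hurt
        score -= 25 * s.contracts_changed      # silent contract breaks hurt more
        if s.files_touched > 5:                # past the scope limit
            score -= 5 * (s.files_touched - 5)
        return max(0, min(100, round(score)))

    print(drs(SessionSignals(1.0, 1, 0, 4)))   # -> 85

The point is just that the number drops as mocked or scope-busting work piles up, instead of everything looking "done" until deploy time.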
Key components:
* Contract freezing (no silent interface changes)
* Mock expiration (30-minute max)
* Scope limits (5 files, 200 LOC)
* Time-based convergence gates

It's MIT licensed.
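For the mock expiration and scope limit pieces specifically, the mechanism is roughly this. The registry and function names below are my own illustration, not the project's API:

    # Illustrative only: a 30-minute TTL on registered mocks plus a
    # scope check; not the project's actual API.
    import time

    MOCK_TTL_SECONDS = 30 * 60
    _mock_registry: dict[str, float] = {}

    def register_mock(name: str) -> None:
        _mock_registry[name] = time.time()

    def expired_mocks() -> list[str]:
        now = time.time()
        return [m for m, created in _mock_registry.items()
                if now - created > MOCK_TTL_SECONDS]

    def gates_pass(files_touched: int, loc_changed: int) -> bool:
        # Fail if any mock has outlived its TTL or the change has
        # grown past the 5-file / 200-LOC scope limit.
        return (not expired_mocks()
                and files_touched <= 5
                and loc_changed <= 200)

Something like a pre-commit hook or CI step can call gates_pass() before the session is allowed to claim the work is deployable, so a mock can never quietly become permanent.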
Curious what patterns you've seen in your AI coding sessions.
ktg0215•18h ago
This addresses a real problem. I've seen too many impressive AI demos that fell apart when trying to ship to production. The "30-minute mock timeout" is a clever forcing function - it's easy to let mocks linger forever.
The DRS scoring could be useful for teams struggling to answer "is this ready to deploy?" Currently trying this out with my own Claude Code workflow.
sgharlow•7h ago
great--let me know how it goes and if you have any suggested improvements.