I built MartinLoop because AI coding agents are becoming useful enough to touch real repos, but most agent runs still lack basic operational controls.
The core question I’m trying to answer is:
“Can another engineer audit this run later?”
MartinLoop is open source and currently focuses on:
- hard budget caps
- JSONL run records
- audit trails
- failure classification
- test-verified completion
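To make the JSONL point concrete, here is a rough sketch of what one run record could look like. The field names here are illustrative guesses, not MartinLoop's actual schema:

```python
import json

# Illustrative only: these field names are my assumptions about what a
# run record might contain, not MartinLoop's actual schema.
record = {
    "run_id": "run-001",
    "step": 3,
    "action": "edit_file",
    "tokens_used": 1842,
    "budget_remaining": 58158,
    "outcome": "tests_passed",
}

# One JSON object per line is what makes JSONL append-only and easy to
# audit: each step of the run becomes one self-contained line.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["outcome"])
```

The appeal of one-object-per-line is that an auditor can replay the run with nothing more than `jq` or a ten-line script.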
The harder problem is not just stopping an agent when it hits a limit. It’s stopping cleanly at a safe halt boundary with an actionable diagnostic, so the repo is not left in a half-finished state.
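A minimal sketch of that halt-boundary idea: check the budget only *between* complete steps, so the agent never stops mid-edit, and raise a diagnostic before starting a step it cannot afford to finish. The names here (`run`, `Halt`) are hypothetical, not MartinLoop's API:

```python
# Hypothetical sketch, not MartinLoop's actual implementation.
class Halt(Exception):
    """Raised at a safe boundary with an actionable diagnostic."""
    def __init__(self, diagnostic):
        super().__init__(diagnostic)
        self.diagnostic = diagnostic

def run(steps, budget):
    spent = 0
    log = []
    for step in steps:
        cost = step["cost"]
        if spent + cost > budget:
            # Stop *before* starting a step we can't afford to finish:
            # the repo stays at the last clean boundary, and the
            # diagnostic says exactly what blocked progress.
            raise Halt(f"budget cap: step {step['name']} needs {cost}, "
                       f"{budget - spent} remaining")
        spent += cost          # commit the cost only for completed steps
        log.append(step["name"])
    return log, spent
```

The design choice is that the cap is enforced pre-step rather than mid-step: the run can end short of its goal, but never with a half-applied edit.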
I’d love feedback from people using Claude Code, Codex, Cursor, Devin-style agents, or custom coding loops.
GitHub: https://github.com/Keesan12/Martin-Loop
Site: https://martinloop.com