Hey HN, I'm Fred. I designed this from an iPhone during a
6-week digital detox in Vietnam — my partner banned my laptop.
Without a computer I couldn't chase the weekly model releases. I was
stuck with a harder question: not "how to code faster with AI" but
"how to make agentic coding reliable." I read papers on separating
planning from execution in agent systems — GoalAct (Chen 2025) shows
+12.2% success when you split them [1]. That became the core design.
GAAI is a .gaai/ folder — markdown, YAML, bash. No SDK. You drop it
into any project. It gives you two agents with a hard gate between
them:
- Discovery structures what to build (stories, acceptance criteria).
Never writes code.
- Delivery ships validated stories autonomously (plan, implement, QA).
Never decides scope.
- The backlog is the contract. Nothing gets built that isn't in it.
- Memory persists across sessions — session 47 knows what session 1
decided.
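To make "the backlog is the contract" concrete, here's a sketch of what a single backlog entry might look like. This is an illustration only, not GAAI's actual schema; the field names (id, status, acceptance) and the story itself are my assumptions:

```yaml
# Hypothetical backlog entry (illustrative field names, not GAAI's real schema)
- id: STORY-042
  title: Add rate limiting to the auth worker
  status: validated        # Discovery validates; Delivery only picks up validated stories
  acceptance:
    - Requests over 100/min per IP return 429
    - Existing auth tests still pass
```

The point of the gate: Discovery writes entries like this and never touches code; Delivery implements only what such an entry specifies and never invents scope.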
I stress-tested it by building a full SaaS — 7 Cloudflare Workers,
auth, billing, matching engine, analytics. First 4 days: 39 stories
shipped, 260 tests passing, 16K LOC. Today: 176 stories, 84K LOC.
96.9% cache read rate — agents build forward, not from scratch.
vs. AGENTS.md: that solves a single session; it doesn't give you
cross-session memory or guard against scope drift. vs. LangGraph/CrewAI:
those are code-first orchestration frameworks for building AI systems.
GAAI governs how you use AI tools to build yours. Different level.
Honest trade-offs:
- No programmatic enforcement (agents follow the files, nothing forces
  them to).
- Even trivial tasks need a backlog item.
- Freshly open-sourced.
Install: git clone + bash install.sh --wizard. 30 seconds.
ELv2 — use it freely in your projects. Only restriction: can't offer
it as a competing hosted service.
[1] https://arxiv.org/abs/2504.16563