I realized I'd been treating AI as a partner with memory when it's really closer to a stateless CPU. Even the latest models reset between sessions.
So I built a protocol where:
- Your files are the memory (plain markdown)
- Git is the version control
- You inject context at session start (sketch below)
- AI proposes, you ratify, files record
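For concreteness, here's a rough sketch of what a session-start context file could look like. This is illustrative only: the file name, headings, and contents are my shorthand, not necessarily what the repo uses.

```markdown
<!-- context.md — hypothetical example of a memory file pasted in at session start -->
# Project: Billing Service Rewrite

## Current state
- Migrating invoice generation from cron jobs to event-driven workers
- Last checkpoint: worker skeleton merged, retry policy still open

## This session's scope
Design the retry policy for failed invoice events
```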
The system has:
- 5 commands: CHECKPOINT, SCOPE LOCK, HARD STOP, MODE STRATEGY, MODE EXPLORATION
- 3 constraint tags for locked decisions, rejected ideas, and hard constraints (illustrated below)
- The same files and the same behavior across Claude, ChatGPT, and Gemini
- No vector DBs, no LangChain, no cloud dependencies
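To illustrate the constraint tags: a decisions file might carry entries like the following. The tag names are my guess at the shape; check the repo for the actual syntax.

```markdown
<!-- decisions.md — hypothetical; the real tag names live in the repo -->
[LOCKED] Postgres stays the system of record — changing this requires explicit ratification
[REJECTED] Event sourcing for the billing ledger — evaluated and rejected for complexity; do not re-propose
[CONSTRAINT] Must run on a single VM; no new cloud services
```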
The core insight: LLMs are excellent stateless processors whereas decision memory, auditability, and long-horizon state are human responsibilities. This protocol makes that division explicit.
I tested it on the latest versions of all three platforms; it passed checks for constraint enforcement, rejected-idea protection, scope-lock compliance, and checkpoint format consistency.
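To give a sense of what "checkpoint format consistency" means in practice: CHECKPOINT asks the model to emit a summary you ratify and commit to the files. Purely as a hypothetical sketch, not the repo's exact template:

```markdown
<!-- CHECKPOINT output — illustrative format only -->
## Checkpoint
- Decided: exponential backoff, dead-letter file after 5 attempts
- Deferred: alerting on dead-letter growth (out of scope this session)
- Open: idempotency key scheme for retried events
- Next session: idempotency keys
```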
This is intentionally manual and opinionated. It's not for fully autonomous workflows. Friction is the feature.
Repo: https://github.com/zohaibus/context-protocol
Would love feedback, especially from anyone who's tried managing context across long projects with LLMs. What's worked for you? What's failed?