Problem: AI-assisted analyses are often throwaway chats. A month later, you can’t trace why you reached a conclusion.
Solution: a workflow that enforces a 5-stage loop (ASK → LOOK → INVESTIGATE → VOICE → EVOLVE) with per-stage checklists, and saves each analysis as Git-tracked markdown. Quick mode produces 1 file; full mode produces 5.
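For concreteness, a quick-mode file might carry one heading per stage. This is a sketch only; the actual template, heading names, and stage descriptions here are assumptions, not the tool's real output:

```markdown
# Analysis: <title>

## ASK — the question and what "answered" looks like
## LOOK — data pulled, first impressions
## INVESTIGATE — hypotheses tested, evidence for/against
## VOICE — conclusion, confidence, caveats
## EVOLVE — follow-ups and what to check next time
```

A month later, the ASK and VOICE sections alone should be enough to reconstruct why the conclusion was reached.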
Works in Claude Code and Cursor.
I’d love feedback on:
1. Does the ALIVE loop match how you do investigations / experiment reviews?
2. Which checklist items feel missing or unnecessary?
3. What would make this usable in a team setting?
• Output: markdown files tracked by git (reviewable, diffable, searchable)
• Quick mode: 1 file for fast investigations
• Full mode: staged docs + checklists when complexity grows
• Looking for feedback: do the stage checklists match real-world workflows?
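The "reviewable, diffable, searchable" claim follows from plain git: once analyses are committed as markdown, standard git commands cover review and search. A minimal sketch in a throwaway repo; the `analyses/` directory, file name, and commit message are hypothetical, not the tool's actual conventions:

```shell
#!/bin/sh
set -e

# throwaway repo to demonstrate the workflow
dir=$(mktemp -d)
cd "$dir"
git init -q
mkdir -p analyses

# a hypothetical quick-mode output file
cat > analyses/churn-spike.md <<'EOF'
# Analysis: churn spike
## ASK
Why did weekly churn jump 3x?
EOF

git add analyses
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "analysis: churn spike (quick mode)"

# reviewable/searchable: history and full-text search scoped to analyses/
log=$(git log --oneline -- analyses/)
echo "$log"
git grep -l "churn" -- analyses/
```

`git diff` on a follow-up commit then shows exactly how a conclusion changed between revisions.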
(Optional) Quick start:
• /analysis-init --quick
• /analysis-new
• /analysis-next