Motif is a small step toward that. It reads Cursor's internal SQLite database and Claude Code's project files, extracts your conversations locally, and generates a report. I incorporated a few different psychology frameworks and AI coding assessments.
Each tool stores conversations differently, and Motif can extract from multiple sources (e.g., SQLite for Cursor, project files for Claude Code). I plan to add support for more tools.
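As a rough sketch of what the SQLite extraction step looks like (the table name `ItemTable`, the key prefix, and the record fields here are hypothetical stand-ins, not a documented schema):

```python
import json
import sqlite3

def extract_messages(db_path: str) -> list[dict]:
    """Pull chat records out of a SQLite store and normalize them.

    The table name, key prefix, and JSON fields below are illustrative;
    each tool lays out its data differently.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT key, value FROM ItemTable WHERE key LIKE ?",
            ("chat%",),
        ).fetchall()
    finally:
        conn.close()

    messages = []
    for _key, value in rows:
        record = json.loads(value)  # each value is a JSON-encoded message
        messages.append({"role": record.get("role"), "text": record.get("text")})
    return messages
```

Everything stays on disk: it is just a read-only query against a local file, which is what makes the whole pipeline local-first.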
What it measures: autonomy ratio (agent actions per human message), agent concurrency, and prompt verbosity. It also gives you an AI coding score based on a framework I built.
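The autonomy ratio is just a count ratio over the extracted events. A minimal sketch, assuming a hypothetical normalized event list where each entry has a `source` field:

```python
def autonomy_ratio(events: list[dict]) -> float:
    """Agent actions per human message.

    `events` is a hypothetical normalized event stream; each entry has
    a "source" of either "human" or "agent".
    """
    human = sum(1 for e in events if e["source"] == "human")
    agent = sum(1 for e in events if e["source"] == "agent")
    # No human messages means no meaningful ratio; report 0.0.
    return agent / human if human else 0.0
```

A higher number means each of your prompts kicks off more autonomous agent work (tool calls, edits, sub-tasks) before you step back in.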
It also applies some research-based psychology frameworks: Bloom's taxonomy (critical thinking), Pennebaker's LIWC (language use patterns), and epistemic stance analysis (certainty in language). For example, I involuntarily use "we" and "let's" a lot when working with AI, and that has implications under these frameworks.
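At its core, a LIWC-style analysis is dictionary-based word counting: what fraction of your tokens fall into each category. A toy sketch (these tiny word lists are illustrative, not the actual LIWC dictionaries):

```python
import re

# Tiny LIWC-style category lists; the real LIWC has dozens of
# categories with much larger dictionaries.
WE_WORDS = {"we", "we're", "let's", "lets", "us", "our", "ours"}
I_WORDS = {"i", "i'm", "me", "my", "mine"}

def pronoun_profile(text: str) -> dict[str, float]:
    """Fraction of tokens that are first-person plural vs. singular."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"we": 0.0, "i": 0.0}
    return {
        "we": sum(t in WE_WORDS for t in tokens) / len(tokens),
        "i": sum(t in I_WORDS for t in tokens) / len(tokens),
    }
```

A high "we" fraction in prompts to an AI assistant is exactly the pattern I noticed in my own transcripts.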
There's also a real-time dashboard (motif live) inspired by APM counters in StarCraft. I used to play SC2 semi-pro (Terran GM) so I like measuring these things. It tracks AI tokens per minute and concurrent agents, summarizes every session, and saves your personal bests. It's a vanity metric but fun to look at. The dashboard also detects idle sessions and restarts them.
Everything runs locally and is Apache 2.0 licensed. Motif installs a skill in Cursor/Claude Code, so after installing you can just ask them to run it.