The insight: all the information is already in Git. Commits, PRs, authors, timestamps. It's just unreadable for anyone who isn't a developer.
So I built an AI layer on top that:
1. Connects to GitHub/GitLab/Bitbucket via OAuth (reads only metadata, never code)
2. Captures commits and PRs in real time via webhooks
3. Uses Claude to transform raw Git activity into human-readable summaries
4. Delivers automatically via email or Slack on whatever schedule you want
Example transformation:
Before: "fix: rm deprecated api calls, refactor: extract auth middleware"
After: "Fixed API timeouts by updating deprecated endpoints. Improved security by centralizing authentication logic."
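The summarization step boils down to packing raw commit messages into a prompt for Claude. gitmore's actual prompt isn't public; this hypothetical sketch just shows the shape of that transformation:

```javascript
// Hypothetical prompt builder for the commit-to-summary step. The field
// names (author, message) and the prompt wording are illustrative, not
// gitmore's real schema or prompt.
function buildSummaryPrompt(commits) {
  const lines = commits
    .map((c) => `- ${c.author}: ${c.message}`)
    .join("\n");
  return [
    "Rewrite the following Git commit messages as a short, plain-English",
    "summary for a non-technical reader. Focus on user-visible impact,",
    "not implementation details.",
    "",
    lines,
  ].join("\n");
}
```

The returned string would then be sent as the user message in a Claude API call, with the schedule and delivery channel handled downstream.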
Technical stack: Next.js 15, MongoDB, Bull queues for async report generation, Claude API for summarization. Webhooks for real-time data, not polling.

Some things I learned building this:
- Commit messages follow strict patterns (73% start with feat:/fix:/refactor:) but contain almost no "why" context
- Teams spend ~78 hours/year/person writing status reports manually
- The question "what did we ship this week?" accounts for 62% of queries about repositories
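Back on the stack: the Bull workers that build reports need to aggregate a week of commits before summarization. A sketch of the kind of pure grouping step such a job might run, with illustrative field names (not gitmore's actual schema):

```javascript
// Group a batch of commits by author so the summarizer can produce one
// block per person. Purely illustrative of the report-job aggregation;
// the real worker would pull commits from MongoDB for the report window.
function groupCommitsByAuthor(commits) {
  const byAuthor = {};
  for (const c of commits) {
    (byAuthor[c.author] ||= []).push(c.message);
  }
  return byAuthor;
}
```

Keeping the aggregation pure like this makes the queue processor trivial to test without Redis or a database.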
Other features that emerged from the same data layer:
- AI agents you can chat with ("What did Sarah work on last week?")
- Developer leaderboards with contribution scoring
- Auto-generated public changelogs
Free tier: 1 repo, 1 automation. Pro ($15/mo): 5 repos. Enterprise ($49/mo): 20 repos + custom branded reports.
https://gitmore.io
Happy to answer technical questions about the architecture, AI prompting strategy, or webhook handling. Also curious - how do other teams handle the "what did we ship" problem?
vinckr•7h ago
I think gitmore could be improved if it used the conventional commits specification; there's a reason almost everyone uses it.