Hey everyone,
I have a bad habit: the later it gets, the worse my commit messages become. I’ve reached a point where my git history is just a long list of "fix", "update", and "oops".
I built MateCommit to solve this for myself. It’s a CLI tool written in Go that uses LLMs to analyze git diffs and generate meaningful, conventional commits. But once I got the commits working, I realized I could use that same context to handle the "paperwork" I hate: PR summaries, test plans, Jira linking, and release notes.
A few things I focused on to make it actually usable:
Privacy & Providers: The core is decoupled from the AI provider. It currently uses Gemini because it's fast and has a great free tier, but I’m refactoring it to support Ollama and local models. I know many of us don't want to send code to a remote API.
No surprise bills: I added real-time cost tracking. It calculates token usage and actual USD cost for every call based on the model's pricing. You can set daily budgets so you don't wake up to a $50 bill from an accidental loop.
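The budget guard boils down to simple per-call accounting: price the tokens, refuse the call if it would blow the daily cap. A sketch under assumed (placeholder, not Gemini's real) pricing:

```go
package main

import (
	"errors"
	"fmt"
)

// Pricing is USD per 1M tokens. Figures used below are illustrative
// placeholders, not any model's actual rates.
type Pricing struct {
	InputPerM, OutputPerM float64
}

// CostTracker is a hypothetical sketch of per-call cost accounting
// with a daily budget cap.
type CostTracker struct {
	pricing     Pricing
	spentToday  float64
	dailyBudget float64
}

var ErrBudgetExceeded = errors.New("daily budget exceeded")

// Record computes the USD cost of one call and rejects it if the
// running daily total would exceed the budget.
func (t *CostTracker) Record(inTokens, outTokens int) (float64, error) {
	cost := float64(inTokens)/1e6*t.pricing.InputPerM +
		float64(outTokens)/1e6*t.pricing.OutputPerM
	if t.spentToday+cost > t.dailyBudget {
		return 0, ErrBudgetExceeded
	}
	t.spentToday += cost
	return cost, nil
}

func main() {
	t := &CostTracker{
		pricing:     Pricing{InputPerM: 0.10, OutputPerM: 0.40},
		dailyBudget: 1.00,
	}
	cost, err := t.Record(50_000, 2_000)
	fmt.Printf("call cost: $%.4f, err: %v\n", cost, err)
}
```

Checking the budget *before* committing the spend is what stops an accidental retry loop at the cap instead of one call past it.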
Better context: Instead of just dumping a raw git diff (which is often noisy), the tool tries to be smart about staged vs. unstaged changes and untracked files.
DX matters: I used urfave/cli for the interface, added shell autocompletion (bash/zsh), and a doctor command to help debug the setup. No complex dependencies, just a single Go binary.
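A `doctor` command is essentially a list of labelled pass/fail probes. This sketch uses only the standard library; the specific probes and the env var name are my assumptions, not MateCommit's real checks:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// check is one diagnostic: a label plus a pass/fail probe.
type check struct {
	name string
	ok   func() bool
}

func statusOf(ok bool) string {
	if ok {
		return "OK"
	}
	return "FAIL"
}

func main() {
	// Probes are illustrative; the real doctor command may check more
	// (config file, network reachability, model availability).
	checks := []check{
		{"git on PATH", func() bool {
			_, err := exec.LookPath("git")
			return err == nil
		}},
		{"GEMINI_API_KEY set", func() bool {
			return os.Getenv("GEMINI_API_KEY") != ""
		}},
	}
	for _, c := range checks {
		fmt.Printf("%-22s %s\n", c.name, statusOf(c.ok()))
	}
}
```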
It's fully open source. I'm honestly looking for technical critiques of the architecture and, more importantly, advice on how to make the AI output feel more like a human dev and less like a marketing bot.
Repo: https://github.com/thomas-vilte/matecommit
I'd love to hear your thoughts.