Think 'git blame', but for AI code. There's a lot about how it works in the post, but I wanted to share how it's been impacting me and my team:
- I find I review AI code very differently than human code. Being able to see the prompts my colleagues used, what the AI wrote, and where they stepped in to override has been extraordinarily helpful (there's a rough sketch of what reading that metadata could look like after this list). This is still very manual today, but I hope to build more UI around it soon.
- “Why is this here?” More than once I’ve given my coding agent access to the past prompts that generated the code I’m looking at, which lets the agent know what my colleague was thinking when they made the change. Engineers talk to AI all day now… their prompts are sort of like a log of thoughts :)
- I pay a lot of attention to the ratio of lines the AI generates to lines I actually accept (the arithmetic is sketched below). If it gets above 4 or 5, it means I’m well outside the AI’s distribution or I’m prompting poorly; either way, it’s good cause for reflection, and I’ve learned a lot about collaborating with LLMs this way.
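
For anyone curious what reading that provenance might look like mechanically: the post has the real details, but here's a minimal Python sketch under my own assumption (not Git AI's documented design) that per-commit prompt metadata is stored as git notes under a hypothetical refs/notes/ai ref. commits_touching and ai_note_for are made-up helper names.

    import re
    import subprocess

    # Assumption for this sketch only: prompts/overrides live in git notes
    # under a hypothetical ref. Git AI's real storage may differ.
    AI_NOTES_REF = "refs/notes/ai"

    # In `git blame --line-porcelain` output, each per-line header
    # starts with the full 40-character commit SHA.
    SHA_RE = re.compile(r"^([0-9a-f]{40}) ")

    def commits_touching(path: str) -> set[str]:
        """Return the set of commits that last touched any line of `path`."""
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return {m.group(1) for line in out.splitlines() if (m := SHA_RE.match(line))}

    def ai_note_for(commit: str) -> str | None:
        """Fetch the AI metadata note for a commit, if one exists."""
        res = subprocess.run(
            ["git", "notes", "--ref", AI_NOTES_REF, "show", commit],
            capture_output=True, text=True,
        )
        return res.stdout if res.returncode == 0 else None

    if __name__ == "__main__":
        for sha in sorted(commits_touching("src/main.py")):
            if (note := ai_note_for(sha)):
                print(f"--- {sha[:8]} ---\n{note}")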
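
And the ratio itself is just arithmetic. I won't guess at how Git AI tracks the counts, but with made-up session numbers:

    def generated_to_accepted(generated: int, accepted: int) -> float:
        """Lines the AI produced divided by lines that survived review."""
        if accepted == 0:
            return float("inf")  # nothing survived: maximally off-distribution
        return generated / accepted

    # Hypothetical session: the agent emitted 230 lines, 52 made the final
    # diff. 230 / 52 is about 4.4, which by the heuristic above is worth a
    # second look at the prompt or the task decomposition.
    print(f"{generated_to_accepted(230, 52):.1f} generated lines per accepted line")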
This has been really fun to build, especially because some amazing contributors who were working on similar projects came together and directed their efforts towards making Git AI shine. We hope you like it.