Why do we need a plugin or new tools to accomplish this?
Don't know why this has been resubmitted and placed on the front page of HN. (See the 2-day-old peer comment.) What feature of this post warrants special treatment?
On the one hand, I would imagine companies like GitHub won't charge for agent accounts because they want to encourage their use and expect the cost to be recouped through token usage. On the other hand, Microslop is greedy af and struggling to sell their AI products.
What are you guessing / basing this on?
I have many commits with zero human editing. The relative split is definitely well away from 99% vs 1% at this point; for me, most remaining edits are minor, not "significant".
I was curious which path this post took; OP answered in a peer comment.
That’s a reasonable idea and something I considered. The issue is that AI assistance is often inline and mixed with human edits within a single commit (tab completion, partial rewrites, refactors). Treating AI as a separate Git author would require artificial commit boundaries or constant context switching. That quickly becomes tedious and produces noisy or misleading history, especially once commits are squashed.
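For illustration: a commit-level trailer like Co-authored-by (the address below is just an example) attributes an entire commit to a co-author, which is exactly the artificial-boundary problem described above:

    # Trailer-based attribution works only at whole-commit granularity;
    # it can't say which lines within the commit were AI-generated.
    git commit -m "Refactor parser" \
      -m "Co-authored-by: Copilot <copilot@example.com>"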
> Why do we need a plugin or new tools to accomplish this?
There's currently no frictionless way to attribute AI-assisted code, especially for non-turn-based workflows like Copilot or Cursor completions. In those cases, human and machine edits are interleaved at the line level and collapse into a single author at commit time. Existing Git and blame tooling can't express that distinction. This is an experiment to complement, not replace, existing contributor workflows.
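As a rough sketch of what out-of-band attribution could look like, git notes can attach line-range metadata to a commit without changing its content (the ref name and payload format here are hypothetical):

    # Record AI-edited ranges in a dedicated notes ref.
    # "ai-attribution" and the payload format are made up for illustration.
    git notes --ref=ai-attribution add -m "src/parser.py:10-24 ai" HEAD
    git notes --ref=ai-attribution show HEAD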
PS: I asked for a resubmission and was encouraged to try again :)
Thanks! I wanted to see if I could get someone else's submission the same special treatment. I'll reach out to dang
Humans are pretty terrible at reliable, high-quality code review. The only thing worse is all the other things we've tried.
This is a good callout. AI really excels at making things that are coherent but nonsensical. It's almost as if it's a higher-order version of Chomsky's "colorless green ideas sleep furiously".
It's just disrespectful. Why would anyone want to review the output of an LLM without any further context? If you really want to help, submit the prompt and the LLM thinking tokens along with the final code. There are only nefarious reasons not to.
This would make sure the data is part of repository history (and covered by the commit SHA). Additional tooling can still be used to visualize it.
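For instance, a custom commit-message trailer (the trailer name below is hypothetical) is hashed into the commit object itself, unlike notes or sidecar files:

    # Anything in the commit message is covered by the commit SHA.
    # "AI-Edited-Lines" is an illustrative trailer name.
    git commit -m "Fix race in worker pool" \
      -m "AI-Edited-Lines: pool.go:88-114"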
> A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979
Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes or responsibility for the code being good. That falls to the human "co-author", if you want to use that phrase.
It doesn't matter how true this should be in principle, in practice there are significant slop issues on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to trust contributions from different people differently based on who they are.
> Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes
The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.
Isolating the AI is the next best thing. It's still an account that's facing consequences, even if it's anonymous. Yes, there are issues, but there's no perfect solution in a world where we can't have good things anymore.
The important part here is that reputation creates an incentive to be conscious of what you're submitting in the first place, not that it grants you some free pass from review.
There's been an unfortunate uptick in people submitting garbage they spent no time on and then whining about feedback because they trust what the AI put together more than their own skills and don't think it could be wrong.
Good luck enforcing that.
This extension solves the wrong problem and is really only useful as a kind of ideological cudgel; it literally can only create friction. Nobody important cares whether code is AI-generated; they care whether it solves problems correctly.