
GitHub Browser Plugin for AI Contribution Blame in Pull Requests

https://blog.rbby.dev/posts/github-ai-contribution-blame-for-pull-requests/
24•rbbydotdev•2h ago

Comments

rbbydotdev•2d ago
repo: https://github.com/rbbydotdev/refined-github-with-ai-pr
verdverm•1h ago
Wouldn't the thing to do be to give them their own account id / email so we can use standard git blame tools?

Why do we need a plugin or new tools to accomplish this?

Don't know why this has been resubmitted and placed on the front of HN. (See the 2-day-old peer comment.) What's the feature of this post that warrants special treatment?

jayd16•1h ago
That would cost a seat, I'm guessing.
verdverm•1h ago
I'm using '(human)' and '(agent)' prefixes as a poor man's alternative
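A rough sketch of this prefix convention (the repo contents and commit messages below are invented for illustration); the point is that plain `git log --grep` can then filter by authorship class:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"

# Two commits, tagged by the convention: one human, one agent-driven.
echo "a" > f.txt; git add f.txt
git commit -q -m "(human) add initial feature"
echo "b" >> f.txt; git add f.txt
git commit -q -m "(agent) refactor per prompt"

# Standard tooling can now slice history by authorship class.
# In --grep's basic-regex syntax, the parentheses match literally.
git log --oneline --grep='^(agent)'
```

The obvious limitation, as the thread notes, is that this only works at commit granularity, not for AI and human edits mixed inside one commit.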
verdverm•45m ago
how much is a solution like this going to cost you per seat?

On one hand, I would imagine companies like GitHub will not charge for agent accounts, because they want to encourage their use and see the cost recouped by token usage. On the other hand, Microslop is greedy af and struggling to sell their AI products.

Anonbrit•1h ago
Giving it its own id doesn't store all the useful metadata this tool preserves, like the model and prompt that generated the code
verdverm•1h ago
ADK does that for me in a database, which I've extended to use Dagger for complete environment and history in OCI
maartin0•1h ago
I guess because 99% of generated code will likely need significant edits, so you'd never want to commit raw "AI contributions" directly. You don't commit every time you take something from StackOverflow; likewise, I wonder if people might start adding credit comments for LLMs?
nightpool•1h ago
Many posts get resubmitted if someone finds them interesting, and if it's been a few days, they generally get "second-chance" treatment. That means they'll be able to make it to the front page based on upvotes if they didn't make it the first time.
verdverm•50m ago
There are a couple of paths to resubmission: the auto-dedup if it's close enough in time, vs. a fresh post/id. There are also instances where the HN team tilts the scale a bit (typically by placing it on the front page, iirc)

I was curious which path this post took, OP answered in a peer comment

rbbydotdev•59m ago
> Wouldn't the thing to do be to give AI its own account id / email so we can use standard git blame tools?

That’s a reasonable idea and something I considered. The issue is that AI assistance is often inline and mixed with human edits within a single commit (tab completion, partial rewrites, refactors). Treating AI as a separate Git author would require artificial commit boundaries or constant context switching. That quickly becomes tedious and produces noisy or misleading history, especially once commits are squashed.

> Why do we need a plugin or new tools to accomplish this?

There’s currently no frictionless way to attribute AI-assisted code, especially for non-turn-based workflows like Copilot or Cursor completions. In those cases, human and machine edits are interleaved at the line level and collapse into a single author at commit time. Existing Git and blame tooling can’t express that distinction. This is an experiment to complement, not replace, existing contributor workflows.

PS: I asked for a resubmission and was encouraged to try again :)
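A later comment in the thread mentions the tool stores attribution in git notes rather than in the commit itself, which sidesteps the artificial-commit-boundary problem described above. As an illustration only (the notes ref name, JSON shape, and model name here are made up, not the plugin's actual format), per-commit metadata can be attached out-of-band like this:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"

echo "code" > main.py; git add main.py
git commit -q -m "add feature (mixed human + agent edits)"

# Attach line-level attribution under a dedicated notes ref, so it
# never touches the commit message or the blame-visible history.
git notes --ref=ai-blame add -m '{"lines": {"1": {"author": "agent", "model": "example-model"}}}'

# Read it back; a browser plugin could render this next to git blame.
git notes --ref=ai-blame show HEAD
```

The trade-off, raised elsewhere in the thread, is that notes live outside the commit SHA and are not pushed or preserved by default.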

verdverm•51m ago
> PS: I asked for a resubmission and was encouraged to try again :)

Thanks! I wanted to see if I could get someone else's submission special treatment. I'll reach out to dang

weaksauce•36m ago
I think the special feature is that it tracks, on a per-line basis in a blended commit, what AI is doing, vs. whole commits. Not sure of the utility of it though.
nilespotter•1h ago
Why not just look at the code and see if it's good or not?
Anonbrit•1h ago
Because AI is really good at generating code that looks good on its own, on both first and second glance. It's only when you notice the cumulative effects of layers of such PRs that the cracks really show.

Humans are pretty terrible at reliable, high-quality code review. The only thing worse is all the other things we've tried.

rbbydotdev•55m ago
> Because AI is really good at generating code that looks good on its own, on both first and second glance.

This is a good callout. AI really excels at making things that are coherent but nonsensical. It's almost as if it's a higher-order version of Chomsky's "colorless green ideas sleep furiously"

monsieurbanana•55m ago
Because they can produce orders of magnitude more code than you can review. And personally, I don't want to review _any_ submitted AI code if I don't have a guarantee that the person who prompted it has reviewed it first.

It's just disrespectful. Why would anyone want to review the output of an LLM without any more context? If you really want to help, submit the prompt and the LLM's thinking tokens along with the final code. There are only nefarious reasons not to.

shayief•1h ago
It seems like something like this should be added to the commit object/message itself, instead of git notes, maybe as an addition to the Co-Authored-By trailer.

This would make sure the data is part of the repository history (and the commit SHA). Additional tooling can still be used to visualize it.
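The Co-Authored-By suggestion can be sketched with plain git; the agent name and email below are hypothetical placeholders:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"

echo "x" > f; git add f
# The second -m becomes the commit body, where git recognizes
# trailer lines. Unlike git notes, this travels with the SHA.
git commit -q -m "add f" \
  -m "Co-Authored-By: Example Agent <agent@example.com>"

# Extract just the trailer value from the commit object.
git log -1 --format='%(trailers:key=Co-Authored-By,valueonly)'
```

This records attribution at commit granularity only; the thread's counterpoint is that it cannot express line-level mixtures of human and AI edits within one commit.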

dec0dedab0de•56m ago
I think this is what aider/cecli does
operator-name•5m ago
I'm not sold on the idea. For most projects it makes sense that the author of the PR should ultimately have ownership of the code they're submitting. It doesn't matter if it's AI generated, generated with the help of other humans, or typed up by a monkey.

> A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979

Splitting AI out into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes or the responsibility for the code being good. That falls to the human "co-author", if you want to use that phrase.

Qwen3-Coder-Next

https://qwen.ai/blog?id=qwen3-coder-next
106•danielhanchen•50m ago•28 comments

Agent Skills

https://agentskills.io/home
163•mooreds•2h ago•124 comments

What's up with all those equals signs anyway?

https://lars.ingebrigtsen.no/2026/02/02/whats-up-with-all-those-equals-signs-anyway/
403•todsacerdoti•7h ago•125 comments

Heritability of intrinsic human life span is about 50%

https://www.science.org/doi/10.1126/science.adz1187
57•XzetaU8•2d ago•28 comments

Launch HN: Modelence (YC S25) – App Builder with TypeScript / MongoDB Framework

9•eduardpi•48m ago•1 comment

Bunny Database

https://bunny.net/blog/meet-bunny-database-the-sql-service-that-just-works/
94•dabinat•4h ago•38 comments

GitHub Browser Plugin for AI Contribution Blame in Pull Requests

https://blog.rbby.dev/posts/github-ai-contribution-blame-for-pull-requests/
24•rbbydotdev•2h ago•20 comments

Show HN: difi – A Git diff TUI with Neovim integration (written in Go)

https://github.com/oug-t/difi
28•oug-t•3h ago•23 comments

Show HN: Sandboxing untrusted code using WebAssembly

https://github.com/mavdol/capsule
22•mavdol04•2h ago•10 comments

The Everdeck: A Universal Card System (2019)

https://thewrongtools.wordpress.com/2019/10/10/the-everdeck/
17•surprisetalk•6d ago•5 comments

Floppinux – An Embedded Linux on a Single Floppy, 2025 Edition

https://krzysztofjankowski.com/floppinux/floppinux-2025.html
206•GalaxySnail•12h ago•133 comments

Show HN: Safe-now.live – Ultra-light emergency info site (<10KB)

https://safe-now.live
123•tinuviel•7h ago•53 comments

Data Brokers Can Fuel Violence Against Public Servants

https://www.wired.com/story/how-data-brokers-can-fuel-violence-against-public-servants/
29•achristmascarl•1h ago•7 comments

Banning lead in gas worked. The proof is in our hair

https://attheu.utah.edu/health-medicine/banning-lead-in-gas-worked-the-proof-is-in-our-hair/
219•geox•14h ago•147 comments

The Codex App

https://openai.com/index/introducing-the-codex-app/
758•meetpateltech•22h ago•569 comments

Emerge Career (YC S22) is hiring a product designer

https://www.ycombinator.com/companies/emerge-career/jobs/omqT34S-founding-product-designer
1•gabesaruhashi•4h ago

Anki ownership transferred to AnkiHub

https://forums.ankiweb.net/t/ankis-growing-up/68610
505•trms•20h ago•199 comments

Show HN: Inverting Agent Model (App as Clients, Chat as Server and Reflection)

https://github.com/RAIL-Suite/RAIL
16•ddddazed•2h ago•2 comments

Todd C. Miller – Sudo maintainer for over 30 years

https://www.millert.dev/
559•wodniok•23h ago•290 comments

Archive.today is directing a DDoS attack against my blog?

https://gyrovague.com/2026/02/01/archive-today-is-directing-a-ddos-attack-against-my-blog/
280•gyrovague-com•2d ago•119 comments

New York Wants to Ctrl+Alt+Delete Your 3D Printer

https://blog.adafruit.com/2026/02/03/new-york-wants-to-ctrlaltdelete-your-3d-printer/
88•ptorrone•1h ago•95 comments

How does misalignment scale with model intelligence and task complexity?

https://alignment.anthropic.com/2026/hot-mess-of-ai/
225•salkahfi•16h ago•70 comments

Anthropic is Down

https://updog.ai/status/anthropic
104•ersiees•1h ago•96 comments

Ask HN: Is there anyone here who still uses slide rules?

82•blenderob•2h ago•85 comments

A WhatsApp bug lets malicious media files spread through group chats

https://www.malwarebytes.com/blog/news/2026/01/a-whatsapp-bug-lets-malicious-media-files-spread-t...
19•iamnothere•2h ago•2 comments

LNAI – Define AI coding tool configs once, sync to Claude, Cursor, Codex, etc.

https://github.com/KrystianJonca/lnai
55•iamkrystian17•8h ago•26 comments

GitHub experiences various partial outages/degradations

https://www.githubstatus.com?todayis=2026-02-02
247•bhouston•19h ago•95 comments

See how many words you have written in Hacker News comments

https://serjaimelannister.github.io/hn-words/
116•Imustaskforhelp•3d ago•195 comments

Ask HN: Who is hiring? (February 2026)

296•whoishiring•1d ago•376 comments

xAI joins SpaceX

https://www.spacex.com/updates#xai-joins-spacex
837•g-mork•19h ago•1854 comments