frontpage.

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

https://github.com/bethington/ghidra-mcp
193•xerzes•9h ago•49 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
468•deofoo•2d ago•135 comments

Show HN: Camel OpenAI Integration Patterns

https://github.com/ibek/camel-openai-patterns
2•aivi•47m ago•0 comments

Show HN: SlitherPong, a hybrid of the Snake and Pong video games

https://www.slitherpong.com/
3•AmbroseBierce•1h ago•0 comments

Show HN: Nocterm – Flutter-inspired TUI framework with hot reload (Dart)

https://nocterm.dev
3•norbert515•2h ago•1 comment

Show HN: Two-week creative lab for developers building with real-time AI video

https://daydream.live/interactive-ai-video-program
9•cmuir•2h ago•2 comments

Show HN: Mmdr – 1000x faster Mermaid rendering in pure Rust (no browser)

https://github.com/1jehuang/mermaid-rs-renderer/blob/master/README.md
2•jeremyh1•2h ago•0 comments

Show HN: Teaching AI agents to write better GraphQL

https://skills.sh/apollographql/skills
3•daleseo•2h ago•0 comments

Show HN: Instantly surface the assumptions behind a UI screenshot

https://app.usercall.co/ai-user-testing
3•junetic•3h ago•1 comment

Show HN: Crnd – Cron daemon built for scripts and AI agents

3•ysm0622•3h ago•0 comments

Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests

https://blog.rbby.dev/posts/github-ai-contribution-blame-for-pull-requests/
60•rbbydotdev•1d ago•33 comments

Show HN: Webhook Skills – Agent skills for webhook providers and best practices

https://github.com/hookdeck/webhook-skills
8•leggetter•3h ago•2 comments

Show HN: Zerobrew – Alternative to Homebrew

https://github.com/lucasgelfond/zerobrew
5•worldsavior•3h ago•3 comments

Show HN: Octosphere, a tool to decentralise scientific publishing

https://octosphere.social/
58•crimsoneer•23h ago•29 comments

Show HN: OpenShears – I built an uninstaller because OpenClaw refuses to die

https://github.com/oswarld/openshears
2•haebom•4h ago•0 comments

Show HN: Safe-now.live – Ultra-light emergency info site (<10KB)

https://safe-now.live
183•tinuviel•1d ago•93 comments

Show HN: Sandboxing untrusted code using WebAssembly

https://github.com/mavdol/capsule
75•mavdol04•1d ago•21 comments

Show HN: C discrete event SIM w stackful coroutines runs 45x faster than SimPy

https://github.com/ambonvik/cimba
61•ambonvik•1d ago•16 comments

Show HN: BPU – An embedded scheduler for stable UART pipelines

7•DenisDolya•3d ago•1 comment

Show HN: Ask your AI what your devs shipped this week

4•inferno22•1h ago•0 comments

Show HN: AI Blocker by Kiddokraft

https://kiddokraft.org/wiki?name=ai-blocker
2•Rezhe•5h ago•0 comments

Show HN: Find better round trips – TSP challenge

https://tsp-game.graphhopper.com/
3•oblonski•6h ago•0 comments

Show HN: Latex-wc – Word count and word frequency for LaTeX projects

https://www.piwheels.org/project/latex-wc/
9•sethbarrettAU•1d ago•4 comments

Show HN: PII-Shield – Log Sanitization Sidecar with JSON Integrity (Go, Entropy)

https://github.com/aragossa/pii-shield
16•aragoss•23h ago•9 comments

Show HN: Adboost – A browser extension that adds ads to every webpage

https://github.com/surprisetalk/AdBoost
123•surprisetalk•2d ago•127 comments

Show HN: Ec – a terminal Git conflict resolver inspired by IntelliJ

https://github.com/chojs23/ec
12•neozz•15h ago•0 comments

Show HN: I built "AI Wattpad" to eval LLMs on fiction

https://narrator.sh/llm-leaderboard
28•jauws•23h ago•31 comments

Show HN: OpenClaw Assistant – Replace Google Assistant with Any AI

https://github.com/yuga-hashimoto/OpenClawAssistant
3•YugaHashimoto•12h ago•0 comments

Show HN: Yutovo – visual online and desktop calculator inside a text editor

https://yutovo.com
3•denprog•9h ago•0 comments

Show HN: difi – A Git diff TUI with Neovim integration (written in Go)

https://github.com/oug-t/difi
44•oug-t•1d ago•47 comments

Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests

https://blog.rbby.dev/posts/github-ai-contribution-blame-for-pull-requests/
60•rbbydotdev•1d ago

Comments

rbbydotdev•3d ago
repo: https://github.com/rbbydotdev/refined-github-with-ai-pr
verdverm•1d ago
Wouldn't the thing to do be to give them their own account id / email so we can use standard git blame tools?
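A minimal sketch of what I mean, assuming the agent commits under its own identity (the name, email, and file path below are made up):

    git commit --author="Coding Agent <agent@example.com>" -m "Add pagination to /orders"
    git log --oneline --author="agent@example.com"
    git blame --show-email src/orders.ts

Plain git log and git blame would then separate human and agent lines with no extra tooling.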

Why do we need a plugin or new tools to accomplish this?

I don't know why this has been resubmitted and placed on the front of HN (see the 2-day-old peer comment). What feature of this post warrants special treatment?

jayd16•1d ago
That would cost a seat, I'm guessing.
verdverm•1d ago
I'm using '(human)' and '(agent)' prefixes as a poor man's alternative
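E.g., as commit-message prefixes that plain git can filter on (the messages below are made up):

    git commit -m "(agent) generate CRUD handlers for /users"
    git commit -m "(human) tighten input validation on /users"
    git log --oneline --grep="^(agent)"
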
verdverm•1d ago
How much is a solution like this going to cost you per current seat?

On one hand, I would imagine companies like GitHub will not charge for agent accounts because they want to encourage their use and see the cost recouped by token usage. On the other hand, Microslop is greedy af and struggling to sell their AI products.

Anonbrit•1d ago
Giving it its own id doesn't store all the useful metadata this tool preserves, like the model and prompt that generated the code
verdverm•1d ago
ADK does that for me in a database, which I've extended to use Dagger for complete environment and history in OCI
maartin0•1d ago
I guess because 99% of generated code will likely need significant edits, so you'd never want to commit direct "AI contributions" - you don't commit every time you take something from Stack Overflow. Likewise, I wonder if people might start adding credit comments for LLMs?
verdverm•20h ago
> I guess because 99% of generated code will likely need significant edits

What are you guessing / basing this on?

I have many commits with zero human editing. The relative split is def well away from 99% vs 1% at this point; most remaining edits for me are minor, not "significant".

nightpool•1d ago
Many posts get resubmitted if someone finds them interesting and, if it's been a few days, they generally get "second-chance" treatment. That means they'll be able to make it to the front-page based on upvotes, if they didn't make it the first time.
verdverm•1d ago
There are a couple of paths to resubmission: the auto dedup if it's close enough in time, vs a fresh post / id. There are also instances where the HN team tilts the scale a bit (typically placing it on the front page, iirc).

I was curious which path this post took; OP answered in a peer comment.

rbbydotdev•1d ago
> Wouldn't the thing to do be to give AI its own account id / email so we can use standard git blame tools?

That’s a reasonable idea and something I considered. The issue is that AI assistance is often inline and mixed with human edits within a single commit (tab completion, partial rewrites, refactors). Treating AI as a separate Git author would require artificial commit boundaries or constant context switching. That quickly becomes tedious and produces noisy or misleading history, especially once commits are squashed.

> Why do we need a plugin or new tools to accomplish this?

There's currently no frictionless way to attribute AI-assisted code, especially for non-turn-based workflows like Copilot or Cursor completions. In those cases, human and machine edits are interleaved at the line level and collapse into a single author at commit time. Existing Git and blame tooling can't express that distinction. This is an experiment to complement, not replace, existing contributor workflows.

PS: I asked for a resubmission and was encouraged to try again :)

verdverm•1d ago
> PS: I asked for a resubmission and was encouraged to try again :)

Thanks! I wanted to see if I could get someone else's submission the special treatment. I'll reach out to dang

weaksauce•1d ago
I think the special feature is that it tracks what AI is doing on a per-line basis within a blended commit, vs. at the whole-commit level. Not sure of the utility of it.
nilespotter•1d ago
Why not just look at the code and see if it's good or not?
Anonbrit•1d ago
Because AI is really good at generating code that looks good on its own, on both first and second glance. It's only when you notice the cumulative effects of layers of such PRs that the cracks really show.

Humans are pretty terrible at reliable, high-quality code review. The only thing worse is all the other things we've tried.

rbbydotdev•1d ago
> Because AI is really good at generating code that looks good on its own, on both first and second glance.

This is a good callout. AI really excels at making things that are coherent but nonsensical. It's almost as if it's a higher-order version of Chomsky's "colorless green ideas sleep furiously".

monsieurbanana•1d ago
Because they can produce orders of magnitude more code than you can review. And personally, I don't want to review _any_ submitted AI code if I don't have a guarantee that the person who prompted it has reviewed it first.

It's just disrespectful. Why would anyone want to review the output of an LLM without any more context? If you really want to help, submit the prompt and the LLM thinking tokens along with the final code. There are only nefarious reasons not to.

shayief•1d ago
It seems like something like this should be added to the commit object/message itself, instead of git notes. Maybe as an addition to the Co-Authored-By trailer.

This would make sure this data is part of the repository history (and the commit SHA). Additional tooling can still be used to visualize it.
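For example, the trailer is just an extra paragraph in the commit message, and git's pretty formats can already surface it (the agent name and email below are placeholders):

    git commit -m "Fix race condition in job queue" \
               -m "Co-authored-by: Coding Agent <agent@example.com>"
    git log --format="%h %(trailers:key=Co-authored-by)"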

dec0dedab0de•1d ago
I think this is what aider/cecli does
Kerrick•22h ago
I've added it to my AGENTS.md for Antigravity too.
operator-name•23h ago
I'm not sold on the idea - for most projects it makes sense that the author of the PR should ultimately have ownership of the code that they're submitting. It doesn't matter if that's AI-generated, generated with the help of other humans, or typed up by a monkey.

> A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979

Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes or the responsibility for the code being good. That falls to the human "co-author", if you want to use that phrase.

rbbydotdev•23h ago
I agree that accountability should always rest with the human submitting the PR. This isn't for deflecting ownership to AI. The goal is transparency, making it visible how code was produced, not who is accountable for it. These signals can help teams align on expectations, review depth, and risk tolerance, especially for beta or proof-of-concept code that may be rewritten later. It can also serve as a reminder to the author about which parts of the code were added with less scrutiny, without changing who ultimately owns the outcome.
ottah•15h ago
I doubt anyone is going to really use it for that purpose. What's more likely is people nitpicking or harassing PR authors over any use of AI.
add-sub-mul-div•22h ago
> It doesn't matter if that's AI-generated, generated with the help of other humans, or typed up by a monkey.

However true this should be in principle, in practice there are significant slop issues on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to trust contributions from different people differently based on who they are.

> Splitting out AI into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes

The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

Isolating the AI is the next best thing. It's still an account that's facing consequences, even if it's anonymous. Yes there are issues but there's no perfect solution in a world where we can't have good things anymore.

ottah•15h ago
Most code was garbage before AI, and most engineers made significant mistakes. Very little code is not future tech debt. Review and testing have always been the only defense; the reputation or skill of the committer is not.
yunwal•14h ago
The issue is the asymmetry between the time it takes to generate convincing AI slop and the time it takes to review it. The convincing part was still somewhat difficult when slop had to be written by hand.
zdragnar•12h ago
> The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

The important part here is that reputation creates an incentive to be conscious of what you're submitting in the first place, not that it grants you some free pass from review.

There's been an unfortunate uptick in people submitting garbage they spent no time on and then whining about feedback because they trust what the AI put together more than their own skills and don't think it could be wrong.

Alxc1•22h ago
I believe GitLens has a version of this feature that I tried. To others' points, seeing the person who actually committed it was more helpful.
ottah•15h ago
Why!? What possible benefit is there to stuffing my git commit history with this noise?
rbbydotdev•6h ago
It’s not in the git commit history; that’s the cool part! Git-ai stores it in git notes, which are easy to remove later if you don’t like it ;)
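E.g., assuming the notes live under a ref like refs/notes/ai (the actual ref name may differ):

    git notes --ref=ai show HEAD       # view the attribution note on a commit
    git notes --ref=ai remove HEAD     # drop the note for one commit
    git update-ref -d refs/notes/ai    # drop the whole notes ref locally
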
Our_Benefactors•13h ago
> Projects like Zig may never allow AI contributions

Good luck enforcing that.

This extension is solving the wrong problem and is really only useful as some kind of ideological cudgel; it can only create friction. Nobody important cares if code is AI-generated; they care whether it solves problems correctly.

ahmetozer•1h ago
You can use sandal with the lw and tmp args: https://github.com/ahmetozer/sandal. I hope it will help.