We don't need more contributors who aren't programmers to contribute code

https://discourse.llvm.org/t/rfc-llvm-ai-tool-policy-human-in-the-loop/89159
86•pertymcpert•2h ago

Comments

whatever1•1h ago
The number of code writers increased exponentially overnight. The number of reviewers is constant (slightly reduced, due to layoffs).
rvz•1h ago
And so did the slop.
EdwardDiego•46m ago
Good policy.
hsuduebc2•29m ago
Contributors should never find themselves in the position of saying “I don’t know, an LLM did it”

I would never have thought that someone could actually write this.

clayhacks•19m ago
I’ve seen a bunch of my colleagues say this when I ask about the code they’ve submitted for review. Incredibly frustrating, but likely to become more common
jfreds•11m ago
I get this at work, frequently.

“Oh, cursor wrote that.”

If it made it into your pull request, YOU wrote it, and it'll be part of your performance review. Cursor doesn't have a performance review. Simple as

hsuduebc2•5m ago
Yea, this is just lazy. If you don't know what it does and how, then you shouldn't submit it at all.
scuff3d•20m ago
It's depressing this has to be spelled out. You'd think people would be smart enough not to harass maintainers with shit they don't understand.
ActionHank•6m ago
People who are smart enough to think that far ahead are also smart enough not to fall into the "AI can do all jobs perfectly all the time and just needs my divine guidance" trap.
doctorpangloss•3m ago
On the flip side, the inability of LLVM to take contributions - whatever that means, I don't know what the best system is - leads to all sorts of problems in the ecosystem: slowdowns in Triton features, problems with Rust, etc.
29athrowaway•20m ago
Then the vibe coder will ask an LLM to answer questions about the contribution.
jfreds•20m ago
> automated review tools that publish comments without human review are not allowed

This seems like a curious choice. At my company we have both Gemini and Cursor review agents available (I'm not sure which model is under the hood of the latter). Both frequently raise legitimate points. I'm sure they're abusable, I just haven't seen it

bandrami•7m ago
An LLM is a plausibility engine. That can't be the final step of any workflow.
zeroonetwothree•19m ago
I only wish my workplace had the same policy. I’m so tired of reviewing slop where the submitter has no idea what it’s even for.
vjay15•12m ago
It is insane that this is happening in one of the most essential pieces of software. This is a much-needed step to stem the increase in slop contributions. It's more work for the maintainers to review all this mess.
Negitivefrags•11m ago
At my company I just tell people “You have to stand behind your work”

And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.

I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.

bitwize•9m ago
The smartest and most sensible response.

I'm dreading the day the hammer falls and there will be AI-use metrics implemented for all developers at my job.

locusofself•7m ago
It's already happened at some very big tech companies
darth_avocado•3m ago
> At my company I just tell people “You have to stand behind your work”

Since when has that not been the bare minimum? Even before AI existed, and even if you didn't work in programming at all, you sort of had to do that as a bare minimum. Even if you use a toaster and your company guidelines say to toast every sandwich for 20 seconds, if following every step as per training results in a lump of charcoal instead of bread, you can't serve it up to the customer. At the end of the day, you made the sandwich, so you're responsible for making it correctly.

looneysquash•11m ago
Looks like a good policy to me.

One thing I didn't like was the copy/paste response for violations.

It makes sense to have one. It's just that the text they propose uses what I'd call insider terms, and also terms that sort of put down the contributor.

And while that might be appropriate at the next level of escalation, the first level stock text should be easier for the outside contributor to understand, and should better explain the next steps for the contributor to take.

mmsc•5m ago
This AI usage is like a turbo-charger for the Dunning–Kruger effect, and we will see these policies crop up more and more, as technical people become more and more harassed and burnt out by AI slop.

I also recently wrote a similar policy[0] for my fork of a codebase[1]. I had to write it because the original developer took the AI pill, started committing totally broken code that was full of bugs, and doubled down when asked about it[2].

On an analysis level, in a recent post[3], I commented that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind."

But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, now inflated by a sense of effortless capability. Before AI, developers were (sometimes) forced to pause, investigate, and understand; now it's easier and more natural to simply assume they grasp far more than they actually do.

[0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy

[1]: https://github.com/MegaManSec/gixyng

[2]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#qu...

[3]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#fi...

Study links America's favorite cooking oil to obesity

https://medicalxpress.com/news/2025-11-links-america-favorite-cooking-oil.html
1•PaulHoule•3m ago•0 comments

Show HN: Weekly newsletter with tactical frameworks from 50 $1M+ founders

https://www.doanything.com/preview/uXalImXcFZk
1•AlexMorganFndr•3m ago•0 comments

How musicals use motifs to tell stories

https://pudding.cool/2025/12/motifs/
1•gmays•10m ago•0 comments

Ask HN: What to do when Claude Code is writing code?

1•brihati•10m ago•1 comments

Show HN: Schengen Calculator – Avoid €5K Fines for Overstaying EU

https://owlfacts.com
1•sunrays•11m ago•1 comments

A personal recap of 2025: on running, LLMs, family, coffee, work

https://dimitarmisev.com/blog/2025-recap
1•misev•16m ago•0 comments

I Built a Module System for a Language That Doesn't Have One

https://www.claudianadalin.com/blog/building-pinecone
1•xbmcuser•17m ago•0 comments

Show HN: Magic CSV – Transform CSVs with plain English, no formulas

https://magiccsv.app/
1•bored-developer•19m ago•0 comments

The Lore of the World: Field Notes for a Child's Codex

https://www.theintrinsicperspective.com/p/the-lore-of-the-world
3•Jun8•25m ago•0 comments

Show HN: Agape – human-centered CLI task manager

https://github.com/josequiceno2000/agape
2•josequiceno2000•25m ago•0 comments

Show HN: PDU – Open-source PostgreSQL data rescue tool

https://github.com/wublabdubdub/PDU-PostgreSQLDataUnloader
2•zhangchenPDU•25m ago•1 comments

Build Your Own ML Framework

https://mlsysbook.ai/tinytorch/intro.html
2•auraham•25m ago•0 comments

Observations on safety friction and misclassification in conversational AI

2•ayumi-observer•26m ago•0 comments

A Woman on a NY Subway Just Set the Tone for Next Year

https://www.honest-broker.com/p/a-woman-on-a-ny-subway-just-set-the
4•thomassmith65•26m ago•0 comments

A Woman on a NY Subway Just Set the Tone for Next Year

https://honest-broker.com/p/a-woman-on-a-ny-subway-just-set-the
1•thomassmith65•28m ago•2 comments

Advice for generalists who want to join startups

https://twitter.com/benln/status/2006057848430604705
2•gmays•35m ago•0 comments

Languish – Programming Language Trends

https://tjpalmer.github.io/languish/
2•nickswalker•35m ago•0 comments

What to Expect from the AI Engineering World in 2026

https://sarthakai.substack.com/p/what-to-expect-from-the-ai-engineering
2•sarthakrastogi•42m ago•0 comments

Show HN: LLMRouter – first LLM routing library with 300 stars in 24h

https://github.com/ulab-uiuc/LLMRouter
3•tao2024•46m ago•1 comments

Show HN: real-time usage monitor for Claude – see cost without leaving workflow

https://github.com/SrivathsanSivakumar/simple-usage-monitor
3•supersonic339•53m ago•1 comments

Meta is sued by US Virgin Islands over ads for scams, dangers to children

https://www.reuters.com/legal/litigation/meta-is-sued-by-us-virgin-islands-over-ads-scams-dangers...
7•1vuio0pswjnm7•54m ago•0 comments

Poland urges Brussels to probe TikTok over AI-generated content

https://www.reuters.com/world/china/poland-urges-brussels-probe-tiktok-over-ai-generated-content-...
4•1vuio0pswjnm7•56m ago•1 comments

MongoBleed: Unauthenticated memory-read vulnerability in MongoDB

https://www.bitsight.com/blog/critical-vulnerability-alert-cve-2025-14847-mongodb-mongobleed
1•epicprogrammer•59m ago•1 comments

Nvelox: Lightweight, event-driven load balancer built for high-concurrency

https://github.com/nvelox/nvelox
2•thunderbong•1h ago•0 comments

Creating my own blog from scratch using zola

https://vjay15.github.io/blog/zola-tutorial/
2•vjay15•1h ago•1 comments

How can I detect that the system is running low on memory?

https://devblogs.microsoft.com/oldnewthing/20251229-00/?p=111927
4•ibobev•1h ago•0 comments

Restoring My Childhood Family Computer Part 4: Emulation

https://www.gridbugs.org/restoring-my-childhood-family-computer-part-4/
2•ibobev•1h ago•0 comments

Ten things we forgot to be true

https://kgrep.com/about
6•0xlogk•1h ago•2 comments

Selective Applicative Functors

https://blog.veritates.love/selective_applicatives_theoretical_basis.html
1•ibobev•1h ago•0 comments

OpenAI Is Paying Employees More Than Any Major Tech Startup in History

https://www.wsj.com/tech/ai/openai-is-paying-employees-more-than-any-major-tech-startup-in-histor...
4•JumpCrisscross•1h ago•1 comments