
Your vibe coded slop PR is not welcome

https://samsaffron.com/archive/2025/10/27/your-vibe-coded-slop-pr-is-not-welcome
154•keybits•2h ago

Comments

colesantiago•2h ago
I wouldn't call it "vibe coded slop". The models are getting way better, and I can work with my engineers a lot faster.

I am the founder and a product person so it helps in reducing the number of needed engineers at my business. We are currently doing $2.5M ARR and the engineers aren't complaining, in fact it is the opposite, they are actually more productive.

We still prioritize architecture planning, testing and having a CI, but code is getting less and less important in our team, so we don't need many engineers.

pards•2h ago
> code is getting less and less important in our team, so we don't need many engineers.

That's a bit reductive. Programmers write code; engineers build systems.

I'd argue that you still need engineers for architecture, system design, protocol design, API design, tech stack evaluation & selection, rollout strategies, etc, and most of this has to be unambiguously documented in a format LLMs can understand.

While I agree that the value of code has decreased now that we can generate and regenerate code from specs, we still need a substantial number of experienced engineers to curate all the specs and inputs that we feed into LLMs.

HPsquared•1h ago
Maybe the code itself is less important now, relative to the specification.
didericis•1h ago
> we can generate and regenerate code from specs

We can (unreliably) write more code in natural english now. At its core it’s the same thing: detailed instructions telling the computer what it should do.

oompydoompy74•2h ago
Did you read the full article?
colesantiago•2h ago
Of course I did, however:

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

https://news.ycombinator.com/newsguidelines.html

lawn•1h ago
> so it helps in reducing the number of needed engineers at my business

> the engineers aren't complaining

You're missing a piece of the puzzle here, Mr business person.

colesantiago•1h ago
I mean, our MRR and ARR are growing, so we must be doing something right.
sgarland•49m ago
WeWork thought that as well.
wycy•1h ago
> the engineers aren't complaining, in fact it is the opposite, they are actually more productive.

More productive isn't the opposite of complaining.

colesantiago•1h ago
I don't hear any either way.
blitzar•1h ago
If an engineer complains in the woods and nobody is around to hear them, did they even complain at all?
theultdev•1h ago
> reducing the number of needed engineers at my business

> code is getting less and less important in our team

> the engineers aren't complaining

Lays off engineers in favor of AI trained on other engineers' code, then says code is less important and the engineers aren't complaining.

colesantiago•1h ago
Um, yes?

They can focus on other things that are more impactful in the business rather than just slinging code all day, they can actually look at design and the product!

Maximum headcount for engineers is around 7, no more than that now. I used to have 20, but with AI we don't need that many for our size.

theultdev•1h ago
Yeah I'm sure they aren't complaining because you'll just lay them off like the others.

I don't see how you could think 7 engineers would love the workload of 20 engineers, extra tooling or not.

Have fun with the tech debt in a few years.

BigTTYGothGF•52m ago
> Maximum headcount for engineers is around 7, no more than that now. I used to have 20,

If I survived having 65% of my colleagues laid off you'd better believe I wouldn't complain in public.

hansmayer•1h ago
> and a product person

Tells me all I need to know about your ability for sound judgement on technical topics right there.

sangeeth96•15m ago
What does spend on AI/LLM services look like per person? Do you track any dev/AI metrics related to usage across the company?
Bengalilol•1h ago
Shouldn't there be guidelines for open source projects where it is clearly stipulated that code submitted for review must follow the project's code format and conventions?
c0wb0yc0d3r•1h ago
This is the thought I always have whenever I see coding standards mentioned. Not only should there be standards, they should be enforced by tooling.

Now that being said, a person should feel free to do what they want with their own code. It's somewhat tough to justify the work of setting up that infrastructure on small projects, but AI PRs aren't likely a big issue for small projects anyway.
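The kind of convention check being described can be a few lines of tooling. As a toy sketch (the specific rule and function name are invented for illustration, not any project's actual check):

```python
import re

def style_violations(text: str) -> list[int]:
    """Return line numbers containing trailing whitespace -- the kind of
    convention best enforced mechanically rather than by human reviewers."""
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if re.search(r"[ \t]+$", line)]
```

Wired into CI, a script like this fails the build before a reviewer ever has to comment on formatting.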

portaouflop•1h ago
In a perfect world people would read and understand contribution guidelines before opening a PR or issue.

Alas…

deadbunny•55m ago
As if people read guidelines. Sure they're good to have so you can point to them when people violate them but people (in general) will not by default read them before contributing.
kasey_junk•46m ago
I’ve found LLM coding agents to be quite good at writing linters…
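Custom lint rules really are small enough that generating them is plausible. A toy example of the genre, using Python's `ast` module (the rule itself, flagging bare `print()` calls, is chosen arbitrarily for illustration):

```python
import ast

def find_bare_prints(source: str) -> list[int]:
    """Toy lint rule: report line numbers of bare print() calls."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            lines.append(node.lineno)
    return sorted(lines)
```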
isaacremuant•38m ago
Code format and conventions are not the problem. It's the complexity of the change without testing, thinking, or otherwise having ownership of your PR.

Some people will absolutely just run something, let the AI work like a wizard and push it in hopes of getting an "open source contribution".

Contributors need to understand due diligence and reduce the overhead on maintainers, so that maintainers aren't reviewing things before it's really needed.

It's a hard balance to strike, because you do want to make it easy on new contributors, but this is a great conversation to have.

flohofwoe•26m ago
> that code submitted for review must follow the project's code format and conventions

...that's just scratching the surface.

The problem is that LLMs make mistakes that no single human would make, and coding conventions should anyway never be the focus of a code review and should usually be enforced by tooling.

E.g. when reading/reviewing other people's code you tune into their brain and thought process - after reading a few lines of (non-trivial) code you know subconsciously what 'programming character' this person is and what type of problems to expect and look for.

With LLM generated code it's like trying to tune into a thousand brains at the same time, since the code is a mishmash of what a thousand people have written and published on the internet. Reading a person's thought process via reading their code doesn't work anymore, because there is no coherent thought process.

Personally I'm very hesitant to merge PRs into my open source projects that are more than small changes of a couple dozen lines at most, unless I know and trust the contributor to not fuck things up. E.g. for the PRs I'm accepting I don't really care if they are vibe-coded or not, because the complexity for accepted PRs is so low that the difference shouldn't matter much.

skydhash•17m ago
Also, there are two main modes of reviewing. If you're in an org, everyone is responsible for their own code, so review is mostly for being aware of stuff and helping catch mistakes. In an OSS project, everything is under your responsibility, and there's a need to vet code closely. LGTM is not an option.
softwaredoug•1h ago
Anyone else feel like we're cresting the LLM coding hype curve?

Like a recognition that there's value there, but we're passing the frothing-at-the-mouth stage of replacing all software engineers?

alwa•1h ago
It feels that way to me, too—starting to feel closer to maturity. Like Mr. Saffron here, saying “go ham with the AI for prototyping, just communicate that as a demo/branch/video instead of a PR.”

It feels like people and projects are moving from a pure “get that slop out of here” attitude toward more nuance, more confidence articulating how to integrate the valuable stuff while excluding the lazy stuff.

deepsquirrelnet•32m ago
I think that happened when GPT-5 was released and pierced OpenAI's veil. While not a bad model, we found out exactly what Mr. Altman's words are worth.
jandrese•28m ago
When people talk about the “AI bubble popping” this is what they mean. It is clear that AI will remain useful, but the “singularity is nigh” hype is faltering and the company valuations based on perpetual exponential improvement are just not realistic. Worse, the marginal improvements are coming at ever higher resource requirements with each generation, which puts a soft cap on how good an AI can be and still be economical to run.
dekoidal•15m ago
Well, when MS gives OpenAI free use of their servers and OpenAI calls it a $10 billion investment, then they use up their tokens and MS books $10 billion in revenue, I think so, yes.
jermaustin1•13m ago
I've been skeptical about LLMs being able to replace humans in their current state (which has gotten marginally better in the last 18 months), but let us not forget that GPT-3.5 (the first truly useful LLM) was only 3 years ago. We aren't even 10 years out from the initial papers about GPTs.
javier123454321•8m ago
I was extremely skeptical at the beginning, and therefore critical by default of what was possible. Despite all that, the latest iterations of CLI agents, which attach to LSPs and scan codebase context, have been surprising me in a positive direction. I've given them tasks that require understanding the project structure, and they've been able to do them. So my trajectory has been from skeptic to big proponent of their use, of course with the caveat that at the end of the day it is my code that will be pushed to prod. I never went through the trough of disillusionment, but I am arriving at productivity and finding it great.
corytheboyd•7m ago
Maybe, maybe not, it’s hard to tell from articles like this from OSS projects what is generally going on, especially with corporate work. There is no such rhetoric at $job, but also, the massive AI investment seemingly has yet to shift the needle. If it doesn’t they’ll likely fire a bunch of people again and continue.
catigula•3m ago
It's been less than a year and agents have gone from patently useless to very useful if used well.
Toby1VC•1h ago
Nice jewish word mostly meant to mock. Why would I care what a plugin that I don't even see in use has to say to my face (since I had to read this with all the interpretation potential and receptiveness available). The same kind of inserted judgment that lingers similar to "Yes, I will judge you if you use AI".
softskunk•1h ago
There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.
Toby1VC•8m ago
>There’s nothing wrong with judgment. Judging someone’s character based on whether they use generative “AI” is a valid practice. You may not like being judged, but that’s another matter entirely.

You and I know that using AI is a metric to consider when judging ability and quality.

The difference is that it's not judgment but a broadcast, an announcement.

In this case a snotty one from Discourse.

I mention that it lingers because I think that is a real psychological effect that happens.

Small announcements like this carry over into the future and flood any evaluation of yourself, which can be described as torture and sabotage, since it affects the decisions you make, sometimes destroying things.

mattlondon•56m ago
Which word? Slop? I think it is from medieval old English if that is the word you are referring to.
BigTTYGothGF•54m ago
> Nice jewish word

"Slop" doesn't seem to be Yiddish: https://www.etymonline.com/word/slop, and even if it was, so what?

darkwater•1h ago
The title doesn't do justice to the content.

I really liked the paragraph about LLMs being "alien intelligence"

   > Many engineers I know fall into 2 camps, either the camp that find the new class of LLMs intelligent, groundbreaking and shockingly good. In the other camp are engineers that think of all LLM generated content as “the emperor’s new clothes”, the code they generate is “naked”, fundamentally flawed and poison.

   I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.

   Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.
It's an analogy I find compelling. The way they produce code and the way you have to interact with them really does feel "alien", and when you start humanizing them you get emotional when interacting with them, and that's not right. I mean, I got emotional and frustrated even when good old deterministic programs misbehaved and there was some bug to find and squash or work around, but LLM interactions can take that to a whole new level. So we need to remember they are "alien".
andai•1h ago
Some movements expected alien intelligence to arrive in the early 2020s. They might have been on the mark after all ;)
wat10000•54m ago
I’m reminded of Dijkstra: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

These new submarines are a lot closer to human swimming than the old ones were, but they’re still very different.

keiferski•3m ago
This is why at a fundamental level, the concept of AGI doesn't make a lot of sense. You can't measure machine intelligence by comparing it to a human's. That doesn't mean machines can't be intelligent...but rather that the measuring stick cannot be an abstracted human being.
jamesbelchamber•1h ago
> “I am closing this but this is interesting, head over to our forum/issues to discuss”

I really like the way Discourse uses "levels" to slowly open up features as new people interact with the community, and I wonder if GitHub could build in a way of allowing people to only be able to open PRs after a certain amount of interaction, too (for example, you can only raise a large PR if you have spent enough time raising small PRs).

This could of course be abused and/or lead to unintended restrictions (e.g. a small change in lots of places), but that's also true of Discourse and it seems to work pretty well regardless.
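The trust-level gate being imagined could be sketched as a small policy function. The thresholds and names here are hypothetical, not anything GitHub or Discourse actually implements:

```python
def may_open_pr(merged_small_prs: int, changed_lines: int,
                small_limit: int = 50, track_record_needed: int = 3) -> bool:
    """Hypothetical trust-level gate: anyone may open a small PR,
    but large PRs require a history of merged small ones."""
    if changed_lines <= small_limit:
        return True
    return merged_small_prs >= track_record_needed
```

The "small change in lots of places" loophole mentioned above would need extra handling, e.g. counting files touched as well as lines changed.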

mattlondon•1h ago
The way we do it is to use AI to review the PR before a human reviewer sees it. Obvious errors, non-consistent patterns, weirdness etc is flagged before it goes any further. "Vibe coded" slop usually gets caught, but "vibe engineered" surgical changes that adhere to common patterns and standards and have tests etc get to be seen by a real live human for their normal review.

It's not rocket science.

franktankbank•50m ago
Do you work at a profitable company?
andai•59m ago
>That said, there is a trend among many developers of banning AI. Some go so far as to say “AI not welcome here” find another project.

>This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.

Isn't that exactly the point? Doesn't this achieve exactly what the whole article is arguing for?

A hard "No AI" rule filters out all the slop, and all the actually good stuff (which may or may not have been made with AI) makes it in.

When the AI assisted code is indistinguishable from human code, that's mission accomplished, yeah?

Although I can see two counterarguments. First, it might just be Covert Slop. Slop that goes under the radar.

And second, there might be a lot of baby thrown out with that bathwater. Stuff that was made in conjunction with AI, contains a lot of "obviously AI", but a human did indeed put in the work to review it.

I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review? (And a proof of competence, to boot?)

jrochkind1•56m ago
Well, instead of saying "No AI" while accepting that people will lie about it undetectably, and being fine with the lying, why not just say "AI only when you spend the time to turn it into a real, reviewed PR, which looks like X, Y, and Z", giving some actual tips on how to use AI acceptably? Which is what OP suggests.
felipeerias•54m ago
Personally, I would not contribute to a project that forced me to lie.

And from the point of view of the maintainers, it seems a terrible idea to set up rules with the expectation that they will be broken.

sgarland•51m ago
> I guess the problem is there's no way of knowing that? Is there a Proof of Work for code review?

In a live setting, you could ask the submitter to explain various parts of the code. Async, that doesn’t work, because presumably someone who used AI without disclosing that would do the same for the explanation.

jrochkind1•57m ago
Essay is way more interesting than the title, which doesn't actually capture it.
jamesbelchamber•6m ago
The title seems perfectly engineered to get upvotes from people who don't read the article, which puts the article in front of more people who would actually read it (which is good because the article is, as you say, very interesting and worth sharing).

I don't like it but I can hardly blame them.

jcgrillo•47m ago
> Some go so far as to say “AI not welcome here” find another project.

This feels extremely counterproductive and fundamentally unenforceable to me.

But it's trivially enforceable. Accept PRs from unverified contributors, look at them for inspiration if you like, but don't ever merge one. It's probably not a satisfying answer, but if you want or need to ensure your project hasn't been infected by AI generated code you need to only accept contributions from people you know and trust.

anon3242•10m ago
This is sad. The barrier to entry will be raised extremely high, maybe even requiring some real-world personal connection to the maintainer.
lapcat•46m ago
> That said it is a living demo that can help make an idea feel more real. It is also enormously fun. Think of it as a delightful movie set.

[pedantry] It bothers me that the photo for "think of prototype PRs as movie sets" is clearly not a movie set but rather the set of the TV show Seinfeld. Anyone who watched the show would immediately recognize Jerry's apartment.

DerThorsten•41m ago
It's not the set of the TV show, I believe, but a recreation.

https://nypost.com/2015/06/23/you-can-now-visit-the-iconic-s...

It looks a bit different with respect to the stuff on the fridge and the items in the cupboard.

lapcat•33m ago
I'm not sure what you mean. Those two photos are very different. The floors are entirely different, the tables are entirely different, one of the chairs/couches is different, even the intercom and light switch are different.

In any case, though, neither one is a movie set.

DerThorsten•26m ago
I think we agree, it looks like the seinfeld set, but it not the orginal set, just something looking very similar.
lapcat•21m ago
> I think we agree, it looks like the seinfeld set

I don't think we agree. What do you mean by "it"?

> it not the orginal set, just something looking very similar.

Your NY Post link is explicitly not the original set but rather a recreation. It says so in that article.

However, the photo in the NY Post is very different from the photo in the submitted blog post. Are you claiming that the photo in the submitted blog post is also not the original set? If not, then what is it, and why would there be multiple recreations?

bloppe•38m ago
Maybe we need open source credit scores. PRs from talented engineers with proven track records of high quality contributions would be presumed good enough for review. Unknown, newer contributors could have a size limit on their PRs, with massive PRs rejected automatically.
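A reputation-weighted size cap of the kind described might look like this; the numbers are invented purely for illustration:

```python
def max_pr_lines(merged_prs: int, base: int = 100,
                 per_merged: int = 200, cap: int = 5000) -> int:
    """Hypothetical policy: the PR size a contributor may submit for
    review grows with their merged-PR track record, up to a cap."""
    return min(base + per_merged * merged_prs, cap)
```

An unknown contributor would be limited to small PRs (100 lines here), while a contributor with a few merged PRs could submit substantially larger ones.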
selfhoster11•33m ago
We don't need more KYC, no.
javier123454321•14m ago
Reputation building is not KYC. It is actually the thing that enables anonymity to work in a more sophisticated way.
mfenniak•4m ago
The Forgejo project has been gently trying to redirect new contributors into fixing bugs before trying to jump into the project to implement big features (https://codeberg.org/forgejo/discussions/issues/337). This allows a new contributor to get into the community, get used to working with the codebase, do something of clear value... but for the project a lot of it is about establishing reputation.

Will the contributor respond to code-review feedback? Will they follow-up on work? Will they work within the code-of-conduct and learn the contributor guidelines? All great things to figure out on small bugs, rather than after the contributor has done significant feature work.

prymitive•28m ago
The problem with AI isn't new; it's the same old problem with technology: computers don't do what you want, only what you tell them. A lot of PRs can be judged by how well they are described and justified, because the code itself isn't that important; it's the problem you are solving with it that is. People are often great at defining problems; AIs less so, IMHO. Partially because they simply have no understanding, partially because they over-explain everything to the point where you just stop reading, and so you never get to the core of the problem. And even if you do, there's a good chance the AI misunderstood the problem and the solution is wrong in some more or less subtle way. This is made worse by the sheer overconfidence of AI output, which quickly erodes any trust that it did understand the problem.
Lerc•25m ago
Is it possible that some projects could benefit from triage volunteers?

There are plenty of open source projects where it is difficult to get up to speed with the intricacies of the architecture, which limits the ability of talented coders to contribute on a small scale.

There might be merit in having a channel for AI contributions that casual helpers can assess to see if they pass a minimum threshold before passing on to a project maintainer to assess how the change works within the context of the overall architecture.

It would also be fascinating to see how good an AI would be at assessing the quality of a set of AI generated changes absent the instructions that generated them. They may not be able to clearly identify whether the change would work, but can they at least rank a collection of submissions to select the ones most worth looking at?

At the very least, the pile of PRs counts as data about things people wanted to do. Even if the code was completely unusable, placing it into a pile somewhere might make it minable for the intentions of erstwhile contributors.

dearilos•17m ago
We’re fixing this slop problem - engineers write rules that are enforced on PRs. Fixes the problem pretty well so far.
jcgrillo•8m ago
I guess the main question I'm left with after reading this is "what good is a prototype, then?" In a few of the companies I've worked at there was a quarterly or biannual ritual called "hack week" or "innovation week" or "hackathon" where engineers form small teams and try to bang out a pet project super fast. Sometimes these projects get management's attention, and get "promoted" to a product or feature. Having worked on a few of these "promoted" projects, to the last they were unmitigated disasters. See, "innovation" doesn't come from a single junior engineer's 2AM beer and pizza fueled fever dream. And when you make the mistake of believing otherwise, what seemed like some bright spark's clever little dream turns into a nightmare right quick. The best thing you can do with a prototype is delete it.
andy99•5m ago
This is a problem everywhere now, and not just in code. It now takes zero effort to produce something, whether code or a work plan or “deep research” and then lob it over the fence, expecting people to review and act upon it.

It’s an extension of the asymmetric bullshit principle IMO, and I think now all workplaces / projects need norms about this.
