frontpage.

Pebble Watch software is now open source

https://ericmigi.com/blog/pebble-watch-software-is-now-100percent-open-source
1049•Larrikin•18h ago•196 comments

Most Stable Raspberry Pi? 81% Better NTP with Thermal Management

https://austinsnerdythings.com/2025/11/24/worlds-most-stable-raspberry-pi-81-better-ntp-with-ther...
167•todsacerdoti•6h ago•52 comments

Meta Segment Anything Model 3

https://ai.meta.com/blog/segment-anything-model-3/?_fb_noscript=1
26•alcinos•5d ago•5 comments

Unpowered SSDs slowly lose data

https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
527•amichail•17h ago•226 comments

Human brains are preconfigured with instructions for understanding the world

https://news.ucsc.edu/2025/11/sharf-preconfigured-brain/
174•XzetaU8•6h ago•114 comments

Broccoli Man, Remastered

https://mbleigh.dev/posts/broccoli-man-remastered/
26•mbleigh•5d ago•3 comments

Claude Advanced Tool Use

https://www.anthropic.com/engineering/advanced-tool-use
542•lebovic•17h ago•224 comments

Using an Array of Needles to Create Solid Knitted Shapes

https://dl.acm.org/doi/10.1145/3746059.3747759
28•PaulHoule•3d ago•4 comments

Show HN: I built an interactive HN Simulator

https://news.ysimulator.run/news
380•johnsillings•19h ago•176 comments

How the Atomic Tests Looked Like from Los Angeles

https://www.amusingplanet.com/2016/09/how-atomic-tests-looked-like-from-los.html
67•ohjeez•3d ago•39 comments

Cool-retro-term: terminal emulator which mimics look and feel of CRTs

https://github.com/Swordfish90/cool-retro-term
249•michalpleban•19h ago•93 comments

Implications of AI to schools

https://twitter.com/karpathy/status/1993010584175141038
250•bilsbie•19h ago•279 comments

Build a Compiler in Five Projects

https://kmicinski.com/functional-programming/2025/11/23/build-a-language/
135•azhenley•1d ago•23 comments

Dumb Ways to Die: Printed Ephemera

https://ilovetypography.com/2025/11/19/dumb-ways-to-die-printed-ephemera/
21•jjgreen•5d ago•14 comments

What OpenAI did when ChatGPT users lost touch with reality

https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html
215•nonprofiteer•1d ago•329 comments

Show HN: OCR Arena – A playground for OCR models

https://www.ocrarena.ai/battle
162•kbyatnal•3d ago•51 comments

Rethinking C++: Architecture, Concepts, and Responsibility

https://blogs.embarcadero.com/rethinking-c-architecture-concepts-and-responsibility/
32•timeoperator•5d ago•21 comments

Claude Opus 4.5

https://www.anthropic.com/news/claude-opus-4-5
991•adocomplete•18h ago•451 comments

How did the Win 95 user interface code get brought to the Windows NT code base?

https://devblogs.microsoft.com/oldnewthing/20251028-00/?p=111733
111•ayi•3d ago•62 comments

Chrome JPEG XL issue reopened

https://issues.chromium.org/issues/40168998
261•markdog12•1d ago•119 comments

Google's new 'Aluminium OS' project brings Android to PC

https://www.androidauthority.com/aluminium-os-android-for-pcs-3619092/
138•jmsflknr•18h ago•190 comments

Shai-Hulud Returns: Over 300 NPM Packages Infected

https://helixguard.ai/blog/malicious-sha1hulud-2025-11-24
962•mrdosija•1d ago•728 comments

The Bitter Lesson of LLM Extensions

https://www.sawyerhood.com/blog/llm-extension
124•sawyerjhood•18h ago•66 comments

AI has a deep understanding of how this code works

https://github.com/ocaml/ocaml/pull/14369
153•theresistor•16h ago•51 comments

Fifty Shades of OOP

https://lesleylai.info/en/fifty_shades_of_oop/
117•todsacerdoti•1d ago•67 comments

Building the largest known Kubernetes cluster

https://cloud.google.com/blog/products/containers-kubernetes/how-we-built-a-130000-node-gke-cluster/
137•TangerineDream•3d ago•78 comments

Inside Rust's std and parking_lot mutexes – who wins?

https://blog.cuongle.dev/p/inside-rusts-std-and-parking-lot-mutexes-who-win
181•signa11•5d ago•81 comments

Three Years from GPT-3 to Gemini 3

https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini
295•JumpCrisscross•2d ago•219 comments

Making Crash Bandicoot (2011)

https://all-things-andy-gavin.com/video-games/making-crash/
7•davikr•1h ago•0 comments

Using Antigravity for Statistical Physics in JavaScript

https://christopherkrapu.com/blog/2025/antigravity-stat-mech/
29•ckrapu•3d ago•22 comments

AI has a deep understanding of how this code works

https://github.com/ocaml/ocaml/pull/14369
152•theresistor•16h ago

Comments

sebast_bake•12h ago
rip
bravetraveler•12h ago
"Challenge me on this" while meaning "endure the machine, actually"

I guess the proponents are right. We'll use LLMs one way or another, after all. They'll become one.

fzeroracer•4h ago
"Challenge me on this"

Five seconds later when challenged on why AI did something

"Beats me, AI did it and I didn't question it."

Really embarrassing stuff all around. I feel bad for open source maintainers.

djoldman•11h ago
Maintainers and repo owners will get where they want to go the fastest by not referring to what/who "generated" code in a PR.

Discussions that treat AI/LLM code as a problem solely because it is AI/LLM-generated are not generally productive.

Better to critique the actual PR itself: for example, it needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.

Additionally, if there isn't a code of conduct, an AI policy, or, perhaps most importantly, a policy on which PRs are acceptable and how to submit them, that's a huge weakness in a project.

In this case, clearly some feathers were ruffled, but cool heads prevailed. Well done in the end.
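As an aside, a submission policy of the kind suggested above might be sketched in a CONTRIBUTING file along these lines (the wording and every bullet here are a hypothetical illustration, not OCaml's or any real project's policy):

```markdown
## Pull request policy (hypothetical sketch)

- Open an issue and discuss the design before starting a large feature.
- Disclose any use of AI/code-generation tools in the PR description.
- Keep PRs small and reviewable; split big features into a series.
- You must be able to explain, defend, and maintain every line you submit.
```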

rogerrogerr•11h ago
AI/LLMs are a problem because they create plausible looking code that can pass any review I have time to do, but doesn’t have a brain behind it that can be accountable for the code later.

As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.

People who wrote plausible looking code were usually decent software people.

Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”
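The blame-then-ask workflow described above can be sketched in shell. The repository, file name, author, and OCaml one-liner are all invented for illustration; only the git commands are real:

```shell
set -eu

# Build a throwaway repo so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q

# Commit a file as a known author (stand-in for the original contributor).
printf 'let flobberate bazzle = bazzle + 1\n' > bazzle.ml
git add bazzle.ml
git -c user.name="Example Dev" -c user.email="dev@example.com" \
    commit -q -m "flobberate the bazzle"

# Later, when line 1 does something subtly goofy: who wrote it?
blame_author=$(git blame -L 1,1 --porcelain bazzle.ml | grep '^author ')
echo "$blame_author"   # prints: author Example Dev

# ...then ping that person and ask why the bazzle gets flobbed here.
```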

chii•8h ago
> doesn’t have a brain behind it that can be accountable for the code later.

the submitter could also bail just as easily. Having an AI make the PR or not makes zero difference for this accountability. Ultimately, the maintainer pressing the merge button is accountable.

What else would your value be as a maintainer, if all you did was a surface look, press merge, then find blame later when shit hits the fan?

rogerrogerr•7h ago
I don’t accept giant contributions from people who don’t have track records of sticking around. It’s faster for me to write something myself than review huge quantities of outsider code as a zero-trust artifact.
ares623•5h ago
If I had a magic wand I would wish for 2 parallel open source communities diverging from today.

One path continues on the track it has always been on, human written and maintained.

The other is fully on the AI track. Massive PRs with reviewers rubber stamping them.

I’d love to see which track comes out ahead.

Edit: in fact, perhaps there are open source projects already fully embracing AI authored contributions?

ctenb•4h ago
I agree. It would also work out as a long-term supervised learning process, though: humans showing how it's really done, and AI companies taking that as the gold standard for training and development of AI.
ares623•4h ago
I'm not so sure. There's already decades of data available for the existing process.
ctenb•2h ago
That is true, but it doesn't help for new languages, frameworks, etc.
jebarker•1h ago
How would you define “ahead”?
snickerbockers•10h ago
I don't suppose you saw the post where OP asked claude to explain why this patch was not plagiarized? It's pretty damning.
lambda_foo•7h ago
Why have the OP in the loop at all if he’s just sending prompts to AI? Surely it’s a wonderful piece of performance art.
footy•22m ago
it reads like humiliation fetish material honestly. I'd delete my account but he just doubles down.
orwin•5m ago
I think that's probably the most beautiful AI-generated post that was ever generated. The fact that he posted it shows that either he didn't read it, didn't understand it, or thought it would be fun to show how the AI implementation was inferior to the one it was 'inspired' by.
armchairhacker•1h ago
I agree, but @gasche brings up real points in https://github.com/ocaml/ocaml/pull/14369#issuecomment-35565.... In particular I found these important:

- Copyright issues. Even among LLM-generated code, this PR is particularly suspicious, because some files begin with the comment “created by [someone’s name]”

- No proposal. Maybe the feature isn’t useful enough to be worth the tech debt, maybe the design doesn’t follow conventions and/or adds too much tech debt

- Not enough tests

- The PR is overwhelmingly big, too big for the small core team that maintains OCaml

- People are already working on this. They’ve brainstormed the design, they’re breaking the task into smaller reviewable parts, and the code they write is trusted more than LLM-generated code

Later, @bluddy mentions a design issue: https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...

autumnstwilight•11h ago
>>> Here's my question: why did the files that you submitted name Mark Shinwell as the author?

>>> Beats me. AI decided to do so and I didn't question it.

Really sums the whole thing up...

lambda_foo•7h ago
Pretty much. I guess it’s open source but it’s not in the spirit of open source contribution.

Plus it puts the burden of reviewing the AI slop onto the project maintainers, and the future maintenance is not the submitter's problem. So you've generated lots of code using AI; nice work, that's faster for you but slower for everyone else around you.

skeledrew•6h ago
Another consideration here that hits both sides at once is that the maintainers on the project are few. So while it could be a great burden pushing generated code on them for review, it also seems a great burden to get new features done in the first place. So it boils down to the choice of dealing with generated code for X feature, or not having X feature for a long time, if ever.
dudinax•4h ago
With the understanding that generated code for X may never be mergeable given the limited resources.
skeledrew•48m ago
Yes, and that may eventually lead to a more generation-friendly fork to which those desiring said friendliness, or just more features in general, will flock.
squigz•42m ago
I think everyone would appreciate if these people using LLMs to spit out these PRs would fork things and "contribute" to those forks instead.
skeledrew•20m ago
It's a fairly simple matter to reject a PR. And a nice-to-have if they update their contribution guidelines to reflect their preferences.
squigz•17m ago
It's also a fairly simple matter to respect the time of the maintainers of software you want to contribute to - by, for example, talking to them before dumping 16,000 LoC in a PR and expecting them to review it.

Unless, of course, it has nothing to do with actually contributing and improving software.

gexla•1h ago
Their issue seemed to be the process. They're set up for a certain flow, and jamming that flow breaks it. It wouldn't matter if it were AI or a sudden surge of interested developers. So it's not a question of accepting or not accepting AI-generated code, but rather of changing the process. That in itself is time-consuming and carries potential risk.
skeledrew•38m ago
Definitely, with the primary issue in this case being that the PRer didn't discuss with the maintainers before going to work. Things could've gone very differently if that discussion was had, especially disclosing the intent to use generated code. Though of course there's the risk that disclosure could've led to a preemptive shutdown of the discussion, as there are those who simply don't want to consider it at all.
andai•1h ago
I thought you were paraphrasing. What in blazes...
bsder•11h ago
Can we please go back to "You have to make an account on our server to contribute or pull from the git?"

One of the biggest problems is the fact that the public nature of Github means that fixes are worth "Faux Internet Points" and a bunch of doofuses at companies like Google made "social contribution" part of the dumbass employee evaluation process.

Forcing a person to sign up would at least stop people who need "Faux Internet Points" from doing a drive-by.

fhd2•1h ago
Fully agree, luckily I don't maintain projects on GitHub anymore, but it used to be challenging long before LLMs. I had one fairly questionable contribution from someone who asked me to please merge it because their professor tasked them to build out a GitHub profile. I kinda see where the professor was coming from, but that wasn't the way. The contributor didn't really care about the project or improving it, they cared about doing what they were told, and the quality of the code and conversation followed from that.

There are many other kinds of questionable contributions. In my experience, the best ones come from people who actively use the thing, engage at least somewhat in the community (well, tickets), and try to improve the software for themselves or others. GitHub encourages the bad kind, and the minor barriers to entry posed by almost any other contribution method largely deter them. As sad as that may be.

wilg•10h ago
Incredibly, everyone in this situation seems to have acted reasonably and normally and the situation was handled.
raincole•4h ago
https://news.ycombinator.com/edit?id=45982416

(Not so) interestingly, the PR author even advertised this work on HN.

ares623•3h ago
What's stopping the author from maintaining their own fork, I wonder?
kreetx•1h ago
Nothing!

Another question, though, when reading his blog: is he himself fully AI? As in, not even a human writing those blog posts. It reads a bit like that.

IsTom•36m ago
Either a regular bot or a flesh bot, doesn't really matter at that point, does it?
spongebobism•12m ago
Presumably the LLM also wrote the blog post. At least, it also generated a file named OCAML_DWARF_BLOG_POST.md: https://github.com/ocaml/ocaml/pull/14369/files#diff-bc37d03...
rsynnott•1h ago
> Here's the AI-written copyright analysis...

Oh, wow. They're being way too tolerant IMO; I'd have just blocked him from the repo at about that point.

fhd2•57m ago
Their emotional maturity is off the charts, rather impressive.
nikcub•1h ago
https://github.com/ocaml/ocaml/pull/14369/files#diff-bc37d03...

Found this part hilarious: gitignoring all of the Claude planning MD files that it tends to spit out, and then including that change in the PR.

Lazy AI-driven contributions like this are why so many open source maintainers have a negative reaction to any AI-generated code.

ochronus•1h ago
Kudos to the folks in the thread!
anilgulecha•1h ago
For the longest time, Linus's dictum "Talk is cheap. Show me the code" held. Now that's fallen! New rules for the new world are needed.
aarestad•16m ago
“code is cheap, show me the talk” - ie “show me you _understand_ the ‘cheap’ code”
oliwarner•1h ago
There are LLMs with more self-awareness than this guy.

Repeatedly using AI to answer questions about the legitimacy of commits from an AI, to people who are clearly skeptical, is breathtakingly dense. At least they're open about it.

I did love the ~"I'll help maintain this trash mountain, but I'll need paying". Classy.

armchairhacker•1h ago
OP’s code (at least plausibly) helped him. From https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...

> Damn, I can’t debug OCaml on my Mac because there’s no DWARF info…But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue…My needs are finally taken care of!

So I do believe using an LLM to generate a big feature like OP did can be very useful, so much that I’m expecting to see such cases more frequently soon. Perhaps in the future, everyone will be constantly generating big program/library extensions that are buggy except for their particular usecase, could be swapped with someone else’s non-public extensions that they generated for the same usecase, and must be re-generated each time the main program/library updates. And that’s OK, as long as the code generation doesn’t use too much energy or cause unforeseen problems. Even badly-written code is still useful when it works.

What’s probably not useful is submitting such code as a PR. Even if it works for its original use-case, it almost certainly still has bugs, and even ignoring bugs it adds tech debt (with bugs, the tech debt is significantly worse). Our code already depends on enough libraries that are complicated, buggy, and badly-written, to the extent that they slow development and make some feasible-sounding features infeasible; let’s not make it worse.

squigz•41m ago
> cause unforeseen problems

This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.

lapcat•50m ago
Techies sowing: Haha look at the Sokal hoax!!! The humanities are a joke!!

Techies reaping: Well this AI slop sucks. WTF.

Seriously, though, LLM-generated code, which is already difficult to review, would provide very good cover for state-sponsored attackers to insert vulnerabilities intentionally, with plausible deniability. "AI decided to do so and I didn't question it."

fzaninotto•49m ago
I've closed my share of AI-generated PRs on some OSS repositories I maintain. These contributors seem to jump from one project to another until their contribution is accepted (recognized?).

I wonder how long the open-source ecosystem will be able to resist this wave. The burden of reviewing AI-generated PRs is already not sustainable for maintainers, and the number of real open-source contributors is decreasing.

Side note: discovering the discussions in this PR is exactly why I love HN. It's like witnessing the changes in our trade in real time.

inejge•20m ago
> I wonder how long the open-source ecosystem will be able to resist this wave.

This PR was very successfully resisted: closed and locked without much reviewing, and with a lot of tolerance and patience from the developers, more than I believe was warranted: the "author" is remarkably resistant to argument. So I think that others can resist in the same way.

footy•30m ago
> AI decided to do so and I didn't question it

in response to someone asking about why the author name doesn't match the contributor's name. Incredible response.

flakiness•27m ago
In this case the PR author (either LLM or person) is "honest" enough to leave the generated copyright header that includes the LLM's source material. It's not hard to imagine that more selfish people would tweak the code to hide the origin. The same situation as with AI-generated homework essays.

I generally like AI coding using CC etc., but this forced me to remember that this generated code ultimately came from stolen (spiritually, if not necessarily legally) pieces.

bdbdbdb•19m ago
No, it does not. AI does not understand anything at all. It is a word-prediction engine.