frontpage.

Adafruit: Arduino’s Rules Are ‘Incompatible With Open Source’

https://thenewstack.io/adafruit-arduinos-rules-are-incompatible-with-open-source/
299•MilnerRoute•19h ago•146 comments

DNA Learning Center: Mechanism of Replication 3D Animation

https://dnalc.cshl.edu/resources/3d/04-mechanism-of-replication-advanced.html
19•timschmidt•1w ago•6 comments

Roomba maker goes bankrupt, Chinese owner emerges

https://news.bloomberglaw.com/bankruptcy-law/robot-vacuum-roomba-maker-files-for-bankruptcy-after...
336•nreece•13h ago•386 comments

Unscii

http://viznut.fi/unscii/
185•Levitating•10h ago•20 comments

Arborium: Tree-sitter code highlighting with Native and WASM targets

https://arborium.bearcove.eu/
159•zdw•10h ago•24 comments

If AI replaces workers, should it also pay taxes?

https://english.elpais.com/technology/2025-11-30/if-ai-replaces-workers-should-it-also-pay-taxes....
251•PaulHoule•13h ago•411 comments

Ask HN: What Are You Working On? (December 2025)

320•david927•21h ago•1033 comments

Largest U.S. Recycling Project to Extend Landfill Life for Virginia Residents

https://ampsortation.com/articles/largest-us-recycling-project-spsa
11•mooreds•2h ago•10 comments

Invader: Where to Spot the 8-Bit Street Art in London

https://londonist.com/london/art-and-photography/invader-where-to-spot-the-8-bit-street-art-in-lo...
23•zeristor•1w ago•7 comments

Optery (YC W22) Hiring CISO, Release Manager, Tech Lead (Node), Full Stack Eng

https://www.optery.com/careers/
1•beyondd•2h ago

$5 whale listening hydrophone making workshop

https://exclav.es/2025/08/03/dinacon-2025-passive-acoustic-listening/
61•gsf_emergency_6•4d ago•21 comments

AI agents are starting to eat SaaS

https://martinalderson.com/posts/ai-agents-are-starting-to-eat-saas/
212•jnord•14h ago•224 comments

Rob Reiner has died

https://www.hollywoodreporter.com/movies/movie-news/rob-reiner-dead-harry-met-sally-princess-brid...
173•RickJWagner•10h ago•71 comments

John Varley has died

http://floggingbabel.blogspot.com/2025/12/john-varley-1947-2025.html
107•decimalenough•11h ago•40 comments

The Problem of Teaching Physics in Latin America (1963)

https://calteches.library.caltech.edu/46/2/LatinAmerica.htm
65•rramadass•17h ago•49 comments

The Java Ring: A Wearable Computer (1998)

https://www.nngroup.com/articles/javaring-wearable-computer/
18•cromulent•5d ago•15 comments

Show HN: I wrote a book – Debugging TypeScript Applications (in beta)

https://pragprog.com/titles/aodjs/debugging-typescript-applications/
32•ozornin•1w ago•12 comments

The History of Xerox

https://www.abortretry.fail/p/the-history-of-xerox
45•rbanffy•3d ago•10 comments

Common Rust Lifetime Misconceptions

https://github.com/pretzelhammer/rust-blog/blob/master/posts/common-rust-lifetime-misconceptions.md
64•CafeRacer•8h ago•21 comments

Hashcards: A plain-text spaced repetition system

https://borretti.me/article/hashcards-plain-text-spaced-repetition
350•thomascountz•21h ago•156 comments

JSDoc is TypeScript

https://culi.bearblog.dev/jsdoc-is-typescript/
182•culi•18h ago•212 comments

CapROS: Capability-Based Reliable Operating System

https://www.capros.org/
93•gjvc•13h ago•36 comments

A trip through the Graphics Pipeline (2011)

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
19•kruuuder•4d ago•3 comments

Rio de Janeiro's talipot palm trees bloom for the first and only time

https://apnews.com/article/brazil-rio-talipot-palm-flamengo-park-dcfb1ce237af7a10ab72205fc9bbdc02
192•1659447091•1w ago•39 comments

Running on Empty: Copper

https://thehonestsorcerer.substack.com/p/running-on-empty-copper
77•the-needful•6d ago•52 comments

Read Something Wonderful

https://readsomethingwonderful.com/
148•snorbleck•10h ago•28 comments

Elevated errors across many models

https://status.claude.com/incidents/9g6qpr72ttbr
309•pablo24602•16h ago•146 comments

I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me

https://marcusolang.substack.com/p/im-kenyan-i-dont-write-like-chatgpt
165•florian_s•2h ago•126 comments

Avoid UUIDv4 Primary Keys

https://andyatkinson.com/avoid-uuid-version-4-primary-keys
114•pil0u•4h ago•139 comments

An attempt to articulate Forth's practical strengths and eternal usefulness

https://im-just-lee.ing/forth-why-cb234c03.txt
72•todsacerdoti•1w ago•39 comments

AI was not invented, it arrived

https://andrewarrow.dev/2025/12/ai-was-not-invented-it-arrived/
22•fcpguru•22h ago

Comments

realitydrift•21h ago
This framing clicks for me, especially the idea that we crossed a threshold by building conditions rather than intentions. One way to see what emerged is not as intelligence per se, but as a new channel for compressing human meaning.

At scale, any compression system faces a tradeoff between entropy and fidelity. As these models absorb more language and feedback, meaning doesn't just get reproduced; it slowly drifts. Concepts remain locally coherent while losing alignment with their original reference points. That's why hallucination feels like the wrong diagnosis. The deeper issue is long-run semantic stability, not one-off mistakes.

The arrival moment wasn’t when the system got smarter, but when it became a dominant mediator of meaning and entropy started accumulating faster than humans could notice.

myhf•21h ago
Small correction: AI was not invented and it did not arrive.
phplovesong•21h ago
It descended like a shitstorm, and now we are all covered in it.
qlm•20h ago
No I'm fairly certain it was invented and that this style of breathless science fiction roleplay will be looked back on as an embarrassing relic of the era.
echelon•20h ago
I didn't even read the article and know that the headline is 100% correct.

It's the result of stochastic hill climbing of a vast reservoir of talented people, industry, and science. Each pushing the frontiers year by year, building the infra, building the connective tissue.

We built the collection of requirements that enabled it through human curiosity, random capitalistic processes, boredom, etc. It was gaming GPUs, for goodness' sake, that enabled the scale-up of the algorithms. You can't get more serendipitous than that. (Perhaps some of the post-WWII/cold war tech qualifies even better as random hill-climbing luck. Microwave ovens, MRI machines, etc. etc.)

Machine learning is inevitable in a civilization that has evolved intelligence, industrialization, and computation.

We've passed all the hard steps to this point. Let's see what's next. Hopefully not the great filter.

hnhg•20h ago
How is that different from "Compact Discs weren't invented, they arrived"?
throw310822•20h ago
CDs are designed to be exactly the way they are, and you don't get anything out of them that is more than, or different from, what you put in.

Compute and transformers are a substratum, but the stuff that developed on them through training isn't made according to our design.

echelon•20h ago
Point to the single inventor of AI. You're going to have trouble.

Maybe you give it to the authors of a few papers, but even then you'll struggle to capture even a fraction of the necessary preconditions.

The successes also rely on observing the failures and the alternative approaches. Do we throw out their credit as well?

The list would be longer than the human genome paper.

qlm•20h ago
Yes and exactly the same thing could be said for the invention of compact discs. You're just describing "history".
tim333•15h ago
I don't have a problem with the headline but the article is kind of bad.

And the headline is vague enough that you could read many meanings into it.

My take would be: going back to Turing, he could see that AI was likely in the future, and that the output of a Turing-complete system is kind of a mathematical function - we just need the algorithms and hardware to crank through it, which he thought we might have in 50 years or so, but it's taken nearer 75.

The "intelligence did not get installed. It condensed" stuff reads like LLM slop.

tomxor•20h ago
> The idea is unsettling because it reframes human agency

Not really, it's called discovery, aka science.

This weird framing is just perpetuating the idea of LLMs being some kind of magic pixie dust. Stop it.

cubefox•20h ago
Like magic pixie dust, nobody knows in detail how AI models work. They are not explicitly created like GOFAI or arbitrary software. The machine learning algorithms are explicitly written by humans, but the model in turn is "written" by a machine learning algorithm, in the form of billions of neural network weights.
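
A minimal sketch of that split (a toy example, nothing from the article): the short loop below is the part a human writes explicitly; the weights it leaves behind are the part nobody writes by hand.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))          # toy inputs
    y = X @ np.array([2.0, -1.0, 0.5])     # toy targets following a hidden rule

    w = np.zeros(3)                        # the "model" is just these numbers
    for _ in range(500):                   # the human-written learning algorithm
        grad = X.T @ (X @ w - y) / len(X)  # gradient of the squared error
        w -= 0.1 * grad                    # one gradient descent step

    print(w)  # ~[2.0, -1.0, 0.5]: "written" by the algorithm, not by us
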
kreetx•20h ago
I think we do know how they work, no? We give a model some input; it travels through the big neural net of probabilities (obtained through training) and then arrives at a result.

Sure, you don't know upfront what the exact constellation of a trained model will be. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
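
A minimal sketch of that point (again a toy example, nothing from the thread): running a "trained" net is just applying fixed numbers to an input, and, like the average, the answer exists only once you actually compute it.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-ins for trained weights; in a real model these come out of training.
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(2, 8))

    def forward(x):
        h = np.tanh(W1 @ x)                           # hidden layer
        logits = W2 @ h                               # output layer
        return np.exp(logits) / np.exp(logits).sum()  # softmax "probabilities"

    print(forward(np.array([1.0, 0.0, -1.0, 0.5])))   # known only once computed

    ages = [23, 31, 44, 52, 29]                       # the average-age analogy
    print(sum(ages) / len(ages))                      # determined by the data, yet you still have to compute it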

cubefox•20h ago
If it solves a problem, we generally don't know how it did it. We can't just look at its billions of weights and read what they did. They are incomprehensible to us. This is very different from GOFAI, which is just a piece of software whose code can be read and understood.
kreetx•2h ago
Any statistical model does this.
visarga•20h ago
May I point out that we don't know in detail how most code runs? I'm not talking about assembly; I'm talking about edge cases, instabilities, etc. We know the happy path and a bit around it. All complex systems based on code are unpredictable from static code alone.
cubefox•20h ago
We at least know quite well how it runs if we look at the code. But we know almost nothing about how a specific AI model works. Looking at the weights is pointless. It's like looking into Beethoven's brain to figure out how he came up with the Moonlight Sonata.
littlestymaar•20h ago
This applies to pretty much every technology:

When we built nuclear power plants we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot of things as we ran them (which is why we have been able to keep extending their lifetimes far beyond what they were originally planned for).

Same for airplanes: there's tons of empirical knowledge about them, and people are still trying to build better models of why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were unknown, despite the technology having been widely used for the past 70 years).

By the way, this is the fundamental reason why waterfall often fails: we generally don't understand something well enough before we have built it and used it extensively.

cubefox•20h ago
GOFAI software ≈ airplane

ML model ≈ bird

Rikudou•20h ago
Nah, I'm pretty sure we invented it. Otherwise I'm not sure what is costing all these companies so much money.

Granted, I only managed to read two and a half paragraphs before deciding it's not worth my time, but the argument that we didn't teach it irony is bullshit: we did exactly that by feeding it text containing irony.

echelon•20h ago
Gaming GPUs enabled it. That's random serendipitous connective tissue that was presaged by none of the people who wrote the first papers fifty years ago.

Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.

Now that the salient has emerged, everyone wants to control it.

Capital battles it out for the chance to monopolize it.

There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.

The battle to become the first multi-trillionaire is why so much money is being spent.

bgwalter•20h ago
This gushing article omits the fact that multiple OpenAI researchers were on record saying that they were surprised by the early success of "AI". Of course the development was incremental, slow and unspectacular to insiders.

After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed.

Emergence? Please, just because something has blinkenlights and humming fans does not mean it's intelligent.

throw310822•20h ago
Imagine that there's a lot of people who are dismissive even now, when the parrots can write their code or crush them in a philosophical discussion.
bgwalter•20h ago
They cannot write my code [1] and a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

[1] They steal it though to produce bad imitations.

throw310822•20h ago
> a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

I don't think so, have you tried?

kreetx•20h ago
People disagreeing with the article aren't "dismissing AI". Did you read what it said?
throw310822•20h ago
Hey Claude, can you help me categorise the tone/ sentiment of this statement, in three words?

"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."

Claude: Cynical, dismissive, condescending.

kreetx•6h ago
The original post and the rest of the comment are about invent vs arrive (discover?). I'm sure I'll be able to find (parts of) your comments, too, that diverge in sentiment.
rpdillon•17h ago
bgwalter is clearly dismissing AI. The post has all the telltale signs.

* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!". That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.

* Using a bad analogy protected by scare quotes to make an invalid point, one that suggests a human would be able to argue with a photocopier or a philosophical treatise. Clearly a human can only argue with an LLM, due to the interactive nature of the dialogue.

* The use of the word "steal" to indicate theft of material when training AI models, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the suggestion that it's copyright infringement doesn't hold water. Piracy of material would invalidate that, but I don't expect that's what happened in the case of bgwalter's code; I expect bgwalter published their code online and it was scraped.

I agree with the sibling comment posting Claude's assessment, which mirrors this analysis. Dismissive and cynical is a good way to put it.

bgwalter•16h ago
Thanks, Claude.
kreetx•1h ago
You don't have anything of your own to say on the actual topic, do you?
tptacek•20h ago
If we'd had blogs back in the 1980s, someone would have written a post that sounded just like this, but about databases. People really did talk this way about "databases". There were people who were afraid of them.
happytoexplain•20h ago
This trope is being worn out to the point of absurdity. Yes, people don't like things. All throughout history. Sometimes reasonably, sometimes unreasonably.

X is not Y. It's X.

tptacek•20h ago
It's not about "like" or "dislike". It's that people are unsettled by new technology that they can't immediately get their heads around. But today, it sounds kind of silly to be unsettled by the concept of a database.
raincole•20h ago
People said SQL was the "fourth-generation language."

Hell, people said Lisp was an "AI programming language."

The lesson here might be that people say unhinged things about whatever new technology they're hyping.

nospice•20h ago
I don't think this framing is useful. First, it applies to every scientific advance ever. Shoulders of giants and all that. We still choose to celebrate discovery because without it, fewer people would pursue scientific research.

And second, this article is almost certainly AI-written, so the joke is on us for engaging with it.

visarga•20h ago
I think it is AI-worded, but the ideas come from a real human.
yannyu•20h ago
Well, then we should judge the ideas on their own merits. And it's also not a great idea.

It's a shallow, post-hoc, mystic rationalization that ignores all the work in multiple fields that actually converged to get us to this point.

danaris•19h ago
...yes?

What AI out there now is coming up with ideas for articles?

tomrod•20h ago
I disagree strongly. AI came from smart engineering and design applied to algorithms developed out of intellectual curiosity. It was absolutely invented.
amelius•20h ago
Well, intelligence evolved over millions of years without design (assuming you are not religious).

This all happened without anyone even looking for a way to create intelligence.

The biggest step in AI was the invention of the artificial neural network. However, it is still a copy of nature's work, and in fact you could argue that even the inventor is nature's work. So there's a big argument in favor of "it arrived".

qlm•20h ago
Everything that has been "invented" was invented by humans and on some level depends on the laws of nature to function.

I recently bought whey protein powder that doesn't come from milk. It was synthesized by human-engineered microbes. Did this invention "arrive"?

kreetx•20h ago
"Intelligence" is too vague. Do you mean neural nets in our heads developed in millions of years? Do we know that it is a neural net?
tomrod•20h ago
Not all AI algorithms are neural networks. So from the get go, you are conflating terms to propose an underspecified and improperly esoteric worldview.

We invented AI. That the structure of a neuron inspired one subsystem architecture framework offers nothing essentialist or sacrosanct to the whole enterprise.

Sticks were our first clubs, but we don't limit our design and engineering for tools or weapons to the nature of trees. We extract good principles and invent the form as well as, often, the function.

fromMars•20h ago
This is a brilliant article and shouldn't be dismissed so quickly.

I think the framing is dead on.

beders•20h ago
AI - as a discipline - has been around forever (since 1956), essentially since the birth of Lisp, with both staggering successes and spectacular failures that ushered in two (three?) so-called AI winters.

The author probably just means LLMs. And that's really all you need to know about the quality of this article.

empiko•20h ago
I would say it was discovered, not invented. People were messing around with some algorithms, intrigued by their results. Eventually researchers discovered that using a certain training algorithm with certain data can lead to really wonderful outputs. But this is a purely empirical discovery.

No AI researcher from 2010 would have predicted that the transformer architecture (if we could send them the description back in time), SGD, and web crawling could lead to very coherent and useful LMs.

kreetx•20h ago
Yup. LLMs are a big statistical model, where no sub-part knows the whole. If it's really similar to a brain, I guess we might say we discovered it. But if it isn't, we invented it. The fact that it is so useful doesn't have to mean that "it arrived".
ripped_britches•20h ago
> Code cannot mine lithium

Hold my beer

sleepybrett•20h ago
These people are in a cult.
rdiddly•19h ago
Are LLMs intelligent? The question is far from settled, despite widespread discussion to the point of tedium. But this post freely equates the two without any reflection or qualification, not even a footnote. Omitting it avoids the tedium, but also places the post in the realm of the fanciful, which incidentally has a partial Venn-diagram overlap with the realm of marketing. Maybe that wasn't the author's intent, but that's what walked through the open door when this post arrived.