It's the result of stochastic hill climbing by a vast reservoir of talented people, industry, and science, each pushing the frontier year by year, building the infra, building the connective tissue.
We assembled the prerequisites that enabled it through human curiosity, random capitalistic processes, boredom, etc. It was gaming GPUs, for goodness' sake, that enabled the scale-up of the algorithms. You can't get more serendipitous than that. (Perhaps some of the post-WWII/Cold War tech qualifies even better as random hill-climbing luck. Microwave ovens, MRI machines, etc.)
Machine learning is inevitable in a civilization that has evolved intelligence, industrialization, and computation.
We've passed all the hard steps to this point. Let's see what's next. Hopefully not the great filter.
Compute and transformers are a substratum, but the stuff that developed on top of them through training isn't made according to our design.
Maybe you give it to the authors of a few papers, but even then you'll struggle to capture even a fraction of the necessary preconditions.
The successes also rely on observing the failures and the alternative approaches. Do we throw out their credit as well?
The list would be longer than the human genome paper.
And the headline is vague enough that you could read many meanings into it.
My take: going back to Turing, he could see that AI was likely in the future, and that the output of a Turing-complete system is essentially a mathematical function. We just needed the algorithms and hardware to crank through it, which he thought we might have in about 50 years; it's taken nearer 75.
The "intelligence did not get installed. It condensed" stuff reads like LLM slop.
Not really, it's called discovery, aka science.
This weird framing is just perpetuating the idea of LLMs being some kind of magic pixie dust. Stop it.
Sure, you don't know what the exact constellation of a trained model will be upfront. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
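To make that concrete, a toy illustration (made-up numbers): the result is fully determined by the inputs and the procedure, yet unknown until you actually run it.

    # deterministic, but "discovered" only by computing it
    ages = [34, 29, 41, 52, 38]          # some hypothetical group
    average_age = sum(ages) / len(ages)
    print(average_age)                   # 38.8

Training is the same in kind: the weights are a function of the data, the architecture, and the seed. We can't state them upfront, but nothing "condensed" from outside the process.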
When we built nuclear power plants we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot as we ran them (which is why we have been able to keep extending their lifetimes well beyond what they were planned for).
Same for airplanes: there's tons of empirical knowledge about them, and people are still trying to build better models of why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were unknown, despite the technology having been widely used for the past 70 years).
By the way, this is the fundamental reason why waterfall often fails: we generally don't understand enough about something before we build it and use it extensively.
ML model ≈ bird
Granted, I only managed to read two and a half paragraphs before deciding it's not worth my time, but the argument that we didn't teach it irony is bullshit: we did exactly that, by feeding it text containing irony.
Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.
Now that the salient has emerged, everyone wants to control it.
Capital battles it out for the chance to monopolize it.
There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.
The battle to become the first multi-trillionaire is why so much money is being spent.
After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed.
Emergence? Please, just because something has blinkenlights and humming fans does not mean it's intelligent.
[1] They steal it, though, to produce bad imitations.
I don't think so, have you tried?
"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."
Claude: Cynical, dismissive, condescending.
* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!". That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.
* Using a bad analogy protected by scare quotes to make an invalid point that suggests a human would be able to argue with a photocopier or a philosophical treatise. It's clearly the case that humans can only argue with an LLM, due to the interactive nature of the dialogue.
* The use of the word "steal" to indicate theft of material when training AI models, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the suggestion that it's copyright infringement doesn't hold water. Piracy of material would invalidate that, but that's not what's happening in the case of bgwalter's code, I don't expect. I expect bgwalter published their code online and it was scraped.
Agree with the sibling comment, posting Claude's assessment that mirrors this analysis. Dismissive and cynical is a good way to put it.
X is not Y. It's X.
Hell, people said Lisp is an "AI programming language."
The lesson here might be that people say unhinged things about whatever new technology they're hyping.
And second, this article is almost certainly AI-written, so the joke is on us for engaging with it.
It's a shallow, post-hoc, mystic rationalization that ignores all the work in multiple fields that actually converged to get us to this point.
What AI out there now is coming up with ideas for articles?
This all happened without anyone even looking for a way to create intelligence.
The biggest step in AI was the invention of the artificial neural network. However, it is still a copy of nature's work, and in fact you could argue that even the inventor is nature's work. So there's a big argument in favor of "it arrived".
I recently bought whey protein powder that doesn't come from milk. It was synthesized by human-engineered microbes. Did this invention "arrive"?
We invented AI. That the structure of the neuron inspired one subsystem's architecture lends nothing essentialist or sacrosanct to the whole enterprise.
Sticks were our first clubs, but we don't limit our design and engineering for tools or weapons to the nature of trees. We extract good principles and invent the form as well as, often, the function.
I think the framing is dead on.
The author probably just means LLMs. And that's really all you need to know about the quality of this article.
No AI researcher from 2010 would have predicted that the transformer architecture (if we could send them the description back in time), SGD, and Web crawling could lead to very coherent and useful LMs.
Hold my beer
At scale, any compression system faces a tradeoff between entropy and fidelity. As these models absorb more language and feedback, meaning doesn't just get reproduced; it slowly drifts. Concepts remain locally coherent while losing alignment with their original reference points. That's why hallucination feels like the wrong diagnosis. The deeper issue is long-run semantic stability, not one-off mistakes.
The arrival moment wasn’t when the system got smarter, but when it became a dominant mediator of meaning and entropy started accumulating faster than humans could notice.
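For what it's worth, the entropy/fidelity tradeoff above has a standard formalization in rate-distortion theory. A sketch of the classical result, not a claim about how LLMs are actually trained:

    R(D) = \min_{p(\hat{x} \mid x)\,:\, \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X})

The more you compress (lower rate R), the more distortion D you must accept; no compressor gets to sit off that curve. Whether the accumulated distortion shows up as the "semantic drift" described above is the empirical question.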