1) LLMs have failed to live up to the hype.
Maybe. Depends upon whose hype. But I think it is fine to say that we don't have AGI today (however that is defined) and that some people hyped that up.
2) LLMs haven't failed outright
I think that this is a vast understatement.
LLMs have been a wild success. At big tech companies, over 40% of checked-in code is LLM-generated. At smaller companies the proportion is higher. ChatGPT has over 800 million weekly active users.
Students throughout the world, and especially in the developed world, are using "AI" at rates of 85-90% (according to some surveys).
Between 40% and 90% of professionals (depending upon survey and profession) are using "AI".
This is 3 years after the launch of ChatGPT (and the capabilities of GPT-3.5 were so limited compared to today that it is a shame they get bundled together in our discussions). Instead of "failed outright", I would say they are the most successful consumer product of all time (so far).
Really? I derive a ton of value from it. For me it’s a phenomenal advancement and not a failure at all.
I've been programming for 30+ years and am now a people manager. Claude Code has enabled me to code again and I'm several times more productive than I ever was as an IC in the 2000s and 2010s. I suspect this person hasn't really tried the most recent generation; it is quite impressive and works very well if you do know what you are doing.
"it still requires genuine expertise to spot the hallucinations"
"works very well if you do know what you are doing"
For example, build a TUI or GUI with Claude Code while only giving it feedback on the UX.
Hallucinations that lead to code that doesn't work just get fixed.
If you know what you are doing, it works kind of mid. You see how anything more than a prototype will create lots of issues in the long run.
Dunning-Kruger effect in action.
You have decades upon decades of experience on how to approach software development and solve problems. You know the right questions to ask.
The actual non-programmers I see on Reddit are having discussions about topics such as “I don’t believe that technical debt is a real thing” and “how can I go back in time if Claude Code destroyed my code”.
The current AI hype is fueled by public markets, and as they found out during the pandemic, the first one to blink and acknowledge the elephant in the room loses, bigly.
So, even in the face of a devastating demonstration of "AI" ineffectiveness (which I personally haven't seen, despite things being, well, entirely underwhelming), we may very well be stuck in this cycle for a while yet...
Lol, someone doesn't understand how the power structure works: "the golden rule". There is a saying: if you owe the bank 100k, you have a problem; if you owe the bank ten million, the bank has a problem. OpenAI and the other players have made this bubble so big that there is no way the power system will allow itself to take the hit. Expect some sort of tax-subsidized bailout in the near future.
But there is so much real economic value being created, not speculation but actual business processes worth billions of dollars, that it's hard to seriously defend the claim that LLMs are "failures" in any practical sense.
Doesn’t mean we aren’t headed for a winter of sobering reality… but it doesn’t invalidate the disruption either.
Is there really a clear-cut distinction between the two in today's VC- and acquisition-based economy?
"We just cured cancer! All cancer! With a simple pill!"
"But you promised it would rejuvenate everyone to the metabolism of a 20 year old and make us biologically immortal!" New headline: "After spending billions, project to achieve immortality has little to show..."
Pretty much all tech progress is like this now because the hype is always pushed to insane levels.
With LLMs we have, at the very least, solved natural language processing on computers. This is absolutely huge. We've also largely solved the "fuzzy input problem" that is intrinsic to pretty much all computing.
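To illustrate the "fuzzy input" point concretely, here is a minimal sketch using the OpenAI Python client (the model name, prompt, and JSON schema are my own illustrative assumptions, not a recommendation): free-form text goes in, structured data comes out, with no hand-written parser in sight.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Free-form input that no regex or grammar-based parser handles robustly.
    user_text = "remind me to call mum sometime tomorrow afternoon-ish"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Extract a reminder as JSON with keys 'task' and "
                        "'when'. Reply with JSON only."},
            {"role": "user", "content": user_text},
        ],
    )
    print(resp.choices[0].message.content)
    # e.g. {"task": "call mum", "when": "tomorrow afternoon"}

Before LLMs, handling input like that meant brittle rule systems or bespoke NLP pipelines; now it's a single API call.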
The argument that computational complexity has something to do with this could have merit, but the article certainly doesn't give any indication as to why. Is the brain NP-complete? Maybe, maybe not. I could see many arguments for why modern research will fail to create AGI, but just hand-waving "reality is NP-hard" is not enough.
The fact is: something fundamental has changed that enables a computer to pretty effectively understand natural language. That's a discovery on the scale of the internet or Google search and shouldn't be discounted... and usage proves it. In two years there is a platform with billions of users. On top of that, huge fields of new research are making leaps and bounds with novel methods utilizing AI for chemistry, computational geometry, biology, etc.
It’s a paradigm shift.
You understand how the tech works, right? It's statistics and tokens. The computer understands nothing. Creating "understanding" would be a breakthrough.
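To make the "statistics and tokens" point concrete, here is a toy sketch (the corpus and names are mine, purely illustrative): a bigram model that "predicts" the next token from nothing but co-occurrence counts. Real LLMs are vastly more sophisticated, but the core operation is still picking likely next tokens.

    from collections import Counter, defaultdict

    # Toy bigram "language model": count which token follows which,
    # then emit the statistically most frequent successor.
    # There is no representation of meaning anywhere, only frequencies.
    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(token):
        successors = follows.get(token)
        return successors.most_common(1)[0][0] if successors else "<eos>"

    print(next_token("the"))  # -> "cat", the most frequent follower of "the"

Whether stacking enough of that statistics up amounts to "understanding" is exactly the open question.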
deadbabe•19m ago
What we should underscore though, is that even if there is a new AI winter, the world isn’t going back to what it was before AI. This is it, forever.
Generations ahead will gaslight themselves into thinking this AI world is better, because who wants to grow up knowing they live in a shitty era full of slop? Don’t believe it.
7thaccount•10m ago
I think we'll continue to see anything automated that can be automated in a way that reduces head count. So you have the dumb AI as a first line of defense and lay off half the customer service staff you had before.
In the meantime: fewer and fewer jobs (especially entry-level), a rising poor class as the middle class is eliminated, and a greater wealth gap than ever before. The markets are also going to collapse from this AI bubble. It's just a matter of when.
cardanome•4m ago
It could very well be that the current generation of AI has poisoned the well for any future endeavors in creating AI. You can't trivially filter out the AI slop, and humans are less likely to make their handcrafted content freely available for training. In fact, training models on GPL code in violation of the license might be ruled illegal, along with generally stricter rules on which data you are allowed to use for training.
We might have reached a local optimum that is very difficult to escape from. There might be a long, long AI winter ahead of us, for better or worse.
> the world isn’t going back to what it was before AI. This is it, forever.
I feel this so much. I thought my longing for the pre-smartphone days was bad, but damn, we have lost so much.