
A New AI Winter Is Coming

https://taranis.ie/llms-are-a-failure-a-new-ai-winter-is-coming/
35•voxleone•37m ago

Comments

deadbabe•19m ago
Modern LLMs could be the equivalent of those steam-powered toys the Romans had in their time. Steam tech went through a long winter before finally being fully utilized in the Industrial Revolution. We’ll probably never see the true AI revolution in our lifetime, only glimpses of what could be, through toys like LLMs.

What we should underscore though, is that even if there is a new AI winter, the world isn’t going back to what it was before AI. This is it, forever.

Generations ahead will gaslight themselves into thinking this AI world is better, because who wants to grow up knowing they live in a shitty era full of slop? Don’t believe it.

7thaccount•10m ago
LLMs are useful tools, but certainly have big limitations.

I think we'll continue to see anything automated that can be automated in a way that reduces headcount. So you put the dumb AI in as a first line of defense and lay off half the customer service staff you had before.

In the meantime: fewer and fewer jobs (especially entry-level), a rising poor class as the middle class is eliminated, and a greater wealth gap than ever before. The markets are also going to collapse from this AI bubble. It's just a matter of when.

cardanome•4m ago
The development of LLMs required access to huge amounts of decent training data.

It could very well be that the current generation of AI has poisoned the well for any future attempts at creating AI. You can't trivially filter out the AI slop, and humans are less likely to make their handcrafted content freely available for training. On top of that, training models on GPL code in violation of its license might be ruled illegal, along with generally stricter rules on which data you are allowed to use for training.

We might have reached a local optimum that is very difficult to escape from. There might be a long, long AI winter ahead of us, for better or worse.

> the world isn’t going back to what it was before AI. This is it, forever.

I feel this so much. I thought my longing for the pre-smartphone days was bad, but damn, we have lost so much.

kunley•18m ago
Can't wait

teaearlgraycold•17m ago
LLMs have failed to live up to the hype, but they haven't failed outright.

HardCodedBias•7m ago
Two claims here:

1) LLMs have failed to live up to the hype.

Maybe. Depends on whose hype. But I think it is fine to say that we don't have AGI today (however that is defined) and that some people hyped that up.

2) LLMs haven't failed outright

I think that this is a vast understatement.

LLMs have been a wild success. At big tech companies, over 40% of checked-in code is LLM-generated. At smaller companies the proportion is larger. ChatGPT has over 800 million weekly active users.

Students throughout the world, and especially in the developed world, are using "AI" at rates of 85-90% (according to some surveys).

Between 40% and 90% of professionals (depending on survey and profession) are using "AI".

This is 3 years after the launch of ChatGPT (and the capabilities of ChatGPT 3.5 were so limited compared to today that it is a shame they get bundled together in our discussions). Instead of "failed outright", I would say they are the most successful consumer product of all time (so far).

moltar•15m ago
> The technology is essentially a failure

Really? I derive a ton of value from it. For me it’s a phenomenal advancement and not a failure at all.

tarr11•14m ago
> This has convinced many non-programmers that they can program, but the results are consistently disastrous, because it still requires genuine expertise to spot the hallucinations.

I've been programming for 30+ years and am now a people manager. Claude Code has enabled me to code again, and I'm several times more productive than I ever was as an IC in the 2000s and 2010s. I suspect this person hasn't really tried the most recent generation; it is quite impressive and works very well if you do know what you are doing.

agubelu•12m ago
Isn't that what the author means?

"it still requires genuine expertise to spot the hallucinations"

"works very well if you do know what you are doing"

hombre_fatal•3m ago
But it can work well even if you don't know what you are doing (or don't look at the impl).

For example, build a TUI or GUI with Claude Code while only giving it feedback on the UX.

Hallucinations that lead to code that doesn't work just get fixed.

Lionga•10m ago
It seems to work well if you DON'T really know what you are doing, because you cannot spot the issues.

If you know what you are doing, it works kind of mid. You see how anything more than a prototype will create lots of issues in the long run.

Dunning-Kruger effect in action.

chomp•10m ago
For toy and low-effort coding it works fantastically. I can smash out changes and PRs fantastically quickly, and they're mostly correct. However, certain problem domains and tough problems cause it to spin its wheels worse than a junior programmer, especially if some of the back-and-forth troubleshooting goes longer than one context compaction. Then it can forget the context of what it's tried in the past and go back to square one (it may know that it tried something, but it won't know the exact details).
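Roughly why (assuming a naive summarize-and-keep-recent compaction scheme; this is a hypothetical simplification for illustration, not what any particular tool actually does): anything the summary drops is simply no longer visible to the model.

    # Toy sketch of lossy context compaction (hypothetical scheme).
    def compact(history, keep_last=2, budget=160):
        # Over budget: collapse everything except the most recent turns
        # into one terse summary line. Detail the summary omits is gone.
        if sum(len(m) for m in history) <= budget:
            return history
        old, recent = history[:-keep_last], history[-keep_last:]
        summary = "earlier: " + "; ".join(m.split(",")[0] for m in old)
        return [summary] + recent

    history = [
        "tried pinning libfoo to 1.2.3, because 1.3 broke the build",
        "reverted the pin, the real issue was a stale lockfile",
        "regenerated the lockfile, tests pass locally",
        "now debugging a flaky integration test in CI",
    ]
    print(compact(history))
    # The *why* behind the pin and the revert is gone, so a later turn
    # can cheerfully suggest pinning libfoo to 1.2.3 again.

That's the "goes back to square one" behavior: it remembers that something was tried, but not the details of why.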
stingraycharles•9m ago
If you’ve been programming for 30+ years, you definitely don’t fall under the category of “non-programmers”.

You have decades upon decades of experience on how to approach software development and solve problems. You know the right questions to ask.

The actual non-programmers I see on Reddit are having discussions about topics such as “I don’t believe that technical debt is a real thing” and “how can I go back in time if Claude Code destroyed my code”.

ZeroConcerns•12m ago
Well, the original "AI winter" was caused by defense contracts running out without anything to show for it -- turns out, the generals of the time could only be fooled by Eliza clones for so long...

The current AI hype is fueled by public markets, and as they found out during the pandemic, the first one to blink and acknowledge the elephant in the room loses, bigly.

So, even in the face of a devastating demonstration of "AI" ineffectiveness (which I personally haven't seen, despite things being, well, entirely underwhelming), we may very well be stuck in this cycle for a while yet...

citizenpaul•11m ago
>Expect OpenAI to crash, hard, with investors losing their shirts.

Lol, someone doesn't understand how the power structure works ("the golden rule"). There is a saying: if you owe the bank $100k, you have a problem; if you owe the bank ten million, the bank has a problem. OpenAI and the other players have made this bubble so big that there is no way the power system will allow itself to take the hit. Expect some sort of tax-subsidized bailout in the near future.

aroman•11m ago
When the hype is infinite (technological singularity and utopia), any reality will be a let down.

But there is so much real economic value being created - not speculation, but actual business processes - billions of dollars - it’s hard to seriously defend the claim that LLMs are “failures” in any practical sense.

Doesn’t mean we aren’t headed for a winter of sobering reality… but it doesn’t invalidate the disruption either.

n4r9•8m ago
> not speculation, but actual business processes

Is there really a clear-cut distinction between the two in today's VC and acquisition based economy?

emp17344•8m ago
Other than inflated tech stocks making money off the promise of AI, what real economic impact has it actually had? I recall plenty of articles claiming that companies are having trouble actually manifesting the promised ROI.

api•6m ago
This is why I hate hype.

"We just cured cancer! All cancer! With a simple pill!"

"But you promised it would rejuvenate everyone to the metabolism of a 20 year old and make us biologically immortal!" New headline: "After spending billions, project to achieve immortality has little to show..."

Pretty much all tech progress is like this now because the hype is always pushed to insane levels.

With LLMs we have, at the very least, solved natural language processing on computers. This is absolutely huge. We've also largely solved the "fuzzy input problem" that is intrinsic to pretty much all computing.

keybored•5m ago
Hype Infinity is a form of apologia that I haven't seen before.

stanfordkid•9m ago
This article swings the computational complexity hammer way too hard and discounts huge progress in every field of AI outside the hot trend of transformers and LLMs. Nobody is saying the future of AI is autoregressive, and this article pretty much ignores the research that has been posted here around diffusion-based text generation or how it can be combined with autoregressive methods… and it discounts multi-modal models entirely. He also pretty much discounts everything that's happened with AlphaFold, AlphaGo, reinforcement learning, etc.

The argument that computational complexity has something to do with this could have merit, but the article certainly doesn't indicate why. Is the brain NP-complete? Maybe, maybe not. I could see many arguments for why modern research will fail to create AGI, but just hand-waving "reality is NP-hard" is not enough.

The fact is: something fundamental has changed that enables a computer to pretty effectively understand natural language. That's a discovery on the scale of the internet or Google search and shouldn't be discounted… and usage proves it. In two years there is a platform with billions of users. On top of that, huge fields of new research are making leaps and bounds with novel methods applying AI to chemistry, computational geometry, biology, etc.

It’s a paradigm shift.

gishh•6m ago
> something fundamental has changed that enables a computer to pretty effectively understand natural language.

You understand how the tech works, right? It's statistics and tokens. The computer understands nothing. Creating "understanding" would be a breakthrough.
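To make that concrete, here is a toy sketch (pure illustration; a random scoring function stands in for the trained network, so nothing like a real model) of what "statistics and tokens" means: generation is just repeatedly sampling the next token from a probability distribution over a vocabulary.

    # Toy autoregressive sampler; the "model" is a random scorer, a
    # stand-in for the transformer a real LLM would use.
    import math
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def fake_logits(context):
        # A real LLM computes these scores from the context; this
        # illustration just makes them up.
        return [random.uniform(-1.0, 1.0) for _ in VOCAB]

    def softmax(logits):
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(prompt, steps=5):
        tokens = prompt.split()
        for _ in range(steps):
            probs = softmax(fake_logits(tokens))
            # Sample the next token from the distribution, append it, repeat.
            tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
        return " ".join(tokens)

    print(generate("the cat"))

Whether that loop amounts to "understanding" when the scoring function is a trillion-parameter network is exactly the disagreement here.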

paganel•7m ago
I've been reading about this supposed AI winter for at least 5 years by now, and in the meantime any AI-related stock has gone 10x or more.

neom•4m ago
Blog posts like this make me think model adoption and matching models to appropriate use cases is... lumpy at best. Every time I read something like it, I wonder what tools they are using and how. Modern systems are not raw transformers. A raw transformer will “always output something,” they're right, but nobody deploys naked transformers. This is like claiming CPUs can't do long division because the ALU doesn't natively understand decimals. Also, a model is a statistical approximation trained on the empirical distribution of human knowledge work; it is not trying to compute exact solutions to NP-complete problems. Nature does not require worst-case complexity, and real-world cognitive tasks are not worst-case NP-hard instances...

numbers_guy•2m ago
AlexNet was only released in 2012. The progress made in just 13 years has been insane. So while I do agree that we are approaching a "back to the drawing board" era, calling the past 13 years a "failure" is just not right.

Paper shows scientific foundation model learns general abstract physics concepts

https://arxiv.org/abs/2511.20798
1•iRoygbiv•2m ago•1 comments

We hacked Lovable to Vibe QA your apps

https://chromewebstore.google.com/detail/vibe-qa-ai-powered-testin/mnjjjnhaidpiaaknihingdmbnkoleffc
1•tarasyarema•3m ago•1 comments

Apple: STARFlow-V, a Normalizing Flow Model for Causal Video Generation

https://huggingface.co/apple/starflow
1•eyk19•5m ago•0 comments

DeepSeek-v3.2: Pushing the Frontier of Open Large Language Models

https://cas-bridge.xethub.hf.co/xet-bridge-us/692cfec93b25b81d09307b94/2d0aa38511b9df084d12a00fe0...
1•airstrike•5m ago•0 comments

An AI model trained on prison phone calls now looks for planned crimes in calls

https://www.technologyreview.com/2025/12/01/1128591/an-ai-model-trained-on-prison-phone-calls-is-...
1•rbanffy•6m ago•0 comments

Specification Grounding: The Missing Link in Vibe Coding

https://unstract.com/blog/specification-grounding-vibe-coding/
1•naren87•6m ago•0 comments

After 40 years of adventure games, Ron Gilbert pivots to outrunning Death

https://arstechnica.com/gaming/2025/12/after-40-years-of-adventure-games-ron-gilbert-pivots-to-ou...
1•mikhael•6m ago•0 comments

Self-Organized Criticality

https://en.wikipedia.org/wiki/Self-organized_criticality
1•indigodaddy•6m ago•1 comments

Did JWST Find an Exomoon or a Starspot?

https://www.universetoday.com/articles/did-jwst-find-an-exomoon-or-a-starspot
1•rbanffy•6m ago•0 comments

DSpico: Open-Source Nintendo DS Flashcart

https://github.com/LNH-team/dspico-hardware
1•akyuu•7m ago•0 comments

VSCode Tasks files used in new malware campaign

https://opensourcemalware.com/blog/contagious-interview-vscode
2•6mile•7m ago•0 comments

UAE Shifts Space Focus to Economy and Security

https://aviationweek.com/space/budget-policy-regulation/uae-shifts-space-focus-economy-security
1•mooreds•8m ago•1 comments

Science of cycling still largely mysterious (2016)

https://www.cbc.ca/news/science/science-of-cycling-still-mysterious-1.3699012
1•mooreds•8m ago•0 comments

Why I'm Betting Against the AGI Hype

https://www.notesfromthecircus.com/p/why-im-betting-against-the-agi-hype
2•flail•10m ago•0 comments

"There's Just No Reason to Deal with Young Employees"

https://nymag.com/intelligencer/article/ai-replacing-entry-level-jobs-gen-z-careers.html
4•mooreds•10m ago•1 comments

Low PNR Entropy: I accessed all airline bookings via simple math

https://alexschapiro.com/blog/security/vulnerability/2025/11/20/avelo-airline-reservation-api-vul...
3•bearsyankees•12m ago•1 comments

WhatsApp Ending Support For KaiOS

https://faq.whatsapp.com/420008397294796
2•j4nek•13m ago•0 comments

Android RCS Archival on Pixel

https://blog.google/products/android-enterprise/rcs-archival/
1•brycehalley•14m ago•0 comments

Show HN: RFC Hub

https://rfchub.app/
1•tlhunter•15m ago•0 comments

Critical Infrastructure: Bundestag Passes NIS2 Law

https://www.heise.de/en/news/Critical-Infrastructure-Bundestag-Passes-NIS2-Law-11078054.html
2•doener•15m ago•0 comments

Body Weight-Specific Molecular Responses to Chronic Orange Juice Consumption

https://onlinelibrary.wiley.com/doi/10.1002/mnfr.70299
1•PaulHoule•17m ago•0 comments

Touching the Elephant – TPUs

https://considerthebulldog.com/tte-tpu/
2•alivetoad•18m ago•0 comments

Proposed Price Increases for Sourcehut

https://sourcehut.org/blog/2025-12-01-proposed-pricing-changes/
2•dpatterbee•20m ago•0 comments

Tips for Configuring Neovim for Claude Code

https://xata.io/blog/configuring-neovim-coding-agents
1•tudorg•20m ago•0 comments

Lawmakers Suggest Follow-Up Boat Strike Could Be a War Crime

https://www.nytimes.com/2025/11/30/us/politics/trump-boat-strikes-war-crime.html
1•donsupreme•22m ago•0 comments

I shipped multi-tenant SaaS in 15 days with AI. Here's everything that broke

https://sentientnotes.substack.com/p/what-vibecoding-a-real-saas-taught
2•roynal•22m ago•1 comments

Show HN: Rust-based ultra-low latency streaming framework – Wingfoil

https://github.com/wingfoil-io/wingfoil
2•terraplanetary•23m ago•1 comments

Loopmaster

https://loopmaster.xyz/
1•fredley•23m ago•0 comments

Last Supper (Defense Industry)

https://en.wikipedia.org/wiki/Last_Supper_(defense_industry)
1•thunderbong•24m ago•0 comments

Learning Not to Know

https://stevenscrawls.com/learning-not-to-know/
1•surprisetalk•24m ago•0 comments