frontpage.

Scientists Confirm Widespread Microplastics in Milk and Cheese

https://www.foodandwine.com/microplastics-milk-and-cheese-2025-study-11827170
1•donsupreme•38s ago•0 comments

Free Software Hasn't Won

https://dorotac.eu/posts/fosswon/
1•LorenDB•3m ago•0 comments

California's "Opt Me Out Act" Makes Browser-Based Opt-Out a Baseline

https://captaincompliance.com/education/californias-opt-me-out-act-makes-browser-based-opt-out-a-...
2•richartruddie•9m ago•1 comments

Show HN: Osmea – open-source Flutter Architecture for E-commerce Apps

https://osmea.masterfabric.co
2•nurLife•10m ago•0 comments

Silicon Valley: The Musical

https://www.svmusical.com/
1•scottfits•11m ago•0 comments

The new best free mind map tool

https://pathmind.app/home/
1•WTCAE•12m ago•0 comments

An initial investigation into WDDM on ReactOS

https://reactos.org/blogs/investigating-wddm/
2•LorenDB•14m ago•0 comments

We are different from all other humans in history

https://www.forkingpaths.co/p/we-are-different-from-all-other-humans-ad0
1•pseudolus•17m ago•0 comments

Agent Learning via Early Experience

https://arxiv.org/abs/2510.08558
1•jonbaer•18m ago•0 comments

Help Identify Fake/Scam Investors or Website from Switzerland, Dubai, and Beyond

https://www.escamly.com/
1•Bikashhh•23m ago•1 comments

The End Of An Era: The Mac division undergoes an inconceivable reorganization

https://folklore.org/The_End_Of_An_Era.html
2•stmw•25m ago•0 comments

Show HN: Music Visualizer with Animated Color Themes Created via ChatGPT Prompts

https://github.com/sylwekkominek/SpectrumAnalyzer
1•sylwekkominek•29m ago•1 comments

Oracle roared into AI gold rush, but it's taking on huge amounts of debt to do so

https://www.barrons.com/articles/larry-ellison-oracle-56e03912
1•zerosizedweasle•29m ago•1 comments

MAML – a new configuration language (similar to JSON, YAML, and TOML)

https://maml.dev/
2•birdculture•30m ago•0 comments

Women taking Meta to task after their baby loss

https://www.bbc.co.uk/news/articles/ce8450380zyo
1•afandian•35m ago•0 comments

Hackers exploit a blind spot by hiding malware inside DNS records

https://arstechnica.com/security/2025/07/hackers-exploit-a-blind-spot-by-hiding-malware-inside-dn...
2•alwillis•37m ago•1 comments

#2. How Germany Is Losing the Battle for the Brightest Minds

https://gersemann.substack.com/p/2-how-germany-is-losing-the-battle
2•paulpauper•38m ago•0 comments

New archaeology tranche for Emergent Ventures

https://marginalrevolution.com/marginalrevolution/2025/10/new-archaeology-tranche-for-emergent-ve...
1•paulpauper•38m ago•0 comments

The Ruby Annotation Element

https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/ruby
2•amadeuspagel•47m ago•0 comments

Rapid rise of private club and travel teams found in youth sports

https://phys.org/news/2025-09-rapid-private-club-teams-youth.html
2•PaulHoule•50m ago•1 comments

Learn how Google Maps and mapping software works intuitively

https://www.secretsofmaps.com/
1•adas4044•51m ago•3 comments

Show HN: Watts Up – a bike trainer-powered, arcade browser game

https://github.com/jsattler/wattsup
1•jsattler•52m ago•0 comments

Trusting builds with Bazel remote execution

https://jmmv.dev/2025/09/bazel-remote-execution.html
2•jmmv•52m ago•3 comments

Django: Django-HTTP-compression – Adam Johnson

https://adamj.eu/tech/2025/10/10/introducing-django-http-compression/
1•todsacerdoti•53m ago•0 comments

Composeable stream processing: reactive dataflow graphs in Python

https://github.com/Point72/csp
1•timkpaine•54m ago•1 comments

Barron Trump tipped for top job at TikTok after dad tells users they 'owe' him

https://www.independent.co.uk/news/world/americas/us-politics/barron-trump-tiktok-job-trump-adven...
10•voxadam•55m ago•1 comments

Show HN: Promptlet – Mac app to help you stop typing "ultrathink" over and over

https://www.josh.ing/promptlet
1•jshchnz•55m ago•0 comments

Celebrating OpenCQRS 1.0

https://docs.eventsourcingdb.io/blog/2025/10/13/celebrating-opencqrs-10/
1•goloroden•55m ago•0 comments

Assessing microplastic contamination in milk and dairy products

https://www.nature.com/articles/s41538-025-00506-8
1•mikhael•56m ago•0 comments

Why, in 2025, do we still need a 3rd party app to write a REST API with Django?

https://emma.has-a.blog/articles/why-do-we-need-an-external-app-for-rest.html
1•Bogdanp•57m ago•0 comments

After the AI boom: what might we be left with?

https://blog.robbowley.net/2025/10/12/after-the-ai-boom-what-might-we-be-left-with/
63•imasl42•2h ago

Comments

arisAlexis•1h ago
The singularity. I don't think most authors of articles like these understand what the AI build-up is about; they think it's another fad tool.
mjhay•1h ago
You might want to listen to what people besides Sam Altman and Ray Kurzweil say, at least once in a while.
gizmo686•1h ago
We've had AI booms before. In terms of capabilities, this one is following exactly the same trajectory. Human researchers come up with some breakthrough improvement to AI methods, which results in exponential-like growth of capability as we both pick the low-hanging fruit the method offers and scale up the compute and data available to the limits of what is useful for it. Then capabilities start to plateau, and there is a long tail of the new techniques being applied in specific situations as they get combined with domain-specific tuning and architecture.
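As a toy illustration of that boom-then-plateau shape, here is a logistic curve with made-up parameters (nothing here is fitted to real data):

  import math

  def capability(t, ceiling=100.0, rate=1.2, midpoint=5.0):
      # Logistic curve: looks roughly exponential early on, then plateaus near the ceiling.
      return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

  for year in range(11):
      gain = capability(year) - capability(year - 1)
      print(f"year {year:2d}: capability {capability(year):6.1f}, yearly gain {gain:5.1f}")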

We are well into this process already. Core chat capabilities have pretty much stalled out. But most of the attempts at application are still very thin layers over chat bots.

abathologist•1h ago
Indeed, the critics can only be so critical because they are not convinced of the revealed truth that we are materializing a machine god. How irrational.
dotnet00•1h ago
I'm hoping that it's sarcasm to be invoking a machine god while calling the non-believers irrational.
dinobones•1h ago
GPUs still won't be cheap
lifestyleguru•1h ago
What will be the next thing eating all GPUs, after crypto and now AI?
tobias3•54m ago
The autonomous drones fighting in the next war (let's hope not...).
WalterSear•49m ago
IMHO, it will be autonomous robotics - one way or another.
mikert89•1h ago
I can't believe people still aren't grasping the profound implications of computers that can talk and make decisions.
Mistletoe•1h ago
How do we make money on it, especially if massive amounts of the population lose their jobs?
mikert89•1h ago
Dude, again: we have computers that can talk and make decisions. We have birthed something here. We have something; this is big.
mjhay•1h ago
Even by HN standards, this is just an incredible comment. You’d think it’s satire, but I doubt it.
rootusrootus•1h ago
One of the reasons I like to swing by HN on the weekend is that the flavor of the comments is a lot spicier. For better or worse.
cactusplant7374•1h ago
Is that a thing now?
noir_lord•1h ago
Has been for a while; the difference isn't huge, but there does seem to be one.

Slightly different cohorts.

mikert89•1h ago
I've been on HN for about 15 years :)
mikert89•1h ago
See, I was thinking my comments didn't go far enough in describing what we are witnessing.
skywhopper•1h ago
The last thing we need is a bunch of random chatter, but that’s all these things can do. They can’t make decisions because they don’t actually relate to the real world. Humans (like you, apparently) may think a talking computer is a magic oracle but you’re fooling yourself.
dgacmu•1h ago
I had a computer that could talk and make decisions in 1982. It sounded pretty robotic and the decisions were 1982-level AI: Lots of if statements.

I'm not really trying to be snarky; I'm trying to point out to you that you're being really vague. And that when you actually get really, really concrete about what we have it ... starts to seem a little less magical than saying "computers that talk and think". Computers that are really quite good at sampling from a distribution of high-likelihood next language tokens, based upon a complex and long context window, are still a pretty incredible thing, but it seems a little less likely to put us all out of a job in the next 10 years.
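For anyone who hasn't seen that sampling step spelled out, here is a minimal sketch in plain Python; the vocabulary, logits, and temperature are made-up illustration values, not taken from any real model:

  import math, random

  def sample_next_token(logits, temperature=0.8):
      # Lower temperature sharpens the distribution; higher flattens it toward uniform.
      scaled = [x / temperature for x in logits]
      # Softmax: turn scaled logits into probabilities.
      m = max(scaled)
      exps = [math.exp(x - m) for x in scaled]
      total = sum(exps)
      probs = [e / total for e in exps]
      # Draw one token index according to those probabilities.
      return random.choices(range(len(logits)), weights=probs, k=1)[0]

  vocab = ["the", "cat", "sat", "on", "mat"]   # hypothetical 5-token vocabulary
  logits = [2.1, 0.3, -1.0, 0.5, 1.7]          # hypothetical scores for the next position
  print(vocab[sample_next_token(logits)])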

dcminter•1h ago
Define "make decisions" such that an 'if' statement does not qualify but an llm does.

LLMs may be a stepping stone to AGI. It's impressive tech. But nobody's proven anything like that yet, and you're running on pure faith not facts here.

mikert89•57m ago
I mean, if given the choice between using a coding agent like the Codex UI or a CS intern, I would take the coding agent every time. To me it's self-evident what's going on.
dcminter•49m ago
Well frankly your lack of concrete arguments makes it seem a lot like you don't actually understand what's going on here.

I'm enjoying the new LLM based tooling a lot, but nothing about it suggests that we're in any way near to AGI because it's very much a one trick pony so far.

When we see generative AI that updates its weights in real time (currently an intractable problem) as part of the feedback loop then things might get very interesting. Until then it's just another tool in the box. CS interns learn.

rootusrootus•14m ago
I get a new batch of CS interns for my team every year, and I use Claude Code every day. I think Claude is pretty amazing and it definitely provides value for me, but I would choose the intern every time. I am really skeptical of any claims of getting the kind of productivity and capability growth out of an LLM that would make it an adequate replacement for a human developer.
cactusplant7374•1h ago
You should post something more substantial than this.
mikert89•56m ago
It's a problem of imagination, of "situational awareness". People are simply not aware of what we have discovered; their minds cannot see beyond a chatbox. That's not even to mention the smoothness of the loss functions the big AI labs are seeing, the smooth progression of the scaling laws. It's all there, and it progresses daily.
falcor84•1h ago
You know the quote "it is easier to imagine an end to the world than an end to capitalism"? Well, AI has allowed me to start imagining the end of capitalism. I'm not claiming we're necessarily very close, but I can clearly see the way from here to a post-scarcity society.
Peritract•1h ago
How do you square that with all current AI development being intensely capitalistic?
rootusrootus•21m ago
> I can clearly see the way from here to a post-scarcity society.

I would be interested to hear the way that you see. I don't have any problem seeing a huge number of roadblocks to post-scarcity that AI won't solve, but I am open to a different perspective.

gosub100•14m ago
That's hypothetically possible if a government somehow forced corporations to redistribute their wealth. But a civil war is equally likely, where society is destroyed after poor people with guns have their say.
b_e_n_t_o_n•1h ago
What are they?
mikert89•1h ago
How could I even begin to list them? That is the point of my original comment.
dcminter•1h ago
If they're that profound you should be able to come up with one example though, right?

Not that I think you're wrong, but come on - make the case!

I have the very unoriginal view that - yes, it's a (huge) bubble, but also, just like the dot-com bubble, the technology is a big deal - and it's not obvious what will stand and what will fall in the aftermath.

Remember that Sun Microsystems, a very established pre-dot com business, rose to huge heights on the bubble and was then smashed by the fall when it popped. Who's the AI bubble's Sun and who's its Amazon? Place your bets...

b_e_n_t_o_n•1h ago
Hahahaha right
lgas•1h ago
Just pick one then, since so far you've conveyed nothing at all about them, so we're all left to wonder what you might be thinking of.
softwaredoug•1h ago
You can have a bubble, and still have profound impact from AI. See also the dotcom boom.
mikert89•1h ago
Who cares about a bubble? We are on the cusp of intelligent machines. The implications will last for hundreds of years and may change the trajectory of humanity.
mjr00•1h ago
> we are on the cusp of intelligent machines.

Nah, we aren't. There's a reason the output of generative AI is called slop.

skywhopper•1h ago
Except we aren’t. They aren’t thinking, and they can’t actually make decisions. They generate language. That feels magical at first but in the end it’s not going to solve human problems.
bigyabai•1h ago
> we are on the cusp of intelligent machines.

Extraordinary claims demand extraordinary evidence. We have machines that talk, which by itself implies nothing.

fruitworks•1h ago
Sometimes I wonder if reason is partially a product of manipulating language.
card_zero•7m ago
It's nice that you say partially; that's a bit different from every other HN comment that wondered this. Yeah, probably partially, as in: you have reason, you add in language, you get more reason.
dcminter•1h ago
> we are on the cusp of intelligent machines

That's an extremely speculative view that has been fashionable at several points in the last 50 years.

Gattopardo•1h ago
My dude, it's literally just fancy autocomplete and isn't intelligent at all.
rusk•1h ago
Clippy with a Bachelors in web search
forgotusername6•1h ago
What makes you so sure that you aren't just fancy autocomplete?
alganet•1h ago
I am sure that if I am a fancy auto-complete, I'm way fancier than LLMs. A whole different category of fancy way above their league. Not just me, but any living human is.
abathologist•1h ago
I am so sure because of the self-evidence of my experience, the results of 2 millennia of analysis into the nature of cognition and experience, and consideration of the material composition of our organism (we obviously have lots of critical analog components, which are not selecting tokens, but instead connecting with flows from other continua).

Prediction is obviously involved in certain forms of cognition, but it obviously isn't all there is to the kinds of beings we are.

daytonix•1h ago
brother please
lifestyleguru•1h ago
Matrix is calling you on the rotary dial phone.
stanac•1h ago
Is that why we no longer have telephone boots? They want to prevent Neo from jumping in and out of Matrix?
lifestyleguru•1h ago
Oh god, you know it.
card_zero•11m ago
Telephone ... boots?

Napkin scribbles

maxglute•1h ago
A bubble burst would mean the current technical approach to AI is an economic dead end. If the most resourced tech companies in the world can't afford to keep improving AI, then it's probably not going to happen, because the public likely isn't going to let the state spend $$$$ in lieu of services on sovereign AI projects that will make them unemployed.
parineum•1h ago
I don't think anyone underestimates that and a lot of people can't wait to see it.
mikert89•1h ago
Anyone mentioning a bubble is underestimating the gravity of what's going on.
quesera•1h ago
I'm old enough to have heard this before, once or thrice.

It's always different this time.

More seriously: there are decent arguments that say that LLMs have an upper bound of usefulness and that we're not necessarily closer to transcending that with a different AI technology than we were 10 or 30 years ago.

The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. These applications might be net-negative or net-positive, it will probably vary by circumstance. But they might not become what you're extrapolating them into.

ff2400t•1h ago
I think you aren't understanding the meaning of the word bubble here. No one can deny the impact LLMs can have, but they still have limits. The term bubble is used here as an economic phenomenon: it refers to the money that OpenAI is planning on spending, which it doesn't have. So much money is being poured in here, but most users won't pay the outrageous sums that will actually be needed to run these LLMs; the break-even point looks so far off that you can't even think about actual profitability. After the bubble bursts we will still have all the research done, the hardware left over, and smaller LLMs for people to use for on-device stuff.
mikert89•1h ago
The real innovation is that neural networks are generalized learning machines. LLMs are neural networks applied to human language. The implications of world models + LLMs will take them farther.
KPGv2•1h ago
The neural net was invented in the 1940s, and LLMs were created in the 1960s. It's 2025 and we're still using 80-year-old architecture. Call me cynical, but I don't understand how we're going to avoid the physical limitations of GPUs and of data to train AIs on. We've pretty much exhausted the latter, and the former is going to hit sooner rather than later. We'll be left at that point with an approach that hasn't changed much since WW2, and our only path forward will run into hard physical limits.

Even in 2002, my CS profs were talking about how GAI was a long time off because we had been trying for decades to innovate on neural nets and LLMs and nothing better had been created, despite some of the smartest people on the planet trying.

mikert89•59m ago
They didn't have the compute or the data to make use of NNs. But theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. They were probably right, and it's a shame they didn't live to see what's happening right now.
KPGv2•47m ago
> they didnt have the compute or the data to make use of NNs

The compute and data are both limitations of NNs.

We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).

Standard laws of physics restrict the compute side, just as we know we will hit them with CPUs. Eventually you just cannot put components that generate more heat any closer together, because they interfere with each other; we hit the physical laws of miniaturization.

No, GAI will require new architectures no one has thought of in nearly a century.

mikert89•17m ago
Dude, who cares about data and compute limits? Those can be solved with human ingenuity. The problem of creating a generalized learning algorithm has been solved. A digital god has been summoned.
muldvarp•58m ago
The internet was world changing and the dotcom bubble was still a bubble.
Starlevel004•1h ago
The self-checkout machines at the supermarket can talk and make decisions. I don't see them revolutionising the world.
mikert89•1h ago
Think bigger, because this certainly is. Change on the order of years means nothing.
Starlevel004•1h ago
Sorry, I don't believe in Calvinism.
Legend2440•1h ago
1. That’s not remotely the same, and you know it.

2. The category of computerized machines (of which self checkouts are one example) has absolutely revolutionized the world. Computerization is the defining technology of the last twenty years.

alganet•1h ago
What is that category and what other machines are in it?
givemeethekeys•1h ago
> I don't see them revolutionising the world.

They revolutionized supermarkets.

KPGv2•1h ago
In what way?

I would really like to hear you explain how they revolutionized supermarkets.

I use them every day, and my shopping experience is far better served by a smaller store than by one with automated checkout machines. (Smaller means so much faster.)

Hell, if you go to Costco, the automated checkout line moves slower than the ones manned by experienced workers.

Starlevel004•45m ago
Unless you happen to be some sort of rodent that feeds off of discarded grains, the supermarket is not the world.

And for small baskets, sure, but it was scan-as-you-shop that really changed supermarkets, and those things thankfully do not talk.

ogogmad•58m ago
This is the most perfect troll comment I've ever seen. Bravo.
dcminter•45m ago
I think it's worth engaging with even if this guy's a troll (not saying he is) because it's not that freakish a view in the real world. What are the arguments to counter this kind of blind enthusiasm?
flyinglizard•1h ago
From a software development perspective, the more I think of it, the more I understand it's just another abstraction layer. Before that came high-level languages and JVM-like runtimes, before that came the compiler, before that came the assembler.

Outside of the software world it's mostly a (much!) better Google.

Between now and a Star Trek world, there's so much to build that we can use any help we can get.

mikert89•1h ago
Yeah, we have fuzzy computer interfaces now, instead of "hard-coded" APIs.
elorant•1h ago
Speech-to-text and vice versa have existed for over a decade. Where's the life-altering application from that?
KPGv2•1h ago
> Speech-to-text and vice versa have existed for over a decade.

Indeed. I was using speech to text three decades ago. Dragon Naturally Speaking was released in the 90s.

AstroBen•1h ago
...except they can't

It's blatantly obvious if you work with something you personally have a lot of expertise in. They're effectively advanced search engines. Useful, sure... but they're not anywhere close to "making decisions"

gosub100•11m ago
An RNG can "make a decision" lol.
zkmon•1h ago
Maybe you are not grasping the convergence effect of the overall socio-political-economic trends that could actually label AI output as abhorrent plastic pollution, or at least not a high priority for the public good.
paufernandez•1h ago
In my case I fully grasp what such a future could be, but I don't think we are on the path to that, I believe people are too optimistic, i.e. they just believe instead of being truly skeptical.

From where I look at it, LLMs are flawed in many ways, and people who see progress as inevitable do not have a mental model of the foundation of those systems to be able to extrapolate. Also, people do not know any other forms of AI or have thought hard about this stuff on their own.

The most problematic things are:

1) LLMs are probabilistic and a continuous function, forced by gradient descent. (Just having a "temperature" seems so crazy to me.) We need to merge symbolic and discrete forms of AI. Hallucinations are the elephant in the room. They should not be swept under the rug. They should just not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion)

2) Even if generalization seems ok, I think it is still really far from where it should be, since humans need exponentially less data and generalize to concepts way more abstract than AI systems. This is related to HASA and ISA relations. Current AI systems do not have any of that. Hierarchy is supposed to be the depth of the network, but it is a guess at best.

3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers and it is motivated by the rush to win the race. However, I am not so sure that, even if the goal seems so close now, we are going to reach it. What are we gonna do? Keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I think that that is not solving AI at all. And I'm almost sure that a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.

We need new models, way simpler, symbolic and continuous at the same time (i.e. symbolic systems that simulate the continuous), non-gradient-descent learning (just store stuff, like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc., etc.
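As a rough, purely hypothetical sketch of what explicit IS-A / HAS-A structure can look like (a toy lookup, not a claim about how such a system would be learned):

  # Toy symbolic knowledge base: IS-A links form a taxonomy, HAS-A links describe parts.
  ISA = {"poodle": "dog", "dog": "mammal", "mammal": "animal"}
  HASA = {"dog": ["tail", "fur"], "animal": ["body"]}

  def is_a(thing, kind):
      # Walk the IS-A chain upward: a poodle is a dog, hence a mammal, hence an animal.
      while thing is not None:
          if thing == kind:
              return True
          thing = ISA.get(thing)
      return False

  def parts_of(thing):
      # Inherit HAS-A parts from every ancestor in the taxonomy.
      parts = []
      while thing is not None:
          parts += HASA.get(thing, [])
          thing = ISA.get(thing)
      return parts

  print(is_a("poodle", "animal"))   # True
  print(parts_of("poodle"))         # ['tail', 'fur', 'body']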

Even if we make progress by brute forcing it with resources, there is so much work to simplify and find new ideas that I still don't understand why people are so optimistic.

mikert89•47m ago
Symbols and concepts are just collections of neurons that fire with the correct activation. It's all about the bitter lesson: human beings cannot design AI, they can only find the most general equations and the most general loss function, and push data in. And that's what we have, and that's why it's a big deal. The LLM is just a manifestation of a much broader discovery, a generalized learning algorithm. It worked on language because of the information density, but with more compute, we may be able to push in more general sensory data...
pixl97•45m ago
Symbolic AI is mostly dead; we spent a lot of time and money on it and got complex and fragile systems that are far worse than LLMs.
ogogmad•37m ago
Not sure this is a good counterpoint in defence of LLMs, but I'm reminded of how Unix people explain why (in their experience) data should be encoded, stored and transmitted as text instead of something more seemingly natural like binary. It's because text provides more ways to read and transform it, IN SPITE of its obvious inefficiency. LLMs are the ultimate Unix text transformation filter. They are extremely flexible out-of-the-box, and friendly towards experimentation.
ACCount37•28m ago
Symbolic AI is dead. Either stop trying to dig out and reanimate its corpse, or move the goalposts like Gary Marcus did - and start saying "LLMs with a Python interpreter beat LLMs without, and Python is symbolic, so symbolic AI won, GG".

Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.

For how "naive" transformer LLMs seem, they sure set a high bar.

Saying "I know better" is quite easy. Backing that up is really hard.

rafavento•1h ago
Computers have been able to talk and make decisions from the beginning. Maybe you meant mimicking humans?
mikert89•46m ago
"Mimic" is quite a loaded word.
dotnet00•1h ago
Reminds me of the crypto/Web 3.0 hype. Lots of bluster about changing economic systems and offering people freedom and wealth, only for most of it to be scams, with inherent drawbacks and costs too serious to solve many of the big problems it promised to solve.

In the end it left the world changed, but not as meaningfully or positively as promised.

mikert89•1h ago
The difference is that the impact of crypto was always hypothetical; ChatGPT can be used, explored, and, if you are creative enough, leveraged as the ultimate tool.
dotnet00•56m ago
You've done nothing but reuse the Sam Altman/Elon Musk playbook of making wild and extremely vague statements.

Maybe say something concrete? What's a positive real world impact of LLMs where they aren't hideously expensive and error prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying that their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.

ogogmad•52m ago
So you're saying that modern LLMs are just like crypto/Web3, except in all the ways they're not, so they must be useless.

---

Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions to a personal tutor/professor.

zirror•27m ago
But you need to verify everything unless it's self-evident. The number of times Copilot (Sonnet 4) still hallucinates browser APIs is astonishing. Imagine trying to learn something that can't be checked easily, like Egyptian archaeology or something.
bluesnowmonkey•5m ago
You have to verify everything from human developers too. They hallucinate APIs when they try to write code from memory. So we have:

  - documentation
  - design reviews
  - type systems
  - code review
  - unit tests
  - continuous integration
  - integration testing
  - Q&A process
  - etc.
It turns out that when we include all these processes, teams of error-prone human developers can produce complex working software. Mostly -- sometimes there are bugs. Kind of a lot, actually. But we get things done.

Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.
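As a trivial illustration of that point, even a small test in CI catches a hallucinated call before it ships; the function and values below are hypothetical:

  # test_generated.py -- a unit test that fails fast if generated code calls an API
  # that does not exist (imagine an assistant wrote parse_config from memory).
  import json

  def parse_config(text):
      # Had the assistant hallucinated something like json.parse(text), the test
      # below would fail with an AttributeError instead of the bug shipping.
      return json.loads(text)

  def test_parse_config_roundtrip():
      assert parse_config('{"retries": 3}') == {"retries": 3}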

mikert89•51m ago
They speak in generalities because the models are profoundly general, a general learning system. Below, someone asked me to list the capabilities; it's the wrong question to ask. It's like asking what a baby can do.
throwawa14223•13m ago
ChatGPT is just as useless as a shitcoin, and just as with shitcoins, the sooner we stop burning electricity on LLMs the better.
Findecanor•1h ago
“The ability to speak does not make you intelligent.”
mikert89•1h ago
Again, only a few years ago the concept of a real-time voice conversation with a computer was straight out of science fiction.
dcminter•42m ago
This is true. The old original series (and later) Star Trek computers being able to interpret normal idiomatic human speech and act upon it was, to those in the know, hilariously unrealistic until very suddenly just recently it wasn't. Pretty cool.
mikert89•17m ago
Pretty much all of the classical ideas of what an AI could do can be done with our existing capabilities. And yet people continue to live as if the world has not changed.
dcminter•13m ago
"AI" has been doing that since the 1950s though. The problem is that each time we define something and say "only an intelligent machine can X" we find out that X is woefully inadequate as an example of real intelligence. Like hilariously so. e.g. "play chess" - seemed perfectly reasonable at the time, but clearly 1980s budget chess computers are not "intelligent" in any very useful way regardless of how Sci Fi they were in the 40s.

So why's it different this time?

dcminter•44m ago
"Empty vessels make the loudest noise" as my headmaster used to rather pointedly quote to me from time to time.
gosub100•18m ago
I can't believe I still have to do my own laundry and dishes. Like that's somehow a harder problem than the models of a megawatt-powered data center and millions of dollars in 3nm silicon can conquer.
flyinglizard•1h ago
I admit to only being in this industry for three decades now, and to only designing and implementing the thermal/power control algo of an AI chip family for three years in that time, but it's the first time I've heard of chips "wearing out under high-intensity use".
bsaul•1h ago
Thanks for that comment. I know absolutely nothing about chip design, but I too was under the assumption that chips, like anything, wear out, and that the more you use them, the more they do.

Is the wear so small that it's simply negligible?

flyinglizard•1h ago
As long as you keep temperatures and currents in check, there's no reason for a chip under load to fare worse than an idle chip. Eventually, maybe, but not in the 5-10 year lifespan expected of semiconductors.
sam_bristow•57m ago
Wasn't there a phenomenon of GPUs retired from crypto mining operations being basically cooked after a couple of years? Likely because they weren't keeping temperatures in check and were just pushing the cards to their limits.
cyberax•55m ago
Chips absolutely do wear out. Dopants electromigrate and higher temperatures make it faster. Discrete components like capacitors also tend to fail over time.

Is it going to be that significant though? No idea.

ACCount37•4m ago
Depends on the design, and how hard you push it.

Just ask Intel what happened to 14th gen.

It's not normally an issue, but the edge cases can be very sharp. Otherwise, the bigger concern is the hardware becoming obsolete because of new generations being significantly more power efficient. Over a few years, the power+cooling+location bill of a high end CPU running at 90% utilization can cost more than the CPU itself.

archerx•1h ago
I believe the next step will be robotics and getting A.I. to interact with the physical world at human fidelity.

Maybe we can finally have a Rosie from the Jetsons.

blibble•1h ago
> Maybe we can finally have a Rosie from the Jetsons.

just what I want, a mobile Alexa that spews ads and spies on me 24/7

trollbridge•52m ago
Robots still haven’t come close to replicating human and animal touch as a sense, and LLMs don’t do anything to help with that.
scellus•1h ago
He writes as if only datacenters and network equipment remain after the AI bubble bursts. Like there won't be any AI models anymore, nothing left after the big training runs and trillion-dollar R&D, and no inference served.
Juliate•1h ago
The point is: after the bubble bursts, will there be enough funds, cash flow and... a viable market to keep these running?
muldvarp•52m ago
Inference is not that expensive. I'd argue that most models are already useful enough that people will pay to run them.
rootusrootus•32m ago
At $20/month for Claude, I'm satisfied. I'll keep paying that for what I get from it, even if it never improves again.
rjh29•1h ago
Who's going to pay to run those models? They are currently running at a huge loss.
antonvs•1h ago
I run models for coding on my own machines. They’re a trivial expense compared to what I earn from the work I do.

The “at a loss” scenario comes from (1) training costs and (2) companies selling tokens below market to get market share. Neither of those imply that people won’t run models in future. Training new frontier-class models could potentially become an issue, but even that seems unlikely given what these models are capable of.

Juliate•1h ago
Ok, running them locally, that's definitely a thing.

But then, without this huge financial and tech bubble that's driven by these huge companies:

1/ will those models evolve, or new models appear, for a fraction of the cost of building them today?

2/ will GPUs (or their replacements) also cost a fraction of what they cost today, so that they are still integrated into end-user processors and those models can run efficiently?

azeirah•1h ago
Given the popularity, activity, and pace of innovation seen on /r/LocalLLaMa, I do think models will keep improving. Likely not at the same pace as today; it's mostly enthusiasts with a budget for a fancy garage setup, independent researchers, and smaller businesses doing research there, but those people love tinkering.

These people won't sit still and models will keep getting better as well as cheaper to run.

surgical_fire•25m ago
It's unclear if people would pay the price to use them if they were not below market.

I have access to quite a few models, and I use them here and there. They are sort of useful, sometimes. But I don't pay directly for any of them. Honestly, I wouldn't.

qgin•1h ago
The models get more efficient every year and consumer chips get more capable every year. A GPT-5 level model will be on every phone running locally in 5 years.
swarnie•1h ago
Can i sign up for an alterative future please? This one sounds horrendous.
harvey9•1h ago
I can run quite useful models on my PC. It might not change the world, but I got a usable transcript of an old foreign-language TV show and then machine-translated it to English. It is not as good as professional subtitles, but I wasn't willing to pay the cost of that option.
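That workflow fits in a few lines these days; a minimal sketch, assuming the open-source openai-whisper package and a hypothetical local audio file:

  import whisper

  model = whisper.load_model("small")              # small enough for a consumer GPU or CPU
  result = model.transcribe("episode01.mp3",       # hypothetical filename
                            task="translate")      # transcribe and translate to English
  print(result["text"])
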
joshuahedlund•34m ago
Won’t those models gradually become outdated (for anything related to events that happen after the model was trained, new code languages or framework versions, etc) if no one is around to continually re-train them?
surgical_fire•31m ago
"we will be left with local models that can be sort of useful but also sort of sucks" is not really a great proposition for the obscene amount of money being invested in this.
quesera•1h ago
Running the models is cheap. That will be worthwhile even if the bubble pops hard. Not for all of the silly stuff we do today, but for some of it.

Creating new LLMs might be out of reach for all but very well-capitalized organizations with clear intentions, and governments.

There might be a viable market for SLMs though. Why does my model need to know about the Boer wars to generate usable code?

logicchains•1h ago
They're not running at a loss. Training runs at a loss, but the models are profitable to serve if you don't need to continuously train new models.
WalterSear•53m ago
Anthropic said their inference is cash positive. I would be very surprised if this isn't the norm.
surgical_fire•28m ago
I would be surprised if they are being honest.
cactusplant7374•1h ago
> Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

How about chips during the dotcom period? What was their lifespan?

bc569a80a344f9c•1h ago
That’s irrelevant because the primary assets left after the dotcom bubble was fiber in the ground with a lifetime measure in decades.
cactusplant7374•1h ago
The author of the article is comparing past and present. It's not irrelevant to the article.
bc569a80a344f9c•1h ago
The whole point of the article is that the dotcom era produced long term assets that stayed valuable for decades after the bubble burst, and argues that the AI era is producing short term assets that won’t be of much use if the bubble bursts.
stanac•1h ago
More or less all high-end hardware becomes obsolete, or in other words becomes second class. The first difference is that networking hardware could at least be used for years; compute/storage servers became obsolete faster than networking. The second is scale. A Google summary says that current investments are 17x greater than dot-com investments. It may be wrong about the number, but investments are at least an order of magnitude larger.

Maybe in the next decade we will have cheap cloud gaming offerings built on repurposed GPUs.

mrcwinn•1h ago
Without moralizing or assuming the worst intentions of oligarchs, globalists, evil capitalists, and so on, I still don’t understand how a consumption based economy continues to fund the build out (oil->Saudi Arabia->LPs->OpenAI) when the technology likely removes the income of its consumers. Help me understand.
antonvs•1h ago
Just channeling amoral billionaires here, so don’t shoot the messenger, but if everything is automated by machines that you control, you no longer need to farm humans for capital.

Not saying that’s even remotely realistic over the next century, but it does seem to be how some of these people think. Excessive wealth destroys intelligence, it doesn’t enhance it, as countless examples show.

dotnet00•1h ago
It looks like they're planning on funding it through circular purchase agreements and looting the rest of the world.
coderenegade•14m ago
Nothing in capitalism suggests that consumers have to be human. In fact, the whole enshittification trend suggests that traditional consumers are less economically relevant than they've ever been.
dbg31415•1h ago
I hope we'll get back to building things that actually matter -- solutions that help real people; products that are enjoyable to use, and are satisfying to build.

As the noise fades, and with luck, the obsession with slapping "AI" on everything will fade with it. Too many hype-driven CEOs are chasing anything but substance.

Some AI tools may survive because they're genuinely useful, but I worry that most won't be cost-effective without heavy subsidies.

Once the easy money dries up, the real engineers and builders will still be here, quietly making things that work.

Altman's plea -- "Come on guys, we just need a few trillion more!" -- and that error-riddled AI slide deck will be the meme that marks the top of the market.

novaRom•1h ago
Democracy, personal freedoms, and the rule of law are things that matter, but I am afraid we cannot get back to them quickly without significant effort. We first need to get back to sanity. In an authoritarian society AI is a tool of control; do we want it to be like that?
zkmon•1h ago
Also, the ecosystem plays the biggest controlling role in the bubble and its aftermath: the ecosystem of social, political, and business developments. The dotcom aftermath still had the wind from all the ecosystem trends that brought the dotcom back with bigger force. If the post-AI-hype world still gives high priority to these talking bots, then maybe it's comparable to dotcom. If the world has other, bigger basic issues that need attention, then yes, it could become a pile of silent silicon.
maxglute•1h ago
AI chips and bespoke data centers are closer to tulips than to rail or fiber in terms of depreciated assets. They're not fungible stranded assets with a long shelf life or room for improvement. A bubble bursting would also suggest the current approach to AI is inherently not economically viable, i.e. we'll still be doing inference off existing models but won't be pouring hundreds of billions into improving them. TLDR: a much more all-or-nothing gambit than past infra booms.
bitmasher9•1h ago
> GPUs that have a 1-3 year lifespan

In 10 years GPUs will have a lifespan of 5-7 years. The rate of improvement on this front has been slowing down faster than it did for CPUs.

enord•1h ago
Wait… are you betting on exponential or logarithmic returns?
stanac•1h ago
There was a video on YouTube (Gamers Nexus, I think) with an Excel sheet comparing the jumps in performance between each new Nvidia generation. They are becoming smaller and smaller, probably now driven by the AI boom, where most of the silicon is used for data centers. Regardless of that, I have a feeling we are approaching the ceiling of chip performance. Just comparing PS3 to PS4 and then PS4 to PS5, the performance jump is smaller, the hardware has become enormous, and GPUs are more and more power-hungry. If generational jumps were good enough we wouldn't need more power, more cooling, and big desktop PC boxes that can hold long graphics cards.
bobthepanda•1h ago
We also have hit ceilings of performance demand. As an example, 8K TVs never really got off the ground because your average consumer could give a hoot. Vision Pro is a flop because AR/VR are super niche. Crypto is more gambling than asset class. Etc.

What is interesting is that it seems like the ever larger sums of money sloshing around are resulting in bigger, faster hype cycles. We are already seeing some companies face issues after blowback from adopting AI too fast.

nemomarx•1h ago
After heavy use, though? I don't think they mean aging out of being cutting edge but actually starting to fail sooner after being used in DCs.
trenchpilgrim•1h ago
They're mostly solid state parts. The parts that do wear out like fans are easily replaced by hobbyists.
nemomarx•1h ago
I swear during the earlier waves of bitcoin mining (before good ASICs came out) people ran them overclocked and did cause damage. Used GPUs were pretty unreliable for a while there.
trenchpilgrim•51m ago
1. Miners were clocking them beyond the factory approved speeds - something not needed for AI, where the bottleneck is usually VRAM, not clock speed.

2. While comprehensive studies were never done, some tech channels did some testing and found used GPUs to be generally reliable or easily repairable, when scamming was excluded. https://youtu.be/UFytB3bb1P8

pclmulqdq•55m ago
This is not correct. Solid-state parts wear out like mechanical parts, especially when you run them hot. The mechanism of this wear-out comes from things like electromigration of materials and high-energy electrons literally knocking atoms out of place.
trenchpilgrim•55m ago
Those parts take decades to wear out, excepting a manufacturing defect.
bee_rider•1h ago
The full quote is:

> Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

So it isn’t entirely tied to the rate of obsolescence, these things apparently get worn down from the workloads.

In terms of performance improvement, it is slightly complicated, right? It turns out that it was possible to do ML training on existing GPGPUs. Then there was a spurt of improvement as they went after the low-hanging fruit for that application…

If we’re talking about what we might be left with after the bubble pops, the rate of obsolescence doesn’t seem that relevant anyway. The chips as they are after the pop will be usable for the next thing or not, it is hard to guess.

bitmasher9•1h ago
The failure rate of GPUs in professional datacenter environments is overestimated by the general public because of the large number of overclocked and undercooled cards used for GPU mining that hit eBay.
bradleyjg•42m ago
What’s the strategy when one does die? It’s just left in place until it’s worth it to pull the entire rack?
lossolo•54m ago
I'm still waiting to get a used Nvidia A100 80 GB (released in 2020) for well under $10,000.
CuriouslyC•1h ago
This is where we're headed: https://sibylline.dev/articles/2025-10-12-ai-is-too-big-to-f...
jdalgetty•32m ago
Yikes
Havoc•17m ago
Pretty wild read. Thinking similar - for better or worse this is a full send, at least for the US centric part of the world.
bubblelicious•7m ago
Great take! Certainly resonates with me a lot

- this is war path funding

- this is geopolitics; and it’s arguably a rational and responsible play

- we should expect to see more nationalization

- whatever is at the other end of this seems like it will be extreme

And, the only way out is through

alganet•1h ago
One key difference in all of this is that people were not predicting the dotcom bubble, so there was a surplus left after it popped. It was a surprise.

This AI bubble already has lots of people with their forks and knives waiting to capitalize on a myriad of possible surpluses after the burst. There's speculation on top of _the next bubble_ and how it will form, even before this one pops.

That is absolutely disgusting, by the way.

rz2k•1h ago
Local/open-weight models are already incredibly competent. Right now a Mac Studio with 256GB can be found for less than $5000, and an equivalent workstation will likely be 50% cheaper in a year. If anything that price is higher because of the boom, rather than subsidized by a potential bubble. It can run a 8bit quant of GPT-OSS 120B, or 4bit quant of GLM-4.6 using only an extra 100-200W. That energy use comes out to about 100 Joules or 1/4 Wh per query and response, and is already competitive with the power efficiency of even Google's offerings.
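For anyone checking the arithmetic, a quick back-of-the-envelope sketch (the five-second response time is an assumption, not a measurement):

  power_watts = 200            # extra draw while generating, per the figure above
  response_seconds = 5         # assumed time to produce one answer
  joules = power_watts * response_seconds
  watt_hours = joules / 3600
  print(f"{joules} J = {watt_hours:.3f} Wh")   # 1000 J, roughly a quarter of a watt-hour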

I think that people doing work in many professions with these offline tools alone could more than double their productivity compared to their productivity two years ago. Furthermore if the usage was shared in order to lower idle time, such as 20 machines for 100 workers, the initial capital outlay is even lower.

Perhaps investors will not see the returns they expect, but it is difficult to imagine how even the current state of AI doesn't vastly change the economy. There could be significant business failures among cloud providers and attempts to rapidly increase the cost of admission to closed models, but there's essentially no possibility of productivity regressing to pre-AI levels.

tyleo•36m ago
I have an M4 MBP and I also think Apple is set up quite nicely to take real advantage of local models.

They already work on the most expensive Apple hardware. I expect that price to come down in the next few years.

It’s really just the UX that’s bad but that’s solvable.

Apple isn’t having to pay for each users power and use either. They sell hardware once and folks pay with their own electricity to run it.

notepad0x90•51m ago
The AI "bubble" won't burst just like the "internet bubble" didn't burst.

the dotcom bubble was a result of investors jumping on the hype train all at once and then getting off of it all at once.

Yes, investors will eventually find another hype train to jump on, but unlike 2000, we have tons more retail investors, and AI is also not a brand-new tech sector; it's built upon the existing, well-established, "too big to fail" internet/e-commerce infrastructure. Random companies slapping AI on things will fail, but all the real AI use cases will only expand and require more and more resources.

OpenAI alone just hit 800M MAU. That will easily double in a few years. There will be adjustments, corrections, and adaptations, of course, but the value and wealth it generates are very real.

I'm no seer and I can't predict the future, but I don't see a massive popping of some unified AI bubble anytime soon.

ACCount37•8m ago
People don't grasp just how insane that "800M MAU and still growing" figure is. CEOs would kill people with their own bare hands to get this kind of userbase.

OpenAI has ~4B of revenue already, and they aren't even monetizing aggressively. Facebook has an infinite money glitch, and can afford to put billions in the ground in pursuit of moonshots and Zuck's own vanity projects. Google is Google, and xAI is Elon Musk. The most vulnerable frontier lab is probably Anthropic, and Anthropic is still backed by Amazon and, counterintuitively, also Google.

At the same time: there is a glut of questionable AI startups, extreme failure rate is likely - but they aren't the bulk of the market, not by a long shot. The bulk of the "AI money" is concentrated at either the frontier labs themselves, or companies providing equipment and services to them.

The only way I see for the "bubble to pop" is for multiple frontier labs to get fucked at the same time, and I just don't see that happening as it is.

dcminter•6m ago
The dot com crash was a thing though. The bubble burst. It's just that there was real value there so some of the companies survived and prospered.

Figuring out which was which was absolutely not possible at the time. Not many people foresaw Sun Microsystems being a victim, nor was it obvious that Amazon would be a victor.

I wouldn't bet my life savings on OpenAI.

andy99•36m ago
Cell phones have been through how many generations between the 80s and now? All the past generations are obsolete, but the investment in improving the technology (which is really a continuation of WWII era RF engineering) means we have readily available low cost miniature comms equipment. It doesn’t matter that the capex on individual phones was wasted.

Same for GPUs/LLMs? At some point things will mature and we'll be left with plentiful, cheap, high-end LLM access, on the back of the investment that has been made. Whether or not it's running on legacy GPUs, the way some 90s fiber still carries traffic, is beside the point. It's what the investment unlocks.

Havoc•16m ago
The demand for tokens isn't going anywhere, so the hardware will be used.

...whether it is profitable is another matter

deadbabe•9m ago
I think we will enter a neo-Luddite era at some point post-AI boom where it suddenly becomes fashionable to live one’s life with simple retro style technology, and social networks and much of the internet will just become places for bitter old people to complain amongst themselves and share stupid memes. Social media was cool when it was more genuine, but it got increasingly fake, and now with AI it could reach peak-fake. If people want genuine, what is more genuine than the real world?

It will become cool for you to become inaccessible, unreachable, no one knowing your location or what you’re doing. People might carry around little beeper type devices that bounce small pre-defined messages around on encrypted radio mesh networks to say stuff like “I’m okay” or “I love you”, and that’s it. Maybe they are used for contactless payments as well.

People won’t really bother searching the web anymore they’ll just ask AI to pull up whatever information they need.

The question is, with social media on the decline, with the internet no longer used for recreational purposes, what else are people going to do? Feels like the consumer tech sector will shrink dramatically, meaning that most tech written will be made to create “hard value” instead of soft. Think anything having to do with movement of data and matter, or money.

Much of the tech world and government plans are built on the assumption that people will just continue using tech to its maximum utility, even when it is clearly bad for them, but what if that simply weren’t the case? Then a lot of things fall apart.