frontpage.

Wireguard FPGA

https://github.com/chili-chips-ba/wireguard-fpga
267•hasheddan•5h ago•69 comments

Edge AI for Beginners

https://github.com/microsoft/edgeai-for-beginners
51•bakigul•1h ago•14 comments

Free Software Hasn't Won

https://dorotac.eu/posts/fosswon/
19•LorenDB•47m ago•14 comments

Ask HN: What are you working on? (October 2025)

56•david927•2h ago•103 comments

Emacs agent-shell (powered by ACP)

https://xenodium.com/introducing-agent-shell
69•Karrot_Kream•2h ago•5 comments

Completing a BASIC language interpreter in 2025

https://nanochess.org/ecs_basic_2.html
44•nanochess•3h ago•1 comment

MAML – a new configuration language (similar to JSON, YAML, and TOML)

https://maml.dev/
15•birdculture•1h ago•9 comments

Three ways formally verified code can go wrong in practice

https://buttondown.com/hillelwayne/archive/three-ways-formally-verified-code-can-go-wrong-in/
25•todsacerdoti•16h ago•10 comments

Macro Splats 2025

https://danybittel.ch/macro.html
354•danybittel•12h ago•59 comments

A whirlwind introduction to dataflow graphs (2018)

https://fgiesen.wordpress.com/2018/03/05/a-whirlwind-introduction-to-dataflow-graphs/
14•shoo•1d ago•0 comments

Tiny Teams Playbook

https://www.latent.space/p/tiny
42•tilt•4d ago•13 comments

Bird Photographer of the Year Gives a Lesson in Planning and Patience

https://www.thisiscolossal.com/2025/09/2025-bird-photographer-of-the-year-contest/
21•surprisetalk•6d ago•4 comments

Constraint satisfaction to optimize item selection for bundles in Minecraft

https://www.robw.fyi/2025/10/12/using-constraint-satisfaction-to-optimize-item-selection-for-bund...
11•someguy101010•4h ago•4 comments

A years-long Turkish alphabet bug in the Kotlin compiler

https://sam-cooper.medium.com/the-country-that-broke-kotlin-84bdd0afb237
43•Bogdanp•5h ago•44 comments

Rcyl – a recycled plastic urban bike

https://rcyl.bike/en/the-bike/
15•smartmic•3h ago•15 comments

Show HN: I built a simple ambient sound app with no ads or subscriptions

https://ambisounds.app/
65•alpaca121•7h ago•30 comments

AdapTive-LeArning Speculator System (ATLAS): Faster LLM inference

https://www.together.ai/blog/adaptive-learning-speculator-system-atlas
184•alecco•14h ago•43 comments

oavif: Faster target quality image compression

https://giannirosato.com/blog/post/oavif/
12•computerbuster•6h ago•2 comments

Addictive-like behavioural traits in pet dogs with extreme motivation for toys

https://www.nature.com/articles/s41598-025-18636-0
128•wallflower•6h ago•84 comments

3D-Printed Automatic Weather Station

https://3dpaws.comet.ucar.edu
7•hyperbovine•3d ago•0 comments

Schleswig-Holstein completes migration to open source email

https://news.itsfoss.com/schleswig-holstein-email-system-migration/
290•sebastian_z•7h ago•92 comments

How I'm using Helix editor

https://rushter.com/blog/helix-editor/
168•f311a•6h ago•49 comments

Loko Scheme: bare metal optimizing Scheme compiler

https://scheme.fail/
143•dTal•5d ago•14 comments

The neurons that let us see what isn't there

https://arstechnica.com/science/2025/10/the-neurons-that-let-us-see-what-isnt-there/
22•rbanffy•5d ago•1 comment

HP1345A (and wargames) (2017)

https://phk.freebsd.dk/hacks/Wargames/
25•rbanffy•3h ago•0 comments

Nostr and ATProto (2024)

https://shreyanjain.net/2024/07/05/nostr-and-atproto.html
111•sph•13h ago•56 comments

Meta Superintelligence Labs' first paper is about RAG

https://paddedinputs.substack.com/p/meta-superintelligences-surprising
389•skadamat•23h ago•220 comments

After the AI boom: what might we be left with?

https://blog.robbowley.net/2025/10/12/after-the-ai-boom-what-might-we-be-left-with/
78•imasl42•2h ago•217 comments

Ridley Scott's Prometheus and Alien: Covenant – Contemporary Horror of AI (2020)

https://www.ejumpcut.org/archive/jc58.2018/AlpertAlienPrequels/index.html
49•measurablefunc•5h ago•34 comments

The Flummoxagon

https://n-e-r-v-o-u-s.com/blog/?p=9827
106•robinhouston•5d ago•24 comments

After the AI boom: what might we be left with?

https://blog.robbowley.net/2025/10/12/after-the-ai-boom-what-might-we-be-left-with/
78•imasl42•2h ago

Comments

arisAlexis•2h ago
The singularity. I don't think most authors of articles like these understand what the AI build-up is about; they think it's another fad tool.
mjhay•2h ago
You might want to listen to what people besides Sam Altman and Ray Kurzweil say, at least once in a while.
gizmo686•2h ago
We've had AI booms before. In terms of capabilities, this one is following exactly the same trajectory. Human researchers come up with some sort of breakthrough improvement to AI methods, which results in exponential-like growth of capability as we both pick the low-hanging fruit the method offers and scale up the compute and data available to the limits of what is useful for the method. Then capabilities start to plateau, and there is a long tail of the new techniques being applied in specific situations as they get combined with domain-specific tuning and architecture.

We are well into this process already. Core chat capabilities have pretty much stalled out. But most of the attempts at application are still very thin layers over chatbots.

abathologist•1h ago
Indeed, the critics can only be so critical because they are not convinced of the revealed truth that we are materializing machine god. How irrational.
dotnet00•1h ago
I'm hoping that it's sarcasm to be invoking a machine god while calling the non-believers irrational.
dinobones•2h ago
GPUs still won't be cheap
lifestyleguru•1h ago
What will be the next thing eating all GPUs, after crypto and now AI?
tobias3•1h ago
The autonomous drones fighting in the next war (let's hope not...).
WalterSear•1h ago
IMHO, it will be autonomous robotics - one way or another.
willis936•28m ago
I can't tell you the name, but it will be another scam all the same.
mikert89•2h ago
I can't believe people still aren't grasping the profound implications of computers that can talk and make decisions.
Mistletoe•2h ago
How do we make money on it, especially if massive amounts of the population lose their jobs?
mikert89•2h ago
dude again, we have computers that can talk and make decisions. We have birthed something here. We have something, this is big.
mjhay•2h ago
Even by HN standards, this is just an incredible comment. You’d think it’s satire, but I doubt it.
rootusrootus•2h ago
One of the reasons I like to swing by HN on the weekend is that the flavor of the comments is a lot spicier. For better or worse.
cactusplant7374•2h ago
Is that a thing now?
noir_lord•2h ago
Has been for a while, the difference isn’t huge but it does seem to be a difference.

Slightly different cohorts.

mikert89•1h ago
ive been on HN for about 15 years :)
mikert89•2h ago
see i was thinking my comments didnt go far enough in describing what we are witnessing
skywhopper•2h ago
The last thing we need is a bunch of random chatter, but that’s all these things can do. They can’t make decisions because they don’t actually relate to the real world. Humans (like you, apparently) may think a talking computer is a magic oracle but you’re fooling yourself.
dgacmu•2h ago
I had a computer that could talk and make decisions in 1982. It sounded pretty robotic and the decisions were 1982-level AI: Lots of if statements.

I'm not really trying to be snarky; I'm trying to point out to you that you're being really vague. And that when you actually get really, really concrete about what we have, it ... starts to seem a little less magical than saying "computers that talk and think". Computers that are really quite good at sampling from a distribution of high-likelihood next language tokens based upon a complex and long context window are still a pretty incredible thing, but it seems a little less likely to put us all out of a job in the next 10 years.
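
For concreteness, a minimal sketch of what "sampling from a distribution of high-likelihood next language tokens" at a given temperature looks like in code. The vocabulary and logits are hypothetical and numpy is the only dependency; this is not tied to any particular model.

  import numpy as np

  # Hypothetical logits: unnormalized scores a model might assign to its
  # next-token candidates given the context so far (illustrative values only).
  vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
  logits = np.array([2.1, 0.3, 1.7, 0.9, 1.2, -0.5])

  def sample_next_token(logits, temperature=0.8, rng=None):
      """Temperature-scaled softmax sampling over next-token logits."""
      rng = rng or np.random.default_rng(0)
      scaled = logits / temperature          # lower temperature -> sharper distribution
      probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
      probs /= probs.sum()
      return rng.choice(len(logits), p=probs)

  print(vocab[sample_next_token(logits)])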

pixl97•37m ago
>I had a computer that could talk and make decisions in 1982.

And it became an industry that has completely and totally changed the world. The world was just so analog back then.

>starts to seem a little less magical than saying "computers that talk and think"

Computer thinking will never become magical. As soon as we figure something out it becomes "oh that is just X". It is human thinking that will become less magical over time.

dcminter•2h ago
Define "make decisions" such that an 'if' statement does not qualify but an llm does.

LLMs may be a stepping stone to AGI. It's impressive tech. But nobody's proven anything like that yet, and you're running on pure faith not facts here.

mikert89•1h ago
i mean, if given the choice between using a coding agent like the codex ui or a CS intern, I would take the coding agent every time. to me it's self-evident what's going on
dcminter•1h ago
Well frankly your lack of concrete arguments makes it seem a lot like you don't actually understand what's going on here.

I'm enjoying the new LLM based tooling a lot, but nothing about it suggests that we're in any way near to AGI because it's very much a one trick pony so far.

When we see generative AI that updates its weights in real time (currently an intractable problem) as part of the feedback loop, then things might get very interesting. Until then it's just another tool in the box. CS interns learn.

rootusrootus•58m ago
I get a new batch of CS interns for my team every year, and I use Claude Code every day. I think Claude is pretty amazing and it definitely provides value for me, but I would choose the intern every time. I am really skeptical of any claims of getting the kind of productivity and capability growth out of an LLM that would make it an adequate replacement for a human developer.
cactusplant7374•2h ago
You should post something more substantial than this.
mikert89•1h ago
it's a problem of imagination, "situational awareness". People are simply not aware of what we have discovered; their minds cannot see beyond a chatbox. that's not even to mention the smoothness of the loss functions the big AI labs are seeing, the smooth progression of the scaling laws. it's all there, it progresses daily
falcor84•2h ago
You know the quote "it is easier to imagine an end to the world than an end to capitalism"? Well, AI has allowed me to start imagining the end of capitalism. I'm not claiming we're necessarily very close, but I can clearly see the way from here to a post-scarcity society.
Peritract•2h ago
How do you square that with all current AI development being intensely capitalistic?
pixl97•40m ago
All kinds of things commit suicide, intentional and unintentional.
rootusrootus•1h ago
> I can clearly see the way from here to a post-scarcity society.

I would be interested to hear the way that you see. I don't have any problem seeing a huge number of roadblocks to post-scarcity that AI won't solve, but I am open to a different perspective.

gosub100•58m ago
That's hypothetically possible if a government somehow forced corporations to redistribute their wealth. But a civil war is equally likely, where society is destroyed after poor people with guns have their say.
b_e_n_t_o_n•2h ago
What are they?
mikert89•2h ago
how could i even begin to list them? that is the point of my original comment
dcminter•2h ago
If they're that profound you should be able to come up with one example though, right?

Not that I think you're wrong, but come on - make the case!

I have the very unoriginal view that - yes, it's a (huge) bubble but also, just like the dot com bubble, the technology is a big deal - but it's not obvious what will stand and what will fall in the aftermath.

Remember that Sun Microsystems, a very established pre-dot com business, rose to huge heights on the bubble and was then smashed by the fall when it popped. Who's the AI bubble's Sun and who's its Amazon? Place your bets...

b_e_n_t_o_n•2h ago
Hahahaha right
lgas•2h ago
Just pick one then, since so far you've conveyed nothing at all about them, so we're all left to wonder what you might be thinking of.
softwaredoug•2h ago
You can have a bubble, and still have profound impact from AI. See also the dotcom boom.
mikert89•2h ago
who cares about a bubble? we are on the cusp of intelligent machines. The implications will last for hundreds of years, maybe impact the trajectory of humanity
mjr00•2h ago
> we are on the cusp of intelligent machines.

Nah, we aren't. There's a reason the output of generative AI is called slop.

skywhopper•2h ago
Except we aren’t. They aren’t thinking, and they can’t actually make decisions. They generate language. That feels magical at first but in the end it’s not going to solve human problems.
bigyabai•2h ago
> we are on the cusp of intelligent machines.

Extraordinary claims demand extraordinary evidence. We have machines that talk, which is corollary to nothing.

fruitworks•2h ago
Sometimes I wonder if reason is partially a product of manipulating language.
card_zero•51m ago
It's nice that you say partially, that's a bit different from every other HN comment that wondered this. Yeah, probably partially, as in, you have reason, you add in language, you get more reason.
dcminter•2h ago
> we are on the cusp of intelligent machines

That's an extremely speculative view that has been fashionable at several points in the last 50 years.

squidbeak•22m ago
How often in the last 50 years have those machines done what these machines do?
bigyabai•7m ago
Define "do" in this context. If you mean hardware-accelerated matmul, then machines have been doing that for half a century.
dcminter•5m ago
On every occasion you could have made exactly the same point.
tim333•4m ago
Things like getting gold in the Math Olympiad and 120 on IQ tests are kinda cuspy and haven't been there in 49 of the last 50 years.
Gattopardo•2h ago
My dude, it's literally just fancy autocomplete and isn't intelligent at all.
rusk•2h ago
Clippy with a Bachelors in web search
forgotusername6•2h ago
What makes you so sure that you aren't just fancy autocomplete?
alganet•2h ago
I am sure that if I am a fancy auto-complete, I'm way fancier than LLMs. A whole different category of fancy way above their league. Not just me, but any living human is.
abathologist•2h ago
I am so sure because of the self-evidence of my experience, the results of 2 millennia of analysis into the nature of cognition and experience, and consideration of the material composition of our organism (we obviously have lots of critical analog components, which are not selecting tokens, but instead connecting with flows from other continua).

Prediction is obviously involved in certain forms of cognition, but it obviously isn't all there is to the kinds of beings we are.

daytonix•2h ago
brother please
lifestyleguru•2h ago
Matrix is calling you on the rotary dial phone.
stanac•2h ago
Is that why we no longer have telephone boots? They want to prevent Neo from jumping in and out of Matrix?
lifestyleguru•1h ago
Oh god, you know it.
card_zero•55m ago
Telephone ... boots?

Napkin scribbles

maxglute•2h ago
A bubble burst means the current technical approach to AI is an economic dead end. If the most-resourced tech companies in the world can't afford to keep improving AI, then it's probably not going to happen, because the public likely isn't going to let the state spend $$$$ in lieu of services on sovereign AI projects that will make them unemployed.
parineum•2h ago
I don't think anyone underestimates that and a lot of people can't wait to see it.
mikert89•2h ago
anyone mentioning a bubble is underestimating the gravity of what's going on
quesera•2h ago
I'm old enough to have heard this before, once or thrice.

It's always different this time.

More seriously: there are decent arguments that say that LLMs have an upper bound of usefulness and that we're not necessarily closer to transcending that with a different AI technology than we were 10 or 30 years ago.

The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. These applications might be net-negative or net-positive, it will probably vary by circumstance. But they might not become what you're extrapolating them into.

ff2400t•2h ago
I think you aren't understanding the meaning of the word bubble here. No one can deny the impact LLMs can have, but they still have limits. And the term bubble is used here as an economic phenomenon. This is about the money that OpenAI is planning on spending, which it doesn't have. So much money is being poured in here, but most users won't pay the outrageous sums of money that will actually be needed for these LLMs to run; the break-even point looks so far off that you can't even think about actual profitability. After the bubble bursts we will still have all the research done, the hardware left, and smaller LLMs for people to use with on-device stuff.
mikert89•2h ago
the real innovation is that neural networks are generalized learning machines. LLMs are neural networks on human language. The implications of world models + LLMs will take them farther
KPGv2•1h ago
The neural net was invented in the 1940s, and language models date back to the 1960s. It's 2025 and we're still using 80-year-old architecture. Call me cynical, but I don't understand how we're going to avoid the physical limitations of GPUs and of the data we train AIs on. We've pretty much exhausted the latter, and the former is going to hit sooner rather than later. We'll be left at that point with an approach that hasn't changed much since WW2, and our only way forward will run into hard physical limits.

Even in 2002, my CS profs were talking about how GAI was a long time off because we had been trying for decades to innovate on neural nets and language models, and nothing better had been created despite some of the smartest people on the planet trying.

mikert89•1h ago
they didn't have the compute or the data to make use of NNs. but theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. they were probably right, and it's a shame they didn't live to see what's happening right now
KPGv2•1h ago
> they didnt have the compute or the data to make use of NNs

The compute and data are both limitations of NNs.

We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).

Standard laws of physics restrict the compute side, just like how we know we will hit them with CPUs. Eventually you just cannot put heat-generating components closer together, because they interfere with each other; we hit the physical limits of miniaturization.

No, GAI will require new architectures no one has thought of in nearly a century.

mikert89•1h ago
dude who cares about data and compute limits. those can be solved with human ingenuity. the ambiguity of creating a generalized learning algorithm has been solved. a digital god has been summoned
muldvarp•1h ago
The internet was world changing and the dotcom bubble was still a bubble.
Starlevel004•2h ago
The self checkout machines at the supermarket can talk and make decisions. I don't see them revolutionising the world.
mikert89•2h ago
think bigger, because this certainly is. change on the order of years means nothing
Starlevel004•2h ago
Sorry, I don't believe in Calvinism.
Legend2440•2h ago
1. That’s not remotely the same, and you know it.

2. The category of computerized machines (of which self checkouts are one example) has absolutely revolutionized the world. Computerization is the defining technology of the last twenty years.

alganet•2h ago
What is that category and what other machines are in it?
givemeethekeys•2h ago
> I don't see them revolutionising the world.

They revolutionized supermarkets.

KPGv2•2h ago
In what way?

I would really like to hear you explain how they revolutionized supermarkets.

I use them every day, and my shopping experience is served far better by going to a place that is smaller than one that has automated checkout machines. (Smaller means so much faster.)

Hell, if you go to Costco, the automated checkout line moves slower than the ones manned by experienced workers.

Starlevel004•1h ago
Unless you happen to be some sort of rodent that feeds off of discarded grains, the supermarket is not the world.

And for small baskets, sure, but it was scan as you shop that really changed supermarkets and those things thankfully do not talk.

ogogmad•1h ago
This is the most perfect troll comment I've ever seen. Bravo.
dcminter•1h ago
I think it's worth engaging with even if this guy's a troll (not saying he is) because it's not that freakish a view in the real world. What are the arguments to counter this kind of blind enthusiasm?
flyinglizard•2h ago
From a software development perspective, the more I think of it, the more I understand it's just another abstraction layer. Before that came high level languages and JVM of sorts, before that came the compiler, before that came the assembler.

Outside of the software world it's mostly a (much!) better Google.

Between now and a Star Trek world, there's so much to build that we can use any help we can get.

mikert89•2h ago
yeah we have fuzzy computer interfaces now, instead of "hard coded" apis.
elorant•2h ago
Speech to text and vice versa have existed for over a decade. Where's the life-altering application from that?
KPGv2•1h ago
> Speech to text and vice versa exists for over a decade.

Indeed. I was using speech to text three decades ago. Dragon NaturallySpeaking was released in the 90s.

lxgr•6m ago
Then you hopefully remember how “natural” it actually was.
AstroBen•2h ago
..except they can't

It's blatantly obvious to see if you work with something you personally have a lot of expertise in. They're effectively advanced search engines. Useful sure.. but they're not anywhere close to "making decisions"

gosub100•55m ago
An RNG can "make a decision" lol.
zkmon•2h ago
Maybe you are not grasping the convergence effect of the overall socio-political-economic trends that could actually label AI output as abhorrent plastic pollution, or at least not a high priority for public good.
paufernandez•1h ago
In my case I fully grasp what such a future could be, but I don't think we are on the path to that, I believe people are too optimistic, i.e. they just believe instead of being truly skeptical.

From where I look at it, LLMs are flawed in many ways, and people who see progress as inevitable do not have a mental model of the foundation of those systems to be able to extrapolate. Also, people do not know any other forms of AI or have not thought hard about this stuff on their own.

The most problematic things are:

1) LLMs are probabilistic and a continuous function, forced by gradient descent. (Just having a "temperature" seems so crazy to me.) We need to merge symbolic and discrete forms of AI. Hallucinations are the elephant in the room. They should not be put under the rug. They should just not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion)

2) Even if generalization seems ok, I think it is still really far from where it should be, since humans need exponentially less data and generalize to concepts way more abstract than AI systems. This is related to HASA and ISA relations. Current AI systems do not have any of that. Hierarchy is supposed to be the depth of the network, but it is a guess at best.

3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers and it is motivated by the rush to win the race. However, I am not so sure that, even if the goal seems so close now, we are going to reach it. What are we gonna do? Keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I think that that is not solving AI at all. And I'm almost sure that a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.

We need new models, way simpler, symbolic and continuous at the same time (i.e. symbolic that simulate continuous), non-gradient descent learning (just store stuff like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc, etc, etc.

Even if we make progress by brute forcing it with resources, there is so much work to simplify and find new ideas that I still don't understand why people are so optimistic.

mikert89•1h ago
symbols and concepts are just collections of neurons that fire with the correct activation. it's all about the bitter lesson: human beings cannot design AI, they can only find the most general equations, the most general loss function, and push data in. and that's what we have, and that's why it's a big deal. The LLM is just a manifestation of a much broader discovery, a generalized learning algorithm. it worked on language because of the information density, but with more compute, we may be able to push in more general sensory data...
pixl97•1h ago
Symbolic AI is mostly dead; we spent a lot of time and money on it and got complex and fragile systems that are far worse than LLMs.
ogogmad•1h ago
Not sure this is a good counterpoint in defence of LLMs, but I'm reminded of how Unix people explain why (in their experience) data should be encoded, stored and transmitted as text instead of something more seemingly natural like binary. It's because text provides more ways to read and transform it, IN SPITE of its obvious inefficiency. LLMs are the ultimate Unix text transformation filter. They are extremely flexible out-of-the-box, and friendly towards experimentation.
ACCount37•1h ago
Symbolic AI is dead. Either stop trying to dig out and reanimate its corpse, or move the goalposts like Gary Marcus did - and start saying "LLMs with a Python interpreter beat LLMs without, and Python is symbolic, so symbolic AI won, GG".

Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.

For how "naive" transformer LLMs seem, they sure set a high bar.

Saying "I know better" is quite easy. Backing that up is really hard.

rafavento•1h ago
Computers have been able to talk and make decisions from the beginning. Maybe you meant mimicking humans?
mikert89•1h ago
mimic is quite a loaded word
dotnet00•1h ago
Reminds me of crypto/Web-3.0 hype. Lots of bluster about changing economic systems, offering people freedom and wealth, only to mostly be scams, and coming with too serious inherent drawbacks/costs to solve many of the big problems it promises to solve.

In the end leaving the world changed, but not as meaningfully or positively as promised.

mikert89•1h ago
the difference is that the impact of crypto was always hypothetical. ChatGPT can be used, explored, and, if you are creative enough, leveraged as the ultimate tool
dotnet00•1h ago
You've done nothing but reuse the Sam Altman/Elon Musk playbook of making wild and extremely vague statements.

Maybe say something concrete? What's a positive real world impact of LLMs where they aren't hideously expensive and error prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying that their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.

ogogmad•1h ago
So you're saying that modern LLMs are a just like crypto/Web3, except in all the ways they're not, so they must be useless.

---

Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions to a personal tutor/professor.

zirror•1h ago
But you need to verify everything unless it's self-evident. The number of times Copilot (Sonnet 4) still hallucinates browser APIs is astonishing. Imagine trying to learn something that can't be checked easily, like Egyptian archeology or something.
bluesnowmonkey•49m ago
You have to verify everything from human developers too. They hallucinate APIs when they try to write code from memory. So we have:

  - documentation
  - design reviews
  - type systems
  - code review
  - unit tests
  - continuous integration
  - integration testing
  - QA process
  - etc.
It turns out that when we include all these processes, teams of error-prone human developers can produce complex working software. Mostly -- sometimes there are bugs. Kind of a lot, actually. But we get things done.

Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.

dotnet00•25m ago
Taking the example of egyptian archeology, if you're reading the work of someone who is well regarded as an expert in the field, you can trust their word a lot more than you can trust the word of an AI, even if the AI is provided the text you're reading.

This is a pretty massive difference between the two, and your narrative is part of why AI is proving to be so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD level work" have potentially ruined a significant chunk of the next generation by convincing them that they are genuinely learning from asking AI "a few questions" and taking the answers at face value, instead of struggling through the material to build true understanding.

lxgr•9m ago
The vast majority of people trying to do any given thing simply don’t have access to experts in the field, though.

I’ll take a potential solution I can validate over no idea whatsoever of my own any day.

lxgr•11m ago
The trick is to put them in contexts where they can validate their purported solutions and then iterate on them.
mikert89•1h ago
they speak in generalities because the models are profoundly general, a general learning system. below someone asked me to list the capabilities; it's the wrong question to ask. it's like asking what a baby can do
throwawa14223•57m ago
ChatGPT is just as useless as a shitcoin and just like a shitcoin the sooner we stop burning electricity on LLMs the better.
Findecanor•1h ago
“The ability to speak does not make you intelligent.”
mikert89•1h ago
again, only a few years ago, the concept of a real time voice conversation with a computer was straight out of science fiction
dcminter•1h ago
This is true. The old original series (and later) Star Trek computers being able to interpret normal idiomatic human speech and act upon it was, to those in the know, hilariously unrealistic until very suddenly just recently it wasn't. Pretty cool.
mikert89•1h ago
pretty much all of the classical ideas of what an AI could do can be done with our existing capabilities. and yet, people continue to live as if the world has not changed
dcminter•57m ago
"AI" has been doing that since the 1950s though. The problem is that each time we define something and say "only an intelligent machine can X" we find out that X is woefully inadequate as an example of real intelligence. Like hilariously so. e.g. "play chess" - seemed perfectly reasonable at the time, but clearly 1980s budget chess computers are not "intelligent" in any very useful way regardless of how Sci Fi they were in the 40s.

So why's it different this time?

mikert89•22m ago
yeah im in agreement, ai will eventually do everything a human being can
dcminter•7m ago
Perhaps, but why are you so convinced we're so close when we weren't all the other times?
dcminter•1h ago
"Empty vessels make the loudest noise" as my headmaster used to rather pointedly quote to me from time to time.
gosub100•1h ago
I can't believe I still have to do my own laundry and dishes. Like that's somehow way more powerful than what the models of a megawatt-powered data center and millions of dollars in 3nm silicon can conquer.
jiggawatts•11m ago
… by hand? With water you heated on a wood fire, the ashes of which you turned into potash so you can make your own soap?

Or did you pop your laundry into a machine and your dishes into another one and press a button?

flyinglizard•2h ago
I admit to only being in this industry for three decades now, and to only designing and implementing the thermal/power control algo of an AI chip family for three years in that time, but this is the first time I've heard of chips "wearing out under high-intensity use".
bsaul•2h ago
thanks for that comment. I know absolutely nothing about chip designs, but i too was under the assumption that chips, like anything, wear out. And the more you use them, the more they do.

Is the wear so small that it's simply negligible?

flyinglizard•2h ago
As long as you keep temperatures and currents in check, there's no reason for a chip under load to fare worse than an idle chip. Eventually, maybe, but not in the 5-10 year lifespan expected of semiconductors.
sam_bristow•1h ago
Wasn't there a phenomenon with GPUs retired from crypto mining operations being basically cooked after a couple of years? Likely because they weren't keeping temperatures in check and just pushing the cards to their limits.
cyberax•1h ago
Chips absolutely do wear out. Dopants electromigrate and higher temperatures make it faster. Discrete components like capacitors also tend to fail over time.

Is it going to be that significant though? No idea.

ACCount37•48m ago
Depends on the design, and how hard you push it.

Just ask Intel what happened to 14th gen.

It's not normally an issue, but the edge cases can be very sharp. Otherwise, the bigger concern is the hardware becoming obsolete because of new generations being significantly more power efficient. Over a few years, the power+cooling+location bill of a high end CPU running at 90% utilization can cost more than the CPU itself.

pixl97•23m ago
Honestly it depends on a whole lot. If they are running 'very' hot, yea they burn out faster. If they have lots of cooling and heating cycles, yea, they wear out faster.

But with that said machines that run at a pretty constant thermal load within range of their capacitors can run a very long time.

archerx•2h ago
I believe the next step will be robotics and getting A.I. to interact with the physical world at human fidelity.

Maybe we can finally have a Rosie from the Jetsons.

blibble•1h ago
> Maybe we can finally have a Rosie from the Jetsons.

just what I want, a mobile Alexa that spews ads and spies on me 24/7

trollbridge•1h ago
Robots still haven’t come close to replicating human and animal touch as a sense, and LLMs don’t do anything to help with that.
pixl97•24m ago
I mean, even the human brain is broken up into different parts, and the parts that do touch are insanely old compared to higher thinking. The LLM parts tell the robot parts the plate should go to the kitchen.
scellus•2h ago
He writes as if only datacenters and network equipment remain after the AI bubble bursts. Like there won't be any AI models anymore, nothing left after the big training runs and trillion-dollar R&D, and no inference served.
Juliate•2h ago
The point is, after the bubble bursts, will there be enough funds, cash flow and... a viable market, to make these still run?
muldvarp•1h ago
Inference is not that expensive. I'd argue that most models are already useful enough that people will pay to run them.
rootusrootus•1h ago
At $20/month for Claude, I'm satisfied. I'll keep paying that for what I get from it, even if it never improves again.
rjh29•2h ago
Who's going to pay to run those models? They are currently running at a huge loss.
antonvs•2h ago
I run models for coding on my own machines. They’re a trivial expense compared to what I earn from the work I do.

The “at a loss” scenario comes from (1) training costs and (2) companies selling tokens below market to get market share. Neither of those imply that people won’t run models in future. Training new frontier-class models could potentially become an issue, but even that seems unlikely given what these models are capable of.

Juliate•2h ago
Ok, running them locally, that's definitely a thing.

But then, without this huge financial and tech bubble that's driven by these huge companies:

1/ will those models evolve, or new models appear, for a fraction of the cost of building them today?

2/ will GPU (or their replacement) also cost a fraction of what they cost today, so that they are still integrated in end-user processors, so that those model can run efficiently?

azeirah•2h ago
Given the popularity, activity, and pace of innovation seen on /r/LocalLLaMa, I do think models will keep improving. Likely not at the same pace as today; those people love tinkering, but it's mostly enthusiasts with a budget for a fancy setup in a garage, independent researchers, and smaller businesses doing research there.

These people won't sit still and models will keep getting better as well as cheaper to run.

surgical_fire•1h ago
It's unclear if people would pay the price to use them if they were not below market.

I have access to quite a few models, and I use them here and there. They are sort of useful, sometimes. But I don't pay directly for any of them. Honestly, I wouldn't.

qgin•2h ago
The models get more efficient every year and consumer chips get more capable every year. A GPT-5 level model will be on every phone running locally in 5 years.
swarnie•2h ago
Can I sign up for an alternative future please? This one sounds horrendous.
harvey9•2h ago
I can run quite useful models on my PC. Might not change the world, but I got a usable transcript of an old foreign-language TV show and then machine-translated it to English. It is not as good as professional subtitles but I wasn't willing to pay the cost of that option.
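
A minimal sketch of that kind of local transcribe-and-translate workflow, assuming the open-source Whisper package; the audio file name is hypothetical, and Whisper's translate task outputs English directly:

  # pip install openai-whisper   (ffmpeg must also be available on the system)
  import whisper

  # "small" trades accuracy for speed; larger checkpoints want a decent GPU.
  model = whisper.load_model("small")

  # task="translate" transcribes the foreign-language audio and renders it in English.
  result = model.transcribe("old_tv_episode.mp3", task="translate")

  print(result["text"])            # the full English transcript
  for seg in result["segments"]:   # per-segment timings, usable as rough subtitles
      print(f"{seg['start']:7.1f}s  {seg['text']}")
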
joshuahedlund•1h ago
Won’t those models gradually become outdated (for anything related to events that happen after the model was trained, new code languages or framework versions, etc) if no one is around to continually re-train them?
surgical_fire•1h ago
"we will be left with local models that can be sort of useful but also sort of sucks" is not really a great proposition for the obscene amount of money being invested in this.
quesera•2h ago
Running the models is cheap. That will be worthwhile even if the bubble pops hard. Not for all of the silly stuff we do today, but for some of it.

Creating new LLMs might be out of reach for all but very well-capitalized organizations with clear intentions, and governments.

There might be a viable market for SLMs though. Why does my model need to know about the Boer wars to generate usable code?
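
As a rough illustration of how cheap local inference can be, a minimal sketch using the llama-cpp-python bindings; the GGUF model path is hypothetical, and any small instruction-tuned model would do:

  # pip install llama-cpp-python
  from llama_cpp import Llama

  # Load a small quantized model from disk (the path is hypothetical).
  llm = Llama(model_path="./models/small-coder-q4.gguf", n_ctx=4096, verbose=False)

  out = llm(
      "Write a Python function that reverses a string.",
      max_tokens=128,
      temperature=0.2,  # low temperature for more deterministic code output
  )
  print(out["choices"][0]["text"])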

logicchains•2h ago
They're not running at a loss. Training runs at a loss, but the models are profitable to serve if you don't need to continuously train new models.
jayd16•18m ago
But you do or you're missing current events, right?
WalterSear•1h ago
Anthropic said their inference is cash positive. I would be very surprised if this isn't the norm.
surgical_fire•1h ago
I would be surprised if they are being honest.
squidbeak•28m ago
I'd be more surprised if they didn't know their own business costs.
mike_hearn•22m ago
There's a gazillion use cases for these things in business that aren't even beginning to be tapped yet. Demand for tokens should be practically unlimited for many years to come. Some of those ideas won't be financially viable but a lot will.

Consider how much software is out there that can now be translated into every (human) language continuously, opening up new customers and markets that were previously being ignored due to the logistical complexity and cost of hiring human translation teams. Inferencing that stuff is a no brainer but there's a lot of workflow and integration needed first which takes time.

cactusplant7374•2h ago
> Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

How about chips during the dotcom period? What was their lifespan?

bc569a80a344f9c•2h ago
That’s irrelevant because the primary asset left after the dotcom bubble was fiber in the ground, with a lifetime measured in decades.
cactusplant7374•2h ago
The author of the article is comparing past and present. It's not irrelevant to the article.
bc569a80a344f9c•2h ago
The whole point of the article is that the dotcom era produced long term assets that stayed valuable for decades after the bubble burst, and argues that the AI era is producing short term assets that won’t be of much use if the bubble bursts.
cactusplant7374•30m ago
But the chips of the dotcom era were not long-term assets. The author appears to claim that.
bc569a80a344f9c•9m ago
Where? I scanned the article again. I can’t seem to find that.
stanac•2h ago
More or less all high-end hardware becomes obsolete, or in other words becomes second-class. The first difference is that at least networking hardware could be used for years; compute/storage servers became obsolete faster than networking. The second is scale. A Google summary says that current investments are 17x greater than dotcom investments. It may be wrong about the number, but investments are at least an order of magnitude larger.

Maybe in the next decade we will have cheap cloud gaming offerings built on repurposed GPUs.

pixl97•28m ago
> It may be wrong about the number but investments are at least on an order of magnitude larger.

Which is exactly what we expect if technological efficiency is increasing over time. Saying we've invested 1000x in aluminum plants and research over the first steel plants means we've had massive technological growth since then. It's probably better that it's actually moving around in an economy than just being used to consolidate more industries.

>compute/storage servers became obsolete faster than networking

In the 90s, extremely rapidly. In the 00s, much less rapidly. And by the 10s, servers and storage, especially solid components like boards, lasted a decade or more. The reason the servers became obsolete in the 90s is that much faster units came out quickly, not that the hardware died. In the 2010-2020 era I repurposed tons of data center hardware into onsite computers for small businesses. I'm guessing a whole lot less of that hardware 'went away' than you'd expect.

mrcwinn•2h ago
Without moralizing or assuming the worst intentions of oligarchs, globalists, evil capitalists, and so on, I still don’t understand how a consumption based economy continues to fund the build out (oil->Saudi Arabia->LPs->OpenAI) when the technology likely removes the income of its consumers. Help me understand.
antonvs•2h ago
Just channeling amoral billionaires here, so don’t shoot the messenger, but if everything is automated by machines that you control, you no longer need to farm humans for capital.

Not saying that’s even remotely realistic over the next century, but it does seem to be how some of these people think. Excessive wealth destroys intelligence, it doesn’t enhance it, as countless examples show.

dotnet00•1h ago
It looks like they're planning on funding it through circular purchase agreements and looting the rest of the world.
coderenegade•58m ago
Nothing in capitalism suggests that consumers have to be human. In fact, the whole enshittification trend suggests that traditional consumers are less economically relevant than they've ever been.
dbg31415•2h ago
I hope we'll get back to building things that actually matter -- solutions that help real people; products that are enjoyable to use, and are satisfying to build.

As the noise fades, and with luck, the obsession with slapping "AI" on everything will fade with it. Too many hype-driven CEOs are chasing anything but substance.

Some AI tools may survive because they're genuinely useful, but I worry that most won't be cost-effective without heavy subsidies.

Once the easy money dries up, the real engineers and builders will still be here, quietly making things that work.

Altman's plea -- "Come on guys, we just need a few trillion more!" -- and that error-riddled AI slide deck will be the meme that marks the top of the market.

novaRom•2h ago
Democracy, personal freedoms, and rule of law are things that matter, but I am afraid we cannot get back to them quickly without significant efforts. We need first to get back to sanity. In authoritarian society AI is a tool of control, do we want it to be like that?
zkmon•2h ago
Also, the ecosystem plays the biggest controlling role in the bubble and its aftermath: the ecosystem of social, political and business developments. The dotcom aftermath still had the wind from all the ecosystem trends that brought dotcom back with bigger force. If the post-AI-hype world still has high priority for these talking bots, then maybe it's comparable to dotcom. If the world has other bigger basic issues that need attention, then yes, it could become a pile of silent silicon.
maxglute•2h ago
AI chips and bespoke data centers are closer to tulips than to rail or fiber in terms of depreciated assets. They're not fungible stranded assets with a long shelf life or room for improvement. A bubble bursting also signals that the current approach to AI is inherently not economically viable, i.e. we'll still be inferencing off existing models but won't be pouring hundreds of billions into improving them. TLDR: a much more all-or-nothing gambit than past infra booms.
bitmasher9•2h ago
> GPUs that have a 1-3 year lifespan

In 10 years GPUs will have a lifespan of 5-7 years. The rate of improvement on this front has been slowing down faster than for CPUs.

enord•2h ago
Wait… are you betting on exponential or logarithmic returns?
stanac•2h ago
There was a video on YT (Gamers Nexus, I think) and an Excel sheet comparing the jumps in performance gains between each new Nvidia generation. They are becoming smaller and smaller, probably now driven by the AI boom, where most of the silicon goes to data centers. Regardless of that, I have a feeling we are approaching the ceiling of chip performance. Just comparing PS3 with PS4 and then PS4 with PS5, the performance jump is smaller, the hardware has become enormous, and GPUs are more and more power-hungry. If generational jumps were good enough we wouldn't need more power and cooling and big desktop PC boxes that can hold long graphics cards.
bobthepanda•1h ago
We also have hit ceilings of performance demand. As an example, 8K TVs never really got off the ground because your average consumer couldn't give a hoot. Vision Pro is a flop because AR/VR is super niche. Crypto is more gambling than asset class. Etc.

What is interesting is that it seems like the ever larger sums of money sloshing around are resulting in bigger, faster hype cycles. We are already seeing some companies face issues after blowback from adopting AI too fast.

nemomarx•2h ago
After heavy use, though? I don't think they mean aging out of being cutting edge but actually starting to fail sooner after being used in DCs.
trenchpilgrim•2h ago
They're mostly solid state parts. The parts that do wear out like fans are easily replaced by hobbyists.
nemomarx•1h ago
I swear during the earlier waves of bitcoin mining (before good ASICs came out) people ran them overclocked and did cause damage. Used GPUs were pretty unreliable for a while there.
trenchpilgrim•1h ago
1. Miners were clocking them beyond the factory approved speeds - something not needed for AI, where the bottleneck is usually VRAM, not clock speed.

2. While comprehensive studies were never done, some tech channels did some testing and found used GPUs to be generally reliable or easily repairable, when scamming was excluded. https://youtu.be/UFytB3bb1P8

pclmulqdq•1h ago
This is not correct. Solid-state parts wear out like mechanical parts, especially when you run them hot. The mechanism of this wear-out comes from things like electromigration of materials and high-energy electrons literally knocking atoms out of place.
trenchpilgrim•1h ago
Those parts take decades to wear out, excepting a manufacturing defect.
pclmulqdq•33m ago
That is not correct when wires are small and you run things hot. The physics of <10 nm transistor channels is very different than it is for 100 nm+ transistors.
trenchpilgrim•25m ago
Your assertions are at odds with modern computers - including Nvidia datacenter GPUs - still working fine after many, many years. If not for 1. improved power efficiency on new models and 2. Nvidia's warranty coverage expiring, datacenters could continue running those GPUs for a long time.
pclmulqdq•22m ago
Which GPUs have been running for decades that you're referring to? The A100s that are 4 years old? MTBFs for GPUs are about 5-10 years, and that's not about fans. AWS and the other clouds have a 5-8 year depreciation calendar for computers. That is not "decades."

You can keep a server running for 10-15 years, but usually you do that only when the server is in a good environment and has had a light load.

trenchpilgrim•9m ago
> Which GPUs have been running for decades that you're referring to? The A100s that are 4 years old? MTBFs for GPUs are about 5-10 years, and that's not about fans.

I said solid-state components last decades. 10nm transistors have been a thing for over 10 years now and, other than manufacturing defects, don't show any signs of wearing out from age.

> MTBFs for GPUs are about 5-10 years, and that's not about fans.

That sounds about the right time for a repaste.

> AWS and the other clouds have a 5-8 year depreciation calendar for computers.

Because the manufacturer warranties run out after that + it becomes cost efficient to upgrade to lower power technology. Not because the chips are physically broken.

bee_rider•1h ago
The full quote is:

> Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

So it isn’t entirely tied to the rate of obsolescence, these things apparently get worn down from the workloads.

In terms of performance improvement, it is slightly complicated, right? It turns out that it was possible to do ML training on existing GPGPU hardware. Then there was a spurt of improvement as they went after the low-hanging fruit for that application…

If we’re talking about what we might be left with after the bubble pops, the rate of obsolescence doesn’t seem that relevant anyway. The chips as they are after the pop will be usable for the next thing or not, it is hard to guess.

bitmasher9•1h ago
The failure rate of GPUs in professional datacenter environments is overestimated by the general public because of the large number of overclocked and undercooled cards used for GPU mining that hit eBay.
bradleyjg•1h ago
What’s the strategy when one does die? It’s just left in place until it’s worth it to pull the entire rack?
lossolo•1h ago
I'm still waiting to get a used Nvidia A100 80 GB (released in 2020) for well under $10,000.
laluser•44m ago
The reason for not keeping them too much longer than a few years is that at the end of that timespan you can purchase GPUs with > 2x performance, but for the same amount of power. At some point, even though the fleet has been depreciated, they become too expensive to operate vs. what is on the market.
mike_hearn•15m ago
There's some telephone game being played here.

The three year number was a surprisingly low figure sourced to some anonymous Google engineer. Most people were assuming at least 5 years and maybe more. BUT, Google then went on record to deny that the three year figure was accurate. They could have just ignored it, so it seems likely that three years is too low.

Now I read 1-3 years? Where did one year come from?

GPU lifespan is I suspect also affected by whether it's used for training or inference. Inference loads can be made very smooth and don't experience the kind of massive power drops and spikes that training can generate.

trenchpilgrim•7m ago
> Where did one year come from?

Perhaps the author confused "new GPU comes out" with "old GPU is obsolete and needs replacement"

CuriouslyC•2h ago
This is where we're headed: https://sibylline.dev/articles/2025-10-12-ai-is-too-big-to-f...
jdalgetty•1h ago
Yikes
Havoc•1h ago
Pretty wild read. Thinking similar - for better or worse this is a full send, at least for the US centric part of the world.
bubblelicious•51m ago
Great take! Certainly resonates with me a lot

- this is war path funding

- this is geopolitics; and it’s arguably a rational and responsible play

- we should expect to see more nationalization

- whatever is at the other end of this seems like it will be extreme

And, the only way out is through

Belphemur•37m ago
Quite an interesting read. Basically it's saying we're in a wartime economy with a race to superintelligence. Whichever superpower gets there first has won the game.

Seeing the latest tariffs and what China has done about rare earth minerals (and also the deal the US made with Ukraine for said minerals), the article might have a point that the superpowers will cripple each other to be first to superintelligence. And you also need money for it, hence the tariffs.

zerof1l•36m ago
Very US-centric article. Written by insecure people who are clinging to power and money desperately.

I don't see how some kind of big breakthrough is going to happen with the current model designs. The superintelligence, if it will ever be created, will require a new breakthrough in model architecture. We've pretty much hit the limit of what is possible with current LLMs. The improvements are marginal at this point.

Secondly, hypothetically, the US achieves superintelligence, what is stopping China from doing the same in a month or two, for example?

Even if China achieves a big breakthrough first, it may benefit the rest of the world.

CuriouslyC•30m ago
It's a US/China-centered article because that's the game: Europe is not a meaningful player, and everyone else is going to get sucked into the orbit of one of the superpowers.

If you read the article carefully, I work hard to keep my priors and the priors of the people in question separate, as their actions may be rational under their priors, but irrational under other priors, and I feel it's worth understanding that nuance.

I'm curious where you got the writer "clinging to power and money desperately."

Also, to be fair, I envy Europe right now, but we can't take that path.

mrbungie•27m ago
> If you think I'm being hyperbolic calling out a future of brutal serfdom. Keep in mind we basically have widespread serfdom now; a big chunk of Americans are in debt and living paycheck to paycheck. The only thing keeping it from being official is the lack of debtor's prison. Think about how much worse things will be with 10% inflation, 25% unemployment and the highest income inequality in history. This is fertile ground for a revolution, and historically the elites would have taken a step back to try and make the game seem less rigged as a self-preservation tactic, but this time is different. As far as I can tell, the tech oligarchs don't care because they're banking on their private island fortresses and an army of terminators to keep the populace in line.

This is suggesting an "end of history" situation. After Fukuyama, we know there is no such thing.

I'm not sure if there is a single strong thesis (as this one tries to be) on how this will end economically and geopolitically. This is hard to predict, much less to bet on.

CuriouslyC•25m ago
I'm not proposing an end of history, but things can remain in stable equilibrium for longer than you'd expect (just look at sharks!). If we slide into stable dystopia now, my guess is that there will be a black swan at some point that shakes us out of it and continues evolution, but we could still be in for 50 years of suffering.
mrbungie•10m ago
> but we could still be in for 50 years of suffering.

I mean if you are talking about USA itself falling into dystopic metastability in such a situation, maybe, but even so I think it misses some nuance. I don't see every other country following USA into oblivion, and also I don't see the USA bending the knee to techno-kings and in the process giving up real influence for some bet on total influence.

The only mechanism I see for reaching complete stability (or at least metastability) in that situation is idiocracy / idiotic authoritarianism, i.e. Trump/his minions actually grabbing power for decades and/or complete corruption of USA institutions.

mike_hearn•25m ago
Comparing to China is tricky because Chinese investment is almost by default bubbly, in the sense of misallocating capital. That's what heavily state-directed economies tend to do, and China is still a communist country. In particular, they tend to over-invest in industrial capacity. China's massively overbuilt electricity grid and HSR network would easily be identified as a bubble in the West, but when it's state officials making the investment decisions we tend not to think of it as a bubble (partly because there's often never a moment at which sanity reasserts itself).

I read an article today in which western business leaders went to China and were wowed by "dark factories" where everything is 100% automated, with lots of photos of factories full of humanoid robots. Mentioned only further down the article: this happens because the Chinese government has started massively distorting the economy in favor of automation projects. It's widely known that one of the hardest parts of planning a factory is figuring out what to automate and what to use human labour for. Over-automating can be expensive because you lose agility, and especially if you have access to cheap labour, the costs and opportunity costs of automation can end up not being worth it. It's a tricky balance that requires a lot of expertise and experience. But obviously, if the government just flat-out reimburses a fifth of your spending on industrial robots, it can suddenly make sense to automate things that in reality should not have been automated.
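
As a rough illustration of how a subsidy like that can flip the automation decision at the margin, here is a minimal payback sketch in Python. Every figure except the one-fifth reimbursement mentioned above is a made-up assumption for illustration:

    # Payback sketch: does a robot cell pay for itself within its useful life,
    # with and without the ~1/5th reimbursement described above?
    # All numbers other than the subsidy rate are illustrative assumptions.
    robot_cost = 250_000          # assumed installed cost of one robot cell, USD
    annual_labor_saved = 40_000   # assumed fully loaded cost of the labor it replaces, USD/year
    useful_life_years = 5         # assumed useful life before retooling/obsolescence
    subsidy_rate = 0.20           # the roughly one-fifth reimbursement from the comment

    def payback_years(cost, savings):
        return cost / savings

    without_subsidy = payback_years(robot_cost, annual_labor_saved)                    # 6.25 years
    with_subsidy = payback_years(robot_cost * (1 - subsidy_rate), annual_labor_saved)  # 5.00 years

    print(f"without subsidy: {without_subsidy:.2f} years payback -> longer than the {useful_life_years}-year life, skip it")
    print(f"with subsidy:    {with_subsidy:.2f} years payback -> fits within the {useful_life_years}-year life, automate")

Under these assumptions the subsidy alone moves a marginal project from "not worth it" to "worth it," which is exactly the distortion being described.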

BTW I'm not sure the Kuppy figures are correct. There are a lot of hidden assumptions about the lifespan of the equipment and how valuable inferencing on smaller/older models will be over time, which are difficult to know today.

CuriouslyC•16m ago
All fair points, and it's hard to know exactly how robust the Chinese system will turn out to be. However, I would argue that their bets are paying off: even if there is some capital misallocation, their hit rate in important areas has been good, while we've been dumping capital into "Uber for X, AirBnB for X, ..."
mike_hearn•51s ago
Well, their bets haven't been paying off. Chinese local/state-level governments are in huge amounts of debt due to a massive real estate bubble and lots of subsidies that don't pay back. It's a systemic problem; their HSR lines, for instance, are losing a ton of money too.

https://www.reddit.com/r/Amtrak/comments/1hnvl3d/chinese_hsr...

https://merics.org/en/report/beyond-overcapacity-chinese-sty...

It's easy to think of Uber/AirBnB-style apps as trivialities, but this is the mistake communist countries always make. They struggle to invest properly in consumer goods because only heavy industry is legible to the planners. China has had too little domestic spending for a long time. The USSR had the same issue: way too many steel mills and nowhere near enough quality-of-life stuff for ordinary people. It killed them in the end; Yeltsin's loyalty to communist ideology famously collapsed when he made a surprise visit to an American supermarket during a diplomatic mission to NASA. The wealth and variety of goods on sale crushed him, and he was in tears on the flight home. A few years later he would end up president of Russia, leading it out of communist times.

grafmax•24m ago
I'm not convinced the geopolitical rivalry between the US and China is a given. To a large degree it's been manufactured - Friday's antics are a case in point.

The US indeed seems destined to fall behind due to decades of economic mismanagement under neoliberalism while China’s public investment has proved to be the wise choice. Yet this fact wounds the pride of many in the US, particularly its leaders, so it now lashes out in a way that hastens its decline.

The AI supremacy bet proposed is nuts. Prior to every societal transition the seeds of that transition were already present. We can see that already with AI: social media echo chambers, polarization, invading one’s own cities, oligarchy, mass surveillance.

So I think the author’s other proposed scenario is right - mass serfdom. The solution to that isn’t magical thinking but building mass solidarity. If you look at history and our present circumstances, our best bet to restore sanity to our society is mass strikes.

I think we are going to get there one way or another. Unfortunately things are probably going to have to get a lot more painful before enough people wake up to what we need to do.

CuriouslyC•19m ago
I think that prior to Trump, Europe could have been a mediator to a benevolent China. Now the hawks are ascendant in Beijing. Trump has shown that his ego pushes him to escalate to try and show others who the "big man" is; this will not end well with China, and I'm not sure he's wise enough to accept that.

Do you really prefer brutal serfdom to the AI supremacy scenario? From where I sit, people have mixed (trending positive) feelings about AI, and hard negative feelings about being in debt and living paycheck to paycheck. I'd like to understand your position here more.

jayd16•3m ago
Well, we could still vote out the corrupt and actually invest in US infrastructure, but I guess that's crazier than hanging all our hopes on AI serfdom.
bpt3•2m ago
> brutal serfdom

You can't seriously believe that spending all your income each month while living in the country with the highest standard of living in history is "serfdom."

Hyperbolic nonsense like this makes the rest of the article hard to take seriously, not that I agree with most of it anyway.

alganet•2h ago
One key difference in all of this is that people were not predicting the dotcom bubble, so there was a surplus left after it popped. It was a surprise.

This AI bubble already has lots of people with their forks and knives waiting to capitalize on a myriad of possible surpluses after the burst. There's speculation on top of _the next bubble_ and how it will form, even before this one pops.

That is absolutely disgusting, by the way.

pixl97•17m ago
> of _the next bubble_ and how it will form

This is how humans have worked in pretty much every area of expansion for at least the last 500 years, and probably longer. It's especially noticeable now because of the amount of excess capital in the world from technological expansion, and because we've run into a lot of the limits we know of in physics, so further work gets very expensive.

If you want to stop the bubbles you pretty much have to end capitalism, which capitalists will fight you over. If AI replaces human thinking and robots replace human labor, that 'solves' the human capital problem but opens up a whole field of dangerous new ones.

rz2k•1h ago
Local/open-weight models are already incredibly competent. Right now a Mac Studio with 256GB can be found for less than $5000, and an equivalent workstation will likely be 50% cheaper in a year. If anything that price is higher because of the boom, rather than subsidized by a potential bubble. It can run an 8-bit quant of GPT-OSS 120B, or a 4-bit quant of GLM-4.6, using only an extra 100-200W. That energy use comes out to roughly 1,000 joules, or about 1/4 Wh, per query and response, and is already competitive with the power efficiency of even Google's offerings.

I think that people doing work in many professions with these offline tools alone could more than double their productivity compared to two years ago. Furthermore, if usage were shared in order to lower idle time, say 20 machines for 100 workers, the initial capital outlay would be even lower.

Perhaps investors will not see the returns they expect, but it is difficult to imagine how even the current state of AI doesn't vastly change the economy. There could be significant business failures among cloud providers and attempts to rapidly increase the cost of admission to closed models, but there's essentially no possibility of productivity regressing to pre-AI levels.
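
A quick back-of-envelope check of those figures, sketched in Python. The power range, hardware price, and 20-machines-for-100-workers scenario come from the comment above; the response time is an assumption:

    # Back-of-envelope check of the energy and cost figures above.
    marginal_watts = 150           # assumed midpoint of the 100-200 W extra draw quoted above
    seconds_per_response = 6       # assumed generation time for a typical query + response
    machine_cost_usd = 5_000       # Mac Studio price quoted above
    machines, workers = 20, 100    # shared-pool scenario from the comment

    joules_per_query = marginal_watts * seconds_per_response   # energy per query, joules
    wh_per_query = joules_per_query / 3_600                    # 1 Wh = 3600 J
    outlay_per_worker = machine_cost_usd * machines / workers  # up-front hardware per worker

    print(f"~{joules_per_query:.0f} J = ~{wh_per_query:.2f} Wh per query")   # ~900 J, ~0.25 Wh
    print(f"~${outlay_per_worker:.0f} of hardware per worker")               # ~$1000 per worker

At these assumed numbers a query lands around a quarter of a watt-hour, and the shared pool works out to roughly $1000 of hardware per worker.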

tyleo•1h ago
I have an M4 MBP and I also think Apple is set up quite nicely to take real advantage of local models.

They already work on the most expensive Apple hardware. I expect that price to come down in the next few years.

It’s really just the UX that’s bad but that’s solvable.

Apple isn’t having to pay for each users power and use either. They sell hardware once and folks pay with their own electricity to run it.

hiq•4m ago
Your comment made me realize that there's also the benefit of not having to handle hardware depreciation, since it's pushed to the customer. And now Apple has renewed arguments to sell better machines more often ("you had ChatGPT 3-like performance locally last year, now you can get ChatGPT 4-like performance if you buy the new model").

I know folks who still use old Apple laptops, maybe 5+ years old, since they don't see the point in changing (and indeed, if you don't work in IT and don't play video games or run other power-demanding workloads, I'm not sure it's worth it). Having new models with a performant local LLM built in might change this for the average user.

notepad0x90•1h ago
The AI "bubble" won't burst, just as the "internet bubble" didn't burst.

The dotcom bubble was a result of investors jumping on the hype train all at once and then getting off of it all at once.

Yes, investors will eventually find another hype train to jump on, but unlike in 2000, we have many more retail investors, and AI is not a brand-new tech sector; it's built upon the existing, well-established, "too big to fail" internet/e-commerce infrastructure. Random companies slapping AI on things will fail, but all the real AI use cases will only expand and require more and more resources.

OpenAI alone just hit 800M MAU. That will easily double in a few years. There will be adjustments, corrections, and adaptations of course, but the value and wealth it generates is very real.

I'm no seer, I can't predict the future but I don't see a massive popping of some unified AI bubble anytime soon.

ACCount37•52m ago
People don't grasp just how insane that "800M MAU and still growing" figure is. CEOs would kill people with their own bare hands to get that kind of userbase.

OpenAI has ~4B of revenue already, and they aren't even monetizing aggressively. Facebook has an infinite money glitch, and can afford to put billions in the ground in pursuit of moonshots and Zuck's own vanity projects. Google is Google, and xAI is Elon Musk. The most vulnerable frontier lab is probably Anthropic, and Anthropic is still backed by Amazon and, counterintuitively, also Google.

At the same time: there is a glut of questionable AI startups, and an extreme failure rate among them is likely - but they aren't the bulk of the market, not by a long shot. The bulk of the "AI money" is concentrated in either the frontier labs themselves or the companies providing equipment and services to them.

The only way I see for the "bubble to pop" is for multiple frontier labs to get fucked at the same time, and I just don't see that happening as it is.

dcminter•50m ago
The dot com crash was a thing though. The bubble burst. It's just that there was real value there so some of the companies survived and prospered.

Figuring out which was which was absolutely not possible at the time. Not many people foresaw Sun Microsystems as a victim, nor was it obvious that Amazon would be a victor.

I wouldn't bet my life savings on OpenAI.

andy99•1h ago
Cell phones have been through how many generations between the 80s and now? All the past generations are obsolete, but the investment in improving the technology (which is really a continuation of WWII era RF engineering) means we have readily available low cost miniature comms equipment. It doesn’t matter that the capex on individual phones was wasted.

Same for GPUs/LLMs? At some point things will mature and we'll be left with plentiful, cheap, high-end LLM access, on the back of the investment that has been made. Whether or not it's running on legacy GPUs (the way some 90s fiber still carries traffic) is beside the point. It's what the investment unlocks.

Havoc•1h ago
The demand for tokens isn't going anywhere, so the hardware will be used.

...whether it is profitable is another matter

deadbabe•53m ago
I think we will enter a neo-Luddite era at some point post-AI boom where it suddenly becomes fashionable to live one’s life with simple retro style technology, and social networks and much of the internet will just become places for bitter old people to complain amongst themselves and share stupid memes. Social media was cool when it was more genuine, but it got increasingly fake, and now with AI it could reach peak-fake. If people want genuine, what is more genuine than the real world?

It will become cool for you to become inaccessible, unreachable, no one knowing your location or what you’re doing. People might carry around little beeper type devices that bounce small pre-defined messages around on encrypted radio mesh networks to say stuff like “I’m okay” or “I love you”, and that’s it. Maybe they are used for contactless payments as well.

People won’t really bother searching the web anymore they’ll just ask AI to pull up whatever information they need.

The question is, with social media on the decline and the internet no longer used for recreational purposes, what else are people going to do? It feels like the consumer tech sector will shrink dramatically, meaning that most software written will be made to create "hard value" instead of soft. Think anything having to do with the movement of data, matter, or money.

Much of the tech world and government plans are built on the assumption that people will just continue using tech to its maximum utility, even when it is clearly bad for them, but what if that simply weren’t the case? Then a lot of things fall apart.

firefoxd•22m ago
I wrote a similar article (not published yet), but my conclusion was "Free GPUs for everyone," or at least cheap ones. Right now H100s are very specialized for the AI pipeline, but so were GPUs (for graphics) before the AI boom. I expect we will find good uses for them.
pizzly•9m ago
If there is a downturn in AI use due to a bubble, then the countries that have built up their energy infrastructure with renewable energy and nuclear (both have decade-long payback periods after the initial investment) will have cheaper electricity, which will lead to a future competitive advantage. Gas-powered plants, on the other hand, require a constant supply of gas to convert to electricity, so the price of gas effectively becomes the price of electricity, leaving very little advantage.
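
That marginal-cost argument can be sketched in a few lines of Python. The heat rate and gas price below are illustrative assumptions, not figures from the comment:

    # Marginal cost of a gas-fired MWh vs. an already-built renewable/nuclear MWh.
    heat_rate_mmbtu_per_mwh = 7.0   # assumed gas needed per MWh for a combined-cycle plant
    gas_price_per_mmbtu = 4.0       # assumed gas price, USD per MMBtu
    amortized_fuel_cost = 0.0       # wind/solar burn no fuel; nuclear fuel adds only a few USD/MWh

    gas_marginal_cost = heat_rate_mmbtu_per_mwh * gas_price_per_mmbtu
    print(f"gas-fired marginal cost: ~${gas_marginal_cost:.0f}/MWh")              # ~$28/MWh at these assumptions
    print(f"paid-off renewables/nuclear marginal cost: ~${amortized_fuel_cost:.0f}/MWh")
    # If gas prices rise, the gas-fired marginal cost (and, where gas sets the
    # margin, the electricity price) rises with them; the amortized plants keep
    # their cost advantage.

This is only a sketch of the mechanism: gas fuel costs flow straight into every MWh generated, while plants whose capital is already sunk face near-zero fuel costs.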