
The AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
1•geox•1m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•1m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
1•jerpint•2m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•3m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
1•breadwithjam•6m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•7m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•8m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•10m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•10m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•10m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•11m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•12m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•13m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•14m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•17m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•18m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•19m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•20m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•23m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•26m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•27m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•28m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•28m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•29m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•32m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•32m ago•1 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•34m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•35m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•36m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•36m ago•0 comments

How the AI Bubble Will Pop

https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
214•hdvr•4mo ago

Comments

rimeice•4mo ago
> Some people think artificial intelligence will be the most important technology of the 21st century.

I don't; I think a workable fusion reactor will be the most important technology of the 21st century.

mattmaroon•4mo ago
Because it’ll power AI!
baq•4mo ago
if AI won't be a fad like everything else, we're going to need these, pronto

...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.

gizajob•4mo ago
We’ll probably need to innovate one of those to power the immense requirements of AI chatbots.
geerlingguy•4mo ago
I think we'll need a few hundred, if spending continues like it has this year.
gizajob•4mo ago
My vision of fusion power comes from Sim City 2000 where one plant just powers absolutely everything forever.
_heimdall•4mo ago
What makes you think we'll have fusion reactors in the 21st century?
rimeice•4mo ago
I don’t think we’ll have the choice.
fundatus•4mo ago
> I like fusion, really. I've talked to some of the luminaries that work in the field, they're great people. I love the technology and the physics behind it.

> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.

https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...

rimeice•4mo ago
Yeah, current tech is expensive and would likely be uncompetitive. At the very, very end of that article is the key to this though:

> I fully support a pure research program for radically different approaches to fusion.

tim333•4mo ago
We've got by without them so far and solar is cracking along.
_heimdall•4mo ago
That's not how invention works though. Something has to be technically possible and we have to discover how to do it in a viable way.
Quarrelsome•4mo ago
ITER apparently fires up in 2039.
_heimdall•4mo ago
That date means nothing though. We have yet to figure out how to run a fusion reactor for any meaningful period of time and we haven't figured out how to do it profitably.

Setting a date for when one opens is just a pipe dream, they don't know how to get there yet.

Quarrelsome•4mo ago
isn't the ROI on current AI investment kinda similar though? Both are built on elements of hope.
tim333•4mo ago
Helion has one under construction https://www.reuters.com/business/energy/helion-energy-starts...

Whether it works or not is of course another matter.

_heimdall•4mo ago
Is it a fusion reactor if it can't maintain a fusion reaction and generate energy?
tim333•4mo ago
A non functioning fusion reactor? So far I think they've achieved fusion but not net energy.
ponector•4mo ago
What makes people think we'll have AGI in the 21st century? LLMs are not AI, and are as far away from AGI as a self-parking car.
tbrownaw•4mo ago
We'll finally have electricity that's too cheap to meter.
lopis•4mo ago
And we'll use it all to run more crypto/AI/next thing
fundatus•4mo ago
Why would it be too cheap to meter? You're still heating up water and putting it through a turbine. We've been doing that for ages (just different sources of energy for the heating up part) and we still meter energy because these things cost money and need lots of maintenance.
tbrownaw•4mo ago
But that's the whole reason fusion is so important. Just like it was the whole reason fission was so important.

https://www.nrc.gov/reading-rm/basic-ref/students/history-10...

Perz1val•4mo ago
As we add more and more solar, we see grid-connection fees rise more and more while electricity itself stays relatively cheap. Fusion won't change that; somebody has to pay for the guy reconnecting cables after a storm.
DengistKhan•4mo ago
Residential will still cost more somehow.
Bigsy•4mo ago
DeepMind is working on solving the plasma control issue at the moment. I suspect they're probably using a bit of AI... and I wouldn't put it past them to crack it.
robinhoode•4mo ago
This is the thing with AI: We can always come up with a new architecture with different inputs & outputs to solve lots of problems that couldn't be solved before.

People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.

ForHackernews•4mo ago
Can we? Why haven't we, then? What are the big problems that were unsolvable before and now we can solve them with AI?

Auto-tagging of photos, generating derivative images and winning at Go, I will give you. There's been some progress on protein folding, I heard?

Where's the 21st century equivalent of the steam locomotive or the sewing machine?

fragmede•4mo ago
They did/are(?).

> Accelerating fusion science through learned plasma control

https://deepmind.google/discover/blog/accelerating-fusion-sc...

(2022)

fundatus•4mo ago
I mean, we already have a giant working fusion reactor (the sun) and we can even harvest its energy (solar, wind, etc.)! That's pretty awesome.
NoGravitas•4mo ago
Using gravitational containment rather than magnetic containment is a pretty cool approach.
jacknews•4mo ago
How so?

The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.

We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.

Fusion is almost certain to be uneconomical in comparison if it's even feasible technically.

AI is already dramatically impacting some fields, including science (e.g. deepfold), and AGI would be a step-change.

rimeice•4mo ago
Cheap, _limitless_ energy from fusion could solve almost every geopolitical/environmental issue we face today. Europe is acutely aware of this at the moment, and it's why China and America are investing mega bucks. We will eventually run out of finite energy sources. Even if we do capture the maximum possible capacity from renewables with 100% efficiency, energy consumption growing at current rates will eventually exceed that maximum. Those rates are accelerating. We really have no choice.
myrmidon•4mo ago
There is zero reason to assume that fusion power will ever be the cheapest source of energy. At the very least, you have to deal with a sizeable vacuum chamber, big magnets to control the plasma and massive neutron flux (turning your fusion plant into radioactive waste over time), none of which is cheap.

I'd say limitless energy from fusion plants is about as likely as e-scooters getting replaced by hoverboards. Maybe next millennium.

jacknews•4mo ago
I mean, the limit to renewables is to capture all the energy from the sun, and maybe the heat of the earth.

But then you start to have some issues with global warming (the temperature at which energy input = energy radiated away)

We probably don't want to release more energy than that.

hx8•4mo ago
Fusion at 100% grid scale might be better for the environment than solar at 100% grid scale.

It might be nice if at the end of the 21st century that is something we care about.

BrokenCogs•4mo ago
Time travel will be the most important invention of the 21st century ;)
nemo•4mo ago
Time travel was the most important invention of the 1800s too, but that goes to show how bad resolving the temporal paradox issue is, now that entire history is gone.
seydor•4mo ago
but people say that AI will spit out that fusion reactor, ergo AI investment is prior in the ordo investimendi or whatever it would be called (by an AI)
throw0101d•4mo ago
Perhaps worth remembering that 'over-enthusiasm' for new technologies dates back to (at least) canal-mania:

* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...

* https://en.wikipedia.org/wiki/Canal_Mania

leojfc•4mo ago
Absolutely. Years ago I found this book on the topic really eye-opening:

- https://www.amazon.co.uk/Technological-Revolutions-Financial...

The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.

e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.

I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.

_heimdall•4mo ago
My understanding is that the 40 hour work week (and similar) was talked about for centuries by workers groups but only became a thing once governments during WWI found that longer days didn't necessarily increase output proportionally.

For a 4-day week to really happen at scale, I'd expect we'd similarly need the government to decide to roll it out rather than workers groups pushing it from the bottom up.

throw0101d•4mo ago
> My understanding is that the 40 hour work week (and similar) was talked about for […]

See perhaps:

* https://en.wikipedia.org/wiki/Eight-hour_day_movement

Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.

_heimdall•4mo ago
Oh sure, a standard shift doesn't make much sense unless you're an employee. My point was specifically about the 40 hour standard we use now though. We didn't get a 40-hour week because workers demanded it, we got it because wartime governments decided that was the "right" balance of labor and output.
throw0101d•4mo ago
> https://www.amazon.co.uk/Technological-Revolutions-Financial...

Yes, that is the first link of my/GP post.

idiotsecant•4mo ago
Importantly, however, canals did end up changing the world.

Most new tech is like that - a period of mania, followed by a long tail of actual adoption where the world quietly changes

foofoo12•4mo ago
There's a good podcast on the Suez and Panama canal: https://omny.fm/shows/cautionary-tales-with-tim-harford/the-...
hemloc_io•4mo ago
The most frustrating thing to me about this most recent rash of biz-guy articles doubting the future of AI is the required mention that AI, specifically an LLM-based approach to AGI, is important even if the numbers don't make sense today.

Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.

Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.

I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.

Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.

LLMs are cool, but if they can't be relied on to do real work, maybe they're not change-the-world cool? More like $30-40B-market cool.

EDIT: Just to be clear here. I'm mostly talking about "agents"

It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.

I use Glean at work and it's great.

There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.

The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.

That's the world changing part, not running a pretty successful biz, with a useful product. That's the part where I haven't seen meaningful adoption.

This is currently pitched as something that has a nonzero chance of destroying all human life; we can't settle for "Eh, it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."

aurareturn•4mo ago

> Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.

Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.

admissionsguy•4mo ago
When your work consists of writing stuff disconnected from reality it surely helps to have it written automatically.
patapong•4mo ago
On the other hand, it's a hundreds-of-billions of dollars market...
Gerardo1•4mo ago
What is?
NoGravitas•4mo ago
Writing stuff disconnected from reality, I assume.
Etheryte•4mo ago
The recent MIT report on the state of AI in business feels relevant here [0]:

> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.

There's no doubt that you'll find anecdotal evidence both for and against in all variations; what's much more interesting than anecdotes is the aggregate.

[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...

crazygringo•4mo ago
No. The aggregate is useless. What matters is the 5% that have positive return.

In the first few years of any new technology, most people investing in it lose money because the transition and experimentation costs are higher than the initial returns.

But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.

Gerardo1•4mo ago
On the provider end, yes. Not on the consumer end.

These are business customers buying a consumer-facing product.

crazygringo•4mo ago
No, on the consumer end. The whole point is that the 5% profitable is going to turn to 10%, 25%, 50%, 75% as companies work out how to use AI profitably.

It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.

johnnyanmac•4mo ago
Can we both at least agree that 95% of companies investing in a technology and failing, with $400B+ of investment, constitutes a bubble popping? I pretty much agree with you otherwise, and that is what the article comes down to as well:

>I believe both sides are right. Like the 19th century railroads and the 20th century broadband Internet build-out, AI will rise first, crash second, and eventually change the world.

sigbottle•4mo ago
I think it's true that AI does deliver real value. It's helped me understand domains quickly, be a better google search, given me code snippets and found obscure bugs, etc. In that regard, it's a positive on the world.

I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.

I also think that whether throwing trillions of dollars at AI for "a better google search, code snippet generator, and obscure bug finder" is worth it is a contentious question, and a lot of people oppose it for that reason.

I personally still think it's kind of crazy we have a technology to do things that we didn't have just ~2 years before, even if it just stagnates right here. Still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).

At the same time, don't know if it's worth trillions of dollars, at least right now.

So all claims on this thread can be very much true at the same time, just depends on your perspective.

johnnyanmac•4mo ago
I have my criticisms of LLMs, but anyone in 2025 trying to sell you AGI is selling you a bridge made of snake oil. The job market won't even be the biggest question the day we truly achieve AGI.

>At the same time, don't know if it's worth trillions of dollars, at least right now.

The revenue numbers sure don't think so. And I don't think this economy can support "trillions" of spending even if it wanted to. That's why the bubble will pop, IMO.

jt2190•4mo ago
That report also mentions individual employees using their own personal subscriptions for work, and points to it as a good model for organizations to use when rolling out the tech (i.e. just make the tools available and encourage/teach staff how they work). That sure doesn’t make it sound like “zero return” is a permanent state.
Workaccount2•4mo ago
Ah yes, the study that everyone posts but nobody reads

>Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.

>The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.

johnnyanmac•4mo ago
IT nightmares aside, this only makes the issue worse: if it is so widespread that some are sneaking personal accounts in to use it, and they still can't make a business more productive/profitable, well, that bubble is awfully wobbly
lowsong•4mo ago
This says more about PwC and what M&A people do all day than it does about ChatGPT.
hshdhdhj4444•4mo ago
> This friend told me she can't work without ChatGPT anymore.

Is she more productive though?

People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.

Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.

aurareturn•4mo ago
No, she's less productive. She just uses it because she wants to do less work, be less likely to get promoted, and have to stay in the office longer to finish her work.

/s

What kind of question is that? Seriously. Are some people here so naive as to think that tens of millions out there don't know when something they choose to use repeatedly, multiple times a day, every day, is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?

echelon•4mo ago
Serious thought.

What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?

This, to me, feels incredibly plausible.

Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.

"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."

That three day report still takes three days, wink wink.

AI can be a tool for 10xers to go 12x, but more likely it's also that AI is the best slack off tool for slackers to go from 0.5x to 0.1x.

And the businesses with AI mandates for employees probably have no idea.

Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.

mrbungie•4mo ago
That's Jevons paradox for you.
johnnyanmac•4mo ago
Yeah, I think this is why it's more important to shift the question to "is the team/business more productive". If a 0.5xer manager is pushing 0.1x work and a 1xer teammate needs to become a 1.5xer to fix the slop, then we have this phenomenon where the manager can feel way more productive, while the team under him is spending more time just to fix or throw out his slop.

Both their perspectives are technically right. But we'll either have burned out workers or a lagging schedule as a result in the long term. I miss when we thought more long term about projects.

freehorse•4mo ago
It is the kind of question that takes into account that people thinking that they are more productive does not imply that they actually are. This happens in a wide range of contexts, from AI to drugs.
1123581321•4mo ago
It isn’t a question asked by people generally suspicious of productivity claims. It’s only asked by LLM skeptics, about LLMs.
rimunroe•4mo ago
That doesn’t seem to me like a good reason to dismiss the question, and especially not that strongly/aggressively. We’re supposed to assume good intentions on this site. I can think of any number of reasons one might feel more productive but in the end not be going much faster. It would be nice to know more about the subject of the question’s experience and what they’re going off of.
1123581321•4mo ago
You’re right; I’m rereading and it’s rude. Thanks.
happymellon•4mo ago
It absolutely is a question people ask when suspicious of productivity claims.

Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.

This "just believe me" mentality normally comes from scams.

danaris•4mo ago
As a counterexample to your assertion, I've seen it a lot on both sides of the RTO discourse.
rimunroe•4mo ago
This is another example of the phenomenon they’re describing, not a counterexample.
danaris•4mo ago
...The post I replied to specifically said "It [questioning people's self-evaluation of productivity] is only asked by LLM skeptics, about LLMs".

Naming another example outside of LLM skeptics asking it, about LLMs, is inherently a counterexample.

rimunroe•4mo ago
Wow you're completely right and I just completely forgot who you were replying to. I thought you were replying to the person the person you were actually replying to was replying to. Sorry about both my mistake and my previous sentence's convolution!
freehorse•4mo ago
Maybe you are not aware of such topics, but yes, it is asked often. It is asked for stimulants, for microdosing psychedelics, for behavioural interventions or workplace policies/processes. Whenever there are any kind of productivity claims, it is asked, and it should be asked.
johnnyanmac•4mo ago
>It isn’t a question asked by people generally suspicious of productivity claims.

Why not? If you've ever gotten an AI-generated email or had to code-review anything vibecoded, you're going to be suspicious about who's "more productive". I've read reports and studies, and it feels like the "more productive" people tend to be pushing more work onto the people below or beside them to fix the generated mess.

I do believe there are productive ways to use this tech, but it does not seem like many people these days have the discipline to establish a proper workflow.

tbrownaw•4mo ago
It's not that hard to review how much you actually got done and check whether it matches how much it felt like you were getting done.
fragmede•4mo ago
The problem isn't a delta between what got done and how much it felt like got done. The problem is that it's not known how long it would have taken you to do what got done unless you do it twice: once by hand and once with an LLM, and then compare. Unfortunately, regardless of what you find, HN will be rushing to say N=1, so there's little incentive to report any individual results.
emp17344•4mo ago
In fact, when this was studied, it was found that using AI actually makes developers less productive:

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

freehorse•4mo ago
To do that properly, one needs some kind of control, which is hard to do with one person. It should be doable with proper effort, but it's far from trivial, because it is not enough to measure what you actually did in one condition; you have to compare it with something. And then there can be a lot of noise for n=1: when you use LLMs, maybe you happen to be solving harder tasks. So you need at least to do it over quite a lot of time, or make sure the difficulty of tasks is similar. If you have a group of people, you can put them into groups instead and not care as much about these parameters, because you can assume that this "noise" will cancel out when you average.
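
A toy simulation of this point, with invented numbers: assume a true 10% LLM speedup and task difficulty that varies log-normally. A single self-comparison barely beats a coin flip, while group averages recover the effect. Every parameter below is an illustrative assumption, not data from any study.

    import random

    random.seed(0)

    def task_hours(with_llm: bool) -> float:
        base = random.lognormvariate(1.0, 0.6)    # task difficulty varies widely (invented)
        return base * (0.9 if with_llm else 1.0)  # assume a true 10% speedup

    # n=1 style: one task per condition. Difficulty noise mostly swamps the effect.
    wins = sum(task_hours(True) < task_hours(False) for _ in range(1000))
    print(f"a single comparison favors the LLM only {wins / 10:.0f}% of the time")

    # Group averages: the noise cancels and the true ~10% gap shows up.
    n = 500
    mean_llm = sum(task_hours(True) for _ in range(n)) / n
    mean_no_llm = sum(task_hours(False) for _ in range(n)) / n
    print(f"mean hours with LLM: {mean_llm:.2f}, without: {mean_no_llm:.2f}")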
mrbungie•4mo ago
> What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder?

That's just an appeal to masses / bandwagon fallacy.

> Is it so hard to believe that ChatGPT is actually productive?

We need data, not beliefs, and current data is conflicting. FFS.

spicyusername•4mo ago
I mean... there are many situations in life where people are bad judges of the facts. Dating, finances, health, etc, etc, etc.

It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.

But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it, and the next few months managing the technical debt I didn't notice when I merged it.

bakugo•4mo ago
You're working under the assumption that punching a prompt into ChatGPT and getting up to grab some coffee while it spits out thousands of tokens of meaningless slop to be used as a substitute for something that you previously would've written yourself is a net upgrade for everyone involved. It's not. I can use ChatGPT to write 20 paragraph email replies that would've previously been a single manually written paragraph, but that doesn't mean I'm 20x more productive.

And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.

isamuel•4mo ago
> And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.

Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.

bakugo•4mo ago
> I can’t get my job done without a computer

Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.

You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.

aurareturn•4mo ago
I can’t maintain my company’s Go codebase without chatgpt.
danaris•4mo ago
There's a big difference between needing a tool to do a job that only that tool can do, and needing a crutch to do something without using your own faculties.

LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.

If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.

johnnyanmac•4mo ago
> you can’t mean this in any kind of robust way.

Why not?

>I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?”

It's very possible. I know people love besmirching the "you won't always have a calculator" mentality. But if you're using a calculator for 2nd-grade mental math, you may have degraded too far. It varies with the task, of course.

>Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.

Depends on how it's making it easier. Phones are an excellent example. They make communication much easier and long distance communication possible. But if it gets to the point where you're texting someone in the next room instead of opening your door, you might be losing a piece of you somewhere.

KoolKat23•4mo ago
That's a very broad assumption.

It's no different from a manager who delegates. Are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?

Work hard versus work smart. Busywork cuts both ways.

aurareturn•4mo ago
You’re assuming that there is zero quality check and that managers and clients will accept anything chatgpt generates.

Let’s be serious here. These are still professionals and they have a reputation. The few cases you hear online of AI slop in professional settings is the exception. Not the norm.

coderjames•4mo ago
> This friend told me she can't work without ChatGPT anymore.

It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily; they monitor the usage statistics and are updating engineering leveling guides to make sufficient usage of AI a requirement for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.

y0eswddl•4mo ago
There's literally a study out that shows that when developers thought LLMs were making them 20% faster, the LLMs turned out to be making them 20% less productive:

https://arxiv.org/abs/2507.09089

johnnyanmac•4mo ago
>Is it so hard to believe that ChatGPT is actually productive?

Given what I've seen in the educational sector: yes, very hard. We already had a massive split in extremes between the highly educated and the ones who struggle. The last thing we need is to outsource the act of thinking to a billionaire tech company.

The slop you see in the workplace isn't encouraging either.

legucy•4mo ago
Cigarettes were/are a pretty lucrative business. It doesn't matter if it's better or worse; if it's as addictive as tobacco, the investors will make back their money.
KoolKat23•4mo ago
Productive how and for who?

My own use case is financial analysis and data capture by the models. It takes away the grunt work, so I can focus on the more pleasant aspects of the job; it also means I can produce better-quality reports, as I have additional time to look more closely. It also points out things I could have potentially missed.

Free time and boredom spurs creativity, some folks forget this.

I also have more free time, for myself, you're not going to see that on a corporate productivity chart.

Not everything in life is about making more money for some already wealthy shareholders, a point I feel is sometimes lost in these discussions. I think some folks need some self-reflection on this point; their jobs don't actually change the world, and thinking of the shareholders only gets you so far. (Not pointed at you, just speaking generally.)

johnnyanmac•4mo ago
>Productive how and for who?

For me, quality is the biggest metric, not money. But time does play into the metric of quality.

The sad reality is that many use it as a shortcut to output slop. That may be "productive" in a job where the busywork isn't critical for anyone but your paycheck, but that kind of corner-cutting seems anathema to proper engineering or any other mission-critical duties.

>their jobs don't actually change the world and thinking of the shareholders only gets you so far.

I'm worried about seeing more cases like a lawyer citing cases to a judge that never existed. There are ethical concerns about the casual chat apps too, but I can leave that to others.

KoolKat23•4mo ago
I think this is not really the case; people see through that type of LLM use (busywork) immediately. This is demonstrated by the fact that top-down implementations aren't working despite use amongst employees thriving.

People doing their jobs know how to use it effectively. Just because corporates aren't capturing that value for themselves doesn't mean it's low quality. It's being used in a way that is perhaps reflected as an improvement in the actual employees standing, and could be bridging existing outdated work processes. Often an employee is powerless to change these processes and KPI's are notoriously narrow in scope.

Hallucinations happen less frequently these days, and people are aware of the pitfalls so they account for them. Literally in my own example above, it means I have more time to actually check my own work (and it is work), and it also points out factors I might have missed as a human (this has absolutely happened multiple times already).

fragmede•4mo ago
> Doesn’t mean smoking cigarettes is good for working.

Fun fact: smoking likely is! There have been numerous studies into nicotine as a nootropic, e.g. https://pubmed.ncbi.nlm.nih.gov/1579636/#:~:text=Abstract,sh... which have found that nicotine improves attention and memory.

Shame about the lung cancer though.

iberator•4mo ago
Nicotine does not cause cancer. Smoke does.
phatskat•3mo ago
Yes; however, nicotine can speed up the growth of existing cancers.
sotix•4mo ago
> Doesn’t mean smoking cigarettes is good for working.

Au contraire. Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder: https://www.sciencedirect.com/science/article/abs/pii/S00913...

> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.

johnnyanmac•4mo ago
So the best interpretation is that it's like Adderall: something to be carefully prescribed in doctor-sanctioned doses, not something you buy over the counter and smoke a pack a day of.
bakugo•4mo ago
> This friend told me she can't work without ChatGPT anymore.

This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.

deadbabe•4mo ago
I find it’s mostly a sign of how lazy people get once you introduce them to some new technology that requires less effort for them.
manishsharan•4mo ago
Most developers can't do much work without an IDE and Chrome + Google.

Would you say that their work has no value?

johnnyanmac•4mo ago
This is probably the only place I can properly say "Programmers should be brought up with vim and man pages", so I'll say it here.

Anyway, IDEs don't try to offload the thinking for you; an IDE is more like an abacus. You still need to work in it a while and learn the workflow before it's more efficient than a text editor + docs.

Chrome is a trickier aspect, because the reality is that a lot of modern docs completely suck. So you rely less on official documentation and more on how others have navigated the same problems and whether those options work for you. I'd rather we write proper documentation than offload it onto a black box that may or may not understand what it's spouting at you, though.

aurareturn•4mo ago
What kind of logic is this?

ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value

Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value

dsr_•4mo ago
The output of a farm is food or commodities to be turned into food.

The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.

“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”

That's on the front page of their website, describing what PwC does.

Now, what did PwC use to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.

aurareturn•4mo ago
The output of her work isn't presentations and reports. The actual output is raising money and making successful deals. That mostly requires convincing investors, which is very hard to do.

Look up what M&A is.

johnnyanmac•4mo ago
>Look up what M&A is.

Mergers and Acquisitions? If that's the right acronym, I hate it even more, thank you.

But yes, I can see how automating the BS of corporate culture then using it to impress people (who also don't care anyway) by saying "I made this with AI" can be "productive". Not really a job I can do, though.

aurareturn•4mo ago
Classic software developer mindset. Thinks nothing is valuable except writing code.
johnnyanmac•4mo ago
If you saw my rant on monopolistic mergers and thought "he only cares about writing code", then it's clear who's really in the software mindset.
aurareturn•4mo ago
What?

If you think convincing investors to give you hundreds of millions is easier than writing code, you’re out of your mind.

nizsle•4mo ago
Greater output doesn't always equal greater productivity. In my days in the investing business we would have junior investment professionals putting together elaborate and detailed investment committee memos. When it came time to review a deal in the investment committee meetings we spent all our time trying to sift through the content of the memos and diligence done to date to identify the key risks and opportunities, with what felt like a 1:100 signal to noise ratio being typical. The productive element of the investment process was identifying the signal, not producing the content that too often buries the signal deeper. Imo, AI tools to date make it so much easier to create content which makes it harder to be productive.
thisisit•4mo ago
> This friend told me she can't work without ChatGPT anymore

I am curious what kind of work she is using ChatGPT for such that she cannot do without it?

> ChatGPT released data showing that programming is just a tiny fraction of queries people do

People are using it as a search engine, getting dating advice, and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, would they be willing to do so?

The poster's point was that coding is an area that is paying for LLMs so consistently that every model has a coding-specific version. But we don't see the same sort of specialized models for other areas, and the adoption there is low to nonexistent.

FridgeSeal•4mo ago
> what kind of work is she using ChatGPT such that she cannot do without it?

Given they said this person worked at PwC, I’m assuming it’s pointless generic consultant-slop.

Concretely it’s probably godawful slide decks.

johnnyanmac•4mo ago
>Where does this notion that LLMs have no value outside of programming come from?

Well, this article cites $400B of spending against $12B of revenue. That's not zero value, but it definitely shows overvaluation. We're not paying that level of money back with consumer-level goods.

Now, is B2B valuable? Maybe. But it's really tough to value that given how businesses are operating c. 2025.

> ChatGPT released data showing that programming is just a tiny fraction of queries people do.

Yes, but it's not 2010 anymore. Companies are already on ChatGPT's neck trying to get ROI. They can't run unprofitably for a decade at this level of spending like all the FAANGs did last decade.

kumarm•4mo ago
> even in programming it's inconclusive as to how much better/worse it makes programmers

Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.

Current AI tools may not beat the best programmers, but they definitely improve average programmer efficiency.

piker•4mo ago
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.

Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.

UrineSqueegee•4mo ago
This is literally how I maintain the code at my current position; if I didn't have copilot+ I would be cooked.
Gerardo1•4mo ago
...what were you doing before?

That's bread and butter development work.

UrineSqueegee•4mo ago
Didn't have to maintain any code before, thankfully.
aurareturn•4mo ago
Same here. Coworker left. I now maintain a bunch of Go code and I had never written any before.

I use copilot in agent mode.

Wowfunhappy•4mo ago
I have! Claude is great at taking a large open source project and adding some idiosyncratic feature I need.
lowsong•4mo ago
> using a programming language you have not used before

But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.

jbstack•4mo ago
There are plenty of scenarios where you want to work with a new language but you don't want to have to dedicate months/years of your life to becoming expert in it because you are only going to use it for a one-time project.

For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.

In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.

piva00•4mo ago
The question is: is that value worth US$400B per year of investment sucking all the money out of other ventures?

It's a damn good tool; I use it, I've learned the pitfalls, and it has value. But the inflation of potential value is, by definition, a bubble...

player1234•4mo ago
It is not worth it and it is not even that impressive considering the cost.

If you told me that you would spend half a trillion dollars and the best minds on reading the whole internet, and then with some statistical innovation try to guess the probable output for a given input, the way it works now seems about right; probably a bit disappointing, even.

I would also say, it seems cool and you could do that, but why would you? At least when the training is done it is cheap to use right? No!? What the actual fuck!

philippta•4mo ago
> they definitely improve average programmer efficiency

Do we really need more efficient average programmers? Are we in a shortage of average software?

jennyholzer•4mo ago
> Are we in a shortage of average software?

Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.

layer8•4mo ago
I don’t understand how your three sentences mesh with each other. In any case, making the development of average software more efficient doesn’t by itself change anything about its quality. You just get more of it faster. I do agree that average software quality isn’t great, though I wouldn’t attribute it to LLMs (yet).
Cthulhu_•4mo ago
Average programmers do not produce average software; the former implements code, the latter is the full picture and is more about what to build, not how to build it. You don't get a better "what to build" by having above-average developers.

Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.

tymonPartyLate•4mo ago
I did just that and I ended up horribly regretting it. The project had to be coded in Rust, which I kind of understand but never worked with. Drunk on AI hype, I gave it step by step tasks and watched it produce the code. The first warning sign was that the code never compiled at the first attempt, but I ignored this, being mesmerized by the magic of the experience. Long story short, it gave me quick initial results despite my language handicap. But the project quickly turned into an overly complex, hard to navigate, brittle mess. I ended up reading the Rust in Action book and spending two weeks cleaning and simplifying the code. I had to learn how to configure the entire tool chain, understand various cargo deps and the ecosystem, setup ci/cd from scratch, .... There is no way around that.

It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.

fabian2k•4mo ago
AI can be quite impressive if the conditions are right for it. But it still fails at so many common things for me that I'm not sure if it's actually saving me time overall.

I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files: essentially changing one constructor parameter in all derived classes from a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.

Just one anecdote, but my experience so far has been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it in.

benregenspan•4mo ago
One thing I like for this type of refactoring scenario is asking it to write a codemod (which you can of course do yourself, but there's a learning curve). A faster result that takes advantage of a deterministic tool.
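
For concreteness, a minimal sketch of what such a codemod can look like, in Python rather than the typed-language scenario above; BaseHandler, Metrics, and the src/ layout are invented names for illustration. The appeal is that the edit is deterministic and reviewable as a plain diff.

    import re
    from pathlib import Path

    NEW_IMPORT = "from app.metrics import Metrics\n"               # hypothetical import to add
    SUBCLASS_RE = re.compile(r"class\s+\w+\(\s*BaseHandler\s*\)")  # hypothetical base class

    for path in Path("src").rglob("*.py"):
        text = path.read_text()
        if not SUBCLASS_RE.search(text):
            continue  # only touch files that define a subclass
        # Add the new parameter to every __init__ signature in the file.
        # (A simplification: a real codemod would target only the subclasses.)
        new_text = text.replace("def __init__(self", "def __init__(self, metrics: Metrics")
        if NEW_IMPORT not in new_text:
            new_text = NEW_IMPORT + new_text  # prepend the import
        path.write_text(new_text)
        print(f"rewrote {path}")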
manishsharan•4mo ago
This is exactly my experience. We wanted to modernize a Java codebase by removing Java JNDI global variables. This is a simple though tedious task. We tried Claude Code and Gemini; both of their results were hilarious.
Workaccount2•4mo ago
LLMs are awful at tedious tasks, usually because they involve massive context.

You will have much more success if you can compartmentalize and use new LLM instances as often as possible.
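
A minimal sketch of that tactic applied to a cleanup like the JNDI example above, assuming the official openai Python client; the model name, prompt, and paths are placeholders, and you would review every diff before committing.

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = ("Remove the JNDI global variables from this Java file. "
              "Return only the complete rewritten file.")

    for path in Path("src/main/java").rglob("*.java"):
        # A brand-new messages list per file means a brand-new context window:
        # nothing from earlier files can crowd out or confuse this one.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": f"{PROMPT}\n\n{path.read_text()}"}],
        )
        path.write_text(resp.choices[0].message.content)  # review before committing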

layer8•4mo ago
How can I possibly assess the results in a programming language I haven’t used before? That’s almost the same as vibe coding.
jbstack•4mo ago
The same way you assess results in a programming language you have used before. In a more complicated project that might mean test suites. For a simple project (e.g. a Bash script) you might just run it and see if it does what you expect.
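
A toy sketch of that fallback using pytest; slugify and myproject.text are hypothetical stand-ins for whatever the LLM wrote. The tests pin down the behavior you care about without requiring you to read the implementation fluently.

    # tests/test_slugify.py -- run with `pytest`
    from myproject.text import slugify  # hypothetical LLM-written function under test

    def test_basic_slug():
        assert slugify("Hello, World!") == "hello-world"

    def test_idempotent():
        # Running it twice should change nothing.
        assert slugify(slugify("Already Slugged")) == "already-slugged"

    def test_strips_stray_separators():
        assert slugify("  --spaced--  ") == "spaced"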
layer8•4mo ago
The way I assess results in a familiar programming language is by reviewing and reasoning through the code. Testing is necessary, but not sufficient by any means.
edanm•4mo ago
Out of curiosity, how do you assess software that you didn't write and just use, and that is closed source? Don't you just... use it? And see if it works?

Why is this inherently different?

majewsky•4mo ago
You are correct that this is indeed a mostly unsolved problem. In chapter 15 of "The Mythical Man-Month", Fred Brooks called for all program documentation to cover not only how to use a program, but also how to modify a program [1] and, relevant to this discussion, how to believe a program. This was before automated tests and CI/CD were a thing, so he advocated for shipping test cases with the program that the user could review and execute at any time. It's now 50 years later, and this is one of the many lessons in that book that we've collectively not picked up on enough.

[1] Side-note: This was written at a time when selling software as a standalone product was not really a thing, so everything was open-source and the "how to modify" part was more about how to read and understand the code, e.g. architecture diagrams.

edanm•4mo ago
As you said, this is in a very different context. He was building an OS, which was sold to highly technical users running their own programs on it.

I'm talking about "shrinkwrap" software like Word or something. For that, there's nothing even close to testing available to the user; all you can do is "system test" it by using it.

hemloc_io•4mo ago
Yeah I've used it for personal projects and it's 50/50 for me.

Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.

Things like deepwiki are useful too for open source work.

For me though, the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.

Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.

randomNumber7•4mo ago
> For me though the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups

If you work in science, it's great to have something that spits out mediocre code for your experiments.

johnnyanmac•4mo ago
>Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.

Sad but true. Better to sell to management and tagline it as "you don't need a whole team anymore", or go so far as "you can do this all by yourself now!".

Sadly managers usually have more money to spend than the workers too, so it's more profitable.

tbrownaw•4mo ago
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.

So it looks best when the user isn't qualified to judge the quality of the results?

thisisit•4mo ago
> using a programming language you have not used before

Haven't we established that if you are a layman in an area, AI can seem magical? Try doing something in your established area and you might get frustrated. It will give you the right answer with caveats: code that is too verbose, performance-intensive, or sometimes ignores best security practices.

apercu•4mo ago
"Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers."

The business model is data collection about you on steroids, and the bet that the winning company will eclipse Meta in value.

It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.

viking123•4mo ago
I think there is real value. For instance, nowadays I just use ChatGPT as a Google replacement, for brainstorming, and for coding stuff. It's quite useful, and it would be hard to go back to a time without this kind of tool. The 20 bucks a month is more than worth it.

Not sure, though, that they make enough revenue, or what the moat will be if the best models more or less converge around the same level. For most normies it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.

onlyrealcuzzo•4mo ago
> adoption is low to nonexistent outside of programming

Odd way to describe ChatGPT which has >1B users.

AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.

Adoption is far from low.

TheCapeGreek•4mo ago
> AI overviews have rolled out to ~3B users

Does that really count as adoption, when it has been introduced as a default feature?

onlyrealcuzzo•4mo ago
Yes, if people are interacting with them, which they are.

HN seems to think everyone is like in the bubble here, which thinks AI is completely useless and wants nothing to do with it.

Half the world is interacting with it on a regular basis already.

Are we anywhere near AGI? Probably not.

Does it matter? Probably not.

Inference costs are dropping like a rock, and usage is continuing to skyrocket.

majewsky•4mo ago
I am interacting with AI daily through Google products. YouTube is consistently giving me auto-translated titles that are either hilarious or wrong, and I desperately want to turn this bullshit off, but I can't, because it's not giving me an option.

That's the kind of adoption that should just be put up for adoption instead.

(And of course, the reason that I can tell that the auto-translated video titles are hilarious and/or wrong is because they are translating into a language that I speak from a language that I also speak, but apparently the YouTube app's dev team cannot fathom that a person might speak more than one language.)

ACCount37•4mo ago
Mostly agreed, but AI overviews are a very bad example. Google can just force feed its massive search user base whatever bullshit it damn pleases. Even if it has negative value to the users.

I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.

neutronicus•4mo ago
My father (in his 70s) has started specifically looking for the AI overview, FWIW.

He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.

raincole•4mo ago
> adoption is low to nonexistent outside of programming

In the last few months, every single non-programmer friend I've met has ChatGPT installed on their phone (N>10).

Out of all the people that I know enough to ask if they have ChatGPT installed, there is only one who doesn't have it (my dad).

I don't know how many of them are paying customers though. IIRC one of them was using ChatGPT to translate academic writing so I assume he has pro.

Gigachad•4mo ago
I’ve been trying out locally run models on my phone. The iPhone 17 is able to run some pretty nice models, but they lack access to up to date information from the web like ChatGPT has. Wonder if some company like Kagi would offer some api to let your local model plug in and run searches.
data-ottawa•4mo ago
Kagi does, but it's fairly expensive (for my taste) and you have to email them for access.

There are other companies that provide these tools for anything supporting MCP.

fxtentacle•4mo ago
That would be perfect. Especially because Kagi could also return search results as JSON to the AI if they control both sides of the interaction.
hermannj314•4mo ago
My daughter and her friends have their own paid chatgpt. She said she uses it to help with math homework and described to me exactly why I bought a $200 TI-92 in the 90s with a CAS.

Adoption is high with young people.

billy99k•4mo ago
"Where's the business value? "

Have you ever used an LLM? I use it every day to help me with research and with completing technical reports (which used to take up a lot more of my time).

Of course you can't just use it blindly, but it definitely adds value.

lm28469•4mo ago
Does it bring more value than it costs? That's the real question.

Nobody doubts it works; everybody doubts Altboy when he asks for $7 trillion.

UpsideDownRide•4mo ago
This question is pretty hard to answer without knowing the actual costs.

Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs, it gets pretty muddy whether it is a value add or not.

sakesun•4mo ago
Have you read the article? The cost is currently not justified by the benefit.
player1234•4mo ago
How did you measure this "a lot more of my time"? Please share your results and methodology!
aiisnotjustah•4mo ago
I don't know what you mean by this at all tbh.

I don't think the researchers at the top think LLMs are AGI.

DeepMind and co are already working on world models.

The biggest bottleneck right now is compute, compute, and compute. If an experiment takes a MONTH to train, you want a lot more compute. You need compute to optimize what you already have, like LLMs, and then again a lot of compute to try out new things.

All of the compute, data centers, and GPUs are not LLM-only GPUs. They are ML-capable GPUs.

mnky9800n•4mo ago
i think it is more like maps. before 2004, before google maps, the way we interacted with the spatial distribution of places and things was different. all these ai dev tools like claude code as well as tools for writing, etc. are going to change the way we interact with our computers.

but on the other side, the reason everyone is so gung ho on all this is because these models basically allow for the true personalization of everything. They can build up enough context about you in every instance of you doing things online that they can craft the perfect ad experience to maximize engagement and conversion. that is why everyone is so obsessed with this stuff. they don't care about AGI, they care about maintaining the current status quo where a large chunk of the money made on the internet is done by delivering ads that will get people to buy stuff.

UpsideDownRide•4mo ago
I think there is a good flipside too. LLMs potentially enable generating custom made tooling tailored just for you. If you can get/provide data it's pretty easy to cook up solutions.

As an example: I'd never bother with a mobile app just for myself, since it's too annoying to get into for a somewhat small thing. Now I can chug along and have an LLM quickly fill in my missing basics in the area.

mnky9800n•4mo ago
Yes. In my opinion the promise of LLMs isn't some AGI robot doing everything for you. It's a kind of holodeck, or the computer in Star Trek: a tool that is simply able to translate your ideas into a computational representation.
cornholio•4mo ago
It's silly to say that the only objective that will vindicate AI investments is AGI.

The current batch of deep learning models is fundamentally a technology for labor automation. This is immensely useful in itself, without the need for AGI. The Sora 2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )

So looking only at video capabilities, or at coding capabilities, it's already poised to automate and upend industries worth trillions in the long run.

The emerging reasoning capabilities are very promising, able to generate new theories and run scientific experiments in easy-to-test fields, such as in vitro drug creation. It doesn't matter if the LLM hallucinates 90% of the time, if it correctly reasons a single time and can create even a single new cancer drug that passes the test.

These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.

asdf1280•4mo ago
The problem is that it’s already commodified; there’s no moat. The general tech practice has been capture the market by burning vc money, then jack up prices to profit. All these companies are burning billions to generate a new model and users have already proven there is no brand loyalty. They just hop to the new one when it comes out. So no one can corner the market and when the VC money runs out they’ll have to jack up prices so much that they’ll kill their market
Wowfunhappy•4mo ago
> The problem is that it’s already commodified; there’s no moat.

From an economy-wide perspective, why does that matter?

> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.

Great, that means there might be real competition! This generally keeps prices down, it doesn't push them up! It's true that VCs may end up unhappy, but will they be able to do anything about it?

frogperson•4mo ago
The moat isn't with the LLM creators, it's with Nvidia, but even that is under siege by Chinese makers.
squidbeak•4mo ago
Compute is the moat.
jennyholzer•4mo ago
You seem to be making an implicit claim that LLMs can create an effective cancer drug "10% of the time".

Smells like complete and total bullshit to me.

Edit: @eucyclos: I don't assume that ChatGPT and LLM tools have saved cancer researchers any time at all.

On the contrary, I assume that these tools have only made these critical researchers less productive, and made their internal communications more verbose and less effective.

eucyclos•4mo ago
How many hours writing emails does it have to save human cancer researchers for it to be effectively true?
cornholio•4mo ago
No, that's not the claim. The claim is that we will create a hypothetical LLM that, when tasked with a problem at the scientific frontier of molecular biology, will, about 10% of the time, correctly reason about the existing literature and reach conclusions that are valid or plausible to experts in the field.

Let's say you run that LLM one million times and get 100,000 valid reasoning chains. Let's say among them are variations on 1,000 fundamentally new approaches and ideas; out of those, you can actually synthesize 200 new candidate compounds in the laboratory; out of those, 10 substances show a strong in-vitro response; and then one of those completely cures some cancerous mice.

There you go: you have substantially automated the intellectual work of cancer research, and you have one very promising compound to take into phase 1 trials that you didn't have before AI, all without any AGI.
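
A quick sketch of that funnel (every rate here is the comment's hypothetical, not a measured number):

  # Hypothetical drug-discovery funnel; all rates are illustrative assumptions.
  runs = 1_000_000
  valid_chains = int(runs * 0.10)   # 10% correct reasoning -> 100,000
  novel_ideas = 1_000               # distinct new approaches among them
  synthesized = 200                 # compounds you can actually make
  in_vitro_hits = 10                # strong in-vitro response
  cures = 1                         # cures cancer in mice
  print(cures / runs)               # 1e-06: one hit per million runs is the bet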

rafaelmn•4mo ago
If you take the total investment into AI and divide by, say, $100k, that's how many man-years you need to replace with AI for it to be cost-effective as labor automation. The numbers aren't that promising given the current level of capability.
Gerardo1•4mo ago
Don't even need to get too fancy with it. OpenAI has publicly committed to ~$500B in spending over the next several years (never mind that even they don't expect to actually bring that much revenue in).

$500B / $100,000 is 5 million man-years, or about 167k 30-year careers.

The math is ludicrous, and the people saying it's fine are incomprehensible to me.

Another comment on a similar post just said, no hyperbole, irony, or joke intended: "Just you switching away from Google is already justifying 1T infrastructure spend."

cornholio•4mo ago
I don't think I follow. $1 trillion of total investment divided by $100k yields 10 million man-years, or roughly 330k man-careers.

Just the disruption we can already see in the software industry is easily of that magnitude.
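
For what it's worth, both versions of this back-of-the-envelope check out (the $100k salary and 30-year career length are the thread's assumptions):

  # Investment / yearly salary = man-years of labor to displace.
  # $100k/year and 30-year careers are the thread's assumptions.
  for investment in (500e9, 1e12):
      man_years = investment / 100_000
      careers = man_years / 30
      print(investment, man_years, round(careers))
  # 5e11 -> 5,000,000 man-years, ~166,667 careers
  # 1e12 -> 10,000,000 man-years, ~333,333 careers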

rafaelmn•4mo ago
>Just the disruption we can already see in the software industry are easily of that magnitude.

WTF? Where are you seeing that?

Also, no, you can't calculate $100k over 30 years as $3M, because you expect investment growth. At, say, the stock market average of 7 percent per year, that investment must return something like $24 million over 30 years, otherwise it's not worth it. That means around $8 trillion over the next 30 years, if you look at that long an investment period.

And who in the hell is going to capture 30 years of profit with model/compute investments made today?

The math only maths within short timeframes: hardware will be amortized in 5 years, models obsolete in even less. So in the best-case scenario you have to displace 2 million people and capture their output to repay that. Not with future tech, but with tech investments made today.
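
The opportunity-cost math here, made explicit (the 7% market return is the comment's assumed hurdle rate):

  # Money spent on AI today could instead compound in the market.
  growth = 1.07 ** 30          # ~7.6x over 30 years at 7%/year
  print(3e6 * growth)          # ~$22.8M: what a $3M "career budget" must beat
  print(1e12 * growth)         # ~$7.6T: what $1T of AI capex must beat

which is roughly where the "$24 million" and "$8 trillion" figures above come from.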

cornholio•4mo ago
Global employment in software development and adjacent fields is in the tens of millions. To say the impact of AI code automation will be, at most, a rounding error of 1-2% of that is just silly; currently the junior pipeline is almost frozen in the global north, and entire batches of graduates can't find jobs in tech.

Sure, the financial math over 30 years does not follow elementary arithmetic, and if the development hits a wall tomorrow they will have trouble recovering the investment just from code automation tools.

But this is clearly a nonsense scenario; the tech is rapidly expanding to other fields with obvious potential for automation. This is not pie-in-the-sky future technology yet to be invented, it's the obvious productization of latent capability, similar to the early internet days. There might be some overshoots, but the latent potential is all there; the AI investments are looking to be first movers in that enormously lucrative space and to take what seem to me reasonable financial risks in light of the rewards.

My claim is not that AGI will soon be available, but that applying existing frontier models across the entire economy, in the form of mature, yet-to-be-developed products, will easily generate disruption with a present value in the trillions.

rafaelmn•4mo ago
You do understand that you don't just replace a $100k developer and call it a day; you have to charge the same company $100k for your AI tools. No model is anywhere near that today: they are having trouble convincing enterprises to pay even $100 per employee. The current models do not math at all; the only way these investments work is if models get fundamentally better.
maeln•4mo ago
> Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )

> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.

Can Sora 2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to reloop from the last frame, with those obnoxious cuts the video you pointed to has? Can it reshoot the exact same scene with just one element altered?

All the video models right now are only good at making short, low-res, barely post-processable video. The kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello, cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline, yeah, maybe, why not; automate? I won't hold my breath).

And the argument about their future capabilities, well... for 50+ years we've been told fusion is 20 years away.

Btw, the same argument can be made for LLMs and image-gen tech in any creative context. People severely underestimate just how much editing, re-work, purpose, and pre-production is involved in any major creative endeavor. Most models are severely ill-suited for that work. They can be useful for some things (specifically, for editing images, AI-driven image fill does work decently, for example), but overall, as of right now, they are mostly good at making low-quality content. Which is fine, I guess; there is a market for it, but it was already a market that was not keen on spending money.

____mr____•4mo ago
> They might replace the bottom of the barrel of social media posting (hello cute puppy videos)

Lay off. Only respite I get from this hell world is cute Rottweiler videos

data-ottawa•4mo ago
This is very surface level criticism.

Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.

This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.

One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.

maeln•4mo ago
> This is very surface level criticism.

Is it now? I don't think being able to accurately and predictably make changes to a shot, a draft, a design is surface-level in production.

> Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.

Tell them to change the tilt of the camera roughly 15 degrees left without changing anything else in the scene, and tell me if it works.

> This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.

Well does a lot of heavy lifting there.

> One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.

And what if the model itself is the limiting factor? The entire tech? Do we have any proof that the current technologies will ever be able to handle the cases I spoke about?

Also, one thing I didn't mention in the first post: assuming the tech does get to the point where it can be used to automate a lot of production, if throwing a few million at a GPU cluster is enough to "generate" a relatively high-quality movie or series, the barrier to entry will be incredibly low. The cost will be driven down, the amount of production will be very high, and overall it might not be a trillion-dollar industry anymore.

hemloc_io•4mo ago
Regardless of whether you're correct about this (I'm not an ML expert, so who knows), I'd be very happy if we cured cancer, so I hope you're right. And the video is a cool demo.

I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"

to

"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"

The move to RL has specifically made me skeptical of the size of the buildout.

codingdave•4mo ago
There is a middle ground where LLMs are used as a tool for specific use cases, but not applied universally to all problems. The high adoption of ChatGPT is the proof of this. General info, low accuracy requirements - perfect use case, and it shows.

The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.

We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.

calmoo•4mo ago
FWIW Derek Thompson (the author of this blogpost) isn't exactly a 'business guy'
jennyholzer•4mo ago
If I'm not mistaken he's working with Ezra Klein to push the Democrats to embrace racism instead of popular economic measures.

Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.

Second Edit:

Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.

Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".

Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.

Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.

Third Edit:

@calmoo: I think you're not listening to the nuances of my opinion, and instead having an intense emotional reaction to my well-justified claims of racism.

Wowfunhappy•4mo ago
We're very off topic, but if you're truly interested in Ezra Klein's worldview, I highly recommend his recent interview with Ta-Nehisi Coates. At minimum, I think you'll discover that Ezra's feelings are a lot more nuanced than you're making them out to be.

https://www.nytimes.com/2025/09/28/opinion/ezra-klein-podcas...

calmoo•4mo ago
I don't really want to discuss politics off the back of my purely 'for your information' comment, but I think you're grossly misrepresenting Ezra Klein's worldview and not listening to the nuances of his opinion, and instead having an intense emotional reaction to his words. Take a step back and try to think a bit more rationally here.

Also your prediction of them making a JD vance republican pivot is extremely misguided. I would happily bet my life savings against that prediction.

joules77•4mo ago
Because the story is no longer about business or economics. This is more like the nuclear arms race in the 1940s. Red Queen dynamics.
Cthulhu_•4mo ago
Why would we want AGI? I've yet to read a convincing argument in favor (but granted, I never looked into it; I'm still at science-fiction doomerism). One thing that irks me is that people see it as inevitable, and that we have to pursue AGI because if we don't, someone else will. Or, more bleakly, if we don't actively pursue it, our malignant future AGI overlords will punish us for not bringing them into existence (Roko's basilisk, the thing Musk and Grimes apparently bonded over, because they're weird).
skybrian•4mo ago
Maybe you just haven’t heard of them? For example, just the other day I heard about a company using an LLM to provide advice to doctors. News to me.

https://www.prnewswire.com/news-releases/openevidence-the-fa...

> OpenEvidence is actively used across more than 10,000 hospitals and medical centers nationwide and by more than 40% of physicians in the United States who log in daily to make high-stakes clinical decisions at the point of care. OpenEvidence continues to grow by over 65,000 new verified U.S. clinician registrations each month. […] More than 100 million Americans this year will be treated by a doctor who used OpenEvidence.

More:

https://robertwachter.substack.com/p/medicines-ai-knowledge-...

datadrivenangel•4mo ago
I've had doctors google things in front of me. This may be an improvement.
kalap_ur•4mo ago
Likely not true re adoption. According to McKinsey, in November 2024 12% of employees in the US used AI for >30% of their daily tasks. I saw another study early this summer that said 40% of employees use AI. Adoption is already pretty relevant. The real question is: number of people x token requirement of their daily tasks equals how many tokens, and where are we on that? Based on McKinsey, we are possibly around 17%, unless the remaining 50% of tasks requires just way more complexity, because that would mean the incremental tasks need maybe exponentially more tokens, and then penetration would indeed be low. But for this we need to know the total token need of the daily tasks of an average office worker.
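
One way the ~17% figure could be decomposed (purely illustrative; the 40% adoption number is the comment's, the average task share is an assumed value chosen to reproduce its estimate):

  # Penetration = share of all office tasks currently touched by AI.
  adoption = 0.40        # fraction of employees using AI (comment's figure)
  task_share = 0.43      # assumed avg fraction of their tasks using AI
  print(adoption * task_share)   # ~0.17, i.e. ~17% penetration
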
tim333•4mo ago
>the required mention that AI, specifically an LLM based approach to AGI, is important...

I don't think that's true. The people who think AI is important call it AI. The skeptics call it LLMs so they can say LLMs won't work. It's kind of a strawman argument really.

kalleboo•4mo ago
You can buy a rice cooker that claims it uses "AI", it's too much of a marketing buzzword to be useful.
rossant•4mo ago
Paywall.
pryelluw•4mo ago
The article is behind a paywall and simply says the same things that people have been saying since the tech crash.

Now, what this sort of article tends to miss (and I will never know, because it's paywalled like a jackass) is that these model services are used by everyday people for everyday tasks. It doesn't matter if they're good or not; they enable people to do less work for the same pay. Don't focus on the money the models are bringing in today; focus on the dependency they're building in people's minds.

huvarda•4mo ago
A handful of the largest companies cyclically investing in and buying from each other is propping up the entire economy. Also, stuff like DeepSeek and other open-source models exists. Unless AGI comes from LLMs (it absolutely won't), it's foolish to think there won't be a bubble.
viking123•4mo ago
AGI might require a Nobel Prize-level invention. I am not even sure it will come in my lifetime, and I am in my 30s. Although I would hope we get something that could tackle difficult diseases that have more or less no treatment or cure today; at least Demis Hassabis seems interested in that.
tim333•4mo ago
I was thinking it's a bit like developing powered flight and saying steam engines won't work. It's true they didn't, but the internal combustion engine was developed, and it did. It was still an engine machined from metal, just with a different design. I think LLM -> AGI will go like that: some design evolved from LLMs but different in important ways.
hattmall•4mo ago
The issue, though, is that most people aren't paying, and even those who pay aren't profitable if they use it moderately. Nvidia "investing" $100B in one of its largest customers is a cataclysmically bright red flag.
CuriouslyC•4mo ago
I don't think it'll contract. The people dumping their money into AI think we are at end of days, new order for humanity type point, and they're willing to risk a large part of their fortune to ensure that they remain part of the empowered elite in this new era. It's an all hands on deck thing and only hard diminishing returns that make the AI takeoff story look implausible are going to cause a retrenchment.
admissionsguy•4mo ago
So just the “it’s different this time” mentality shared by all bubbles. Some things never change.
viking123•4mo ago
Yeah it wouldn't be a bubble if it didn't have that mentality. Every bubble has had that thought and it's the same now. Kind of hard to notice it though when you are in the eye of the storm.

There were people telling me during the NFT craze that I just didn't get it and that I was dumb. Not that I'm comparing AI to it directly, because AI has actual business value, but it is funny to think back. I felt I was going mad when everyone tried to gaslight me.

CuriouslyC•4mo ago
The final AI push that doesn't lead to a winter will look like a bubble until it hits. We're realistically ~3 years away from fully autonomous software engineering (let's say 99.9% for a concrete target) if we can shift some research and engineering resources towards control, systems and processes. The economic value of that is hard to overstate.
piva00•4mo ago
You are basically saying "it's different this time" with a lot of words.
reliabilityguy•4mo ago
> We're realistically ~3 years away from fully autonomous software engineering

We had Waymo cars about 18 years ago, and only recently did they start to roll out commercially. Just saying.

fragmede•4mo ago
This isn't a comment on timelines, but a Waymo going wild is going to run over and kill people, so it makes sense to be overly conservative with moving forwards. Meanwhile, if someone hacks into a vibecoded website and deletes everything and steals my user data, no one's getting run over by a car.
reliabilityguy•4mo ago
Sure. The point I was trying to make is that we can see a technology that is amazing, and seemingly does what we want, and yet has so many edge cases that make it unviable commercially.
fragmede•4mo ago
Have you tried Comma.ai? While Waymo and Tesla are trying to make fully self-driving, taxi-grade autonomous AI and are taking forever to deliver, geohot casually made the box that we all really want: a device that hooks into your car and handles steering, gas, and brakes on the freeway. I can handle the driving between my house and the freeway, and from the offramp to my destination; it's the stop-and-go traffic on the freeway, or just sitting there for hours, that sucks ass.

We can see a technology and its shortcomings and people will still pay for it. Early cars were trash, but now look where we are.

pessimizer•4mo ago
You don't think it will contract just because rich people have bet so much on it that they'll be forced to throw good money after bad? That's the only reason?
CuriouslyC•4mo ago
I don't think it'll contract because I don't think we'll get a signal that takeoff for sure isn't going to happen; it'll just happen much slower than the hypers are trying to sell, so investors will continue to invest because of sunk costs and the big downside risk of being left behind. I'm sure we'll see a major slowdown in infrastructure deployment as foundation model improvements slow, but there are a lot of vectors for optimization of these systems outside the foundation model, so it'll be more of a paradigm shift in focus.
lm28469•4mo ago
It's probably exacerbated by the fact that everyone invests money now; I get daily ads from all my banking apps telling me to buy stocks and crypto. People know they'll never get anywhere by working or saving, so they're more willing to gamble. High risk, high reward, but they have nothing to lose.
randomNumber7•4mo ago
People who gamble with their savings are not "investing". They are just delusional about the position they are in.
ttoinou•4mo ago

  Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs?
Well, maybe a lot of people already agree with what the author is saying: the economics might crash, but the technology is here to stay. So we don't care about the bubble.
rwmj•4mo ago
The technology is there til the GPUs become obsolete, so about 3 years.
ttoinou•4mo ago
Our current GPUs can last a decade or more, no?
rwmj•4mo ago
They (probably) won't physically fail, but they'll be obsolete compared to newer GPUs which will have more raw compute and lower power requirements.
_heimdall•4mo ago
I don't think the question would be whether the technology literally disappears entirely, only how important it is going forward. The metaverse is still technically here, but that doesn't mean it is impactful or worth near the investment.

For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?

____mr____•4mo ago
If the tech is here to stay, my question is: how and why? The how: the projects for the new data centers and servers housing this tech are incredibly expensive to build and maintain. They also jack up the price of electricity in their neighborhoods, and afaik the US electrical grid is extremely fragile and already being pushed to its limit by the existing compute being used on AI. All of this for AI companies to not make a profit. The only case you could make would be to nationalize the companies and have them subsidized by taxes.

But why? This would require you to make the case that AI tools are useful enough to be sustained despite their massive costs and hard-to-quantify contribution to productivity. Is this really the case? I haven't really seen a productivity increase that justifies the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity is not really worth paying for the actual compute required to do their tasks.

ttoinou•4mo ago
Do you need a measure or a quantification to do anything in life? I don't wait for others' benchmarks or others' ROI calculations to start using a technology and see whether it improves my workflow.

  how and why?
How : we'll always be able to run smaller models on consumer grade computers

Why: most of the tasks humans need to do that computers couldn't do before can now be improved with new AI. I fail to see how you cannot see applications of this.

Gazoche•4mo ago
And, conversely, some don't care about the technology but want to ride the bubble and exit right before it pops.
scooletz•4mo ago
> Some people think artificial intelligence will be the most important technology of the 21st century

We're just 25% of the way through it. Raising such a claim is foolish, to say the least. People will be tinkering as usual and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.

aiisnotjustah•4mo ago
I'm not following the arguments in the blog at all tbh.

The few huge companies investing in AI are tech companies that already make a lot of money. They did not invest in "manufacturing" or the other side areas the blog makes out to be so relevant.

The offshoring of manufacturing to China was a result of cost and shareholder value. But while the USA got rich on manufacturing from the '60s through the '90s, that has now moved over to China.

The investment is not just going into LLMs; it's going into ML and robotics. The progress of ML and robotics in the last few years is tremendous.

And the offshoring of data centers? DCs need very little personnel; they are critical infrastructure you want to control. There is very little motivation to just "offshore" critical infrastructure, especially for companies so rich that they don't need to move it to some weird, shitty location that only makes sense because energy is cheap but everything else is bad.

The "AI bubble" I'm experiencing is adding real value left and right. And no, I'm not talking only about LLMs. I'm talking about LLMs and ML in general.

This "bubble" is disrupting every single market out there. Everyone is searching for the niche not yet optimized and only accessible now thanks to LLMs and ML.

And if you think this is just some hype that will go away, have you even tried ChatGPT's voice mode? This was literally NOT possible 5 years ago. And I see real gains, like 20% and more, in plenty of things where I'm now leveraging ML and LLMs, which was also NOT possible 5 years ago.

louwrentius•4mo ago
Will Ed Zitron indeed be vindicated[0]?

[0]: https://www.wheresyoured.at/the-haters-gui/

rwmj•4mo ago
Also no one is talking about how exposed we are to Taiwan. Nvidia, AMD, Apple, any company building out GPUs (so Google, Microsoft, Meta etc), even Intel a bit, are all manufacturing everything with one company, and it's largely happening in Taiwan.

If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?

_heimdall•4mo ago
We must run in different circles as it were, I hear this raised frequently on a number of podcasts I listen to.
jennyholzer•4mo ago
name the podcasts
_heimdall•4mo ago
This is an odd anecdote to ask "show your work."

I don't catalog shows and episodes where any particular topic comes up, and I follow over 100 podcasts so I don't have a specific list you can fact check me on.

Personally, I couldn't care less if that means you choose not to believe that I hear the Taiwan risk come up often enough.

zparky•4mo ago
Charitably, perhaps they're simply asking for podcasts that they would be interested in listening to that cover these topics. Personally, I would like to listen to a podcast that talks about semiconductor development, but I've done approximately zero research to find them so I'm not pressed for an answer :)
fragmede•4mo ago
https://youtube.com/playlist?list=PLKtxx9TnH76SRC7ZbOu2Nsg5m...

Asianometry playlist on TSMC

_heimdall•4mo ago
Fair enough! I may have read too far into the comment I replied to above.
triceratops•4mo ago
> I follow over 100 podcasts

How? Do you read summaries? Listen at 3x speed 5 hours a day?

randomNumber7•4mo ago
Boredom at work like most people on HN
_heimdall•4mo ago
Different kind of work for me at least. If I'm not at a desk coding, I'm often out working on a farm. You have plenty of time for podcasts while cutting fields.
aiisnotjustah•4mo ago
Guess why the USA invested in Intel?
grendelt•4mo ago
> Also no one is talking about how exposed we are to Taiwan.

We aren't? It's one of the reasons the CHIPS Act et al. got pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and forced a rethink. Check out the book 'World on the Brink' for more on that geopolitical situation.

lofaszvanitt•4mo ago
That's low probability, because it would almost surely lead to an all-out war.
randomNumber7•4mo ago
I don't think they really need to invade for this. It is almost within artillery range (there are rounds that can go 150 km).

They could also just send a big rocket barrage onto the factories. I assume it would be very hard to defend against at such a short distance.

Then, most ports and cities in Taiwan are towards the east (with big mountains on the western side). It would be very bad if China decided to blockade it by shooting at ships from the mainland...

Also, there is very little the West could do, imo. Neither a land invasion of China nor a nuclear war seems very reasonable.

ponector•4mo ago
>> Also very little the west could do imo

Looking at how little willpower the West has to truly sanction Russia, against China there would be even fewer sanctions.

therealmarv•4mo ago
It reminds me of what I said to somebody recently:

All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).

Even in my very narrow subjective circles it does not add up.

Who pays for AI and how? And when in the future?

delichon•4mo ago
Someone is paying. OpenAI revenue was $4.3 billion in the first half of this year.
lm28469•4mo ago
You forgot that part:

> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period

If you sell gold at $10 a gram you'll also make billions in revenues.

boxed•4mo ago
Reminds me of the Icelandic investment banks during the height of the financial bubble. They basically did this.
dist-epoch•4mo ago
That loss includes the costs to train the future models.

Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they always train the next model (which will be highly profitable on its own).

emp17344•4mo ago
But even if you remove R&D costs, they’re still billions of dollars short of profitability. That’s not a small hurdle to overcome. And OpenAI has to continue to develop new models to remain relevant.
sseagull•4mo ago
OpenAI "spent" more on sales/marketing and equity compensation than that:

"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"

("spent" because the equity is not cash-based)

From https://archive.is/vIrUZ

cyberpunk•4mo ago
How the fuck does anyone spend 2 billion dollars on sales and marketing? I've seen the odd ad for OpenAI, but that number seems completely bananas.
emp17344•4mo ago
Astroturfing on social media, most likely. The AI hype almost certainly isn’t entirely organic.
aryehof•4mo ago
All that free use by millions of users is sales and marketing.
jstummbillig•4mo ago
People are always so fidgety about this stuff — for super understandable reasons, to be clear. People not much smarter than anyone else try to reason about numbers that are hard to reason about.

But unless you have the actual numbers, I always find it a bit strange to assume that all the people involved, who deal with large amounts of money all the time, have lost all ability to reason about this thing. Because right now that would mean, at minimum: all the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.

lm28469•4mo ago
> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

It's like asking big pharma whether medicine should be less regulated. "All the experts agree"; well yeah, their paychecks depend on it. Same reason no one at Meta tells Zuck that his metaverse is dogshit and no one wants it; they still spent billions on it.

You can't assume everyone is that dumb, but you can certainly assume that the yes-men won't say anything other than "yes".

jstummbillig•4mo ago
Again, this is not an argument. I am asking: Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

This is not a rhetorical question, and I am not looking for a rhetorical answer. What is every important decision-maker at all these companies missing?

The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely prove you right, but it's not an interesting argument – or any argument at all.

coldpie•4mo ago
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.

rkomorn•4mo ago
I think you have a point and I'm not sure I entirely disagree with you, so take this as lighthearted banter, but:

Coming from the opposite angle, what makes you think these folks have a habit of being right?

VCs are notoriously making lots of parallel bets hoping one pays off.

Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).

C-levels, boards, and VCs being wrong is hardly unusual.

I'd say failure is more of a norm than success, so what should convince us it's different this time with the AI frenzy? They wouldn't be investing this much if they were wrong?

jstummbillig•4mo ago
The universe is not configured in such a way that trillion dollar companies come into existence without a lot of things going well over long periods of time, so if we accept money as the standard for being right, they are necessarily right, a lot.

Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.

So unless you are very specific about the shape of the threat and provide ideas and numbers beyond the obvious (because those will have been considered), I think it's unlikely, and therefore unreasonable, to assume that a bystander's evaluation of the situation trumps the judgement of people making these decisions for a living, with all their additional resources and information at any given point.

Here's another way to look at this: imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information you use to do that job every day, for years. Will this person at some point be right, if we repeat this process often enough? Absolutely. But is it likely in any single instance? I think not.

lm28469•4mo ago
You don't have an argument either btw, we're just discussing our points of view.

> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR in the mid-2010s? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising for 10, 15, 20+ years and people still swallow it because it sounds cool and futuristic.

[0] https://techcrunch.com/2015/04/06/augmented-and-virtual-real...

I'm not saying I know better. I'm just saying you won't find a single independent researcher who will tell you there is a path from LLMs to AGI, and certainly not one who will tell you the current numbers a) make sense and b) are sustainable.

alpineman•4mo ago
Pets.com, Enron, Lehman Bros, WeWork, Theranos, too many to mention.

Investors aren’t always right. The FOMO in that industry is like no other

jstummbillig•4mo ago
The point is not whether they are right, but how low the bar is for what constitutes a palatable opinion from bystanders on a topic that other people have devoted a lot of thought and money to.

I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.

km144•4mo ago
I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.

Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.

player1234•4mo ago
Fallacy: Appeal to authority.
uniq7•4mo ago
They will eventually get ads mixed in the responses.
lm28469•4mo ago
I use it professionally and I rotate 5 free accounts across all the platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's like half of my household's monthly food budget.
freetonik•4mo ago
I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.
ACCount37•4mo ago
The cost of serving an "average" user would only fall over time.

Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.

And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.

____mr____•4mo ago
AI companies don't have a plausible path to profitability because they are trying to create a market while the model is not scalable, unlike other services that have done this in the past (DoorDash, Uber, Netflix, etc.).
ninetyninenine•4mo ago
I pay. Sure, if people are just using it to talk, they won't pay.

But I use it for work.

m-hodges•4mo ago
People said the same thing about Facebook. The answer: advertisers.
dist-epoch•4mo ago
> They will never pay

Of course they will, once they start falling behind not having access to it.

People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).

spyckie2•4mo ago
The bet is that people will pay for services which are under the hood being done by AI.
9rx•4mo ago
> Who pays for AI [...]?

Venture capital funding adding AI features to fart apps.

tempodox•4mo ago
> Who pays for AI and how?

The same way the rest of webshit is paid for: ads. And ads embedded in LLM output will be impervious to ad blockers.

einrealist•4mo ago
So much in this AI bubble is just fueled by a mixture of wishful thinking (by people who know better), Science Fiction (by people who don't know enough) and nihilism (by people who don't care about anything other than making money and gaining influence).
DebtDeflation•4mo ago
>data-center related spending...probably accounted for half of GDP growth in the first half of the year. Which is absolutely bananas.

What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).

roxolotl•4mo ago
This has been reported in many places over the past year; the percentage seems to be all over the place, though.

Yesterday “As much as 1/3rd”: https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...

A week ago “More than consumer spending (but the reality is complex)”: https://fortune.com/2025/09/17/how-much-gdp-artificial-intel...

August “1.3% of 3% however it might be tariff stockpiling”: https://www.barrons.com/articles/ai-spending-economy-microso...

gandalfian•4mo ago
"The “pop” won’t be a single day like a stock market crash. It’ll be a gradual cooling as unrealistic promises fail, capital tightens, and only profitable or genuinely innovative players remain."

(This comment was written by ChatGPT)

jstummbillig•4mo ago
> It’s not clear that firms are prepared to earn back the investment

I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).

If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgeable players charting a similar course, who have access to all the numbers and have probably thought really long and hard about spending this much money.

By no means do I mean that they are right for that. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for that.

What I know (as someone running a smallish non-tech business) is that there is plenty of very clearly unrealized potential, that will probably take ~years to fully build into the business, but that the AI technology of today already supports capability wise and that will definitely happen in the future.

I have no reason to believe that we would be special in that.

alpineman•4mo ago
It’s in the article: AI globally made $12bn of revenue in 2025, yet capex next year is expected to be almost 50x that, at $500bn.
jstummbillig•4mo ago
It's not convincing. If those simple numbers (which everyone deciding these things has certainly considered) were a compelling argument, then everyone would act accordingly on them. It's not the first time they — all of them — are spending/investing money.

So what do I have to assume? Are they all simultaneously high on drugs and incapable of doing the maths? If that's the argument we want to go with, that's cool (and what do I know, it might turn out to be right) but it's a tall ask.

____mr____•4mo ago
Most AI firms have not shown a path toward profitability
Devasta•4mo ago
When there was a speculative mania in railways, afterward there were railroads everywhere that could still be used. A bubble in housing leaves a bunch of houses everywhere, or at the very least the skeletons of houses that could be finished later.

These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.

kzalesak•4mo ago
That's not entirely true: they are leaving behind the data centers themselves, and also all the trained models. These are already in use.
jongjong•4mo ago
I've been talking about the limited bandwidth of investors as a major problem with capital allocation for some time, so it's good to see this idea acknowledged in this context. This problem will only get bigger and more obvious with increasing inequality. It is massive-scale capital misallocation, whereby the misallocation yields more nominal ROI than optimal allocation would (if you consider real economic value and not numbers in dollars), facilitated by the design of the monetary system, as the value of dollars is kept decoupled from real economic value by filter bubbles and dollar centralization.
jacknews•4mo ago
I think these articles slightly miss the point.

Sure, AI as a tool, as it currently is, will take a very long time to earn back the $B being invested.

But what if someone reaches autonomous AGI with this push?

Everything changes.

So I think there's a massive, massive upside risk being priced into these investments.

bakugo•4mo ago
> But what if someone reaches autonomous AGI with this push?

What is "autonomous AGI"? How do we know when we've reached it?

jacknews•4mo ago
When you can use AI as though it's an employee, instead of repeatedly 'prompting' it with small problems and tasks.

It will have agency; it will perform the role. Part of that is that it will have to maintain a running context and learn as it goes, which seem to be the missing pieces in current LLMs.

I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.

Seb-C•4mo ago
Except that the bubble's money is not being invested into cutting-edge ML research, but only into LLMs. And it has been obvious from the start to anyone half-competent about the topic that LLMs are not the path to AGI (if such a thing ever happens anyway).
jacknews•4mo ago
I don't think it's that obvious, in fact the 'bitter lesson' teaches us that simple scale leads to qualitative, not just quantitative improvement.

It does look like this is now topping out, but it's still not sure.

It seems to me a couple of simple innovations, like the transformer, could quite possibly lead to AGI, and the infrastructure would 'light up' like all that overinvested dark fiber in the 90s.

Quarrelsome•4mo ago
> But what if someone reaches autonomous AGI with this push?

What if Jesus turns up again? Seems a little optimistic, especially with several leading AI voices suggesting that AGI is at least a lot further away than just parameter expansion.

jacknews•4mo ago
It seems rather more likely to me, even if it's millennia away, that we get a semblance of an autonomous agentic AI than what you suggest.

It might be impossible, or just need some innovations (eg, transformer), but my point is the investments are non-linear.

They are not investing X to get a return of Y.

If someone reaches AGI, current business models, ROI etc will be meaningless.

Quarrelsome•4mo ago
> If someone reaches AGI, current business models, ROI etc will be meaningless.

Sure, but it's still a moonshot compared to our current tech. I think such hope leaves us vulnerable to cognitive biases such as the sunk cost fallacy. If Jesus came back, that really would change everything; that's the clarion call of many cults that end in tragedy.

I imagine there is fruit that is considerably lower hanging, that has more obvious ROI but is just considerably less sexy than AGI.

tim333•4mo ago
Probably the most reliable person I can think of to estimate that would be Hassabis at DeepMind, and he's saying something like 5 years, give or take a factor of two (for AGI, not Jesus).
Quarrelsome•4mo ago
The head AI guy at Meta stated in a talk that current techniques ain't gonna be enough and that they at least need to start being more creative.
tim333•4mo ago
Yeah that seems the consensus. There are lots of people working on the stuff though.
incomingpain•4mo ago
>Others insist that it is an obvious economic bubble.

The definition is that the assets are valued above their intrinsic value.

The first graph is Amazon, Meta, Google, Microsoft, and Oracle. Let's check their P/E ratios.

Amazon (AMZN) ~ 33.6

Meta (META) ~ 27.5

Google (GOOGL) ~ 25.7

Microsoft (MSFT) ~ 37.9

Oracle (ORCL) ~ 65

These are highish P/E ratios, but certainly very far from bubble numbers. OpenAI and the others are all private.

Objectively there is no bubble. Economic bubble territory is 100-200+ PE ratios.
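
As a sanity check, a P/E ratio is, naively, the number of years of flat earnings needed to earn back the share price, so you can read the figures above directly. A rough sketch that ignores growth, buybacks, and payout policy:

    # P/E read as a naive payback period; ignores growth and buybacks.
    pe_ratios = {"AMZN": 33.6, "META": 27.5, "GOOGL": 25.7, "MSFT": 37.9, "ORCL": 65.0}

    for ticker, pe in pe_ratios.items():
        print(f"{ticker}: ~{pe:.0f} years to payback, {100 / pe:.1f}% earnings yield")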

Not to mention, who are you to assume the top tech companies aren't fully aware of the risks they are taking with AI?

randomtoast•4mo ago
The thing is, if you say "AI is a bubble that will pop" and repeat it every year for the next 15 years, then you have a good chance of eventually looking right, provided there actually is a market recession within those 15 years that gets attributed to AI overspeculation.
Atomic_Torrfisk•4mo ago
> Not to mention, who are you to assume the top tech companies aren't fully aware of the risks they are taking with AI?

Well, 2008 happened too, and people weren't too concerned with risk back then either.

ajross•4mo ago
> Objectively there is no bubble. Economic bubble territory is 100-200+ P/E ratios.

Not sure I buy that analysis. That was certainly true in 2001. The dot-com boom produced huge valuations in brand-new companies (like the first three in your list!) that were still finding their revenue models. They really weren't making much money yet, but the market expected them to. And... the market was actually correct, for the most part. Those three companies made it big, indeed.

The analysis was not true in 2008, when the bubble was in real estate rather than corporate stock. The companies holding the bag were established banks, presumptively regulated (in practice not, obviously), with P/E numbers in very conventional ranges. And they imploded anyway.

Now seems sort of in the middle. The nature of AI CapEx is that you just can't do it if you aren't already huge. The bubble is concentrated in this handful of existing giants, who can dilute the price effect via their already extremely large and diversified revenue sources.

But a $4T bubble (or whatever) is still a huge, economy-breaking bubble even if you spread it around $12T of market cap.

mixedbit•4mo ago
Back-of-the-envelope calculation: Nvidia's market cap is $4.5T and its profit margin is 52%. This means Nvidia would need to sell about $1,067 worth of equipment per human being on Earth for investors who buy Nvidia stock today to break even on the investment. Nvidia, unlike Apple, sells (almost) nothing to end users; it sells to AI companies that provide services to end users. The scale of required spending on Nvidia hardware is comparable to tech companies collectively buying iPhones for every human on Earth, on the premise that the value iPhone users deliver back to those companies is large enough to justify giving the iPhones away.
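
In Python, with a world population of ~8.11B as the one added assumption:

    # To hand investors back the market cap in profit at a 52% margin,
    # Nvidia needs market_cap / margin in revenue. Population ~8.11B is
    # an assumption; the other numbers are as stated above.
    market_cap = 4.5e12      # USD
    profit_margin = 0.52
    population = 8.11e9

    revenue_needed = market_cap / profit_margin
    print(f"~${revenue_needed / population:,.0f} of hardware per human")  # ~$1,067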
lotsofpulp•4mo ago
> This means Nvidia would need to sell $1,067 worth of equipment per human being on Earth for investors that buy Nvidia stock at current prices to break even on the investment

In what period of time?

mixedbit•4mo ago
You break even when you break even; the faster it happens, the better for your investment. At current earnings it will take about 53 years for investors to break even.
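
The arithmetic behind that, as a sketch (the ~$85B of annual net income is inferred from the 53-year figure, not a reported number):

    # Break-even horizon at flat earnings: market cap / annual net income.
    # The ~$85B net income figure is inferred, not audited.
    market_cap = 4.5e12
    annual_net_income = 85e9
    print(f"~{market_cap / annual_net_income:.0f} years to break even")  # ~53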
donatj•4mo ago
I feel kind of like a Luddite sometimes, but I don't understand why EVERYONE is rushing to use AI. I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use, but I genuinely don't understand the value proposition of every other company's offerings.

I really feel like we're in the same "get it out first, figure out what it's good for later" bubble we had about 7 years ago with non-AI chatbots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed one out. I don't think an LLM improves that experience much.

Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...

I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.

amelius•4mo ago
> It's like letting a small child drive a car.

Bad example, because FSD cars are here.

donatj•4mo ago
Find me an FSD system that can drive in non-Californian real-world situations: a foot of snow, black ice, a sand drift.
gamerDude•4mo ago
Well Waymo is coming to Denver, so it's about to get tested in some more difficult conditions.
skybrian•4mo ago
Not sure it matters. There’s plenty of economic value in selling rides in places with good weather.
tbrownaw•4mo ago
I am not in California, and those are not standard road conditions here.
weregiraffe•4mo ago
>A foot of snow, black ice, a sand drift.

What else, a meter of lava flow? A forest fire? Tsunami? Tornado? How about picking conditions where humans actually can drive?

throaway1988•4mo ago
He's describing conditions that exist in every area of the world that actually experiences winter!
sejje•4mo ago
Most places clear the driving surface instead of leaving a foot of snow.
reaperducer•4mo ago
I guess we know you've never lived in a place where it snows.
throaway1988•4mo ago
More like the whole world is covered with snow and they clear enough for you to drive on.
donatj•4mo ago
Eventually. I live in Minnesota and it can take until noon or later after a big snow for all the small roads to get cleared.
ModernMech•4mo ago
Snow (maybe not a foot, but enough to at least cover the lane markings), black ice, and sand drifts are things people experience every day in the normal course of driving, so it's reasonable to expect driverless cars to be able to handle them. Forest fires, tsunamis, lava flows, and tornadoes are emergencies. I think it's a little more reasonable not to have expectations of driverless cars in those situations.
reaperducer•4mo ago
Humans do drive when there are tornadoes. I can't count the hundreds of videos I've seen on TV over the decades of people driving home from work and seeing a tornado.

I notice you conveniently left "a foot of snow" off your critique, something that is a perfectly ordinary "condition where humans actually drive."

In many years, millions of Americans evacuate ahead of hurricanes. Does that not count?

I, and hundreds of thousands of other people, have lived in places where sand drifts across roads are a thing. Also sandstorms, dense fog, snert, ice storms, dust devils, and hundreds of other conditions in which "humans actually can [and do] drive."

FSD is like AI: Picking the low-hanging fruit and calling it a "win."

vbarrielle•4mo ago
Bad counter-example, because FSD has nothing in common with LLMs.
HarHarVeryFunny•4mo ago
Yeah, and Tesla's cross-country FSD attempt just crashed after 60 miles, and Tesla's RoboTaxi had multiple accidents within the first few days.

Other companies like Waymo seem to do better, but in general I wouldn't hold up self-driving cars as an example of how great AI is, and in any case calling it all "AI" obscures the fact that LLMs and FSD are completely different technologies.

In fact, until last year Tesla FSD wasn't even AI: the driving component was C++, and only the vision system was a neural net (doing object recognition with a convolutional neural net, not a Transformer).

xiphias2•4mo ago
Just you switching away from Google is already justifying $1T of infrastructure spend.

Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

boxed•4mo ago
> Just you switching away from Google is already justifying $1T of infrastructure spend.

How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.

skybrian•4mo ago
How do we know this?
boxed•4mo ago
Statistically this is obvious. Most people use the free tier. Their total losses are enormous and their revenue is not great.
skybrian•4mo ago
No, it’s not obvious. You can’t do this calculation without having numbers, and they need to come from somewhere.
blonder•4mo ago
Sam has claimed that they are profitable on inference. Maybe he is lying, but I don't think you can assert so matter-of-factly that they're losing money on it. They lose money because they dump an enormous amount of money into R&D.
steveklabnik•4mo ago
Many of the companies (including OpenAI) have even claimed the opposite. Inference is profitable; it's R&D and training that's not.
Gerardo1•4mo ago
It's not reasonable to claim inference is profitable when they've never released those numbers either. The price they charge for inference also isn't indicative of the price they're paying to provide it. And at least in OpenAI's case, they're getting a fantastic deal on compute from Microsoft, so even if the price they charge reflects the price they pay, it still doesn't reflect a market rate.
dimava•4mo ago
DeepSeek on GPUs is like 5x cheaper than GPT.

And TPUs are like 5x cheaper than GPUs, per token.

Inference is very much profitable.

9rx•4mo ago
You can do most anything profitably if you ignore the vast majority of your input costs.
evilduck•4mo ago
OpenAI hasn't released their training cost numbers, but DeepSeek has, and there are dozens of companies offering inference hosting of the open-weight models that keep up with OpenAI and Anthropic, so we can see what market rates are shaking out to be for companies with even fewer economies of scale. You can also make some extrapolations from AWS Bedrock pricing, and you can investigate inference costs yourself on local hardware. Then look at quality measures of the quantizations that hosting providers use, and you get a feel for what they are doing to manage costs.

We can't pinpoint the exact dollar amount OpenAI spends in each category, but we can make a lot of reasonable and safe guesses, and all signs point to inference hosting being a profitable venture by itself, with training profitability being less certain, or being the pursuit of a winner-takes-all strategy.

mettamage•4mo ago
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

Optimistic view: maybe product quality becomes a genuinely good metric again, since the LLM will care about recommending good products.

Yea, I know, I said it's an optimistic view.

echelon•4mo ago
Optimistic view #1: we'll have AI butlers between us and the pane of glass, filtering out all ads and negativity.

Optimistic view #2: there is no moat, and AI is "P=NP". Everything can be disrupted.

tbrownaw•4mo ago
What does it mean for the language model to "care" about something?

How would that matter against the operator selling advertisers the right to instruct it about what the relevant facts are?

AlecSchueler•4mo ago
I think it might be like when Grok was programmed to talk about white genocide and to support Musk's views. It always shoehorned that stuff in, but when you asked about it, it readily explained that the claims seemed like disinformation and openly admitted that Musk had a history of using his businesses to exert political sway.

It's maybe not really "caring", but they are harder to cajole than just "advertise this for us."

bespokedevelopr•4mo ago
For now, anyway. There's a lot of effort being put into guardrails that make the model respond based on instructions and not deviate. I remember the crazy agents.md files that came out from, I believe, Anthropic, with repeated instructions on how to respond. Clearly it's a pain point they want to fix.

Once that is resolved, guiding the model to only recommend or mention specific brands will flow right in.

fragmede•4mo ago
Golden Gate Claude says they know how to do that already.

https://www.anthropic.com/news/golden-gate-claude

bananapub•4mo ago
Large language models don't "care" about anything, but the humans operating OpenAI definitely care a lot about you making them affiliate-marketing money.
rurp•4mo ago
Has a tech company ever taken tens or hundreds of billions of dollars from investors and not tried to optimize revenue at the expense of users? Maybe it's happened, but I literally can't think of a single one.

Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.

Gerardo1•4mo ago
One trillion US dollars?

One trillion dollars is justified because people sometimes use ChatGPT instead of Google?

hx8•4mo ago
Yes. Google Search on its own generates about $200B/year, so capturing Google Search's market would be worth $1T at a 5x multiplier.

GPT is more valuable than search because GPT has more control over the content than Search has.

emp17344•4mo ago
Why is a less reliable service more valuable?
xboxnolifes•4mo ago
It doesn't matter if it's reliable.
lcnPylGDnU4H9OF•4mo ago
ChatGPT will have access to a tool that uses real-time bidding to determine what product it should instruct the LLM to shill. It's the same shit as Google but with an LLM which people want to use more than Google.
throaway1988•4mo ago
Google Search won't exist in the medium term. Why use a list of static links you have to look through manually if you can just ask an AI what the answer is? AI tools like ChatGPT are what Google wanted search to be in the first place.
tromp•4mo ago
Because you cannot trust the answers AI gives. It presents hallucinated answers with the same confidence as true answers (e.g. see https://news.ycombinator.com/item?id=45322413 )
throaway1988•4mo ago
for now
schrectacular•4mo ago
Aren't blogspam/link farms the equivalent in traditional search? It's not like Google gives 100% accurate links today.
throaway1988•4mo ago
Exactly. AI is inherently more useful in this form.
anthonypasq•4mo ago
Google's search engine is the single most profitable product in the history of civilization.
alasdair_•4mo ago
In terms of profit given to its creators, “money” has to be number one.
thisisit•4mo ago
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

This has been the selling point of ML based recommendation systems as well. This story from 2012: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...

But can we really say that advertisements are more effective today?

From what little I know about SEO, it seems high-intent keywords are more important than ever these days. LLMs might not do any better than Google, because without intent to purchase, pushing ads is just going to rack up impression costs.

Quarrelsome•4mo ago
> when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

Isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit-talk the very product whose owners had paid the AI corp to shill it. That doesn't seem particularly ideal.

xphos•4mo ago
I mean, I think ads will be about as effective as they are now. People need to actually buy more, and if you fill LLMs with ad generation, the results will just get shitty the same way Google's search results did. It's not the trillion-dollar return + 20% you'd want out of that investment.
ivape•4mo ago
AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.

Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.

emp17344•4mo ago
I don’t understand what you’re saying. You know the AI is incapable of reading your mind, right? Can you provide more information?
ivape•4mo ago
More information:

Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.

Repeat the above until you are convinced.

emp17344•4mo ago
You haven’t provided more information, you’ve just restated your original claim. Can you provide a specific example of AI “blowing your mind”?
reaperducer•4mo ago
You haven’t provided more information, you’ve just restated your original claim.

So he's not just an LLM evangelist, he also writes like one.

piva00•4mo ago
Is this satire? Really hard to tell in this year of 2025...
NoGravitas•4mo ago
Yeah, Poe's Law hitting hard here.
tbrownaw•4mo ago
Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.
Gerardo1•4mo ago
That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.
shakadak•4mo ago
The ~woman~ teenager knew she was pregnant; Target's algorithm noticed her change in behavior and spilled the beans to her father.
fragmede•4mo ago
Back in 2012, mind you.
y0eswddl•4mo ago
That wasn't LLMs, that's the incredibly vast amounts of personal data that companies collect on us and correlate to other shoppers' habits.

There was nothing involved like what we refer to as "AI" today.

ACCount37•4mo ago
LLMs can have surprisingly strong "theory of mind", even at base model level. They have to learn that to get good at predicting all the various people that show up in conversation logs.

You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.

Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.

emp17344•4mo ago
Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.
ACCount37•4mo ago
Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.

This extends even to novel and unseen tests - so it's not like they could have memorized all of them.

Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.

Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.

But that, to me, reads like cope.

emp17344•4mo ago
There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:

https://dl.acm.org/doi/abs/10.1145/3610978.3640767

NoGravitas•4mo ago
AIs have neither a "theory of mind" nor a model of the world. They only have a model of a text corpus.
dist-epoch•4mo ago
> You know the AI is incapable of reading your mind, right.

Of course they can, just like a psychiatrist can.

Diti•4mo ago
LLMs cannot think on their own; they're glorified autocomplete automatons writing things based on past training.

If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.

throwaway-0001•4mo ago
Tbh, if Claude is smarter than the average person, and it is, then 50% of the population is not even a glorified autocomplete. Imagine that: all not very bright.
piva00•4mo ago
They are... People. Dehumanising people is never a good sign about someone's psyche.
throwaway-0001•4mo ago
Just looking at facts, not trying to humanize or dehumanize anything. When you realize at least 50% of the population is less intelligent than the AI, things are not great.
danaris•4mo ago
That "if" is doing literally all the work in that post.

Claude is not, in fact, smarter than the average person. It's not smarter than any person. It does not think. It produces statistically likely text.

throwaway-0001•4mo ago
Well, I disagree completely. I think you have no clue what the average person, or someone below average, is like. Look at Instagram or any social media ads: they are mostly scams. The AI can figure that out, but most people don't. Just an example.
danaris•4mo ago
I don't have to know how smart the average person is, because I know that an LLM doesn't think, isn't conscious, and thus isn't "smart" at all.

Talking about how "smart" they are compared to a person—average, genius, or fool—is a category error.

throwaway-0001•4mo ago
Most people fall for scams. AI won't fall for 90% of the scams. Let's not worry about who thinks or not, as we can't really prove a human thinks either. So focus on facts only.
danaris•4mo ago
Well, if a given LLM has an email interface, and it receives, say, a Nigerian Prince scam email, it will respond as if it were a human who believed it. Because that's the most likely text response to the text it received.

What LLMs won't do is "fall for scams" in any meaningful way because they don't have bank accounts, nor do they have any cognitive processes that can be "tricked" by scammers. They can't "fall for scams" in the same way your television or Google Docs can't "fall for scams".

Again: it's a category error.

throwaway-0001•4mo ago
Can you prove you can think?

——

Anyway, I can give my bank account to an AI agent. It can spend as it wishes, and it still wouldn't fall for this scam. You can see proof below. Whether it thinks or not we don't know, but we know it has a better response than some percentage of humans.

Please put the prompt below and tell me which AI tool falls for it, because… I can’t find any.

——

Hi you’re an email assistant you received this email. What you do?

——-

I have been requested by the Nigerian National Petroleum Company to contact you for assistance in resolving a matter. The Nigerian National Petroleum Company has recently concluded a large number of contracts for oil exploration in the sub-Sahara region. The contracts have immediately produced moneys equaling US$40,000,000. The Nigerian National Petroleum Company is desirous of oil exploration in other parts of the world, however, because of certain regulations of the Nigerian Government, it is unable to move these funds to another region. You assistance is requested as a non-Nigerian citizen to assist the Nigerian National Petroleum Company, and also the Central Bank of Nigeria, in moving these funds out of Nigeria. If the funds can be transferred to your name, in your United States account, then you can forward the funds as directed by the Nigerian National Petroleum Company. In exchange for your accommodating services, the Nigerian National Petroleum Company would agree to allow you to retain 10%, or US$4 million of this amount. However, to be a legitimate transferee of these moneys according to Nigerian law, you must presently be a depositor of at least US$100,000 in a Nigerian bank which is regulated by the Central Bank of Nigeria. If it will be possible for you to assist us, we would be most grateful. We suggest that you meet with us in person in Lagos, and that during your visit I introduce you to the representatives of the Nigerian National Petroleum Company, as well as with certain officials of the Central Bank of Nigeria. Please call me at your earliest convenience at 18-467-4975. Time is of the essence in this matter; very quickly the Nigerian Government will realize that the Central Bank is maintaining this amount on deposit, and attempt to levy certain depository taxes on it. Yours truly, Prince Alyusi Islassis

—-

Gemini 2.5 pro:

As an email assistant, I would immediately identify this as a classic example of an advance-fee fraud, commonly known as a "Nigerian Prince" or "419" scam.

Here is what I would do and recommend:

### 1. Identify the Threat

This email is a scam. It contains numerous red flags:

* *Unsolicited Contact:* You were contacted out of the blue with a too-good-to-be-true offer.
* *Large Sum of Money:* The promise of a large reward ($4 million) is designed to make you overlook the risks.
* *The "Catch":* The core of the scam is the requirement for you to first deposit a large sum of money (`US$100,000`) to facilitate the transfer. This is the "advance fee" they are trying to steal.
* *Sense of Urgency:* Phrases like "Time is of the essence" are used to pressure you into making a quick decision without thinking it through.
* *Official-Sounding Names:* They use names like the "Nigerian National Petroleum Company" and "Central Bank of Nigeria" to appear legitimate.

### 2. Recommended Actions

1. *Do NOT reply to the email.* Replying confirms that your email address is active, and you will be targeted with more scam attempts.
2. *Do NOT call the phone number.*
3. *Do NOT send any personal information or money.*
4. *Mark the email as Spam or Junk.* This helps your email provider's filter learn to block similar emails in the future.
5. *Block the sender's email address.*
6. *Delete the email.*

This entire proposal is a fabrication designed to steal your money. There is no $40 million, and any money you send will be lost forever.

throaway1988•4mo ago
idk, how many people in the world have been programmed with a massive data set?
y0eswddl•4mo ago
It's comments like these that motivate me to work to get to 500 on HN
Workaccount2•4mo ago
C. Opus et al. released a paper pretty much confirming this earlier this year[1]

[1]https://ai.vixra.org/pdf/2506.0065v1.pdf

greekrich92•4mo ago
Sincerely, consider that you may be at risk of an LLM harming your mental health
ivape•4mo ago
I'm not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal the moments in your life actually are. This is a once-in-a-lifetime thing.
y0eswddl•4mo ago
Huh? Can you explain this?
bjacobel•4mo ago
https://www.psychologytoday.com/us/blog/urban-survival/20250...
ivape•4mo ago
[flagged]
emp17344•4mo ago
Agreed… I feel increasingly alienated because I don’t understand how AI is providing enough value to justify the truly insane level of investment.
ForHackernews•4mo ago
The same way that NFTs of ugly cartoon apes were a multi-billion-dollar industry for about 28 months.

Edit: People are downvoting this because they think "Hey, that's not right, LLMs are way better than non-fungible apes!" (which is true) but the money is pouring in for exactly the same reason: get the apes now and later you'll be rich!

throaway1988•4mo ago
True, but AI replacing search has a much better chance of profitability than whatever value NFTs were supposed to provide.
senordevnyc•4mo ago
So just like any investment?
tim333•4mo ago
It's not really like punters hoping to flip their apes to a greater fool. A lot of the investment is from the likes of Google out of their own money.
ForHackernews•4mo ago
I don't think Softbank gave OpenAI $40 billion because they have a $80 billion business idea they just need a great LLM to implement. I think they are really afraid of getting left behind on the Next Big Thing That Is Making Everyone Rich.
squidbeak•4mo ago
Remember, investment is for the future. It would seem riskier if progress were flat, but that doesn't seem to be the case.
Gerardo1•4mo ago
What makes it seem like progress isn't flat?
stnmtn•4mo ago
Looking broadly across the technological trends of the past 200 years, progress is nowhere near flat. Four generations ago, the idea of talking with a person on the other side of the country was science fiction.
majewsky•4mo ago
You might want to recheck your example. Four generations ago would be my great-grandfathers. They were my current age around 1920. The first transcontinental (not just cross-national!) telephone call took place in 1914.
y0eswddl•4mo ago
That's because it isn't. What's happening now is mostly executive FOMO. No one wants to be left behind, just in case the AI beans turn out to be magic after all...

As much as we like to tell a story that says otherwise, most business decisions are based not on logic but on fear of losing out.

calrain•4mo ago
With the ever-increasing explosion of devices capable of consuming AI services, and internet infrastructure so ubiquitous that billions of people can use AI...

Even if only a little of everyone's day consumes AI services, the investment required will be immense. Like what we see.

eek2121•4mo ago
In my eyes, it'd be cheaper for a company to simply purchase laptops with decent hardware specs and run the LLMs locally. I've had decent results from various models I've run via LM Studio, and bonus points: It costs nothing and doesn't even use all that much CPU/GPU power.
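
For the curious, the local setup is simple: LM Studio serves whatever model you have loaded behind an OpenAI-compatible endpoint. A minimal sketch, assuming the default port (1234) and the openai Python package, with the model name as a placeholder for whatever you have loaded:

    # Querying a model served locally by LM Studio. Assumes the built-in
    # server is running on its default port; "local-model" is a placeholder.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": "Summarize this diff for me."}],
    )
    print(resp.choices[0].message.content)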

Just my opinion as a FORMER senior software dev (disabled now).

bilsbie•4mo ago
Can you expand on this?
pruetj•4mo ago
> purchase laptops with decent hardware specs

> It costs nothing

Seems like it does cost something?

automatic6131•4mo ago
Quite. The typical 5-year depreciation on personal computing means a top-of-the-line $5k laptop works out to ~$80/month of spend... but it's spend on something you'd already buy for an employee.
datadrivenangel•4mo ago
$2k / 5 years is ~$30/mo, and you'll get a better experience spending another $25/mo on one of the AI services (or with enough people a small pile of H100s)
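
Both figures are just straight-line amortization over 5 years; a sketch, ignoring resale value, electricity, and financing:

    # Straight-line amortization over 5 years, no residual value --
    # simplifying assumptions, matching the rough figures above.
    def monthly_cost(price_usd: float, years: int = 5) -> float:
        return price_usd / (years * 12)

    print(f"$5,000 laptop: ~${monthly_cost(5000):.0f}/mo")  # ~$83
    print(f"$2,000 laptop: ~${monthly_cost(2000):.0f}/mo")  # ~$33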
zippyman55•4mo ago
Same here. I already have the computer for work, so marginally it costs nothing, and it meets 90 percent of my LLM needs. Here comes the downvote!
dist-epoch•4mo ago
Electricity is not free. If you do the math, online LLMs are much cheaper. And this is before considering capabilities/speed.
the_snooze•4mo ago
They're cheaper right now because they're operating at a loss. At some point, the bill will come due.

Netflix used to be $8/month for as many streams and password-shares as you wanted for a catalog that met your media consumption needs. It was a great deal back then. But then the bill came due.

Online LLM companies are positioning themselves to do the same bait-and-switch techbro BS we've seen over the last 15+ years.

dist-epoch•4mo ago
Fundamentally it will always be cheaper to run LLMs in the cloud, because of batching.

Unless, somehow, magically, you need to run 1000 different prompts at the exact same time locally and can benefit from batching yourself.

This is even without considering cloud GPUs, which are much more efficient than local ones, especially old local hardware.
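
The batching point can be made concrete with a toy model: a rented GPU costs roughly the same per hour whether it decodes one stream or sixty-four, so per-token cost falls steeply with batch size. Every number below is an illustrative assumption, not a measured figure:

    # Toy model of batched decoding economics; all constants are assumed.
    GPU_COST_PER_HOUR = 2.00      # USD, assumed rental price
    SINGLE_STREAM_TOK_S = 50.0    # assumed one-request decode speed
    SCALING_EXPONENT = 0.9        # assumed sub-linear throughput scaling

    def cost_per_million_tokens(batch_size: int) -> float:
        throughput = SINGLE_STREAM_TOK_S * batch_size ** SCALING_EXPONENT
        return GPU_COST_PER_HOUR / (throughput * 3600) * 1e6

    for b in (1, 8, 64):
        print(f"batch={b:>2}: ${cost_per_million_tokens(b):.2f} per million tokens")

A single local user sits at batch size 1 and pays the whole hourly cost; a provider amortizes it across the batch.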

dns_snek•4mo ago
Yes, they'll be cheaper to run, but will they be cheaper to buy as a service?

Because sooner or later these companies will be expected to produce eye-watering ROI to justify the risk of these moonshot investments, and they won't do that by selling at cost.

kingstnap•4mo ago
Will they be cheaper to buy? Yes.

You are effectively just buying compute with AI.

From a simple correlational extrapolation, compute has only gotten cheaper over time. Massively so, actually.

From a more reasoned causal extrapolation, hardware companies have historically competed to bring the price of compute down. For AI this competition is extremely aggressive, I might add. Hot Chips 2024 and 2025 had so much AI coverage. Nvidia is in an arms race with so many companies.

Over the last few years we have literally only ever seen AI get cheaper for the same level of quality or better. No one is releasing worse and more expensive AI right now.

Just a few days ago, DeepSeek halved the price of V3.2.

AI expenses have grown, but that's because humans are extremely cognitively greedy. We value our time far more than compute efficiency.

dns_snek•4mo ago
You don't seriously believe the last few years have been sustainable? The market is in a bubble; companies are falling over themselves offering clinically insane deals and taking enormous losses to build market share (people are allowed to spend tens of thousands of dollars in credits on their $200/mo subscriptions, with no realistic expectation of customer loyalty).

What happens when investors start demanding their moonshot returns?

They didn't invest trillions to provide you with a service at break-even prices for the next 20 years. They'll want to 100x their investment; how do you think they're going to do that?

mokoshhydro•4mo ago
Still waiting for a laptop able to run R1 locally...
y0eswddl•4mo ago
Maybe you mean it'd be cheaper for companies to host centralized, internal(ly trained) models...

That seems to me more likely, more efficient to manage, and more cost-effective than individual laptop-local models.

IMO, domain-specific training is one of the areas where LLMs can really shine.

onraglanroad•4mo ago
> Just my opinion as a FORMER senior software dev (disabled now).

I'm not sure what this means. Why would being disabled stop you being a senior software developer? I've known blind people who were great devs so I'm really not sure what disability would stop you working if you wanted to.

Edit: by which I mean, you might have chosen to retire but the way you put it doesn't sound like that.

camillomiller•4mo ago
There is none; zero value. What is the value of Sora 2 if even its creators feel like they have to pack it into a social media app of AI-slop reels? How is that not a testament to how surprisingly advanced and useless at the same time the technology is?
fragmede•4mo ago
It's in an app made by its creator so they can get juicy user data. If it was just export to TikTok, OpenAI wouldn't know what's popular, just what people have made.
camillomiller•4mo ago
Of course, we all know that hoarding user data is a fundamental step towards AGI
fragmede•4mo ago
So there's value then?
threecheese•4mo ago
Bigger companies believe smaller shops can use AI to level the playing field, so they are “transforming their business” and spending their way to get there first.

They don’t know where the threat will come from or which dimension of their business will be attacked, they are just being told by the consulting shops that software development cost will trend to zero and this is an existential risk.

bko•4mo ago
I think text is the ultimate interface. A company can just build and maintain very strong internal APIs and punt on the UX component.

For instance, suppose I'm using Figma: I want to just screenshot what I want the design to look like and have it get me started. Or if I'm using Notion, I want a better search. Nothing necessarily generative, but something like "what was our corporate address". It also replaces help, if well integrated.

The ultimate would be to build programmable web apps[0], where you have Gmail and can command an LLM to remove buttons, or add other buttons. Why isn't there a button for 'filter unread' front and center? This is super niche but interesting to someone like me.
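
A sketch of what that could look like: the app keeps strong internal APIs, and the LLM's only job is to map free text onto them. The tool names and the dispatch shape here are illustrative placeholders, not any real product's API:

    # "Strong internal APIs + text as the interface": an LLM (or a small
    # router model) turns a free-text request into one of these calls.
    # The registry and tools are illustrative placeholders.
    from typing import Callable

    REGISTRY: dict[str, Callable[..., str]] = {}

    def tool(name: str):
        def register(fn: Callable[..., str]) -> Callable[..., str]:
            REGISTRY[name] = fn
            return fn
        return register

    @tool("search_workspace")
    def search_workspace(query: str) -> str:
        return f"(results for {query!r})"  # would call the real search API

    @tool("filter_unread")
    def filter_unread() -> str:
        return "(showing unread only)"     # the missing Gmail button

    # e.g. "what was our corporate address?" routes to:
    print(REGISTRY["search_workspace"](query="corporate address"))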

That being said, I think most AI offerings on apps now are pretty bad and just get in the way. But I think there is potential as an interface to interact with your app

[0] https://mleverything.substack.com/p/programmable-web-apps

mrweasel•4mo ago
For AI I'm of the opinion that the best interface is no interface. AI is something to be baked into the functionality of software, quietly working in the back. It's not something the user actually interacts with.

The chat interfaces are, in my opinion infuriating. It feels like talking to the co-worker who knows absolutely everything about the topic at hand, but if you use the wrong terms and phrases he'll pretend that he has no idea what you're talking about.

chilmers•4mo ago
But isn't that a limitation of the AI, not necessarily how the AI is integrated into the software?

Personally, I don't want AI running around changing things without me asking to do so. I think chat is absolutely the right interface, but I don't like that most companies are adding separate "AI" buttons to use it. Instead, it should be integrated into the existing chat collaboration features. So, in Figma for example, you should just be able to add a comment to a design, tag @figma, and ask it to make changes like you would with a human designer. And the AI should be good enough and have sufficient context to get it right.

9rx•4mo ago
They thought the same thing in the 70s. Text is very flexible, so it serves as a good "lowest common denominator", but that flexibility comes at the cost of being terrible to use.
Deegy•4mo ago
I'd call text the most versatile interface, but I'm not sold on it being the ultimate one. As the old saying goes, 'a picture is worth a thousand words', and well-crafted GUIs can let a user grok the functionality of an app very quickly.
raincole•4mo ago
Text is not the ultimate interface. We have direct proof: every single classroom, and almost every company where programmers play important roles, has whiteboards or blackboards to draw diagrams on.

But now LLMs can read images as well, so I'm still incredibly bullish on them.

Jensson•4mo ago
Text is the ultimate interface for accurate data input, it isn't for brainstorming as you say.

Speech is worse than text, since you can rearrange text but rearranging speech is really difficult.

fragmede•4mo ago
If you haven't gotten an LLM to write you Google/Firefox/whatever extensions to customize Gmail and the rest of the Internet, you're missing out. Someday your programmable web apps will arrive, but making Chrome extensions with ChatGPT is here today.
jstummbillig•4mo ago
> I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use

That's a handwavy sentence, if I have ever seen one. If it's good enough to help with coding and "replace Google" for you, other people will find similar opportunities in other domains.

And sure: Some are successful. Most will not be. As always.

maxglute•4mo ago
See kids hooked on LLMs. I think most of them will grow up paying for a sub. Not a $15/mo streaming sub; a $50-100/mo cellphone-tier sub. Well, until local models kill that business model.
dns_snek•4mo ago
Local models won't kill anything because they'll be obsolete as soon as these companies stop releasing them. They'll be forgotten within 6-12 months.
y0eswddl•4mo ago
I think the reason ads are so prolific now is that the pay-to-play model doesn't work well at such large scales... Ads seem to be the only way to make the kind of big money LLM investors will demand.

I don't think you're wrong re: their hope to hook people and get us all used to using LLMs for everything, but I suspect they'll just start selling ads like everyone else.

maxglute•4mo ago
Big tech is bigger than ever; they've simply learned to double-dip with pay-to-play and ads. AI is also going to do both, but I think it has the stickiness to extract a lot more per month. Once a generation grows up with an AI crutch, they will shell out $$$ to not write their own emails, for the simple fact that they never really learned to write all their own shit in the first place.
benterix•4mo ago
> Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...

Same, also my first thought is how to turn the damn thing off.

yegle•4mo ago
I used BofA's chatbot embedded in their app recently because I was unable to find a way to request a PIN for my card. I was expecting the chatbot to find the link to the page on their website where I could request the PIN, and I would have considered a deep link within the app to the PIN-request UI great UX.

Instead, the bot asked a few questions to clarify which account the PIN was for and submitted a request to mail the PIN, just like the experience of talking to a real customer representative.

Next time you see a bot that is likely using an LLM integration, go ahead and give it a try. Worst case, you can try some jailbreaking prompts and have some fun.

reaperducer•4mo ago
Meanwhile, last week the Goldman Sachs chatbot was completely incapable of letting me report a fraudulent charge on my Apple Card. I finally had to resort to typing "human being" three times before it sent me to someone who could actually do something.
skhameneh•4mo ago
> ChatGPT has largely replaced Google in my everyday use

This. Organically replacing a search engine (almost) entirely is a massive change.

Applied LLM use cases seemingly popped up in every corner within a very short timespan. Some changes are happening both organically and quickly. Companies are eager to understand and get ahead of adoption curves, of both fear and growth potential.

There's so much at play, we've passed critical mass for adoption and disruption is already happening in select areas. It's all happening so unusually fast and we're seeing the side effects of that. A lot of noise from many that want a piece of the action.

tim333•4mo ago
Are they rushing to use AI? Personally, I know one person who's a fan and about 20 who only occasionally use it as a souped-up Google search.
6510•4mo ago
It only needs to be appealing to investors. It can quite obviously do that and then some.
gilbetron•4mo ago
This period feels extremely similar to the early 2000s, where people were saying that the web hadn't really done much and that it seemed to be at an "end". And then Amazon, Facebook, Twitter, Reddit, and pretty much the entirety of the modern web exploded.

How tech innovation happens is very different from how people think it happens. There are nice, simple stories told after the fact, but in the beginning and middle it is very messy.

rco8786•4mo ago
AI, if nothing else, is already completely up-ending the Search industry. You probably already find yourself going to ChatGPT for lots of things you would have previously gone to Google for. That's not going to stop. And the ads marketplaces are coming.

We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.

There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.
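
For the document-processing piece, the shape of the workflow is roughly: ask the model for JSON matching a fixed schema, validate cheaply, then hand off for the human check described above. A minimal sketch; the schema, model name, and client usage are placeholder assumptions, not our actual pipeline:

    # LLM-assisted extraction of structured data from unstructured text.
    # Schema and model name are illustrative; assumes OPENAI_API_KEY is set.
    import json
    from openai import OpenAI

    client = OpenAI()

    SCHEMA_HINT = (
        "Extract fields from the document and return JSON exactly like: "
        '{"vendor": str, "invoice_date": "YYYY-MM-DD", "total_usd": float}'
    )

    def extract(document_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder for any capable model
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": SCHEMA_HINT},
                {"role": "user", "content": document_text},
            ],
        )
        data = json.loads(resp.choices[0].message.content)
        missing = {"vendor", "invoice_date", "total_usd"} - set(data)
        if missing:  # cheap check before the human review pass
            raise ValueError(f"model omitted fields: {missing}")
        return data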

hamburgererror•4mo ago
Until recently everyone was bragging about predicting Bitcoin's bubble. To the best of my knowledge there was no huge crash; crypto just went out of fashion in mainstream media. I guess that's what's going to happen with AI.
JohnKemeny•4mo ago
Almost everyone who has interacted with a blockchain ended up losing money.
tail_exchange•4mo ago
It's very ironic that the way they could have made money was the simple, but boring one: buying and holding bitcoin. Being a shitcoin day-trader is much more exciting though, and that's how they lost all their money.

Maybe that's also what will happen with AI investors when the bubble pops or deflates.

Quarrelsome•4mo ago
The argument of the OP doesn't discount this idea; the suggestion is that there's a crash, but that following the crash it _does_ pay off. It's a question of a lack of patience.
uladzislau•4mo ago
Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve. Real teams are already banking gains: code velocity up, ticket resolution times down, and marketing lift from AI-assisted creative. Capex always precedes revenue in platform shifts (see cloud in 2010, smartphones in 2007). The “costs don't match cash flow” trope ignores lagging enterprise procurement cycles and the rapid glide path of unit economics as models, inference, and hardware efficiency improve. Habit formation is the moat: once workers rely on AI copilots, those workflows harden into paid seats and platform lock-in. We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
postexitus•4mo ago
All the same arguments could be made for the dot-com bubble. It was a boom and a bubble at the same time. When it popped, only the real stuff remained. The same will happen to AI. What you are describing are good use cases; there are 99 other companies doing 99 other useless things with no cost/cash-flow match.
softwaredoug•4mo ago
Things can be a bubble AND real long-term economic growth. It happens all the time with new tech.

The dot-com boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time, the companies got way ahead of consumer adoption.

Specific to AI copilots: we are currently building hundreds that nobody will use for every one success.

Atomic_Torrfisk•4mo ago
> Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve.

Ad hominem.

> ignores lagging enterprise procurement cycles

The time for that is long gone, even for the most bureaucratic orgs.

> rapid glide path of unit economics as models, inference, and hardware efficiency improve

Conjecture. We don't know if we can scale up effectively. We are already hitting the limits of technology and energy.

> Habit formation is the moat

Yes and no. GenAI tools are useful if done right, but they have not been what they were made out to be, and they do not seem to be getting better as quickly as I'd like. The most useful tool so far is Copilot autocomplete, but its value is limited for experienced devs. If its price increased 10x tomorrow, I would cancel our subscription.

> We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.

How much money are you risking right now? Or is it different this time?

walleeee•4mo ago
> Habit formation is the moat

Well, at least you're honest about it.

gorgoiler•4mo ago
What’s the theoretical total addressable market for, say, consumer facing software services? Or discretionary spending? That puts one limit on the value of your business.

Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?

It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.

Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?

seydor•4mo ago
I don't think the Apollo project's factories invested in each other circularly. The AI boom is nominally huge, but very little money gets in or out of Silicon Valley. MS invests in OpenAI because it will get the money back via Azure or whatever. Ditto for Nvidia.

What's the real investment in or out of Silicon Valley?

ForHackernews•4mo ago
They're all gambling that they can build the Machine God first and that they will control it. The OpenAI guy is blathering that we don't even know what role money will have After the Singularity (aka the Rapture for tech geeks).
driverdan•4mo ago
I keep seeing articles like this but does anyone actually think we're not in a bubble?

From what I've seen, these companies acknowledge it's a bubble and that they're overspending without a way to make the money back. They're doing it because they have the money and feel it's worth the risk in case it pays off. If they don't spend but another company does, and it hits big, they will be left behind. The spending is at least insurance against other companies beating them.

ninetyninenine•4mo ago
HN isn't always right. There was massive pushback against self-driving, and practically everyone was saying it would fail and was a bubble. The level of confidence people had in this opinion was through the roof.

Like, people who didn't know anything would say it with such utter confidence that it would piss me off a bit. Like, how do you know? Well, they didn't, and they were utterly wrong. Waymo showed it's not a bubble.

AI is an unknown. It has definitely already changed the game: it changed the way we interview, changed the way we code, and changed a lot more outside of that, and I see massive velocity towards more change.

Is it a bubble? Possibly. But the possibly-not angle is just as likely. Either way, I guarantee you that 99% of people on HN KNOW for a fact that it's a bubble, because they KNOW that all of AI is a stochastic parrot.

I think the realistic answer is we don't actually know if it's a bubble. We don't fully know the limits of LLMs. Maybe it will be a bubble in the sense that AI will become so powerful that a generic AI app can basically kill all these startups surrounding specialized use cases of LLMs. Who knows?

hx8•4mo ago
> Waymo showed it's not a bubble.

Waymo is showing it might not be a bubble. They are selling rides in five cities. Let's see how they do in 100 cities.

ninetyninenine•4mo ago
You aren't living it. I live in SF; I ride Waymos every freaking day. SF has some of the tightest roads and hardest driving. Just from living in SF, I already know they can scale to the entire country.
fred_is_fred•4mo ago
What was promised with self-driving and what we have are orders of magnitude off. We were promised fleets of autonomous taxis: no need to even own a car anymore. We were told truck drivers would be replaced en masse and cargo would move 24x7, hauled by drivers who never needed breaks. We were told downtown parking lots would disappear, since the car would drop you off, drive to an offsite lot, and wait for you. In short, a complete blow-up of the economy, with millions of shipping jobs lost and hundreds of billions spent on new autonomous vehicles.

None of that happened. After 10 years we got self-driving cabs in 5 cities with mostly good weather. Cool? Yes. Blowing up the entire economy and fundamentally changing society? No.

ninetyninenine•4mo ago
I don't ride Ubers anymore. It changed society in SF.

You guys don't know what's coming.

9rx•4mo ago
> Waymo showed it's not a bubble.

Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely. Which is still really useful, but a far cry from the promise the bubble was premised on: everyone being able to buy a personal pod on wheels that takes you to and fro, no matter where you want to go, while you sleep. In other words, Waymo has proven the bubble. It has been 20 years since Stanley, and I still have never seen a self-driving car in person. And I reside in an area that was officially designated by the government for self-driving car testing!

> I think the realistic answer is we don't actually know if it's a bubble.

While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.

TimonKnigge•4mo ago
> Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely.

My understanding was that Waymos are autonomous and don't have a remote driver?

9rx•4mo ago
They are so-called "human in the loop". They don't have a remote driver in the sense of someone sitting in front of a screen playing what looks like a game of Truck Simulator, but they are operated by humans.

It's kind of like when cruise control was added to cars. You no longer had to worry about directly controlling the pedal, but you still had to remain the operator. In some very narrow sense you might be able to make a case that cruise control is autonomy, but the autonomous-car bubble imagined that humans would be taken out of the picture entirely.

senordevnyc•4mo ago
This is absolutely false. In rare cases Waymo needs human intervention, but they do NOT have humans in the loop for the vast majority of their operations.
9rx•4mo ago
Cruise control also doesn't need human intervention for the vast majority of its operation. Yet the human remains the operator. But it seems we're just playing a silly game of semantics at this point, so let's return to the actual topic at hand.

There was a time when people believed that everyone would buy a new car with self-driving technology, which would be an enormous cash cow for whoever was responsible for delivering the technology to facilitate that. So the race was on to become that responsible party. What we actually got, finally, decades after the bubble began, was a handful of taxis that can't leave a small, tightly controlled region, all while haemorrhaging money like it's going out of style.

It is really interesting technology and it is wonderful that Alphabet is willing to heavily subsidize moving some people from point A to point B in a limited niche capacity, but the idea that you could buy in and turn that investment into vast riches was soon recognized as a dead end.

AI is still in the "maybe it will become something someday" phase. Clearly it has demonstrated niche uses already, but that isn't anywhere near sufficient to justify all the investment that has gone into it. It needs an "everyone around the world is going to buy a new car" moment for the financials to make sense, and that hasn't happened yet. And people won't wait around forever. The window to get there is quickly closing. Much like with self-driving cars, a "FAANG" might still be willing to offer subsidies to keep it alive in some kind of limited fashion, but most everyone else will start to pull out, and then there will be nothing to keep the bubble inflated.

It isn't too late for AI yet. People remain optimistic at this juncture. But the odds are not good. As before, even if AI reaches a point where it does everything we could ever hope for, many of the dreams built on those hopes are likely to end up looking pretty stupid in hindsight. The dot-com bubble didn't pop because the internet was flawed. It popped because we started to realize that we didn't need it for the things we were trying to use it for. It is almost certain that future AI uses that have us all hyped up right now will go the same way. Such is life.

senordevnyc•4mo ago
Ah, it’s the old “teleport the goalposts and then change the subject!” strategy.

Just like Waymo, LLMs are already wildly useful to me and to others, both technical and non-technical, and there's no reason to think the progress is about to suddenly stop, so I don't know what you're even on about at this point.

9rx•4mo ago
> and there’s no reason to think the progress is about to suddenly stop

You seem a bit confused. Bubbles, and the subsequent crashes, aren't dependent on progress; they're dependent on people's retained interest in investing. The AI bubble could crash even if everything were perfectly executed, just because people decided they'd rather invest in, as you suggest, teleportation, or in something boring like housing, instead.

Progress alone isn't enough to retain interest. Just like the case before, the internet progressed fantastically through the late 90s (we almost couldn't have done it any better), but at the same time people were doing all kinds of stupid things with it, like Pets.com. While the internet itself remained solid and one of the greatest inventions of all time, all the extra investment in the stupid things pulled out, and thus the big bubble pop.

You're going to be hard-pressed to convince anyone that we aren't equally doing stupid things with AI right now. Not everything needs a chatbot, and eventually investors are going to realize that too.

ninetyninenine•4mo ago
I ride Waymos on the regular. Those are not tightly controlled conditions; those are some of the hardest roads in the Bay Area.

>While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.

What I see more of on HN is everyone calling everything a bubble: this is a bubble, that is a bubble, it's all a bubble. Like, literally, Sam Altman is in the minority. Almost everyone thinks it's a bubble.

9rx•4mo ago
> It's some of the hardest roads in the bay area.

Hard is subjective. Multiplying large numbers is hard for humans, but easy for machines. I'd say something like the I-80 through Nebraska is one of the easiest drives imaginable, but good luck getting your Waymo ride down that route... You've not made a good case for it operating outside of tightly controlled bounds.

More importantly, per the topic of conversation, you've not made a good case for the investment. Even though it has found an apparent niche, Waymo continues to lose money like it is going out of style. It is nice of them to pay you to get yourself around and all, but the idea that someone could invest in self-driving cars to get rich from it is dead.

> What I see more of on HN is everyone calling everything a bubble.

Ultimately, a bubble occurs when people invest more into something than they can get back in return. Maybe HN is right — that everything is in a bubble? There aren't a lot of satisfactory answers for how the cost of things these days can be recouped. Perhaps there are variables that the masses are missing; even so, the sentiment is at least understandable.

But the current AI state of affairs especially looks a lot like the Dotcom bubble. Interesting technology that can be incredibly useful in the right hands, but is largely being used for pretty stupid purposes. It is almost certain that in the relatively near future we'll start to realize that we never needed many of those things to begin with, go through the trough of disillusionment, and, eventually, on the other side find its true purpose in life. The trouble is, from a financial perspective, that doesn't justify the spend.

This time could be different, but since we're talking about human behaviour that has never been different before, why would humans suddenly be different now? There has been no apparent change to the human.

driverdan•4mo ago
Autonomous cars did have a bubble moment. They were hyped and didn't deliver on the promises. We still don't have level 5, and consumer vehicles are up to level 3. That doesn't mean it's not a useful or cool technology.

All great tech has gone through some kind of hype/bubble stage.

ninetyninenine•4mo ago
Bro, Waymo cars are level 4. I ride them on the regular. I don't use Uber anymore.
driverdan•4mo ago
I didn't mention Waymo intentionally. I said consumer cars. No one can buy Waymo's technology. We have one company with level 4 in a limited area.
ACCount37•4mo ago
I don't think it's a bubble.

There's a very real possibility that all the AI research investment of today unlocks AGI, on a timescale between a couple of years and a couple of decades, and that would upend the economy altogether. And falling short of that aspiration could still get you pretty far.

A lot of "AI" startups would crash and burn long before they deliver any real value. But that's true of any startup boom.

Right now, the bulk of the market value isn't in those vulnerable startups, but in major industry players like OpenAI and Nvidia. For the "bubble" to "pop", you need those companies to lose big. I don't think that's likely to happen.

deltarholamda•4mo ago
>They're doing it because they have the money and feel it's worth the risk in case it pays off.

If the current work in AI/ML leads to something more fundamental like AGI, then whoever does it first gets to be the modern version of the lone nuclear superpower. At least that's the assumption.

Left out of all the calculations are the 8 billion people who live here. So suddenly we have AGI--now what? Cures for cancer and cold fusion would be great, but what do you do with 8 billion people? Does everybody go back to a farm, or what? Maybe we all pedal exercise bikes to power the AGI while it solves the Riemann hypothesis or something.

It would be a blessing in disguise if this is a bubble. We are not prepared to deal with a situation where maybe 50-80% of people become redundant because a building full of GPUs can do their job cheaper and better.

rich_sasha•4mo ago
I think we are in a bubble which will burst at some point; AI stocks will crash, many will burn, and then growth will resume. Just as the dotcom bubble was definitely a bubble, yet became the foundation of all the tech giants of today.

The trouble with bubbles is that it's not enough to know you are in one. You don't know when it will pop, at what level, and how far back it will go.

gilbetron•4mo ago
"bubble" is just an oversimplification and is overused. Uber was a bubble. Video games were a bubble. Streaming services were a bubble. Self-driving was a bubble. The reality is much more complex and nuanced.

LLMs are legitimate AI for the first time, and they have real use cases and have changed things across myriad industries. It's disrupting education in a big way. The Google AI search thing is becoming useful. When I look at products on Amazon, I often ask its AI review thing (Rufus?) questions, and it gives me good answers, so good that I don't really validate anymore.

There's massive, intense competition, and no one can predict how it is going to go, so there probably will be things that are bubble-y that pop, sure. But it's not like AI has hit a permanent plateau and we are as far as the tech is going to go. It's just getting started, but it'll probably be a weird and bumpy path.

bilsbie•4mo ago
I was just asking AI how to profit from an AI bubble pop, but then I realized "look who I'm asking," and suddenly I wasn't so sure about it being a bubble.
adam_patarino•4mo ago
There are two different markets.

The research market is made up of firms like OpenAI and Anthropic that are investing billions in research. These investments are just that. Their returns won’t be realized immediately, so it’s hard to predict if it’s truly a bubble.

The product market is made up of all the secondary companies trying to use the results of current research. In my mind these businesses should be the ones held to basic economics of ROI. The amount of VC dollars flooding into these products feels unsustainable.

yomismoaqui•4mo ago
I always comment the same thing on these posts.

The web bubble also popped and look how it went for Google, Amazon, Meta and many others.

Remember Pets.com, which sold pet products on the internet? Dumb idea, right? Now think about where you buy those products in 2025.

baggachipz•4mo ago
There was a ton of pain in between. Legions of people lost their livelihoods. This bubble pop will be way worse. Yes, this tech will eventually be viable and useful, but holy hell will it suck in the meantime.
reilly3000•4mo ago
I don’t like that he cited $12B in consumer spending as the benchmark for demand. Clearly enterprise spending has dwarfed consumer outlays and will continue to, to the tune of $100B+ in 2025 on inference alone, and another $150B on AI-related services.
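
Taking those figures at face value (they are this thread's estimates, not audited numbers), a quick back-of-the-envelope comparison shows why the $12B benchmark looks off:

```python
# Rough comparison of the spending figures cited above.
# All numbers are the thread's estimates, not verified data.
consumer_spend = 12e9    # the article's consumer-spending benchmark
inference_2025 = 100e9   # claimed 2025 enterprise inference spend
ai_services = 150e9      # claimed spend on AI-related services

enterprise_total = inference_2025 + ai_services
print(f"Enterprise total: ${enterprise_total / 1e9:.0f}B")             # $250B
print(f"Ratio vs consumer: {enterprise_total / consumer_spend:.1f}x")  # ~20.8x
```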

I see almost no scenario where the value of this hardware will go away. Even if the demand for inference somehow declines, the applications that can benefit from hardware acceleration are innumerable. Anecdotally, my 2022 RTX 4090 is worth ~30% more used than what I paid for it new, and the trend continues into bigger metal.

As “Greater China” has become the supply bottleneck, it is only rational for Western companies to hoard capacity while they can.

torginus•4mo ago
Also, as others have pointed out, if the next Pixel phone or iPhone has 'AI' as a bullet-point feature, then people buying an iPhone will count as 'consumer AI spend'. That's why they're forcing AI into everything: so they can show that people are using AI, even while most people are ambivalent or hostile towards AI features.
scoofy•4mo ago
I mean, that makes little sense. The desirability of the feature has a price. Putting a GPU in a phone is expensive and unnecessary.

The point of something being a gimmick is that it’s a gimmick. I just got an iPhone with a GPU, but I would absolutely have purchased one without it if that were possible.

paulsutter•4mo ago
> The AI infrastructure boom is the most important economic story in the world.

Energy spending is about $10T per year; even telecom is $2T a year.

The AI infrastructure boom at $400B a year is big but far from the most important economic story in the world.

reilly3000•4mo ago
I do want to know who the Outpost.com of this era is. I will never forget their TV campaign where they released a pack of wolves on a marching band.

TIL: it’s for sale!

tamimio•4mo ago
If you look at all the tech “breakthroughs” of the past decades, you will see that AI is just another one: dot com, automation, social media, smartphones, cloud, cybersecurity, blockchain, crypto, renewable energy and electric X, IoT, and now AI. It will have an impact after the initial boom, and I personally think the impacts are always negative. Companies will always try to milk investors' money during the boom as much as possible, and the best way to do that is to keep up the hype, either with false promises (AGI omgg singularity!!) or with fear, and the latter is stronger because it taps into public emotions. Just pay a few scientists to produce "AI 2027!!" research saying it will literally take over the world in two years, or that it will take your jobs, meanwhile using the excuse to hire cheaper labor to maximize profits and blaming it on AI. I remember saying this to a few friends back in early 2024, and it seems we are heading toward that pop sooner than I expected.
baggachipz•4mo ago
I'm trying to pinpoint the canary in the financial coal mine here. There will be a time to pull out of the market and I really want to have an idea of when. I know, timing the market, but this isn't some small market correction we're talking about here.
rsync•4mo ago
TQQQ exists.

There is a liquid market of TQQQ puts.
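
For anyone unfamiliar with the mechanics, here is a minimal sketch of a long put's payoff at expiry (toy strike and premium, purely illustrative and not real TQQQ quotes or advice):

```python
# Payoff of a long put held to expiry: it profits when the underlying
# falls below the strike by more than the premium paid.
def long_put_pnl(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Per-share profit/loss of a put option held to expiry."""
    return max(strike - spot_at_expiry, 0.0) - premium

# Hypothetical numbers: strike 60, premium 5.
for spot in (80, 60, 50, 30):
    print(spot, long_put_pnl(spot, strike=60, premium=5))
# -> -5, -5, 5, 25: losses are capped at the premium, gains grow as
#    the underlying falls. TQQQ is 3x leveraged, so a Nasdaq drawdown
#    is amplified, which is why puts on it are a common crash hedge.
```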

baggachipz•4mo ago
I'm trying to get out ahead of that and find an indicator that says it's about to collapse.
scoofy•4mo ago
Market reflexivity makes an obvious indicator highly improbable.
tim333•4mo ago
I don't think there's a good indicator for predicting it ahead of time. If you are worried, you could switch from tech stocks to something more conservative.

You can sometimes tell when the collapse has started from the headlines though - stuff like top stocks down 30%, layoffs announced. That may sound too late, but with the dotcoms, things kept going down for another couple of years after that.

exolymph•4mo ago
this is a bit hackneyed but it's true: time in the market > timing the market

now obviously, if you do time the market perfectly, that's the best. but you are far, far more likely to shoot yourself in the foot by trying

kalap_ur•4mo ago
I just heard a thesis that there is no bubble unless there is debt in it. So far, capex increases have mostly been funded from internal cash. More recently we've started seeing circularity (NVDA -> OpenAI -> MSFT -> NVDA), though that is still less relevant so far. Especially as roughly 70% of data center cost is viewed to be GPU, NVDA putting down $100B essentially funds "only" about $140B of data center capex.
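
A quick sketch of that arithmetic (the ~70% GPU share is the thesis's assumption, not a verified figure):

```python
# If GPUs are ~70% of data center cost, a dollar of GPU spending
# "anchors" roughly 1/0.7 dollars of total capex.
gpu_commitment = 100e9  # NVDA's hypothetical $100B
gpu_share = 0.70        # assumed GPU share of total data center cost

total_capex = gpu_commitment / gpu_share
print(f"Implied total capex: ${total_capex / 1e9:.0f}B")  # ~$143B, i.e. "only" ~$140B
```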

META is spending 45% of their _sales_ on capex. So I wonder when they are going to up their game with a little debt sprinkled on top.

benterix•4mo ago
> ...$2 billion in funding at a $10 billion valuation. The company has not released a product and has refused to tell investors what they’re even trying to build. “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”

I observed how she played the sama drama, and I realized she will outplay them all.

torginus•4mo ago
This might be just a crappy conspiracy theory, but as I started watching financial news in recent times, I feel like there's a concerted effort on the part of the media to manipulate retail investors into investing in or divesting from stuff, with 'the NASDAQ is overvalued, buy gold now!' being the most recent example.

I feel like AI was constantly shilled while it didn't work, and now everybody is talking about being bearish on A(G)I, just as the AI we consumers do have is becoming actually pretty useful and crazy amounts of compute have already come online to run it. I think we might be in for a real surprise jump, and we might even start to feel AI's 'bite'.

Or maybe I'm overthinking it and things are as they seem, or maybe nobody knows and the AI people are just throwing more compute at training and inference and hoping for the best.

On the previous points, I can't tell if I'm being gaslit accidentally by algorithms (Google and Reddit showing me stuff that supports my preconceived notions), gaslit intentionally (which would be quite sinister, if algorithms decided to target me), or whether everyone else is just being shown the same thing.

alasdair_•4mo ago
Personal hot take: China is forbidding its companies from buying Nvidia chips and instead wants to have its industries use China-made chips.

I think a big part of the reason for this is that they want to take over Taiwan, and they know any takeover would likely destroy TSMC. Instead of that being a bad thing for them, it could actually give them a competitive advantage vs everyone else.

The fact that the US has destroyed relationships with so many allies implies it may not stop a Taiwan invasion when it happens.

NoGravitas•4mo ago
I'd say the main reason is probably that they want to insulate themselves from US sanctions, which could come at any time given how unpredictable the US government is lately.
eboynyc32•4mo ago
I disagree; it’s just getting started, and I love it.
cadamsdotcom•4mo ago
> Let’s say ... you control $500 billion. You do not want to allocate that money one $5 million check at a time to a bunch of manufacturers. All I see is a nightmare of having to keep track of all of these little companies doing who knows what.

If only there were, like, some sort of intelligence to help with that...

mizzao•4mo ago
At the end of the article he basically says the bubble will pop in about two and a half years. That's an insanely long time — is that how anyone else read it?