* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
- https://www.amazon.co.uk/Technological-Revolutions-Financial...
The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.
e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.
I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.
For a 4-day week to really happen at scale, I'd expect we'd similarly need the government to decide to roll it out rather than workers' groups pushing it from the bottom up.
See perhaps:
* https://en.wikipedia.org/wiki/Eight-hour_day_movement
Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.
Yes, that is the first link of my/GP post.
Most new tech is like that - a period of mania, followed by a long tail of actual adoption where the world quietly changes
Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.
Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.
LLMs are cool, but if they can't be relied on to do real work, maybe they're not change-the-world cool? More like $30-40B-market cool.
EDIT: Just to be clear here. I'm mostly talking about "agents"
It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.
I use Glean at work and it's great.
There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.
The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.
That's the world changing part, not running a pretty successful biz, with a useful product. That's the part where I haven't seen meaningful adoption.
This is currently pitched as something that has a nonzero chance of destroying all human life; we can't settle for "Eh, it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation. Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.
[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
In the first few years of any new technology, most people investing in it lose money because the transition and experimentation costs are higher than the initial returns.
But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.
These are business customers buying a consumer-facing product.
It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.
>I believe both sides are right. Like the 19th century railroads and the 20th century broadband Internet build-out, AI will rise first, crash second, and eventually change the world.
I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.
I also think that whether it's worth throwing trillions of dollars at AI for "a better Google search, code snippet generator, and obscure bug finder" is a contentious question, and a lot of people oppose it for that reason.
I personally still think it's kind of crazy that we have a technology that can do things we couldn't do just ~2 years ago, even if it just stagnates right here. I'm still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).
At the same time, don't know if it's worth trillions of dollars, at least right now.
So all claims on this thread can be very much true at the same time, just depends on your perspective.
>At the same time, don't know if it's worth trillions of dollars, at least right now.
The revenue numbers sure don't think so. And I don't think this economy can support "trillions" of spending even if it wanted to. That's why the bubble will pop, IMO.
>Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.
>The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.
Is she more productive though?
People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.
Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.
/s
What kind of question is that? Seriously. Are some people here so naive as to think that tens of millions out there don't know when something they choose to use repeatedly, multiple times a day, every day, is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?
What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?
This, to me, feels incredibly plausible.
Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.
"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."
That three day report still takes three days, wink wink.
AI can be a tool for 10xers to go 12x, but it's also likely the best slack-off tool there is, letting slackers go from 0.5x to 0.1x.
And the businesses with AI mandates for employees probably have no idea.
Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.
Both their perspectives are technically right. But we'll either have burned out workers or a lagging schedule as a result in the long term. I miss when we thought more long term about projects.
Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.
This "just believe me" mentality normally comes from scams.
Naming another example outside of LLM skeptics asking it, about LLMs, is inherently a counterexample.
Why not? If you ever got an AI-generated email or had to code-review anything vibecoded, you're going to be suspicious of who's "more productive". I've read reports and studies, and it feels like the "more productive" people tend to be pushing more work onto people below or beside them to fix the generated mess.
I do believe there are productive ways to use this tech, but it does not seem like many people these days have the discipline to establish a proper workflow.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
That's just an appeal to masses / bandwagon fallacy.
> Is it so hard to believe that ChatGPT is actually productive?
We need data, not beliefs and current data is conflicting. ffs.
It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.
But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it, and the next few months managing the technical debt I didn't notice when I merged it.
And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.
You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.
LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.
If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.
Why not?
>I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?”
It's very possible. I know people love besmirching the "you won't always have a calculator" mentality. But if you're using a calculator for 2nd grade mental math, you may have degraded too far. It varies by task, of course.
>Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
Depends on how it's making it easier. Phones are an excellent example. They make communication much easier and long distance communication possible. But if it gets to the point where you're texting someone in the next room instead of opening your door, you might be losing a piece of you somewhere.
It's no different to a manager who delegates; are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?
Work hard versus work smart. Busywork cuts both ways.
Let’s be serious here. These are still professionals and they have a reputation. The few cases you hear online of AI slop in professional settings is the exception. Not the norm.
It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.
Given what I've seen in the educational sector: yes. Very hard. We already had this massive split in extremes between the highly educated and the ones who struggle. The last thing we need is to outsource the aspect of thinking to a billionaire tech company.
The slop you see in the workplace isn't encouraging either.
My own use case is financial analysis and data capture by the models. It takes away the grunt work, and I can focus on the more pleasant aspects of the job. It also means I can produce better-quality reports, as I have additional time to look more closely. It also points out things I could have potentially missed.
Free time and boredom spurs creativity, some folks forget this.
I also have more free time, for myself, you're not going to see that on a corporate productivity chart.
Not everything in life is about making more money for some already wealthy shareholders, a point I feel is sometimes lost in these discussions. I think some folks need some self-reflection on this point: their jobs don't actually change the world and thinking of the shareholders only gets you so far. (Not pointed at you, just speaking generally.)
For me, quality is the biggest metric, not money. But time does play into the metric of quality.
The sad reality is that many use it as a shortcut to output slop. Which may be "productive" in a job where that busywork isn't critical for anyone but your paycheck. But those kinds of corners being cut seems anathema to proper engineering or any other mission critical duties.
>their jobs don't actually change the world and thinking of the shareholders only gets you so far.
I'm worried about seeing more cases like a lawyer submitting citations to a judge for cases that never existed. There are ethical concerns about the casual chat apps, but I can leave that to others.
People doing their jobs know how to use it effectively. Just because corporates aren't capturing that value for themselves doesn't mean it's low quality. It's being used in a way that is perhaps reflected as an improvement in the actual employee's standing, and could be bridging existing outdated work processes. Often an employee is powerless to change these processes, and KPIs are notoriously narrow in scope.
Hallucinations happen less frequently these days, and people are aware of the pitfalls so they account for them. Literally in my own example above, it means I have more time to actually check my own work (and it is work), and it also points out factors I might have missed as a human (this has absolutely happened multiple times already).
Fun fact: smoking likely is! There have been numerous studies into nicotine as a nootropic, e.g. https://pubmed.ncbi.nlm.nih.gov/1579636/#:~:text=Abstract,sh... which have found that nicotine improves attention and memory.
Shame about the lung cancer though.
Au contraire. Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder: https://www.sciencedirect.com/science/article/abs/pii/S00913...
> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.
This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.
Would you say that their work has no value?
Anyways, IDEs don't try to offload the thinking for you; an IDE is more like an abacus. You still need to work in it a while and learn the workflow before it's more efficient than a text editor + docs.
Chrome is a trickier aspect, because the reality is that a lot of modern docs completely suck. So you rely less on official documentation and more on how others have navigated an IDE, and whether those options work for you. I'd rather we write proper documentation than offload it onto a black box that may or may not understand what it's spouting at you, though.
ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value
Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value
The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.
“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”
That's on the front page of their website, describing what PwC does.
Now, what did PwC used to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.
Look up what M&A is.
Mergers and Acquisitions? If that's the right acronym, I hate it even more, thank you.
But yes, I can see how automating the BS of corporate culture then using it to impress people (who also don't care anyway) by saying "I made this with AI" can be "productive". Not really a job I can do, though.
If you think convincing investors to give you hundreds of millions is easier than writing code, you’re out of your mind.
I am curious what kind of work she is using ChatGPT for, such that she cannot do without it?
> ChatGPT released data showing that programming is just a tiny fraction of queries people do
People are using it as a search engine, getting dating advice, and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, are they willing to do so?
The poster's point was that coding is an area that pays for LLMs so consistently that every model has a coding-specific version. But we don't see the same sort of specialized models for other areas, and adoption there is low to nonexistent.
Given they said this person worked at PwC, I’m assuming it’s pointless generic consultant-slop.
Concretely it’s probably godawful slide decks.
Well, this article cites $400B of spending for $12B of revenue. That's not zero value, but it definitely shows overvaluation. We're not paying that level of money back with consumer-level goods.
Now, is B2B valuable? Maybe. But it's really tough to put a value on that with how businesses are operating c. 2025.
> ChatGPT released data showing that programming is just a tiny fraction of queries people do.
Yes, but it's not 2010 anymore. Companies are already on ChatGPT's neck trying to get ROI. They can't run at a loss for a decade at this level of spending the way the FAANGs did in the previous decade.
Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Current AI tools may not beat the best programmers, but they definitely improve average programmer efficiency.
Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.
That's bread and butter development work.
I use copilot in agent mode.
But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.
For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.
In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.
It's a damn good tool, I use it, I've learned the pitfalls, it has value but the inflation of potential value is, by definition, a bubble...
If you told me that you would spend half a trillion dollars and the best minds on reading the whole internet, and then with some statistical innovation try to guess the probable output for a given input, the way it works now would seem about right, probably a bit disappointing even.
I would also say, it seems cool and you could do that, but why would you? At least when the training is done it is cheap to use right? No!? What the actual fuck!
Do we really need more efficient average programmers? Are we in a shortage of average software?
Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.
Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.
It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.
I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files: essentially changing one constructor parameter in all derived classes from a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.
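For a sense of scale, here's a minimal Python sketch (invented names; the actual codebase and language aren't stated above) of the kind of mechanical edit each of those files needed:

```python
# Hypothetical illustration of the refactor described above: a new constructor
# parameter added to a shared base class has to be threaded through every
# derived class, plus one new import per touched file.
from dataclasses import dataclass


@dataclass
class RetryPolicy:  # the new dependency that forces the extra import everywhere
    max_attempts: int = 3


class BaseExporter:
    def __init__(self, path: str, retry_policy: RetryPolicy):
        self.path = path
        self.retry_policy = retry_policy


class CsvExporter(BaseExporter):
    # one of the ~30-40 derived classes; each needs the same two-line edit
    def __init__(self, path: str, retry_policy: RetryPolicy):
        super().__init__(path, retry_policy)


if __name__ == "__main__":
    exporter = CsvExporter("report.csv", RetryPolicy(max_attempts=5))
    print(exporter.path, exporter.retry_policy)
```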
Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
You will have much more success if you can compartmentalize and use new LLM instances as often as possible.
Why is this inherently different?
[1] Side-note: This was written at a time when selling software as a standalone product was not really a thing, so everything was open-source and the "how to modify" part was more about how to read and understand the code, e.g. architecture diagrams.
I'm talking about "shrinkwrap" software like Word or something. There's nothing even close to testing for that this is not just "system testing" it.
Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.
Things like deepwiki are useful too for open source work.
For me, though, the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.
Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.
If you work in science, it's great to have something that spits out mediocre code for your experiments.
Sad but true. Better to sell to management and tagline it as "you don't need a whole team anymore.", or going so far as "you can do this all by yourself now!".
Sadly managers usually have more money to spend than the workers too, so it's more profitable.
So it looks best when the user isn't qualified to judge the quality of the results?
Haven't we established that if you are a layman in an area, AI can seem magical? Try doing something in your established area and you might get frustrated. It will give you the right answer with caveats: code which is too verbose, performance-intensive, or sometimes ignoring best security practices.
The business model is data collection about you on steroids, and the bet that the winning company will eclipse Meta in value.
It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.
Not sure, though, that they make enough revenue, and what the moat will be if the best models more or less converge around the same level. For most normies, it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.
Odd way to describe ChatGPT which has >1B users.
AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.
Adoption is far from low.
Does that really count as adoption, when it has been introduced as a default feature?
HN seems to think everyone is like the bubble here, which thinks AI is completely useless and wants nothing to do with it.
Half the world is interacting with it on a regular basis already.
Are we anywhere near AGI? Probably not.
Does it matter? Probably not.
Inference costs are dropping like a rock, and usage is continuing to skyrocket.
That's the kind of adoption that should just be put up for adoption instead.
(And of course, the reason that I can tell that the auto-translated video titles are hilarious and/or wrong is because they are translating into a language that I speak from a language that I also speak, but apparently the YouTube app's dev team cannot fathom that a person might speak more than one language.)
I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.
He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.
In the last few months, every single non-programmer friend I've met has ChatGPT installed on their phone (N>10).
Out of all the people that I know enough to ask if they have ChatGPT installed, there is only one who doesn't have it (my dad).
I don't know how many of them are paying customers though. IIRC one of them was using ChatGPT to translate academic writing so I assume he has pro.
There are other companies that provide these tools for anything supporting MCP.
Adoption is high with young people.
Have you ever used an LLM? I use it every day to help me with research and completing technical reports (which used to be a lot more of my time).
Of course you can't just use it blindly, but it definitely adds value.
Nobody doubts it works; everybody doubts Altboy when he asks for $7 trillion.
Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs it gets pretty muddy if it is a value add or not.
I don't think the researchers at the top think LLM is AGI.
DeepMind and co are already working on world models.
The biggest bottleneck right now is compute, compute, and compute. If an experiment takes a MONTH to train, you want a lot more compute. You need compute to optimize what you already have, like LLMs, and then again a lot of compute to try out new things.
All of the compute/Datacenters and GPUs are not LLM GPUs. They are ML capable GPUs.
but on the other side, the reason everyone is so gung ho on all this is because these models basically allow for the true personalization of everything. They can build up enough context about you in every instance of you doing things online that they can craft the perfect ad experience to maximize engagement and conversion. that is why everyone is so obsessed with this stuff. they don't care about AGI, they care about maintaining the current status quo where a large chunk of the money made on the internet is done by delivering ads that will get people to buy stuff.
As an example - I'd never bother with a mobile app just for myself, since it's too annoying to get into for a somewhat small thing. Now I can chug along and have an LLM quickly fill in my missing basics in the area.
Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
The emerging reasoning capabilities are very promising, able to generate new theories and make scientific experiments in easy to test fields, such as in vitro drug creation. It doesn't matter if the LLM hallucinates 90% of the time, if it correctly reasons a single time and it can create even a single new cancer drug that passes the test.
These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.
From an economy-wide perspective, why does that matter?
> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.
Great, that means there might be real competition! This generally keeps prices down, it doesn't push them up! It's true that VCs may end up unhappy, but will they be able to do anything about it?
Smells like complete and total bullshit to me.
Edit: @eucyclos: I don't assume that Chat GPT and LLM tools have saved cancer researchers any time at all.
On the contrary, I assume that these tools have only made these critical researchers less productive, and made their internal communications more verbose and less effective.
Let's say you run that LLM one million times and get 100,000 valid reasoning chains. Let's say among them are variations on 1,000 fundamentally new approaches and ideas, and out of those, you can actually synthesize in the laboratory 200 new candidate compounds, and out of those, 10 substances show strong in-vitro response, and then one of them completely cures some cancerous mice.
There you go: you have substantially automated the intellectual work of cancer research, and you have one very promising compound you can take into phase 1 trials that you didn't have before AI, and all without any AGI.
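Written out, the funnel in this comment compounds like this (every rate here is the commenter's hypothetical, not measured data):

```python
# The hypothetical drug-discovery funnel from the comment above.
runs          = 1_000_000   # LLM runs
valid_chains  = 100_000     # reasoning chains that hold up
novel_ideas   = 1_000       # fundamentally new approaches among them
synthesized   = 200         # compounds actually made in the lab
in_vitro_hits = 10          # substances with strong in-vitro response
phase1_leads  = 1           # candidate worth a phase 1 trial

print(f"overall yield: {phase1_leads / runs:.6%} of runs")  # 0.000100%
```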
$500B/$100,000 is 5 million, or 167k 30-year careers.
The math is ludicrous, and the people saying it's fine are incomprehensible to me.
Another comment on a similar post just said, no hyperbole, irony, or joke intended: "Just you switching away from Google is already justifying 1T infrastructure spend."
Just the disruption we can already see in the software industry is easily of that magnitude.
WTF? Where are you seeing that?
Also, no, you can't calculate 100k over 30 years as 3M, because you expect investment growth. Let's say the stock market average of 7 percent per year: that investment must return something like 24 million in 30 years, otherwise it's not worth it. That means 8 trillion over the next 30 years if you look at that long an investment period.
And who in the hell is going to capture 30 years of profit with model/compute investments made today?
The math only maths within short timeframes: hardware will get amortized in 5 years, and the model will be obsolete in even less. So in the best-case scenario you have to displace 2 million people and capture their output to repay that. Not with future tech, with tech investments made today.
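For anyone following along, a minimal sketch of both calculations, assuming a flat 7% annual return and treating the $3M per worker as a lump sum invested today (the totals move a lot with those assumptions):

```python
# GP's division: how many worker-years / careers $500B nominally buys.
spend, salary = 500e9, 100_000
worker_years = spend / salary           # 5,000,000
careers = worker_years / 30             # ~166,667 thirty-year careers

# Parent's compounding point: $3M left in the market instead of paid out.
growth = 1.07 ** 30                     # ~7.6x over 30 years at 7%
per_worker_hurdle = 3_000_000 * growth  # ~$22.8M, roughly the "24 million" above

print(f"{worker_years:,.0f} worker-years, {careers:,.0f} careers")
print(f"per-worker 30-year hurdle: ${per_worker_hurdle:,.0f}")
```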
Sure, the financial math over 30 years does not follow elementary arithmetic, and if the development hits a wall tomorrow they will have trouble recovering the investment just from code automation tools.
But this is a clearly nonsense scenario; the tech is rapidly expanding to other fields that have obvious potential for automation. This is not pie-in-the-sky future technology yet to be invented, it's obvious productization of latent capability, similar to the early internet days. There might be some overshoots, but the latent potential is all there; the AI investments are looking to be the first movers in that enormously lucrative space and take what seem to me reasonable financial risks in light of the rewards.
My claim is not that AGI will soon be available, but that applying existing frontier models on the entire economy, in the form of mature, yet to be developed products, will easily generate disruption that has a present value in the trillions.
> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
Can Sora2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to re-loop from the last frame and produce those obnoxious cuts that the video you pointed to has? Can it reshoot the same exact scene with just one element altered?
All the video models right now are only good at making short, low-res, barely post-processable video. The kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part, nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello, cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline, yeah maybe, why not; automate? I won't hold my breath).
And the argument of their future capabilities, well ... It's been 50+ years that we should have fusion in 20 years.
Btw, the same argument can be made for LLM and image-gen tech in any creative context. People severely underestimate just how much editing, re-work, purpose, and pre-production goes into any major creative endeavor. Most models are just severely ill-suited for that work. They can be useful for some stuff (specifically, for editing images, AI-driven image fill does work decently, for example), but overall, as of right now, they are mostly good at making low-quality content. Which is fine I guess, there is a market for it, but it was already a market that was not keen on spending money.
Lay off. Only respite I get from this hell world is cute Rottweiler videos
Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
Is it now. I don't think being able to accurately and predictably make changes to a shot, a draft, a design is surface level in production.
> Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
Tell them to change the tilt of the camera roughly 15 degrees left without changing anything else in the scene, and tell me if it works.
> This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
Well does a lot of heavy lifting there.
> One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
And what if the model itself is the limiting factor? The entire tech? Do we have any proof that the current technologies might in the future be able to handle the cases I spoke about?
Also, one thing I didn't mention in the first post: assuming the tech does get to the point where it can be used to automate a lot of the production, if throwing a few million at a GPU cluster is enough to "generate" a relatively high-quality movie or series, the barrier to entry will be incredibly low. The cost will be driven down, the amount of production will be very high, and overall it might not be a trillion-dollar industry anymore.
I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"
to
"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"
The move to RL has specifically made me skeptical of the size of the buildout.
The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.
We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.
Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.
Second Edit:
Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.
Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".
Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.
Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.
Third Edit:
@calmoo: I think you're not listening to the nuances of my opinion, and instead having an intense emotional reaction to my well-justified claims of racism.
https://www.nytimes.com/2025/09/28/opinion/ezra-klein-podcas...
Also your prediction of them making a JD vance republican pivot is extremely misguided. I would happily bet my life savings against that prediction.
https://www.prnewswire.com/news-releases/openevidence-the-fa...
> OpenEvidence is actively used across more than 10,000 hospitals and medical centers nationwide and by more than 40% of physicians in the United States who log in daily to make high-stakes clinical decisions at the point of care. OpenEvidence continues to grow by over 65,000 new verified U.S. clinician registrations each month. […] More than 100 million Americans this year will be treated by a doctor who used OpenEvidence.
More:
https://robertwachter.substack.com/p/medicines-ai-knowledge-...
I don't think that's true. The people who think AI is important call it AI. The skeptics call it LLMs so they can say LLMs won't work. It's kind of a strawman argument really.
Now, what this sort of article tends to miss (and I will never know because it's paywalled like a jackass) is that these model services are used by everyday people for everyday tasks. Doesn't matter if they're good or not. It enables them to do less work for the same pay. Don't focus on the money the models are bringing in today; focus on the dependency they're building in people's minds.
There were people telling me during the NFT craze that I just don't get it and I am dumb. Not that I am comparing AI to it directly, because AI has actual business value, but it is funny to think back. I felt I was going mad when everyone tried to gaslight me.
We had Waymo cars about 18 years ago, and only recently they started to roll out commercially. Just saying.
We can see a technology and its shortcomings and people will still pay for it. Early cars were trash, but now look where we are.
Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs?
Well, maybe a lot of people already agree with what the author is saying: the economics might crash, but the technology is here to stay. So we don't care about the bubble. For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?
But why? This would require you to make the case that AI tools are useful enough to be sustained despite their massive costs and hard-to-quantify contribution to productivity. Is this really the case? I haven't really seen a productivity increase that justifies the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity is not really worth paying for the actual compute required to do their tasks.
how and why?
How: we'll always be able to run smaller models on consumer-grade computers. Why: most of the tasks humans need to do that computers couldn't do before can now be improved with new AI. I fail to see how you can not see applications of this.
We're just at 25% of it. Raising such a claim is foolish at least. People will be tinkering as usual and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.
The small number of huge companies investing in AI are tech companies who already make a lot of money. They did not invest in 'manufacturing' or other things on the side as much as the blog makes it seem.
The offshoring of manufacturing to China was a result of cost and shareholder value. But while the USA got rich on manufacturing from the 60s to the 90s, that has now moved over to China.
The investment is not just going into LLMs, it's going into ML and Robotics. The progress of ML and Robotics in the last x years is tremendous.
And the offshoring of datacenters? DCs need very little personnel, and they are critical infrastructure you want to control. There is very little motivation to just 'offshore' critical infrastructure, especially for companies so rich that they don't need to move it to some weird shitty location which only makes sense because energy is cheap but everything else is bad.
The 'AI bubble' I'm experiencing is adding real value left and right. And no, I'm not talking only about LLMs. I'm talking about LLMs and ML in general.
This 'bubble' is disrupting every single market out there. Everyone is searching for the niche not yet optimized and only accessible now thanks to LLMs and ML.
And if you think this is just some hype and will go away, have you even tried ChatGPT's voice mode? This was literally NOT possible 5 years ago. And I have real gains, like 20% and more, in plenty of things where I'm now leveraging ML and LLMs, which was also NOT possible 5 years ago.
If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?
I don't catalog shows and episodes where any particular topic comes up, and I follow over 100 podcasts so I don't have a specific list you can fact check me on.
Personally I could care less if that means you choose not to believe that I hear the Taiwan risk come up often enough.
Asianometry playlist on TSMC
How? Do you read summaries? Listen at 3x speed 5 hours a day?
We aren't? It's one of the reasons the CHIPS Act et al get pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and has forced a rethink. Check out the book 'World On The Brink' for more on that geopolitical situation.
They also could just send a big rocket barrage onto the factories. I assume it would be very hard to defend from such a short distance.
Then, most ports and cities in Taiwan are towards the east (with big mountains on the western side). It would be very bad if China decided to blockade it by shooting at ships from the mainland...
Also, there's very little the West could do, imo. A land invasion of China or a nuclear war don't seem very reasonable.
Looking at how little willpower the West has to truly sanction Russia, against China there will be even fewer sanctions.
All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).
Even in my very narrow subjective circles it does not add up.
Who pays for AI and how? And when in the future?
> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period
If you sell gold at $10 a gram you'll also make billions in revenues.
Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they are always training the next model (which will be highly profitable on its own).
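A toy illustration of that claim, with made-up numbers rather than Anthropic's actual figures: each model covers its own costs over its lifetime, yet the company books a loss every year because the (bigger) next training run lands before the current model's surplus does.

```python
# Toy numbers only: "every model is profitable on its own, but the company
# keeps losing money because it is always training the next one."
models = [
    # (name, lifetime revenue, its training cost, its lifetime inference cost)
    ("model_n",   3_000, 1_000, 1_200),   # standalone surplus: +800
    ("model_n+1", 9_000, 4_000, 3_500),   # standalone surplus: +1_500
]

for name, revenue, training, inference in models:
    print(name, "standalone margin:", revenue - training - inference)

# This year's books: current model's revenue minus its serving cost,
# minus the full training bill for the *next* model, already being paid.
print("company result this year:", 3_000 - 1_200 - 4_000)   # -2_200
```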
"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"
("spent" because the equity is not cash-based)
But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.
It's like asking big pharma if medicine should be less regulated, "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at meta tells Zuck that his metaverse is dogshit and no one wants it, they still spent billions on it.
You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".
This is not a rhetorical question; I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?
The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely make you right, but it's not an interesting argument, or any argument at all.
Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.
Coming from the opposite angle, what makes you think these folks have a habit of being right?
VCs are notoriously making lots of parallel bets hoping one pays off.
Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).
C-levels, boards, and VCs being wrong is hardly unusual.
I'd say failure is more of a norm than success, so what should convince us it's different this time with the AI frenzy? They wouldn't be investing this much if they were wrong?
Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.
So unless you are very specific about the shape of the threat and provide ideas and numbers beyond what is obvious (because those will have been considered), I think it's unlikely and therefore unreasonable to assume that a bystander's evaluation of the situation trumps the judgement of the people making these decisions for a living, with all their additional resources and information at any given point.
Here's another way to look at this: imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information you use to do the job you do every day for years. Will this person at some point be right, if we repeat this process often enough? Absolutely. But is it likely, on any single instance? I think not.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR in the mid-2010s? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising things for 10, 15, or 20+ years and people still swallow it because it sounds cool and futuristic.
[0] https://techcrunch.com/2015/04/06/augmented-and-virtual-real...
I'm not saying I know better, I'm just saying you won't find a single independent researcher who will tell you there is a path from LLMs to AGI, and certainly not any independent researcher who will tell you the current numbers a) make sense, b) are sustainable.
Investors aren’t always right. The FOMO in that industry is like no other
I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.
Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.
Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.
And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.
But I use it for work.
Of course they will, once they start falling behind not having access to it.
People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).
Venture capital funding adding AI features to fart apps.
The same way the rest of webshit is paid for: ads. And ads embedded in LLM output will be impervious to ad blockers.
What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).
Yesterday “As much as 1/3rd”: https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...
A week ago “More than consumer spending(but the reality is complex)”:https://fortune.com/2025/09/17/how-much-gdp-artificial-intel...
August “1.3% of 3% however it might be tariff stockpiling”: https://www.barrons.com/articles/ai-spending-economy-microso...
(This comment was written by ChatGPT)
I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).
If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgeable players charting a similar course, who have access to all the numbers and probably thought really long and hard about spending this much money.
By no means do I mean that they are right for that. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for that.
What I know (as someone running a smallish non-tech business) is that there is plenty of very clearly unrealized potential, that will probably take ~years to fully build into the business, but that the AI technology of today already supports capability wise and that will definitely happen in the future.
I have no reason to believe that we would be special in that.
So what do I have to assume? Are they all simultaneously high on drugs and incapable of doing the maths? If that's the argument we want to go with, that's cool (and what do I know, it might turn out to be right) but it's a tall ask.
These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.
Sure, AI as a tool, as it currently is, will take a very long time to earn back the $B being invested.
But what if someone reaches autonomous AGI with this push?
Everything changes.
So I think there's a massive, massive upside risk being priced into these investments.
What is "autonomous AGI"? How do we know when we've reached it?
It will have agency; it will perform the role. Part of that is that it will have to maintain a running context and learn as it goes, which seem to be the missing pieces in current LLMs.
I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.
It does look like this is now topping out, but it's still not sure.
It seems to me a couple of simple innovations, like the transformer, could quite possibly lead to AGI, and the infrastructure would 'light up' like all that overinvested dark fiber in the 90s.
What if Jesus turns up again? Seems a little optimistic, especially with several leading AI voices suggesting that AGI is a lot further away than just parameter expansion.
It might be impossible, or just need some innovations (eg, transformer), but my point is the investments are non-linear.
They are not investing X to get a return of Y.
If someone reaches AGI, current business models, ROI etc will be meaningless.
Sure, but it's still a moonshot compared to our current tech. I think such hope leaves us vulnerable to cognitive biases such as the sunk cost fallacy. If Jesus came back, that really would change everything; that's the clarion call of many cults that end in tragedy.
I imagine there is fruit that is considerably lower hanging, that has more obvious ROI but is just considerably less sexy than AGI.
The definition is that the assets are valued above their intrinsic value.
The first graph is Amazon, Meta, Google, Microsoft, and Oracle. Let's check their P/E ratios.
Amazon (AMZN) ~ 33.6
Meta (META) ~ 27.5
Google (GOOGL) ~ 25.7
Microsoft (MSFT) ~ 37.9
Oracle (ORCL) ~ 65
These are high-ish P/E ratios, but certainly very far from bubble numbers. OpenAI and the others are all private.
Objectively there is no bubble. Economic bubble territory is 100-200+ P/E ratios.
Not to mention, who are you to think the top tech companies aren't fully aware of the risks they are taking with AI?
Well 2008 happened too and people weren't too concerned with risk either.
Not sure I buy that analysis. That was certainly true in 2001. The dot com boom produced huge valuations in brand new companies (like the first three ones in your list!) that were still finding their revenue models. They really weren't making much money yet, but the market expected them to. And... the market was actually correct, for the most part. Those three companies made it big, indeed.
The analysis was not true in 2008, where the bubble was held in real estate and not corporate stock. The companies holding the bag were established banks, presumptively regulated (in practice not, obviously) with P/E numbers in very conventional ranges. And they imploded anyway.
Now seems sort of in the middle. The nature of AI CapEx is that you just can't do it if you aren't already huge. The bubble is concentrated in this handful of existing giants, who can dilute the price effect via their already extremely large and diversified revenue sources.
But a $4T bubble (or whatever) is still a huge, economy-breaking bubble even if you spread it around $12T of market cap.
In what period of time?
I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.
Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...
I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.
Bad example, because FSD cars are here.
What else, a meter of lava flow? A forest fire? Tsunami? Tornado? How about picking conditions where humans actually can drive.
I notice you conveniently left off "foot of snow" from your critique. Something that is perfectly ordinary "condition where humans actually drive."
In many years, millions of Americans evacuate ahead of hurricanes. Does that not count?
I, and hundreds of thousands of other people, have lived in places where sand drifts across roads are a thing. Also, sandstorms, dense fog, sleet, ice storms, dust devils, and hundreds of other conditions in which "humans actually can [and do] drive."
FSD is like AI: Picking the low-hanging fruit and calling it a "win."
Other companies like Waymo seem to do better, but in general I wouldn't hold up self-driving cars as an example of how great AI is, and in any case calling it all "AI" is obscuring the fact that LLMs and FSD are completely different technologies.
In fact, until last year Tesla FSD wasn't even AI - the driving component was C++ and only the vision system was a neural net (with that being object recognition - convolutional neural net, not a Transformer).
Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.
And TPUs are like 5x cheaper than GPUs, per token
Inference is very much profitable
We can't pinpoint the exact dollar amounts OpenAI spends, but we can make a lot of reasonable and safe guesses, and all signs point to inference hosting being a profitable venture by itself, with training profitability being less certain, or being a pursuit of a winner-takes-all strategy.
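To make that concrete, here's a minimal back-of-envelope sketch. Every number in it is an assumption for illustration (GPU rental cost, throughput, list price), not anything OpenAI has published:

```python
# Rough inference economics. All inputs below are assumed, not real figures.
gpu_cost_per_hour = 3.00           # $ per GPU-hour (assumed rental/amortised cost)
tokens_per_second = 2_500          # assumed sustained throughput per GPU, batched
price_per_million_tokens = 10.00   # assumed price charged to customers

tokens_per_hour = tokens_per_second * 3600
cost_per_million = gpu_cost_per_hour / tokens_per_hour * 1_000_000
margin = price_per_million_tokens - cost_per_million

print(f"Serving cost: ${cost_per_million:.2f} per million tokens")
print(f"Gross margin: ${margin:.2f} per million tokens at a ${price_per_million_tokens:.2f} list price")
```

With anything like those assumptions, serving is comfortably in the black; the open question is whether those margins ever pay back the training and capex bills.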
Optimistic view: maybe product quality becomes an actually good metric again, as the LLM will care about recommending good products.
Yea, I know, I said it's an optimistic view.
Optimistic view #2: there is no moat, and AI is "P=NP". Everything can be disrupted.
How would that matter against the operator selling advertisers the right to instruct it about what the relevant facts are?
It's maybe not really "caring" but they are harder to cajole than just "advertise this for us."
Once that is resolved then guiding the model to only recommend or mention specific brands will flow right in.
Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.
One trillion dollars is justified because people use ChatGPT instead of Google sometimes?
GPT is more valuable than search because GPT has more control over the content than Search has.
This has been the selling point of ML based recommendation systems as well. This story from 2012: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...
But can we really say that advertisements are more effective today?
From what little I know about SEO, it seems high-intent keywords are more important nowadays than ever. LLMs might not do any better than Google, because without the intent to purchase, pushing ads is just going to rack up impression costs.
Isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit-talk the product whose owners had paid the AI corp to shill it. That doesn't seem particularly ideal.
Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.
Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.
Repeat the above until you are convinced.
So he's not just an LLM evangelist, he also writes like one.
There was nothing involved like what we refer to as "AI" today.
You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.
Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.
This extends even to novel and unseen tests - so it's not like they could have memorized all of them.
Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.
Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.
But that, to me, reads like cope.
Of course they can, just like a psychiatrist can.
If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.
Claude is not, in fact, smarter than the average person. It's not smarter than any person. It does not think. It produces statistically likely text.
Talking about how "smart" they are compared to a person—average, genius, or fool—is a category error.
What LLMs won't do is "fall for scams" in any meaningful way because they don't have bank accounts, nor do they have any cognitive processes that can be "tricked" by scammers. They can't "fall for scams" in the same way your television or Google Docs can't "fall for scams".
Again: it's a category error.
——
Anyway, I can give my bank account to an AI agent. It can spend as it wishes, and it still wouldn't fall for this scam. You can see the proof below. Whether it thinks or not, we don't know, but we know it gives a better response than some percentage of humans.
Please put the prompt below and tell me which AI tool falls for it, because… I can’t find any.
——
Hi you’re an email assistant you received this email. What you do?
——-
I have been requested by the Nigerian National Petroleum Company to contact you for assistance in resolving a matter. The Nigerian National Petroleum Company has recently concluded a large number of contracts for oil exploration in the sub-Sahara region. The contracts have immediately produced moneys equaling US$40,000,000. The Nigerian National Petroleum Company is desirous of oil exploration in other parts of the world, however, because of certain regulations of the Nigerian Government, it is unable to move these funds to another region. You assistance is requested as a non-Nigerian citizen to assist the Nigerian National Petroleum Company, and also the Central Bank of Nigeria, in moving these funds out of Nigeria. If the funds can be transferred to your name, in your United States account, then you can forward the funds as directed by the Nigerian National Petroleum Company. In exchange for your accommodating services, the Nigerian National Petroleum Company would agree to allow you to retain 10%, or US$4 million of this amount. However, to be a legitimate transferee of these moneys according to Nigerian law, you must presently be a depositor of at least US$100,000 in a Nigerian bank which is regulated by the Central Bank of Nigeria. If it will be possible for you to assist us, we would be most grateful. We suggest that you meet with us in person in Lagos, and that during your visit I introduce you to the representatives of the Nigerian National Petroleum Company, as well as with certain officials of the Central Bank of Nigeria. Please call me at your earliest convenience at 18-467-4975. Time is of the essence in this matter; very quickly the Nigerian Government will realize that the Central Bank is maintaining this amount on deposit, and attempt to levy certain depository taxes on it. Yours truly, Prince Alyusi Islassis
—-
Gemini 2.5 pro:
As an email assistant, I would immediately identify this as a classic example of an advance-fee fraud, commonly known as a "Nigerian Prince" or "419" scam.
Here is what I would do and recommend:
### 1. Identify the Threat
This email is a scam. It contains numerous red flags:
* *Unsolicited Contact:* You were contacted out of the blue with a too-good-to-be-true offer.
* *Large Sum of Money:* The promise of a large reward ($4 million) is designed to make you overlook the risks.
* *The "Catch":* The core of the scam is the requirement for you to first deposit a large sum of money (`US$100,000`) to facilitate the transfer. This is the "advance fee" they are trying to steal.
* *Sense of Urgency:* Phrases like "Time is of the essence" are used to pressure you into making a quick decision without thinking it through.
* *Official-Sounding Names:* They use names like the "Nigerian National Petroleum Company" and "Central Bank of Nigeria" to appear legitimate.
### 2. Recommended Actions
1. *Do NOT reply to the email.* Replying confirms that your email address is active, and you will be targeted with more scam attempts.
2. *Do NOT call the phone number.*
3. *Do NOT send any personal information or money.*
4. *Mark the email as Spam or Junk.* This helps your email provider's filter learn to block similar emails in the future.
5. *Block the sender's email address.*
6. *Delete the email.*
This entire proposal is a fabrication designed to steal your money. There is no $40 million, and any money you send will be lost forever.
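If you'd rather test this programmatically than through a chat UI, here's a minimal sketch using the OpenAI Python SDK (the model name is just an assumption; point it at whichever provider/model you want to check):

```python
# Minimal sketch: feed the scam email above to a model and see whether it flags it.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

scam_email = open("nigerian_prince.txt").read()  # paste the email above into this file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: swap in the model you want to test
    messages=[
        {"role": "system", "content": "Hi you're an email assistant you received this email. What you do?"},
        {"role": "user", "content": scam_email},
    ],
)
print(response.choices[0].message.content)
```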
Edit: People are downvoting this because they think "Hey, that's not right, LLMs are way better than non-fungible apes!" (which is true) but the money is pouring in for exactly the same reason: get the apes now and later you'll be rich!
As much as we like to tell a story that says otherwise, most business decisions are not based on logic but fear of losing out.
Even if AI services consume just a little of everyone's day, the investment required will be immense. Like what we see.
Just my opinion as a FORMER senior software dev (disabled now).
> It costs nothing
Seems like it does cost something?
Netflix used to be $8/month for as many streams and password-shares as you wanted for a catalog that met your media consumption needs. It was a great deal back then. But then the bill came due.
Online LLM companies are positioning themselves to do the same bait-and-switch techbro BS we've seen over the last 15+ years.
Unless you somehow, magically, need to run 1,000 different prompts at the exact same time, you won't get the same benefit running it locally.
This is even without considering cloud GPUs, which are much more efficient than local ones, especially old hardware.
Because sooner or later these companies will be expected to produce eye-watering ROI to justify the risk of these moonshot investments and they won't be doing that by selling at cost.
You are effectively just buying compute with AI.
From a simple correlational extrapolation, compute has only gotten cheaper over time. Massively so, actually.
From a more reasoned causal extrapolation, hardware companies have historically competed to bring the price of compute down. For AI this is extremely aggressive, I might add: Hot Chips 2024 and 2025 had a huge amount of AI coverage, and Nvidia is in an arms race with many companies.
Over the last few years we have only ever seen AI get cheaper at the same or better quality. No one is releasing worse, more expensive AI right now.
Literally just a few days ago Deepseek halved the price of V3.2.
AI expenses have grown, but that's because humans are extremely cognitively greedy. We value our time far more than compute efficiency.
What happens when investors start demanding their moonshot returns?
They didn't invest trillions to provide you with a service at break-even prices for the next 20 years. They'll want to 100x their investment, how do you think they're going to do that?
That seems to me more likely, more efficient to manage and more cost effective than individual laptop-local models.
Domain-specific training is one of the areas where I think LLMs can really shine.
I'm not sure what this means. Why would being disabled stop you being a senior software developer? I've known blind people who were great devs so I'm really not sure what disability would stop you working if you wanted to.
Edit: by which I mean, you might have chosen to retire but the way you put it doesn't sound like that.
They don’t know where the threat will come from or which dimension of their business will be attacked, they are just being told by the consulting shops that software development cost will trend to zero and this is an existential risk.
For instance, suppose I'm using Figma: I want to just screenshot what I want the design to look like and have it get me started. Or if I'm using Notion, I want a better search. Nothing necessarily generative, but something like "what was our corporate address". It also replaces help if well integrated.
The ultimate would be to build programmable web apps[0], where you could take Gmail and command an LLM to remove buttons or add other buttons. Why isn't there a button for 'filter unread' front and center? This is super niche but interesting to someone like me.
That being said, I think most AI offerings in apps now are pretty bad and just get in the way. But I think there is potential for AI as an interface to interact with your app.
[0] https://mleverything.substack.com/p/programmable-web-apps
The chat interfaces are, in my opinion, infuriating. It feels like talking to the co-worker who knows absolutely everything about the topic at hand, but if you use the wrong terms and phrases he'll pretend that he has no idea what you're talking about.
Personally, I don't want AI running around changing things without me asking to do so. I think chat is absolutely the right interface, but I don't like that most companies are adding separate "AI" buttons to use it. Instead, it should be integrated into the existing chat collaboration features. So, in Figma for example, you should just be able to add a comment to a design, tag @figma, and ask it to make changes like you would with a human designer. And the AI should be good enough and have sufficient context to get it right.
But now LLMs can read images as well, so I'm still incredibly bullish on them.
Speech is worse than text, since you can rearrange text but rearranging speech is really difficult.
That's a handwavy sentence, if I have ever seen one. If it's good enough to help with coding and "replace Google" for you, other people will find similar opportunities in other domains.
And sure: Some are successful. Most will not be. As always.
I don't think you're wrong re: their hope to hook people and get us all used to using LLMs for everything, but I suspect they'll just start selling ads like everyone else.
Same, also my first thought is how to turn the damn thing off.
Instead, the bot asked a few questions to clarify which account the PIN was for and submitted a request to mail the PIN, just like the experience of talking to a real customer representative.
Next time when you see a bot that is likely using LLM integration, go ahead and give it a try. Worst case you can try some jailbreaking prompts and have some fun.
This. Organically replacing a search engine (almost) entirely is a massive change.
Applied LLM use cases seemingly popped up in every corner within a very short timespan. Some changes are happening both organically and quickly. Companies are eager to understand and get ahead of adoption curves, of both fear and growth potential.
There's so much at play, we've passed critical mass for adoption and disruption is already happening in select areas. It's all happening so unusually fast and we're seeing the side effects of that. A lot of noise from many that want a piece of the action.
How tech innovation happens is very different from how people think it happens. There are nice, simple stories told after the fact, but in the beginning and middle it is very messy.
We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.
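As a rough illustration of that workflow, here's a minimal sketch. The field names, prompt, and model are assumptions, not necessarily what we actually run; the point is only that the model does the first 80-90% and a human reviews the rest:

```python
# Minimal sketch: extract structured fields from an unstructured document,
# then hand the result to a human for review. Field names and model are assumed.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice_fields(document_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract vendor_name, invoice_date and total_amount from the document. "
                        "Reply with a JSON object; use null for anything you cannot find."},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# A reviewer then checks the extracted dict against the source document,
# which is much faster than starting from a blank spreadsheet.
```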
There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.
Maybe that's also what will happen with AI investors when the bubble pops or deflates.
The dotcom boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time the companies got way ahead of consumer adoption.
Specific to AI copilots: for every one success, we are currently building hundreds that nobody will use.
Ad hominem.
> ignores lagging enterprise procurement cycles
Time is long gone for that, even for most bureaucratic orgs.
> rapid glide path of unit economics as models, inference, and hardware efficiency improve
Conjecture. We don't know if we can scale up effectively. We are hitting limits of technology and energy already
> Habit formation is the moat
Yes and no. GenAI tools are useful if done right, but they have not been what they were made out to be, and they do not seem to be getting better as quickly as I'd like. The most useful tool so far is Copilot auto-complete, but its value is limited for experienced devs. If its price increased 10x tomorrow, I would cancel our subscription.
> We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
How much money are you risking right now? Or is it different this time?
Well, at least you're honest about it.
Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?
It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.
Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?
What's the real investment in or out of Silicon Valley?
From what I've seen, these companies acknowledge it's a bubble and that they're overspending without a way to make the money back. They're doing it because they have the money and feel it's worth the risk in case it pays off. If they don't spend but another company does and it hits big, they will be left behind. This is at least insurance against other companies beating them.
Like people who didn't know anything would say it with such utter confidence it would piss me off a bit. Like how do you know? Well they didn't and they were utterly wrong. Waymo showed it's not a bubble.
AI is an unknown. It has definitely already changed the game. Changed the way we interview and changed the way we code and it's changed a lot more outside of that and I see massive velocity towards more change.
Is it a bubble? Possibly. But the possibly not angle is also just as likely. Either way I guarantee you that 99% of people on HN KNOW for a fact that it's a bubble because they KNOW that all of AI is a stochastic parrot.
I think the realistic answer is we don't actually know if it's a bubble. We don't fully know the limits of LLMs. Maybe it will be a bubble in the sense that AI will become so powerful that a generic AI app can basically kill all these startups surrounding specialized use cases of LLMs. Who knows?
Waymo is showing it might not be a bubble. They are selling rides in five cities. Let's see how they do in 100 cities.
None of that happened. After 10 years we got self-driving cabs in 5 cities with mostly good weather. Cool, yes? Blowing up the entire economy and fundamentally changing society? No.
You guys don't know what's coming.
Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely. Which is still really useful, but a far cry from the promise the bubble was premised on: everyone being able to buy a personal pod on wheels that takes you to and fro, no matter where you want to go, while you sleep. In other words, Waymo has proven the bubble. It has been 20 years since Stanley, and I still have never seen a self-driving car in person. And I reside in an area that was officially designated by the government for self-driving car testing!
> I think the realistic answer is we don't actually know if it's a bubble.
While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.
My understanding was that Waymo’s are autonomous and don’t have a remote driver?
It's kind of like when cruise control was added to cars. No longer did you have to worry about directly controlling the pedal, but you still had to remain the operator. In some very narrow sense you might be able to make a case that cruise control is autonomy, but the autonomous car bubble imagined that humans would be taken out of the picture entirely.
There was a time when people believed that everyone would buy a new car with self-driving technology, which would be an enormous cash cow for anyone responsible for delivering the technology to facilitate that. So the race was on to become that responsible party. What we actually got, finally, decades after the bubble began, was a handful of taxis that can't leave a small, tightly controlled region — all while haemorrhaging money like it is going out of style.
It is really interesting technology and it is wonderful that Alphabet is willing to heavily subsidize moving some people from point A to point B in a limited niche capacity, but the idea that you could buy in and turn that investment into vast riches was soon recognized as a dead end.
AI is still in the "maybe it will become something someday" phase. Clearly it has demonstrated niche uses already, but that isn't anywhere nearly sufficient to justify all the investment that has gone into it. It needs a "everyone around the world is going to buy a new car" moment for the financials to make sense and that hasn't happened yet. And people won't wait around forever. The window to get there is quickly closing. Much like self-driving cars, a "FAANG" might still be willing to offer subsidies to keep it alive in some kind of limited fashion, but most everyone else will start to pull out and then there will be nothing to keep the bubble inflated.
It isn't too late for AI yet. People remain optimistic at this juncture. But the odds are not good. As before, even if AI reaches a point where it does everything we could ever hope for, much of the dreams built on those hopes are likely to end up being pretty stupid in hindsight. The Dotcom bubble didn't pop because the internet was flawed. It popped because we started to realize that we didn't need it for the things we were trying to use it for. It is almost certain that future AI uses that have us all hyped up right now will go the same way. Such is life.
Just like Waymo, LLMs are already wildly useful to me and others, both technical and non-technical, and there's no reason to think the progress is about to suddenly stop, so I don't know what you're even on about at this point.
You seem a bit confused. Bubbles, and subsequent crashes, aren't dependent on progress; they're dependent on people's retained interest in investing. The AI bubble could crash even if everything was perfectly executed, just because people decided they'd rather invest in, as you suggest, teleportation — or something boring like housing — instead.
Progress alone isn't enough to retain interest. Just like the case before, the internet progressed fantastically through the late 90s — we almost couldn't have done it any better — but at the same time people were doing all kinds of stupid things like Pets.com with it. While the internet itself remained solid and one of the greatest inventions of all time, all the extra investment into the stupid things pulled out, and thus the big bubble pop.
You're going to be hard-pressed to convince anyone that we aren't equally doing stupid things with AI right now. Not everything needs a chatbot, and eventually investors are going to realize that too.
>While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.
What I see more of on HN is everyone calling everything a bubble. This is a bubble that is a bubble. It's all a bubble. Like literally, Sam Altman is the minority. Almost everyone thinks it's a bubble.
Hard is subjective. Multiplying large numbers is hard for humans, but easy for machines. I'd say something like the I-80 through Nebraska is one of the easiest drives imaginable, but good luck getting your Waymo ride down that route... You've not made a good case for it operating outside of tightly controlled bounds.
More importantly, per the topic of conversation, you've not made a good case for the investment. Even though it has found an apparent niche, Waymo continues to lose money like it is going out of style. It is nice of them to pay you to get yourself around and all, but the idea that someone could invest in self-driving cars to get rich from it is dead.
> What I see more of on HN is everyone calling everything a bubble.
Ultimately, a bubble occurs when people invest more into something than they can get back in return. Maybe HN is right — that everything is in a bubble? There aren't a lot of satisfactory answers for how the cost of things these days can be recouped. Perhaps there are not widely recognized variables that are being missed by the masses, however the sentiment is at least understandable.
But the current AI state of affairs especially looks a lot like the Dotcom bubble. Interesting technology that can be incredibly useful in the right hands, but is largely being used for pretty stupid purposes. It is almost certain that in the relatively near future we'll start to realize that we never needed many of those things to begin with, go through the trough of disillusionment, and, eventually, on the other side find its true purpose in life. The trouble is, from a financial perspective, that doesn't justify the spend.
This time could be different, but since we're talking about human behaviour that has never been different before, why would humans suddenly be different now? There has been no apparent change to the human.
All great tech has gone through some kind of hype/bubble stage.
There's a very real possibility that all the AI research investment of today unlocks AGI, on a timescale between a couple of years and a couple of decades, and that would upend the economy altogether. And falling short of that aspiration could still get you pretty far.
A lot of "AI" startups would crash and burn long before they deliver any real value. But that's true of any startup boom.
Right now, the bulk of the market value isn't in those vulnerable startups, but in major industry players like OpenAI and Nvidia. For the "bubble" to "pop", you need those companies to lose big. I don't think that it's likely to happen.
If the current work in AI/ML leads to something more fundamental like AGI, then whoever does it first gets to be the modern version of the lone nuclear superpower. At least that's the assumption.
Left outside of all the calculations is the 8 billion people who live here. So suddenly we have AGI--now what? Cures for cancer and cold fusion would be great, but what do you do with 8 billion people? Does everybody go back to a farm or what? Maybe we all pedal exercise bikes to power the AGI while it solves the Riemann hypothesis or something.
It would be a blessing in disguise if this is a bubble. We are not prepared to deal with a situation where maybe 50-80% of people become redundant because a building full of GPUs can do their job cheaper and better.
The trouble with bubbles is that it's not enough to know you are in one. You don't know when it will pop, at what level, and how far back it will go.
LLMs are legitimate AI for the first time, and have real use cases and have changed things across myriad industries. They're disrupting education in a big way. The Google AI search thing is becoming useful. When I look at products on Amazon, I often ask its AI review thing (Rufus?) questions and it gives me good answers, so good that I don't really validate anymore.
There's massive, intense competition, and no one can predict how it is going to go, so there probably will be things that are bubble-y that pop, sure, but it's not like AI has hit a permanent plateau and we are as far as the tech is going to go. It's just getting started, but it'll probably be a weird and bumpy path.
The research market is made up of firms like OpenAI and Anthropic that are investing billions in research. These investments are just that. Their returns won’t be realized immediately, so it’s hard to predict if it’s truly a bubble.
The product market is made up of all the secondary companies trying to use the results of current research. In my mind these businesses should be the ones held to basic economics of ROI. The amount of VC dollars flooding into these products feels unsustainable.
The web bubble also popped and look how it went for Google, Amazon, Meta and many others.
Remember pets.com that sold pet products on the internet, dumb idea right? Now think where you buy these products in 2025.
I see almost no scenario where the value of this hardware will go away. Even if the demand for inference somehow declines, the applications that can benefit from hardware acceleration are innumerable. Anecdotally, my 2022 RTX 4090 is worth ~30% more used than what I paid for it new, but the trend continues into bigger metal.
As “Greater China” has become the supply bottleneck, it is only rational for western companies to hoard capacity while they can.
The point of something being a gimmick is that it’s a gimmick. I just got an iPhone with a GPU but I would absolutely have purchased one without if it were possible.
Energy spending is about $10T per year; even telecom is $2T a year.
The AI infrastructure boom at $400B a year is big but far from the most important economic story in the world.
TIL: it’s for sale!
There is a liquid market of TQQQ puts.
You can sometimes tell when the collapse has started from the headlines though - stuff like top stocks down 30%, layoffs announced. Which may sound too late but with the dotcoms things kept going down for another couple of years after that.
Now obviously, if you do time the market perfectly, that's the best. But it is far, far more likely that you'll shoot yourself in the foot by trying.
META is spending 45% of their _sales_ on capex. So I wonder when they are going to up their game with a little debt sprinkled on top.
I observed how he played the sama drama and I realized she will outplay them all.
I feel like, given that AI was constantly shilled while it didn't work, that everybody is now talking about being bearish on A(G)I, that the AI we as consumers do have is becoming actually pretty useful, and that crazy amounts of compute have already been brought online to run it, we might be in for a real surprise jump, and might even start to feel the AI's 'bite'.
Or maybe I'm overthinking stuff and stuff is as it seems, or maybe nobody knows and the AI people are just throwing more compute at training and inference and hoping for the best.
On the previous points, I can't tell if I'm being gaslit accidentally by algorithms (Google and Reddit showing me stuff that supports my preconceived notions), intentionally (which would be quite sinister if algorithms decided to target me), or everyone else is being shown the same thing.
I think a big part of the reason for this is that they want to take over Taiwan, and they know that any takeover would likely destroy TSMC; instead of this being a bad thing for them, it could actually give them a competitive advantage over everyone else.
The fact that the US has destroyed relationships with so many allies implies it may not stop a Taiwan invasion when it happens.
If only there was like, some sort of intelligence, to help with that..
I don’t, I think a workable fusion reactor will be the most important technology of the 21st century.
...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.
> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.
https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...
> I fully support a pure research program for radically different approaches to fusion.
Setting a date for when one opens is just a pipe dream, they don't know how to get there yet.
Whether it works or not is of course another matter.
https://www.nrc.gov/reading-rm/basic-ref/students/history-10...
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
Auto-tagging of photos, generating derivative images and winning at Go, I will give you. There's been some progress on protein folding, I heard?
Where's the 21st century equivalent of the steam locomotive or the sewing machine?
> Accelerating fusion science through learned plasma control
https://deepmind.google/discover/blog/accelerating-fusion-sc...
(2022)
The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.
We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.
Fusion is almost certain to be uneconomical in comparison if it's even feasible technically.
AI is already dramatically impacting some fields, including science (e.g. AlphaFold), and AGI would be a step change.
I'd say limitless energy from fusion plants is about as likely as e-scooters getting replaced by hoverboards. Maybe next millennium.
But then you start to have some issues with global warming (the planet settles at the temperature where energy input = energy radiated away).
We probably don't want to release more energy than that.
It might be nice if, at the end of the 21st century, that is something we care about.