* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
- https://www.amazon.co.uk/Technological-Revolutions-Financial...
The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.
e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.
I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.
For a 4-day week to really happen at scale, I'd expect we similarly need the government to decide to roll it out rather than workers' groups pushing it from the bottom up.
Most new tech is like that - a period of mania, followed by a long tail of actual adoption where the world quietly changes
Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.
Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.
LLMs are cool, but if they can't be relied on to do real work, maybe they're not change-the-world cool? More like $30-40B-market cool.
EDIT: Just to be clear here. I'm mostly talking about "agents"
It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.
I use Glean at work and it's great.
There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.
The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.
That's the world-changing part, not running a pretty successful biz with a useful product. That's the part where I haven't seen meaningful adoption.
This is currently pitched as something that has a nonzero chance of destroying all human life; we can't settle for "Eh, it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."
> Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.

Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.
[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
In the first few years of any new technology, most people investing in it lose money because the transition and experimentation costs are higher than the initial returns.
But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.
I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.
I also think that throwing trillions of dollars at AI for "a better Google search, code snippet generator, and obscure bug finder" is contentious, and a lot of people oppose it for that reason.
I personally still think it's kind of crazy that we have technology that can do things we couldn't do just ~2 years ago, even if it just stagnates right here. I'm still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).
At the same time, I don't know if it's worth trillions of dollars, at least right now.
So all claims on this thread can be very much true at the same time, just depends on your perspective.
Is she more productive though?
People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.
Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.
/s
What kind of question is that? Seriously. Are some people here so naive as to think that tens of millions of people out there don't know when something they choose to use repeatedly, multiple times a day, every day, is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?
What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?
This, to me, feels incredibly plausible.
Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.
"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."
AI can be a tool for 10xers to go 12x, but more likely it's that AI is the best slack off tool for slackers to go from 0.5x to 0.1x.
I've seen it happen to good engineers. Tell me you've seen it too.
Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.
This "just believe me" mentality normally comes from scams.
We need data, not beliefs and current data is conflicting. ffs.
It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.
But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it and the next few months managing the technical debt I didn't notice when I merged it.
And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
It doesn't say she chooses to use it; it says she can't work without it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily; they monitor the usage statistics and are updating engineering leveling guides to make sufficient AI usage a requirement for promotions. So I can't work without AI anymore, but that doesn't mean I choose to.
This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.
Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Current AI tools may not beat the best programmers, but they definitely improve the average programmer's efficiency.
Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.
But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.
For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.
In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.
It's a damn good tool, I use it, I've learned the pitfalls, it has value but the inflation of potential value is, by definition, a bubble...
Do we really need more efficient average programmers? Are we in a shortage of average software?
Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.
Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.
It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.
I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files: essentially changing one constructor parameter in all derived classes of a common base class and adding an import statement. In the end it managed ~80% of the job, but only after first messing it up entirely (a few minutes of waiting), then asking again after another 5 minutes whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.
Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
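(For context, the shape of the refactor was roughly this; a made-up Python sketch, not the actual codebase, with invented class and parameter names:)

    # Illustrative only: one extra constructor parameter threaded through
    # ~30-40 subclasses of a common base class, plus one new import per file.
    from logging import Logger, getLogger

    class BaseExporter:
        def __init__(self, path: str, logger: Logger):  # new `logger` param
            self.path = path
            self.logger = logger

    class CsvExporter(BaseExporter):
        def __init__(self, path: str, logger: Logger):  # same change in each subclass
            super().__init__(path, logger)

    # ...and so on for every other derived class across the ~30-40 files.
    exporter = CsvExporter("out.csv", getLogger(__name__))

A purely mechanical change like this is the kind of task you'd hope a tool could do near-100% reliably.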
I can't believe some of the generated stuff is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.
Things like deepwiki are useful too for open source work.
For me, though, the core problem with AI programming tools is that they target a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.
Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.
The business model is data collection about you on steroids, and the bet is that the winning company will eclipse Meta in value.
It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.
I'm not sure, though, that they make enough revenue, or what the moat will be if the best models more or less converge around the same level. For most normies, it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.
Odd way to describe ChatGPT which has >1B users.
AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.
Adoption is far from low.
Does that really count as adoption, when it has been introduced as a default feature?
HN seems to think everyone is like the people in this bubble here, who think AI is completely useless and want nothing to do with it.
Half the world is interacting with it on a regular basis already.
Are we anywhere near AGI? Probably not.
Does it matter? Probably not.
Inference costs are dropping like a rock, and usage is continuing to skyrocket.
I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.
He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.
In the last few months, every single non-programmer friend I've met has ChatGPT installed on their phone (N>10).
Out of all the people that I know enough to ask if they have ChatGPT installed, there is only one who doesn't have it (my dad).
I don't know how many of them are paying customers though. IIRC one of them was using ChatGPT to translate academic writing so I assume he has pro.
Adoption is high with young people.
Have you ever used an LLM? I use it every day to help me with research and completing technical reports (which used to be a lot more of my time).
Of course you can't just use it blindly, but it definitely adds value.
Nobody doubts it works; everybody doubts Altboy when he asks for $7 trillion.
Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs it gets pretty muddy if it is a value add or not.
but on the other side, the reason everyone is so gung ho on all this is because these models basically allow for the true personalization of everything. They can build up enough context about you in every instance of you doing things online that they can craft the perfect ad experience to maximize engagement and conversion. that is why everyone is so obsessed with this stuff. they don't care about AGI, they care about maintaining the current status quo where a large chunk of the money made on the internet is done by delivering ads that will get people to buy stuff.
Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
The emerging reasoning capabilities are very promising, able to generate new theories and make scientific experiments in easy to test fields, such as in vitro drug creation. It doesn't matter if the LLM hallucinates 90% of the time, if it correctly reasons a single time and it can create even a single new cancer drug that passes the test.
These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.
From an economy-wide perspective, why does that matter?
> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.
Sounds to me like there is real competition, which generally keeps prices down, it doesn't push them up! It's true VCs may not end up happy.
Smells like complete and total bullshit to me.
> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
Can Sora2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to re-loop from the last frame, with those obnoxious cuts that the video you pointed to has? Can it reshoot the exact same scene with just one element altered?
All the video models right now are only good at making short, low-res, barely post-processable video: the kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello, cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline? Yeah, maybe, why not; automate? I won't hold my breath).
And as for the argument about their future capabilities, well... for 50+ years we've been told fusion is 20 years away.
Btw, the same argument can be made for LLM and image-gen tech in any creative purpose. People severely underestimate just how much editing, re-work, purpose and pre-production goes into any major creative endeavor. Most models are just severely ill-suited for that work. They can be useful for some stuff (specifically, AI-driven image fill works decently for editing images, for example), but overall, as of right now, they are mostly good at making low-quality content. Which is fine, I guess; there is a market for it, but it was already a market that was not keen on spending money.
Lay off. Only respite I get from this hell world is cute Rottweiler videos
I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"
to
"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"
The move to RL has specifically made me skeptical of the size of the buildout.
The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.
We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.
Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.
Second Edit:
Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.
Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".
Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.
Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.
https://www.nytimes.com/2025/09/28/opinion/ezra-klein-podcas...
Now, what this sort of article tends to miss (and I will never know because it's paywalled like a jackass) is that these model services are used by everyday people for everyday tasks. It doesn't matter if they're good or not. They enable people to do less work for the same pay. Don't focus on the money the models are bringing in today; focus on the dependency they're building in people's minds.
There were people telling me during the NFT craze that I just didn't get it and that I was dumb. Not that I'm comparing AI to it directly, because AI has actual business value, but it is funny to think back. I felt like I was going mad when everyone tried to gaslight me.
We had Waymo cars about 18 years ago, and only recently they started to roll out commercially. Just saying.
Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs?
Well, maybe a lot of people already agree with what the author is saying: the economics might crash, but the technology is here to stay. So we don't care about the bubble.

For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?
But why? This would require you to make the case that AI tools are useful enough to be sustained despite their massive costs and hard-to-quantify contribution to productivity. Is this really the case? I haven't really seen a productivity increase that justifies the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity gain is not really worth paying for the actual compute their tasks require.
We're just at 25% of it. Raising such a claim is foolish at least. People will be tinkering as usual and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.
If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?
We aren't? It's one of the reasons the CHIPS Act et al get pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and has forced a rethink. Check out the book 'World On The Brink' for more on that geopolitical situation.
All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).
Even in my very narrow subjective circles it does not add up.
Who pays for AI and how? And when in the future?
> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period
If you sell gold at $10 a gram you'll also make billions in revenues.
But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.
It's like asking big pharma whether medicine should be less regulated: "all the experts agree", well yeah, their paychecks depend on it. Same reason no one at Meta tells Zuck that his metaverse is dogshit and no one wants it; they still spent billions on it.
You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".
This is not a rhetorical question; I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?
The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely turn out to make you right, but it's not an interesting argument, or any argument at all.
Investors aren’t always right. The FOMO in that industry is like no other
Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.
And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.
What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).
Yesterday “As much as 1/3rd”: https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...
A week ago, “More than consumer spending (but the reality is complex)”: https://fortune.com/2025/09/17/how-much-gdp-artificial-intel...
August “1.3% of 3% however it might be tariff stockpiling”: https://www.barrons.com/articles/ai-spending-economy-microso...
(This comment was written by ChatGPT)
I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).
If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgeable players charting a similar course who have access to all the numbers and have probably thought really long and hard about spending this much money.
By no means do I mean that they are right for that. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for that.
What I know (as someone running a smallish non-tech business) is that there is plenty of very clearly unrealized potential that will probably take ~years to fully build into the business, but that the AI technology of today already supports capability-wise, and that will definitely happen in the future.
I have no reason to believe that we would be special in that.
These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.
Sure, AI as a tool, as it currently is, will take a very long time to earn back the $B being invested.
But what if someone reaches autonomous AGI with this push?
Everything changes.
So I think there's a massive, massive upside risk being priced into these investments.
What is "autonomous AGI"? How do we know when we've reached it?
It will have agency, it will perform the role. A part of that is that it will have to maintain a running context, and learn as it goes, which seem to be the missing pieces in current llms.
I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.
The definition is that the assets are valued above their intrinsic value.
The first graph is Amazon, Meta, Google, Microsoft, and Oracle. Let's check their P/E ratios.
Amazon (AMZN) ~ 33.6
Meta (META) ~ 27.5
Google (GOOGL) ~ 25.7
Microsoft (MSFT) ~ 37.9
Oracle (ORCL) ~ 65
These are highish P/E ratios, but certainly very far from bubble numbers. OpenAI and the others are all private.
Objectively there is no bubble. Economic bubble territory is 100-200+ P/E ratios.
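For anyone unfamiliar, here's the arithmetic behind those numbers (a minimal Python sketch; the prices and earnings below are made-up placeholders, and the 100-200+ threshold is the rule of thumb above, not an official definition):

    # P/E ratio: share price divided by earnings per share (EPS).
    def pe_ratio(price_per_share: float, earnings_per_share: float) -> float:
        return price_per_share / earnings_per_share

    print(round(pe_ratio(250.0, 6.60), 1))  # ~37.9, around the MSFT figure above
    print(round(pe_ratio(250.0, 1.50), 1))  # ~166.7, the "bubble territory" range cited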
Not to mention, who are you to think the top tech companies aren't fully aware of the risks they are taking with AI?
In what period of time?
I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.
Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...
I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.
Bad example, because FSD cars are here.
Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.
Optimistic view: maybe product quality becomes an actually good metric again as the LLM will care about giving good products.
Yea, I know, I said it's an optimistic view.
Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.
Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.
Repeat the above until you are convinced.
Even if only a little of everyone's day consumes AI services, the investment required will be immense. Like what we see.
Just my opinion as a FORMER senior software dev (disabled now).
We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.
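As a rough illustration of that workflow (a minimal sketch assuming the OpenAI Python SDK; the model name, prompt, and field names are placeholders, not our actual setup):

    # Sketch: pull structured fields out of an unstructured document with an LLM,
    # then hand the result to a human to check rather than trusting it blindly.
    import json
    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_invoice_fields(raw_text: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": "Return JSON with keys: vendor, date, total."},
                {"role": "user", "content": raw_text},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(response.choices[0].message.content)

    # A reviewer then corrects the 10-20% the model gets wrong instead of
    # keying in everything from scratch.
    print(extract_invoice_fields("ACME Corp invoice, 2024-03-01, total $1,234.56"))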
There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.
The dotcom boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time the companies got way ahead of consumer adoption.
Specific to AI copilots: for every success, we are currently building hundreds that nobody will use.
Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?
It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.
Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?
What's the real investment in or out of Silicon Valley?
I don’t, I think a workable fusion reactor will be the most important technology of the 21st century.
...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.
> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.
https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...
https://www.nrc.gov/reading-rm/basic-ref/students/history-10...
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.
We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.
Fusion is almost certain to be uneconomical in comparison if it's even feasible technically.
AI is already dramatically impacting some fields, including science (e.g. deepfold), and AGI would be a step-change.