I have no doubt that people will use this to grind an axe about how they think AI is dumb in general, but I feel like that misses the point that this is mostly about data center construction contributing to GDP.
We need to get past the hype first and let the cash grabbers crash.
After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.
"On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."
No doubt people are using it at work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ); the question is how much productivity results and to whom it accrues.
Partly this reflects AI capability (both today and in the past), and partly it's people taking time to change their tools.
It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.
The growth in tech in the years following the iPhone's release can be directly traced to its killer UX plus the App Store.
Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.
It's not good enough to just say Oreo CEOs say we need more Oreos.
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between given the kinds of bets that have been made.
I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.
Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
You might as well be telling people to “HODL”
You can feel it coming.
You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.
They don’t have time to wait for all the companies to pick up AI tooling at their own pace.
So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that demand materializes now and not in 10 or 20 years.
Whether or not these companies can turn a profit, time will tell. But I am betting that our massively profitable companies (which are the biggest spenders, of course) perhaps know what they are doing, and just maybe they should get the benefit of the doubt until they are proven wrong. If I had to make a wager, with Google, Microsoft, Amazon, and Meta on one side, and a bunch of AI-bubble people with a bunch of time to predict a "crash" on the other, I'd put my money on the former.
And most jobs that can be automated have already been automated using traditional software.
Having a higher-paid, qualified employee supervise multiple AIs, with the human only needing to spot mistakes? Maybe.
- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...
- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...
But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox
> The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

The same firms "predict sizable impacts" over the next three years.
Late 2025 was an inflection point for a lot of companies.
Today you have to be blind to not see the change that is coming.
The world has its own (massive) inertia, with the bureaucracy present in businesses accounting for a big part of it.
AI itself is moving fast, but not at infinite speed. We're starting to have good-enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling: in general, everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot, and that's assuming their corp is OK with it, isn't blocking it, and "using AI" doesn't just mean "you can copy-paste code to/from Copilot 365".
As people say, something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.
the change that is coming.
Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data. Why? Because it’s descriptive of the “past”, while you’re trying to predict the near/far “future” and project your assumptions. Two different things.
Opus 4.6 is SPECIAL. Nothing like other models. This is a new breed of intelligence.
I give it 18-24 months until we see a full-scale societal transformation.
I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI. It is just that I hate the almost religious tech belief that real AI will emerge from exponential cost increases for essentially linear gains.
I get that some lazy ass people have turned vibe coding and development into what I consider an activity sort-of like mindlessly scrolling social media.
sillyfluke•1h ago
With all this recent Claw stuff, it's weird that some of us, people who should be championing the opposite given our field of study or industry, are now pushing a method of automation that is akin to robot vacuums randomly tracking dogshit across the carpet.
In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.
I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or agent-generated SDKs or building blocks, when there is no guarantee of, or even a probability attached to, the correctness of the result. The effort of validating and editing a generated email can equal or exceed that of manually writing a regular email, let alone one of any complexity or significance.
And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish or business workflows at any moment.
It's almost like trying to build a house of cards faster than the speed with which it is collapsing. There seems to be a morbid fascination among even the best of us with how far things can be taken until this way forward leads to some indisputable catastrophe.
ekjhgkejhgk•39m ago
Is it possible that this sort of problem will be fixed? Hypothetically, what would happen in a scenario where one of these apps can reliably do in 1 hour the work that would take a developer a month? Or is your premise that this will NEVER happen?