Some people think the GenAI bubble is going to kill GenAI. I think it's just going to weed out the players whose expenses are too high and who have no way to make their compute cheaper over time.
Sure, a very very small percentage of people who know hardly anything about GenAI might think this.
The difference today is that every piece of capacity is immediately 100% utilized once it is plugged in.
The arms race to throw money at anything that has "AI" in its business name is the same thing I saw back in 2000. No business plan, just some idea to somehow monetize the internet, and VCs were doing the exact same thing: throwing tons of good money after bad.
Although you can make an argument that this is different, in a lot of ways it just feels like the same thing. The same energy, the same half-baked ideas trying to get a few million to get something off the ground.
The question isn't if there will be a crash - there will be - because there are always crashes. And there are always recoveries. It's all about "how long." And what happens to the companies that blow the money and then find out they can't fire all their white-collar workers?
(Or, what happens if they find out they can?)
I think AI-powered IDE features will stick around. One notable head-and-shoulders-above-non-AI-competitor feature I've seen is "very very fuzzy search". I can ask AI "I think there's something in the code that inserts MyMessage into `my.kafka.topic`. But the gosh darn codebase is so convoluted that I literally can't find it. I suspect "my", "kafka", and "topic" all get constructed somewhere to produce that topic name because it doesn't show up in the code as a literal. I also think there's so much indirection between the producer setup and where the "event" actually first gets emitted that MyMessage might not look very much like the actual origination point. Where's the initial origin point?"
Previously, that was "ctrl-shift-F my.kafka.topic" and then ask a staff engineer and hope to God they know off-hand, and if they don't, go read the entire codebase/framework for 16 hours straight until you figure it out.
Now, LLMs have a decent shot at figuring it out.
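To make that concrete, here's a hypothetical Python sketch (every name in it is invented) of the kind of code that defeats a literal search: the topic string is assembled from fragments at runtime, so "my.kafka.topic" never appears verbatim in the repo.

    # Hypothetical sketch, invented names: the full topic name only
    # exists at runtime, assembled from config fragments, so a literal
    # search for "my.kafka.topic" finds nothing.
    SETTINGS = {"domain": "my", "transport": "kafka", "channel": "topic"}

    def build_topic(cfg):
        # The only place the full name is ever formed.
        return ".".join([cfg["domain"], cfg["transport"], cfg["channel"]])

    producer_topic = build_topic(SETTINGS)  # == "my.kafka.topic"

Grep can't see the assembled string, but an LLM that reads the construction logic has a reasonable shot at connecting the fragments.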
I also think things like "is this chest X-ray cancer?" are going to be hugely impactful.
But anyone expecting Gen AI to do anything like replace a real software engineer, or a quality customer support rep, etc., is going to be disappointed.
I also think AI will generally eviscerate the bottoms of industries (expect generic gacha girl-collection games to get a lot of AI art) but also leave people valuing the tops of industries a lot more (lovingly crafted indie games, etc). So now this compute-expensive AI is targeting the already low-margin bottoms of industries. Probably not what VCs want. They want to replace software engineers, not make a slop gacha game cost 1/10th of its already low cost.
Yes, but https://radiologybusiness.com/topics/artificial-intelligence...
Nine years ago, scientist Geoffrey Hinton famously said, “People should stop training radiologists now,” believing it was “completely obvious” AI would outperform human rads within five years.
Unfortunately, the same thing is playing out here. Nobody likes being the guy who points out that the gains are incremental when everyone is bragging about their 100x gains.
And everyone on the management side starts getting, understandably, afraid that their company will miss out on these magical gains.
It is all a recipe for wild overspending on the wrong things.
I had a very hard time explaining that once you put something on the chain, you can't easily pull it back out. If you just want to verify documents, all you have to do is put a hash in a database table. Which we already had.
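For what it's worth, the database version really is that simple. A minimal sketch, assuming SQLite and an invented schema and file name:

    import hashlib
    import sqlite3

    def sha256_of(path):
        # Stream the file so large documents don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    conn = sqlite3.connect("documents.db")
    conn.execute("CREATE TABLE IF NOT EXISTS doc_hashes"
                 " (name TEXT PRIMARY KEY, sha256 TEXT)")

    # Register a document; "contract.pdf" is purely illustrative.
    conn.execute("INSERT OR REPLACE INTO doc_hashes VALUES (?, ?)",
                 ("contract.pdf", sha256_of("contract.pdf")))
    conn.commit()

    # Verify later: recompute the digest and compare.
    (stored,) = conn.execute("SELECT sha256 FROM doc_hashes WHERE name = ?",
                             ("contract.pdf",)).fetchone()
    print(stored == sha256_of("contract.pdf"))

One table, one hash column; verification is just recomputing the digest and comparing.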
Blockchain has exactly one purpose: preventing any single entity from controlling the contents. That includes governments, business executives, lawyers, judges, and hackers. The only good thing is that every single piece of data can be pulled out into a different data structure once you realize your mistake.
Note, I’m greatly oversimplifying all the details and I’m not referring to cryptocurrency.
I see the copium is barely hitting for you.
I'd like to propose a different characterization: "Blockchain" is when you want unrestricted membership and participation.
Allowing anybody to spin up any number of new nodes they desire is the fundamental requirement, which causes a cascade of other design decisions and feedback systems (mining, proof-of-X, etc.).
In contrast, deterring one entity from taking over can also be done with a regular distributed database, where the nodes--and which entities operate them--are determined in advance.
Business process outsourcing companies are valued at $300bn according to the BPO Wikipedia page, so 5-20% of that is $15-60bn. Even if we value all the other GenAI impact at zero, the impact on admin and support alone could plausibly justify this investment.
Klarna also cut costs by replacing support with AI. It didn't work well, so they had to rehire.
If anyone thinks they have figured it all out, stop blabbering around. Short the market.
Lots of people lost their shirts shorting the housing market prior to the 2008 crash. (_The Big Short_ highlights those who were successful, but plenty of people weren't.) But it was undoubtedly a bubble and there was a world-wide recession when it popped.
I think there is a bubble, though if it's really just $40B, maybe I'm wrong.
The ideas aren't terrible; they just have awful ROIs. Nobody has a use case beyond generating text, so there are lots of ideas about automating text generation in certain niches, without appreciating that those bits represent 0.1% of the business ventures. Yet they are technically feasible, so full steam ahead.
The funniest thing is that management has no idea how AI works, so their deployments are pretty much just Copilot Agents with a couple of docs and a prompt. It's the most laughable shit I've ever seen. Management is so proud, while we're all just shaking our heads hoping this trend passes.
Don't get me wrong, AI definitely has its use cases, but tossing a doc about company benefits into a bot is about the most worthless thing you could do with this tech.
It's possible the study is flawed, or is more limited than the claims being made. But some evidence is necessary to get there.
Here is the Archived Version: https://web.archive.org/web/20250818145714/https://nanda.med...
JCM9•2h ago
The tech isn’t going away, and is cool/useful, but from a business standpoint this whole thing looks like a dumpster fire burning next to a gasoline truck. Right now VC FOMO is the only thing stopping all that from blowing up. When that slows down buckle up.
j45•2h ago
There are definitely people who don't understand the tech applying it, and that's increasing the failure rate of software projects.
throwawayoldie•2h ago
But...is it and are they? Gen AI boosters tend to make assertions like this as if they're unassailable facts, but never seem interested in backing them up. I guess it's easier than thinking.
pier25•2h ago
That was probably 2-3 years ago.
I'd be surprised if VCs hadn't already figured out they're in a bubble. They're probably playing a game of chicken to see who will remain to capture the actually profitable use cases.
There's also a number of lawsuits going on against AI companies regarding piracy and copyright. It's already an established fact in the courts that these companies have downloaded ebooks, music, videos, and images illegally to train their models.
mrbluecoat•2h ago
/s
jandrese•2h ago
How many "we will have AGI by X/X/201X" predictions have we blown past already?
arcanemachiner•1h ago
Just imagine how many predictions we'll have in six months, or even a year from now!
OtherShrezzing•1h ago
They’re all so highly levered up that they can’t afford for the bubble to pop. If this goes on for another couple of years before the pop, we may see “too big to fail” wheeled out to justify a bailout of Google or Microsoft.
impossiblefork•1h ago
I'm sure there will be losers, but I'm not quite sure who.