Some people think the GenAI bubble is going to kill GenAI; I think it's just going to weed out those whose expenses are too high and who have no way to make their compute cheaper over time.
Sure, a very very small percentage of people who know hardly anything about GenAI might think this.
It’s worth reading up on it to see what’s actually comparable.
https://en.macromicro.me/series/23955/nasdaq-100-pe
It is easy to spot the dot com bubble on this chart.
The difference today is that every piece of capacity is immediately 100% utilized once it is plugged in.
The arms race to throw money at anything that has "AI" in its business name is the same thing I saw back in 2000. No business plan, just some idea to somehow monetize the internet, and VCs were doing the exact same thing: throwing tons of good money after bad.
Although you can make an argument this is different, in a lot of ways it just feels like the same thing. The same energy, the same half-baked ideas trying to get a few million to get something off the ground.
The question isn't if there will be a crash - there will - but there are always crashes. And there are always recoveries. It's all about "how long." And what happens to the companies that blow the money and then find out they can't fire all their white collar workers?
(Or, what happens if they find out they can?)
I think AI-powered IDE features will stick around. One notable head-and-shoulders-above-non-AI-competitor feature I've seen is "very very fuzzy search". I can ask AI "I think there's something in the code that inserts MyMessage into `my.kafka.topic`. But the gosh darn codebase is so convoluted that I literally can't find it. I suspect "my", "kafka", and "topic" all get constructed somewhere to produce that topic name because it doesn't show up in the code as a literal. I also think there's so much indirection between the producer setup and where the "event" actually first gets emitted that MyMessage might not look very much like the actual origination point. Where's the initial origin point?"
Previously, that was "ctrl-shift-F my.kafka.topic" and then ask a staff engineer and hope to God they know off-hand, and if they don't, go read the entire codebase/framework for 16 hours straight until you figure it out.
Now, LLMs have a decent shot at figuring it out.
I also think things like "is this chest X-ray cancer?" are going to be hugely impactful.
But anyone expecting GenAI to do anything like replace a real software engineer, a quality customer support rep, etc., is going to be disappointed.
I also think AI will generally eviscerate the bottoms of industries (expect generic gacha girl-collection games to get a lot of AI art) but also leave people valuing the tops of industries a lot more (lovingly crafted indie games, etc). So now this compute-expensive AI is targeting the already low-margin bottoms of industries. Probably not what VCs want. They want to replace software engineers, not make a slop gacha game cost 1/10th of its already low cost.
Yes, but https://radiologybusiness.com/topics/artificial-intelligence...
Nine years ago, scientist Geoffrey Hinton famously said, “People should stop training radiologists now,” believing it was “completely obvious” AI would outperform human rads within five years.
If we expect a technology to completely solve a problem as soon as it is launched, only a few in history could be considered a success. Can you imagine what it would be like if the first radios were considered a failure because you couldn't listen to music?
E.g. - I was considering a 3D printer but I had heard they were expensive, messy, complicated, it was hard to get prints to come out right, etc. But it turned out I was anchored on ~2016 era technology. I got a simple modern printer for a few hundred dollars and it (mostly) just works.
What it isn't, at present, is an investment in the future. I'm not making these virtual interns better coders, more thoughtful about architecture, or more autonomous in the future. Those aspects of development of new hires are vastly more valuable than the code output I'm getting in my IDE. So I'm hoping that we land in a place where we're fostering both rather than hoping that someone else is going to do the hard work of closing the agentic coding gap and growing maturity. Pulling an Indiana Jones style swap could be a really destructive move if we try to pull the human pipeline out of the system too early.
Just paying attention to near-term savings runs a real risk of falling into that trap.
It's well known that these fresh employees are not going to contribute to velocity of a team for at least a year. They're investments. I've seen levelling docs specifically call this out.
"It's prone to spiraling off into the weeds, makes silly mistakes, occasionally mangles whole repos (commit early, and often), and needs very crisp instruction and guidance"
This describes a team of juniors. If it's describing an entire team, then everyone above mid-level needs to be fired.
I will say that I think "the bottom of the market getting eviscerated" is going to apply to software devs too. There is now very little point in hiring someone who already only produces slop as their best output. The main people who need to be afraid of AI in the next 5 years is probably offshore and near-shore people, and perma-juniors who have done the "1 year of experience 10 times" thing.
Unfortunately, the same thing is playing out here. Nobody likes being the guy that points out the gains are incremental when everyone is bragging about their 100x gains.
And everyone in the management side starts getting, understandably, afraid that their company will miss out on these magical gains.
It is all a recipe for wild overspending on the wrong things.
I had a very hard time explaining that once you put something in the chain, you can't easily pull it back out. If you wanted to verify documents, all you have to do is put a hash in a database table. Which we already had.
It has exactly one purpose: prevent any single entity from controlling the contents. That includes governments, business executives, lawyers, judges, and hackers. The only good thing is every single piece of data can be pulled out into a different data structure once you realize your mistake.
Note, I’m greatly oversimplifying all the details and I’m not referring to cryptocurrency.
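To make the hash-in-a-database-table point concrete, here is a minimal sketch; the table name, schema, and function names are invented for illustration:

    import hashlib
    import sqlite3

    conn = sqlite3.connect("docs.db")  # hypothetical database file
    conn.execute("CREATE TABLE IF NOT EXISTS doc_hashes (name TEXT, sha256 TEXT)")

    def register(name: str, content: bytes) -> str:
        """Record the document's SHA-256 at registration time."""
        digest = hashlib.sha256(content).hexdigest()
        conn.execute("INSERT INTO doc_hashes VALUES (?, ?)", (name, digest))
        conn.commit()
        return digest

    def verify(name: str, content: bytes) -> bool:
        """Re-hash the presented document and compare with the stored value."""
        row = conn.execute(
            "SELECT sha256 FROM doc_hashes WHERE name = ?", (name,)
        ).fetchone()
        return row is not None and row[0] == hashlib.sha256(content).hexdigest()

Anyone with write access to the table can still alter it, of course, which is exactly the single-point-of-control property the comment above says a blockchain exists to remove.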
"hodl" - I think r/WallStreetBets came up with that. Perhaps not - its too ubiquitous, similar to teh. r/WSB did originate the eating green crayons thing along with smooth brain monkeys etc
I simply mixed and matched and riffed in an old school "lock up your daughter" meme.
Sorry, I don't have a Spongebob illustration. I do words.
OK, read the post ... legendary!
I'd like to propose a different characterization: "Blockchain" is when you want unrestricted membership and participation.
Allowing anybody to spin up any number of new nodes they desire is the fundamental requirement which causes a cascade of other design decisions and feedback systems. (Mining, proof-of-X, etc.)
In contrast, deterring one entity from taking over can also be done with a regular distributed database, where the nodes--and which entities operate them--are determined in advance.
But that's a poor definition of a blockchain. A blockchain is merely a distributed ledger with certain properties from cryptography.
If you spin up a private bitcoin network, it's a blockchain even if nobody else knows or cares about it. Now, are non-open blockchains at all useful? I suspect so, but I don't know of any great examples.
The wide space between 'membership is determined in advance' and 'literally anyone can make a million identities at a whim' is worth exploring, IMO.
If we charitably assume "blockchain" has some engineering meaning (and it isn't purely a word for marketing/scamming) then there's some new aspect which sets it apart from the distributed-databases we've been able to make just fine for decades.
Uncontrolled participation is that key aspect. Without that linchpin, almost all the other new stuff becomes weirdly moot or actively detrimental.
> If you spin up a private bitcoin network, it's a blockchain even if nobody else knows or cares about it.
That's practically a contradiction in terms. It may describe the ancestry of the project, but it doesn't describe what/how it's being used.
Compare: "If you make a version of Napster/Gnutella with all the networking code disabled, it's still a Peer-to-Peer file sharing client even when only one person uses it."
And yes. I'm using the engineering definition. I don't believe in letting a gaggle of marketers and scammers define my terms. A blockchain is a specific technology. It doesn't mean 'whatever scam is happening this week', even if said scam involves a blockchain.
I don't blame you for associating blockchains with scams and fully open projects, that's undeniably what we've seen it used for. But that's not what defines a blockchain.
"A scalpel can only be used for surgery"
"If you use a scalpel to cut a steak, it's still a scalpel."
"There must be some new aspect to scalpels! We've been able to make steak knives for decades!"
> The analogy is [the P2P application] on a LAN.
The analogy is the P2P application where regular clients can only discover a Special Master Client that must be running on a fixed IP, which only permits connections if you have credentials for a user-account arranged in advance.
In each case, the system's centerpiece feature is being voided, but that feature is different between them.
1. For "Blockchain", the centerpiece is unrestricted participation. (Other decentralization is an indirect effect.)
2. For P2P file sharing, the centerpiece is how nobody needs to run an expensive/vulnerable central server, but it wasn't a contradiction in terms to have a private peer-network.
A blockchain is a ledger shared between multiple computers. Entries in this ledger contain the cryptographic hash of the previous entry, creating an append-only data structure where existing records cannot be modified.
While blockchains are generally used in p2p and open networks, this isn't a requirement. Bitcoin was a blockchain when Nakamoto was the only node. Popularity and openness are not required to meet this technical definition.
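For concreteness, here is a minimal sketch of that hash-chaining property; the class and field names are invented for illustration and don't reflect any particular implementation:

    import hashlib
    import json
    import time

    class Chain:
        """Append-only list of entries, each embedding the previous entry's hash."""

        GENESIS = "0" * 64  # placeholder hash for the first entry

        def __init__(self):
            self.entries = []

        def append(self, record: dict) -> dict:
            prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
            body = {"record": record, "prev_hash": prev, "ts": time.time()}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            entry = dict(body, hash=digest)
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            """Recompute every hash; modifying any past record breaks the chain."""
            prev = self.GENESIS
            for e in self.entries:
                body = {"record": e["record"], "prev_hash": e["prev_hash"],
                        "ts": e["ts"]}
                ok = (e["prev_hash"] == prev and
                      e["hash"] == hashlib.sha256(
                          json.dumps(body, sort_keys=True).encode()).hexdigest())
                if not ok:
                    return False
                prev = e["hash"]
            return True

Note what this does and doesn't give you: tampering with an old entry is detectable by anyone holding a later hash, but nothing here requires openness, popularity, or more than one node.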
Not the blockchain itself, but the concept of an immutable, append-only, tamper-proof ledger underpinning it is a very useful one in many contexts where the question of authenticity of datasets arises – the blockchain has given us the ledger database[0].
The ledger database is more than just a hash as it also provides a cryptographic proof that the data is authentic via hash chaining, no ability to delete or modify a record, and the entire change history of any record. All these properties make the ledger databases very useful in many contexts, especially the ones where official documents are involved.
The problem, however, stems from the fact that the git commit history can be modified, which automatically disqualifies git in many other use cases, e.g. official government-issued documents, financial records, recognition of prior learning, and similar – anywhere where the entire, unabridged history of records is required.
Commits in git are non-linear and form a tree[0][1]. A commit can easily be deleted without affecting the rest of the tree. If a commit roots a subtree with branches dangling off it, deleting that commit will delete the entire subtree. Commits can also be moved around, detached and re-attached to other commits.
[0] https://www.baeldung.com/ops/git-objects-empty-directory#2-g...
[1] https://www.baeldung.com/ops/git-trees-commit-tree-navigatio...
If strict compliance is not a hard requirement (open source projects are the prime example), git can be used to prove provenance, provided you trust the signer's public key or allowed-signers file.
[0] https://www.techtarget.com/searchCIO/definition/ledger-datab...
Consider a leaf commit (or a leaf which is a subtree of commits). I am a person with nefarious intentions, and I delete the leaf commit, forcefully expire the reflogs, or force garbage collect them. At that point, there is no longer remaining evidence in git that the commit ever existed. If git were to be used to record a history of criminal offences, I would be able to single-handedly delete the last offence by running «git reflog expire --expire=now --all» followed by «git gc --aggressive --prune=now».
Ledger databases, on the other hand, do not have the «delete» operation. The «update» operation does not factually update the document/record and creates a new revision instead (just as git does), whilst retaining a full history of updates to the document/record. This is the fundamental difference.
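A minimal sketch of that update-as-new-revision semantics, with invented names (real ledger databases additionally hash-chain the history, as sketched earlier):

    from collections import defaultdict

    class RevisionedStore:
        """No delete operation; 'update' appends a revision and keeps old ones."""

        def __init__(self):
            self._history = defaultdict(list)  # record id -> list of revisions

        def update(self, record_id: str, data: dict) -> int:
            self._history[record_id].append(dict(data))
            return len(self._history[record_id]) - 1  # revision number

        def latest(self, record_id: str) -> dict:
            return self._history[record_id][-1]

        def revisions(self, record_id: str) -> list:
            """The full, unabridged history of the record."""
            return list(self._history[record_id])

    store = RevisionedStore()
    store.update("citizenship/42", {"name": "Joanna Doe"})
    store.update("citizenship/42", {"name": "Joanna Smith"})
    assert store.latest("citizenship/42")["name"] == "Joanna Smith"
    assert len(store.revisions("citizenship/42")) == 2  # Doe is still in the log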
If you're running Git on multiple nodes, it's exactly like running a blockchain on multiple nodes. You can mutate your own local copy, but that doesn't mutate mine, and the set of commits is functionally identical to the union of commits from every node. You can't delete a commit without deleting it from every clone.
That is the real magic with bitcoin (and derivatives.)
A typical setup that disqualifies git in certain scenarios is as simple as a master repository and a few local copies of varying vintage. Once the master has had its commit history changed, reflogs purged and GC'd, the local copies will never have any knowledge of what the master could have contained in the past.
git is perfectly acceptable in many scenarios, but if immutability has to be enforced under regulatory WORM requirements (SEC 17a-4, ISO 14641 or similar) where:
1. Certified write-once, read-many retention, legal holds and tamper-evidence anchored to trusted time sources are mandatory;
2. Authoritative time and ordering – git commit timestamps are user-supplied and easily forged;
3. Granular access control and selective disclosure – git tends to replicate whole histories and the ledger can provide the proof of single record only;
4. Structured queries and point-in-time views at scale – git has no native time-travel queries, secondary indexes or ACID multi-record transactions across millions of records with consistent snapshots;
5. Independent third-party verification – if external parties (e.g. state auditors) must verify recorded events, verifiable receipts or Merkle proofs that stand alone are needed. git commits and optional signatures are not receipts and are hard to validate without access to the repository.
6. Consensus or anchoring needs – if integrity must be proven against a hostile or nefarious party, anchoring or consensus is required. git has no native consensus; ledger databases can anchor state hashes to public chains or quorum-backed authorities.
… git is not really an option. git can be hardened, though, by signing commits and tags, forbidding force-pushes server-side, mirroring to append-only storage and periodically anchoring repository state to a public timestamping service. It still will not meet strict ledger-grade assurances, but it can raise the bar for internal use. The sheer amount of work required to accomplish that makes a solid case for a dedicated ledger database.
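As a sketch of that last anchoring step, assuming a local append-only file stands in for the external timestamping service (the function and file names are invented):

    import subprocess
    import time

    def anchor_head(log_path: str = "anchors.log") -> str:
        """Append the current HEAD commit hash and a timestamp to an anchor log.
        Publishing each line somewhere outside the repository is what makes a
        later history rewrite detectable: the old HEAD hash survives it."""
        head = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        line = f"{int(time.time())} {head}"
        with open(log_path, "a") as fh:
            fh.write(line + "\n")
        return line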
Demand for ledger databases is strong in government and elsewhere where compliance is non-negotiable. Microsoft have their own on offer[1] as well.
[1] https://learn.microsoft.com/en-us/sql/relational-databases/s...
It only moved the goalposts.
As long as you can't guarantee that the data you put onto a blockchain is trustworthy in the first place, whatever you put on a blockchain is not 'tamper-proof'.
Therefore the ONLY thing a blockchain can keep 'tamper-proof' is stuff that exists solely on the blockchain itself. Which means basically nothing.
And there is a second goalpost which was moved: the assumption that a blockchain is 'tamper-proof' at all. 51% attacks are real, you don't know if a country just owns and controls a lot of nodes, and the latest rumor has it that the NSA was involved in blockchain creation. You don't know if something is hidden in the system which gives one entity an edge over others.
Consider two simple and common scenarios (there are more): citizenship certificates and the recognition of prior learning.
1. Citizenship – the person with the name of Joanna Doe in the birth certificate was issued with a citizenship certificate in the same name. As time goes by, Joanna Doe changes the legal name to Joanna Smith; the citizenship certificate is reissued with the new legal name. We do not want to truly update the existing record in the citizenship database and simply change Doe -> Smith as it will create a mismatch with the name in the birth certificate. If we use a ledger database, an update operation will create a new revision of the same record and all subsequent simple query operations will return the latest updated revision with the new name. The first revision, however, will still be retained in the table's cryptographically verifiable audit/history log.
Why should we care? Because Joanna Smith can accidentally throw their new citizenship certificate away and later they will want to renew their passport (or the driver's licence). The citizenship certificate may be restored[0] by presenting the birth certificate in the original name and the current or expired passport, but the passport is in the new name. From a random system's point of view, Joanna Doe and Joanna Smith are two distinct individuals with no apparent link between them. However, the ledger database can provide proof that it is the same person indeed because the unabridged history of name changes is available, it can be queried and relied upon.
2. Recognition of prior learning – a person has been awarded a degree at institution A. Credits from Institution A contributed to a degree at Institution B. The degree at B is revoked due to issues with source evidence (i.e. Institution A). The ledger database makes such ripple effects deterministic – a revocation event at B triggers rules that re-evaluate dependent awards and enrolments at partners, with a verifiable trail of who was notified and when. If Institution A later corrects its own records, Institution B and downstream bodies can attach a superseding record rather than overwrite, preserving full lineage. The entire drama unfolded will always be available.
2½. Recognition of prior learning (bonus) – an employer verified the degree on the hiring date. Months later it is revoked. The employer can present a ledger proof that, on the hiring date, the credential existed and was valid. It reduces dispute risk and supports fair-use decisions such as probation reviews rather than immediate termination.
All this stuff is very real and the right tech stack (i.e. the ledger DB) reduces the complexity tremendously.
[0] Different jurisdictions have different rules but the procedure is more or less similar amongst them.
Which is why blockchains have become such a ubiquitous technology. They're literally everywhere. Can't swing a cat without hitting a blockchain nowadays.
Business process outsourcing companies are valued at $300bn according to the BPO Wikipedia page, so 5%-20% of that is $15-60bn. Even if we're valuing all the other GenAI impact at zero, the impact on admin and support alone could plausibly justify this investment.
Klarna also cut costs by replacing support with AI. It didn't work well, so they had to rehire.
That still means all-new data centers. They aren't being built for this now, so the old ones will have to get ripped out and rebuilt (in place?) before they get the new servers. I do think they've planned the external power delivery, but not cooling or IP infra. It's a CF.
The hyperscalers are not the ones having trouble generating income. They have plenty of paying customers. They certainly understand capital depreciation and the need to refresh hardware. Premature hardware failure will be charged back to Nvidia, who are not exactly struggling for cash either.
https://semianalysis.com/2025/08/20/h100-vs-gb200-nvl72-trai...
I'm not at a hyperscaler but I've been involved with deployment of A100 and H100 GPUs, and we RMA GPUs which don't work. I don't think it impacted our allocations, which have always seemed fine to me, but obviously it's hard to know for sure and perhaps GB200 is different.
You are right that in theory NVDA can sell everything they produce without the hyperscalers, but strategically there are many risks in acting that way towards deep-pocketed clients. They'd have to go to other customers who are likely to be less reliable. They'd put themselves on very shaky ground legally. They'd create a much stronger incentive for a deep-pocketed client to become a competitor (cf. Trainium, TPUs). I'd be surprised if they'd take such risks to avoid what's ultimately a small cost.
If it were true that these were replacing anything, it would be very clear in those sectors, and it isn't. The real effect of end-to-end automation from LLMs is small to negligible. The entire "boring" industry is still chugging along and growing as it did before.
To whom? From the customer perspective, it sounds like a shittier level of service is coming, which is a kind of failure.
You're taking a different perspective. I'm a customer, not the business owner. I don't care about their "massive cost savings" if I'm getting shittier service.
If anyone thinks they have figured it all out, stop blabbering around. Short the market.
Lots of people lost their shirts shorting the housing market prior to the 2008 crash. (_The Big Short_ highlights those who were successful, but plenty of people weren't.) But it was undoubtedly a bubble and there was a world-wide recession when it popped.
longer than you can remain "*solvent", not "insolvent"
I think there is a bubble, if it's really just $40B maybe I'm wrong.
The ideas are not terrible; instead, they have awful ROIs. Nobody has a use case beyond generating text, so there are lots of ideas about automating some text generation in certain niches, without appreciating that those bits represent 0.1% of the business ventures. Yet they are technically feasible, so full steam ahead.
The funniest thing is that management has no idea how AI works so they're pretty much just Copilot Agents with a couple docs and a prompt. It's the most laughable shit I've ever seen. Management is so proud while we're all just shaking our heads hoping this trend passes.
Don't get me wrong, AI definitely has its use cases, but tossing a doc about company benefits into a bot is about the most worthless thing you could do with this tech.
Hahaha, my company has spent half a year pursuing the exact same thing to the letter after one of our VPs got the idea in his head at some AI conference. Rollout kept getting pushed back because of hallucinations in testing. I'm not 100% sure at this point if he forgot he made it a top internal priority and it was quietly shelved or if it's still limping along with no one in HR/upper management willing to give it the green light for release.
(using throwaway because my HN profile is linked to my real identity).
It's possible the study is flawed, or is more limited than the claims being made. But some evidence is necessary to get there.
Here is the Archived Version: https://web.archive.org/web/20250818145714/https://nanda.med...
Bonfires of money.
Predictably. Because all three of those concerns require highly speculative action to properly address.
That doesn't make those reasons invalid. Failures are expected, especially in early days, and are not a sign of spurious bets or of being starry-eyed about industry upheavals. The minimal return is still experience gained and a ramped-up institutional focus.
How many of us here speed up our overall development by coding early on new projects before we have complete clarity? Writing code we will often throw away?
Well, if only 95% of our ideas don't work, with a little hard work and sacrifice, we are livin' in an optimish paradise.
Incidentally, "everyone" is wrong a lot of the time.
And depending on how you look at it, science itself is experimentation, but it mostly results in publications in the end, which may or may not be read, but at least serve as records of areas explored.
Scientists and mathematicians often burn barrels of time and unpublished ideas, not to mention following their curiosities into random pursuits, that give their subconscious the free space to crystalize slippery insights.
With their publishable work somehow gelling out of all that.
This reminds me of the exploration-exploitation trade-off in reinforcement learning: you want to maximise your long-term profits but, since your knowledge is incomplete, you must acquire new knowledge, which companies do by trying stuff. Prematurely dismissing GenAI could mean missing out on new efficiencies, which take time to be identified.
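For readers who haven't met the trade-off, the textbook illustration is the epsilon-greedy bandit policy; a minimal sketch with invented names:

    import random

    def epsilon_greedy(estimates, epsilon=0.1):
        """With probability epsilon, explore a random arm;
        otherwise exploit the arm with the best current estimate."""
        if random.random() < epsilon:
            return random.randrange(len(estimates))
        return max(range(len(estimates)), key=lambda i: estimates[i])

    # After observing `reward` from arm `i`, refine its running-average estimate:
    #   counts[i] += 1
    #   estimates[i] += (reward - estimates[i]) / counts[i]

The analogy to the comment above: epsilon is the budget a company spends "trying stuff", and shrinking it to zero too early locks in whatever happened to look best first.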
Yes, of course. Incompetent leaders do incompetent things.
No argument or surprise.
The point I made was less obvious. Competent leaders can also/often appear to throw money away, but for solid reasons.
I've never heard of such a thing
It’s like the New York Jewish joke about the terrible food and the too small portions.
The entire shtick is made up. People just tend to forget this too quick.
In other eras, everyone got excited and went to tent revivals.
Granted there could be an opportunity cost -- the real effort and electricity could be used elsewhere -- but only if it were possible to create a similar amount of excitement about something useful, like putting solar panels everywhere. But that takes different people and different skills, so maybe this costs nothing?
(Money can be created and destroyed -- it doesn't just circulate -- but that destruction happens when loans are repaid within a fractional reserve system. Which is kind of a scam, but money itself is, so, whatever.)
Once you internalize that this is all sort of a scam, does that change your behavior? Maybe you start making NFTs or minting shitcoins.
JCM9•5mo ago
The tech isn’t going away, and is cool/useful, but from a business standpoint this whole thing looks like a dumpster fire burning next to a gasoline truck. Right now VC FOMO is the only thing stopping all that from blowing up. When that slows down buckle up.
j45•5mo ago
There are definitely people who don't understand the tech talking about applying it, which increases the failure rate of software projects.
throwawayoldie•5mo ago
But...is it and are they? Gen AI boosters tend to make assertions like this as if they're unassailable facts, but never seem interested in backing them up. I guess it's easier than thinking.
j45•5mo ago
Human thought is the only way any software (including LLMs) operates efficiently.
Same goes for leveraging LLMs.
j45•5mo ago
It is, and they are.
For the boosters, notice how few (or how many) come from a tech background, and pay attention accordingly.
pier25•5mo ago
That was probably 2-3 years ago.
I'd be surprised if VCs hadn't already figured out they're in a bubble. They're probably playing a game of chicken to see who will remain to capture the actually profitable use cases.
There's also a number of lawsuits going on against AI companies regarding piracy and copyright. It's already an established fact in the courts that these companies have downloaded ebooks, music, videos, and images illegally to train their models.
mrbluecoat•5mo ago
/s
jandrese•5mo ago
How many "we will have AGI by X/X/201X" predictions have we blown past already?
arcanemachiner•5mo ago
Just imagine how many predictions we'll have in six months, or even a year from now!
sebastiennight•5mo ago
This seems wildly inaccurate.
Can you find any single such claim from any credible source? Anybody hyping up an AGI timeframe within the 2010s?
jandrese•5mo ago
https://www.reuters.com/technology/teslas-musk-predicts-ai-w...
Another AI company CEO prediction:
https://time.com/7205596/sam-altman-superintelligence-agi/
sebastiennight•5mo ago
2025 is halfway through the 2020s.
OtherShrezzing•5mo ago
They’re all so highly levered up that they can’t afford for the bubble to pop. If this goes on for another couple of years before the pop, we may see “too big to fail” wheeled out to justify a bailout of Google or Microsoft.
impossiblefork•5mo ago
I'm sure there will be losers, but I'm not quite sure who.
OtherShrezzing•5mo ago
They’re also acting as a guarantor to lots of infrastructure projects - meaning the debt is their responsibility, but not on their books.
If the creditworthiness of any of the hyperscalers slip, even a tiny amount, the tech and banking sectors are in some hot water.
impossiblefork•5mo ago
But 100 billion is still on the order of the current profit of each. I suppose with interest, if it's sustained over time it could be a problem though.