1) Inference is too damn expensive.
2) The models/products aren’t reliable enough.
I also personally think talking to industry folks isn’t a silver bullet. No one knows how to solve #2. No one. We can improve by either waiting for better chips or using bigger models, which has diminishing returns and makes #1 worse.
Maybe OpenAI’s next product should be selling dollars for 99 cents. They just need a billion dollars of SoftBank money, and they can do 100 billion in sales before they need to reraise. And if SoftBank agrees to buy at $1.01 the business can keep going even longer!
People aren't reliable enough.
Nature isn't reliable enough.
For most uses, all that is needed is a system to handle cases where it is not reliable enough.
Then suddenly it becomes reliable enough.
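A minimal sketch of that idea in Python, assuming a hypothetical ask_model call and a caller-supplied validate check (both placeholders, not any real API): retry the unreliable step until its output passes a deterministic check, and escalate otherwise.

```python
# Sketch: wrap an unreliable component so the overall system is "reliable enough".
# ask_model and validate are hypothetical callables supplied by the caller.

def reliable_enough(prompt, ask_model, validate, max_attempts=3):
    """Retry an unreliable call until its output passes a deterministic check."""
    for _ in range(max_attempts):
        answer = ask_model(prompt)
        if validate(answer):      # deterministic acceptance test
            return answer
    raise RuntimeError("still failing validation: escalate to a human or fallback path")
```

The reliability then comes from the surrounding system (the check, the retries, the fallback), not from the component itself.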
People aren't reliable, for a specific value of reliable.
We expect technology (machines, software, AI, whatever) to be deterministically reliable (knowing its failure modes), and significantly more reliable than humans at what it does (because that's what we rely on, and why we use it to replace humans at what they do, in ways humans can't: harder, faster, stronger).
Right now, asking a single question through the API costs about a fiftieth of a cent on their cheap 4.1-nano model, up to about 2 cents on their o3 model. This is pretty affordable.
On the other end of the spectrum, if you're maxing out the context window on o3 or 4.1, it'll cost you around $0.50 to $2. Pricy, but on 4.1, this is like inputting several novels worth of text (maybe around 2000 pages).
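As a rough sanity check on those figures, a minimal back-of-the-envelope sketch; the per-million-token prices below are illustrative assumptions, not the current rate card:

```python
# Back-of-the-envelope API cost: tokens / 1e6 * price-per-million-tokens,
# summed over input and output. Prices are illustrative assumptions only.

def request_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# A short question (~500 tokens in, ~500 out) on a cheap tier vs. a pricier one:
print(request_cost(500, 500, 0.10, 0.40))          # fractions of a cent
print(request_cost(500, 500, 2.00, 8.00))          # around half a cent
# Maxing out a ~1M-token context window on the pricier tier:
print(request_cost(1_000_000, 1_000, 2.00, 8.00))  # roughly $2, almost all of it input
```

The exact numbers move with the pricing page, but the shape holds: single questions are cheap; stuffing the full context window is where the dollars show up.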
Edit: I am just sharing how our CTO responds to the massive push of AI into everything: integrating a non-deterministic system has a massive cost, and once the thing is made deterministic, the additional steps add expenses that ultimately make the entire solution too costly compared to the benefit. This is not my opinion; I'm just sharing how typical leadership hopes to tackle the expense issue.
If market prices go down with costs, then we see something like solar power where it’s everywhere but suppliers don’t make money, not even in China.
Or maybe customers spend a lot more on more-expensive models? Hard to say.
> Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot.
There are very real use cases with high value, but it's not an economy-defining technology, and the total value is going to fall far short of projections. On bottom lines, the overall productivity gain from AI for most companies is almost a rounding error compared to other factors.
Yet $1T nevertheless proved to be a profound underestimate.
Similarly, the current LLM vendors and cloud providers are likely not where the money will ultimately be made. Some startup 10-15 years from now will likely stack a cheaply hosted or distributed LLM with several other technologies, and create a whole new category of use cases we haven't even thought of yet, and that will actually create the new value.
It's basically the Gartner hype cycle in action.
We in aggregate seem to have developed a collective amnesia because of how fast these trends move and how much is burned keeping the hype machine going to keep us on edge. We also need to stop calling LLMs different, the same way every kid wants to claim Mark Zuckerberg was different or Bill Gates was different, as if dropping out like them would make those kids owners of the next infinite riches.
After a long decade of fast-moving “this will truly revolutionize everything” speeches every so often, we need to keep some skepticism. Additionally, the AI bubble is more devastating than the previous ones: earlier, money was spread across multiple hypes, some of which quietly emerged as victors of current trends, but now everything is consolidated into one thing, all eggs in one basket. If the eggs break, a large population and industry will metaphorically starve and suffer.
How much is "the internet" an industry? It's an enabler and a commodity as much as electricity or road networks are. Are you counting everything using the internet as contributing a sizable share to the internet industry's value?
Everyone asks “what if this is like the internet,” but what if it’s actually like the smartphone, which took decades of small innovations to make work? If in 1980 you had predicted that in 30 years handheld computers would be a trillion-dollar industry, you’d have been right, but it still required billions in R&D.
There are a ton of non-software innovations out there; they just require more than a million-dollar seed to get working. For example: better batteries, better solar panels, fusion power, innovations in modular housing, etc.
It's certainly becoming more common, and there are lots of people who want it to be valuable, and indeed believe it's valuable.
Personally, I find it about as valuable as a really, really good intellisense. Which is certainly valuable, but I feel like that's way off from the type/quality/quantity of value you're suggesting.
Additionally, LLMs sort of restore old-school Google mastery, finding the right result quickly and saving a huge amount of wading through junk and SEO spam, which translates to productivity. But then we are balanced out again, because this gained productivity was already lost once SEO spam took off a decade back. I am indifferent about this gain too, as anything with mass adoption tends to devolve into garbage behavior eventually; slowly the gains will again be eaten up.
Your emails, presentations, etc. will all look the same, and what’s worse, so will the emails, presentations, etc. of scammers and phishers.
How is this not indicative of a massive bubble?
Please make a distinction between what people say and what can be measured.
If Ed feels this strongly, he should short NVDA and laugh all the way to the bank when it pops.
Until we return to ZIRP somehow, there is just too much money in AI not to keep the hype going as long as possible, to avoid investor backlash or mass layoffs (cutting costs and recovering because we spent too much on AI but can’t get refunds, so we lay off people instead). Money seems to chase only hype, and because there isn’t enough free and openly available money for other topics, it all flows to the one thing everyone chases.
GameStop is still trading at 82 PE. Insane valuation. Apple and Google are money printers and only trade at 20-30.
I can't tell whether this man actually believes he is the only one critiquing AI. I mean... I can barely walk 2 feet without tripping over anti-AI blogs, posts, news articles, YouTube videos, or comments.
Even if they don't 100% figure out agents they are now big enough that they can acquire those that do.
If the future is mostly about the app layer, then they'll be very aggressive in consolidating, the same way Facebook did with social media; see, for example, Windsurf.
I don't think OpenAI has a moat here at all. Their brand advantage is very temporary.
It will only be years down the road when people start realizing that they're spending millions on AI agents that are significantly less capable than human employees. But by that point OpenAI will already be filthy rich.
To put it a different way - AI is currently a gold rush. While Nvidia sells shovels, OpenAI sells automated prospecting machines. Both are good businesses to be in right now.
OpenAI lost $5B last year. There are many competitors offering similar products, preventing OpenAI from extracting large profits. It isn’t a good business now. Sam Altman is promising it will be in the future.
Does knowing a human wrote this article over days of mulling increase or decrease its value?
Interesting times ahead.
This reminds me of Neon Genesis Evangelion.
Of course, no one realized (at least publicly) that it is a metaphor for "everyone claps at the end" (also the ending of the original series).
The sound of rain is like an audience clapping. "Blood rain" means real claps, not some fake, condescending simulacrum of them.
"So that is how democracy dies? With thunderous applause." is also a reference to the same metaphor.
Both movies relate to the themes of sacrifice, worthiness, and humanity's survival.
Are you guys too much into giant robots to even notice those things?
> people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside,
It took me 20+ ranty paragraphs to realise that this guy is not, actually, an AI doomer. Dear tech writers, there are camps and sides here, and they're all very deeply convinced of being right. Please make clear which one you're on before going off the deep end in anger.
He makes good points! I just wish he'd make them quicker.
That's the key.
Tech is, at the moment, the only growth engine for capitalism, the one that sustains the whole world economy (before, it used to be “just” IT [2015], credit [2008], oil [1973], coal, all of which demonstrated their limits in sustaining continuous growth).
AI is the only growth engine left within tech at this moment, supported by GPU/TPU/parallelization hardware.
Given what's at stake, if/when the AI bubble bursts and there's no alternative growth engine to jump to, the domino effect will not be pretty.
EDIT: clarified.
I don't see good local models happening on mobile devices anytime soon, and the majority of desktop users will use whatever is best and most convenient. Competitive open-source models running on your average laptop? Seems unlikely.
And 2029 is in 4 years. Four years ago leetcode benchmarks still meant something and OpenAI was telling us GPT3 was too dangerous to release.
Check out the MTEB leaderboard:
To my knowledge, roughly 10,000% revenue growth over seven years isn't a reality once you're at that level of volume. Even asking 4o about this projection gets an acknowledgement of that reality:
"OpenAI’s projection to grow from approximately $1 billion in revenue in 2023 to $125 billion by 2029 is extraordinarily ambitious, implying a compound annual growth rate (CAGR) exceeding 90%. Such rapid scaling is unprecedented, even among the fastest-growing tech companies."
Am I missing something? I like OAI and I use ChatGPT every day, but I remain unconvinced of those figures.
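For what it's worth, the arithmetic behind that quoted CAGR claim checks out; a quick sketch using the ~$1B (2023) and ~$125B (2029) endpoints from the projection (whether you compound over six or seven years changes the number, but it stays well above 90%):

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end = 1e9, 125e9  # ~$1B in 2023 -> ~$125B by 2029, per the projection

for years in (6, 7):
    cagr = (end / start) ** (1 / years) - 1
    print(f"{years} years: {cagr:.0%}")  # ~124% over 6 years, ~99% over 7
```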
It's a bit of an open question which comes first; the AI bubble bursting, or Ed Zitron exploding from pure indignation.