Because?
But this isn't 4.6, it's 5.
I can tell the difference between 3 and 4.
And while I do want your money, we can just look at LMArena, which does blind testing to arrive at an Elo-based score: it shows 4.0 with a score of 1318 while 4.5 has 1438. Under the Elo model, that makes 4.5 roughly twice as likely to be judged better on an arbitrary prompt, and the difference is even larger on coding and reasoning tasks.
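For reference, here is a quick sketch of the arithmetic behind that "twice as likely" claim, using the standard Elo expected-score formula (the 1318 and 1438 scores are the ones quoted above):

    # Elo expected score: P(A beats B) = 1 / (1 + 10 ** ((Rb - Ra) / 400))
    r_45, r_40 = 1438, 1318
    p_45 = 1 / (1 + 10 ** ((r_40 - r_45) / 400))
    print(round(p_45, 3))   # ~0.666, vs ~0.334 for 4.0: roughly a 2:1 ratio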
Just like the dot-com bubble, we'll need to wash out a ton of "unicorn" companies selling $1 for $0.50 before we see the long-term gains.
But the only thing I've seen in my life that resembles what is happening with AI (the hype, the usefulness beyond the hype, the vapid projects, the solid projects) is the rise of the internet.
Based on this, I would say we're in the 1999-2000 era. If that's true, what does it mean for the future?
But let’s assume we can for a moment.
If we're living in a 1999 moment, then we might be on a Gartner Hype Cycle-like curve, and I assume we're at the first peak.
Which means that the "trough of disillusionment" will follow.
This is the phase in the Hype Cycle, following the initial peak of inflated expectations, where interest in a technology wanes as it fails to deliver on early promises.
This bubble also seems to combine the worst of the two huge previous bubbles: the hype of the dot-com bubble, plus the housing bubble's massive debt and security bundling, this time in the form of a massive data center buildout.
No, but GenAI in its current form is insanely useful and is already shifting productivity into a higher gear. Even without 100% reliable "agentic" task execution and AGI, this is already some next-level stuff, especially for non-technical people.
How do people trust the output of LLMs? In the fields I know about, sometimes the answers are impressive, sometimes totally wrong (hallucinations). When the answer is correct, I always feel like I could have simply googled the issue and some variation of the answer lies deep in some pages of some forum or stack exchange or reddit.
However, in the fields I'm not familiar with, I'm clueless how much I can trust the answer.
Treat it like a brilliant but clumsy assistant that does tasks for you without complaint – but whose work needs to be double checked.
For one thing, AI can't even count. Ask Google's AI to draw a woman wearing a straw hat. More often than not the woman is wearing a well-drawn hat while holding another in her hand. Why? Frequently she has three arms. Why? Tesla's self-driving vision couldn't differentiate between the sky and a light-colored tractor-trailer turning across traffic, resulting in a fatality in Florida.
For something to be intelligent it needs to be able to think and evaluate the correctness of its own thinking. Not just regurgitate old web scrapings.
It is pathetic, really.
Show me one application where black-box LLM AI is generating a profit that an effectively trained human or a rules-based system couldn't do better.
Even if AI is able to replace a human in some tasks, this is not a good thing for a consumption-based economy with an already low labor-force participation rate.
During the first industrial revolution, human labor was scarce, so machines could economically replace and augment labor and raise standards of living. In the present day labor is not scarce, so automation is a solution in search of a problem, and a problem in itself if it increasingly leads to unemployment without universal basic income to support consumption. If your economy produces too much with nobody to buy it, then economic contraction follows.

Already young people today struggle to buy a house. Instead of investing in chatbots, maybe our economy should be employing more people in building trades and production occupations, where they can earn an income to support consumption, including of durable items like a house or a car. Instead, because of the FOMO and hype about AI, investors are looking for greater returns by directing money toward sci-fi fantasy, and when that doesn't materialize an economic contraction will result.
1. For coding (and the reason coders are so excited about GenAI), it can often be only 90% right, but it's doing all of the writing and researching for me. If I can reduce how much I need to actually type/write in favor of reviewing/editing, that's a huge improvement day to day. And the other 10% can be covered by tests or human-written code to verify correctness.
2. There are cases where 90% right is better than the current state. Go look at Amazon product descriptions, especially things sold from Asia in the United States. They're probably closer to 50% or 70% right. An LLM being "less wrong" is actually an improvement, and while you might argue a product description should simply be correct, the market already disagrees with you.
3. For something like a medical question, the magic is really just taking plain language questions and giving concise results. As you said, you can find this in Google / other search engines, but they dropped the ball so badly on summaries and aggregating content in favor of serving ads that people immediately saw the value of AI chat interfaces. Should you trust what it tells you? Absolutely not! But in terms of "give me a concise answer to the question as I asked it" it is a step above traditional searches. Is the information wrong? Maybe! But I'd argue that if you wanted to ask your doctor about something that quick LLM response might be better than what you'd find on Internet forums.
But I've seen some harnesses (i.e., whatever Gemini Pro uses) do impressive things. The way I model it is like this: an LLM, like a person, has a chance of producing wrong output. A quorum of people plus some experiments/study usually arrives at a "less wrong" answer. The same can be done with an LLM, and to an extent is being done by things like Gemini Pro and o3 and their agentic "eyes" and "arms". As the price of hardware and compute goes down (if it does, which is a big "if"), harnesses will become better by being able to deploy more computation, even if the LLM models themselves remain at their current level.
Here's an example: there is a certain kind of work we haven't quite figured out how to have LLMs do: creating frameworks and sticking to them, e.g. creating and structuring a codebase in a consistent way. But, in theory, if one could have 10 instances of an LLM "discuss" whether a function in code conforms to an agreed convention, that would solve the problem.
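A minimal sketch of that quorum idea, assuming a hypothetical ask_llm() helper (not any real API) that returns a yes/no judgment from one model instance:

    # Ask N independent LLM instances whether a function follows the
    # project's convention, then take a majority vote.
    from collections import Counter

    def ask_llm(prompt: str) -> str:
        """Placeholder for a single LLM call; returns 'yes' or 'no'."""
        raise NotImplementedError

    def conforms_by_quorum(code: str, convention: str, n: int = 10) -> bool:
        prompt = (f"Does this function follow the convention '{convention}'? "
                  f"Answer yes or no.\n\n{code}")
        votes = Counter(ask_llm(prompt) for _ in range(n))
        return votes["yes"] > votes["no"]

The point isn't the code itself; it's that more compute lets you buy down the per-call error rate even without better models.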
There are also avenues of improvement that open up with more computation. Namely, today we use "one-shot" models: you train them, then you use them many times. But the weights of the model aren't being retrained on the output of its actions. Doing that on a per-model-instance basis is also a matter of having sufficient computation at an affordable price. Doing it on a per-model basis is practical already today; the only limitations are legal terms, NDAs, and regulation.
I say all of this objectively. I don't like where this is going; I think this is going to take us to a wild world where most things are gonna be way tougher for us humans. But I don't want to (be forced to) enter that world wearing rosy lenses.
Of course you don't trust the answer.
That doesn't mean you can't work with it.
One of the key use cases for me other than coding is as a much better search engine.
You can ask a really detailed and specific question that would be really hard to Google, and o3 or whatever high end model will know a lot about exactly this question.
It's up to you as a thinking human to decide what to do with that. You can use that as a starting point for in depth literature research, think through the arguments it makes from first principles, follow it up with Google searches for key terms it surfaces...
There's a whole class of searches I would never have done on Google because they would have taken half a day to do properly, but that you can do in fifteen minutes like this.
> There are some classic supply chain challenges such as the bullwhip effect. How come modern supply chains seem so resilient? Such effects don't really seem to occur anymore, at least not in big volume products.
> When the US used nuclear weapons against Japan, did Japan know what it was? That is, did they understand the possibility in principle of a weapon based on a nuclear chain reaction?
> As of July 2025, equities have shown a remarkable resilience since the great financial crisis. Even COVID was only a temporary issue in equity prices. What are the main macroeconomic reasons behind this strength of equities?
> If I have two consecutive legs of my air trip booked on separate tickets, but it's the same airline (also answer this for same alliance), will they allow me to check my baggage to the final destination across the two tickets?
> what would be the primary naics code for the business with website at [redacted]
I probably wouldn't have bothered to search any of these on Google because it would just have been too tedious.
With the airline one, for example, the goal is to get a number of relevant links directly to various airlines' official regulations, which o3 did successfully (along with some IATA regulations).
For something like the first or second, the goal is to surface the names of the relevant people / theories involved, so that you know where to dig if you wish.
Otherwise: common sense, a quick Google search, or letting another LLM evaluate it.
Sure, a lot of answers from LLMs may be inaccurate, but you can mostly identify them as such because your ability to verify (using various heuristics) is good too.
Do you learn from asking people advice? Do you learn from reading comments on Reddit? You still do without trusting them fully because you have sniff tests.
The people who use LLMs to write reports for other people who use LLMs to read said reports? It may alleviate a few pain points, but it generates an insane amount of useless noise.
But once you get out of the tech circles and bullshit jobs, there is a lot of quality usage, as much as there is shit usage. I've met everyone from lawyers and doctors to architects and accountants who are using some form of GenAI actively in their work.
Yes, it makes mistakes, yes, it hallucinates, but it gets a lot of fluff work out of the way, letting people deal with actual problems.
These things, as they are right now, are essentially at the performance level of an intern or recent graduate in approximately all academic topics (but not necessarily practical topics), that can run on high-end consumer hardware. The learning curves suggest to me limited opportunities for further quality improvements within the foreseeable future… though "foreseeable future" here means "18 months".
I definitely agree it's a bubble. Many of these companies are priced with the assumption that they get most of the market; they obviously can't all get most of the market, and because these models are accessible to the upper end of consumer hardware, there's a reasonable chance none of them will capture any of the market: open models will be zero-cost, and the inference hardware is something you had anyway, so it all runs locally.
Other than that, to the extent that I agree with you that:
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
I do so only in that not everyone wants (or would even benefit from) a book-smart-no-practical-experience intern, and not all economic tasks are such that book-smarts count for much anyway. This set of AI advancements didn't suddenly cause all car manufacturers to agree that this was the one weird trick holding back level 5 self-driving, for example.
But for those of us who can make use of them, these models are already useful (and, like all power tools, dangerous when used incautiously) beyond merely being coding assistants.
What should I do with my ETF? Sell now, wait for the inevitable crash? Be all modern long term investment style: "just keep invested what you don't need in the next 10 years bro"?
This really keeps me up at night.
AI is more-or-less replacing people, not connecting them. In many cases this is economically valuable, but in others I think it just pushes the human connection into another venue. I wouldn’t be surprised if in-person meetup groups really make a comeback, for example.
So if a prediction about AI involves it replacing human cultural activities (say, the idea that YouTube will just be replaced by AI videos and real people will be left out of a job), then I’m quite bearish. People will find other ways to connect with each other instead.
The only reason we can have such nice things today, like retina display screens and live video and secure payment processing, is that the original Internet provided enough value without these things.
In my first and maybe only ever comment on this website defending AI, I do believe that in 30 or 40 years we might see this first wave of generative AI in a similar way to the early Internet.
For very simple jobs, like working in a call center? Sure.
But the vast majority of all jobs aren't ones that AI can replace. Anything that requires any amount of context sensitive human decision making, for example.
There's no way that AI can deliver on the hype we have now, and it's going to crash. The only question is how hard - a whimper or a bang?
AI is real just like the net was real, but the current environment is very bubbly and will probably crash.
Same thing now with AI. The capital is going to dry up eventually; no one is profitable right now, and it's questionable whether or not they can be at a price consumers would be willing or able to pay.
Models are going to become a commodity, and just being an "AI Company" isn't a moat. Yet every one of the big names is being invested in as if it is going to capture the entire market, assuming there even is a market in the first place.
Investors are going to get nervous, eventually, and start expecting a return, just like in the dot-com era. Once everyone realizes AGI isn't going to happen, and that you aren't going to meet the expected return running a $200/month chatbot, it'll be game over.
Can anyone shed light on what is going on between these two groups? I wasn't convinced by the rest of the argument in the article, and I would like an explanation that doesn't just rely on "AI".
They are also all tech companies, which had a really amazing run during Covid.
They also resemble companies with growth potential, whereas other companies such as P&G or Walmart might've saturated their markets already.
Only 8 out of the 10 are. Berkshire and JP Morgan are not. It is also arguable whether Tesla is a tech company or whether it is a car company.
Apple is 22% of BRK’s holdings. The next biggest of their investments are Amex, BoA, Coke, Chevron.
They are not a tech company.
In more uncertain scenarios, small companies can't take risks as well as big companies can. The last 2 years have seen AI, a large risk these big companies invested in, pay off. But due to uncertainty, smaller companies couldn't capitalize.
But that's only one possible explanation!
LOL. It's paying off right now, because There Is No Alternative. But at some point, the companies and investors are going to want to make back these hundreds of billions. And the only people making money are Nvidia, and sort-of Microsoft through selling more Azure.
Once it becomes clear that there's no trillion-dollar industry in cheating-at-homework-for-schoolkids, and Nvidia stops selling more in year X than in year X-1, people will very quickly realize that the last 2 years have been a massive bubble.
And I don't know, because I have about 60 minutes a week to think about this, and also good quantitative market analysis is really hard.
So whilst it may sound like a good riposte to go "wow, I bet you make so much money shorting!" knowing that I don't and can't, it's also facile. Because I don't mind if I'm right in 12, 24, or 60 months. FWIW, I thought I'd be right in 12 months, 12 months ago. Oops. Good thing I didn't attempt to "make money" in an endeavor where the upside is 100% of your wager and the downside is theoretically infinite.
If you’re referencing Trump’s tariffs, they have only come into effect now, so the economic effects will be felt in the months and years ahead.
At any point in time the world thinks that those top 10 are unstoppable. In the '90s and early '00s, GE was unstoppable and the executive world was filled with acolytes of Jack Welch. Yet here we are.
Five years ago I think a lot of us saw Apple and Google and Microsoft as unstoppable. But 5-10 years from now I bet we'll see new logos in the top 10. NVDA is already there. Will Apple continue its dominance or go the way of Sony? Is the business model of the internet changing such that Google can't react quickly enough? Will OpenAI (or any foundation model player) go public?
I don't know what the future will be but I'm pretty sure it will be different.
[1] https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...
Typically, you probably need to go down to the S&P 25 rather than the S&P 10.
IMO this is an extremely scary situation in the stock market. The AI bubble burst is going to be more painful than the Dotcom bubble burst. Note that an "AI bubble burst" doesn't necessitate a belief that AI is "useless" -- the Internet wasn't useless and the Dotcom burst still happened. The market can crash when it froths up too early even though the optimistic hypotheses driving the froth actually do come true eventually.
Once users get hooked on AI and it becomes an indispensable companion for doing whatever, these companies will start charging the true cost of using these models.
It would not be surprising if the $20 plans of today are actually just introductory rate $70 plans.
So a big concern then? (Although not a death sentence)
That’s not correct. Did you mean something else?
The benefits just have not been that wide-ranging for the average person. Maybe I'm wrong, but I don't see AI hype as a cornerstone of US jobs, so there are no jobs to suddenly dry up. The big companies are still flush with cash on hand, aren't they?
If/when the fad dies, I'd think it would die with a whimper.
1. Their profits could otherwise be down.
2. The plan might be to invest a bunch up front in severance and AI integration that is supposed to pay off in the future.
3. In the future that may or may not happen, and it'll be hard to tell, because it may pay off at the same time a recession would otherwise be hitting, which smooths it out.
It's almost as if it's not that simple.
What about the software? What about the data? What about the models?
But if, or when, AI gets a little better, then we will start to see a much more pronounced impact. The thing competent AIs will do is super-charge the rate at which profits go to neither labor nor social security, and this time they will have a legit reason: "you really didn't use any humans to pave the roads that my autonomous trucks use. Why should I pay for medical expenses for the humans, and generally for the well-being of their pesky flesh? You want to shut down our digital CEO? You first need to break through our lines of (digital) lawyers and ChatGPT-dependent bought politicians."
The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic. The article follows on with:
>In the last two years, about 60 percent of the stock market’s growth has come from AI-related companies, such as Microsoft, Nvidia, and Meta.
Which is a statement that's been broadly true since 2020, long before ChatGPT started the current boom. We had the Magnificent Seven, and before that the FAANG group. The US stock market has been tightly concentrated around a few small groups for over a decade now.
>You see it in the business data. According to Stripe, firms that self-describe as “AI companies” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
The current Venn Diagram of "startups" and "AI companies" is two mostly concentric circles. Again, you could have written the following statement at any time in the last four decades:
> According to [datasource], firms that self-describe as “startups” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
The dollars are being diverted elsewhere.
Intel, a chip maker that could directly serve the AI boom, has failed to deploy its 2nm and 1.8nm fabs and has instead written them off. The next-generation fabs are failing. So even as AI attracts a lot of dollars, the money doesn't seem to be going to the right places.
The USA lost mass manufacturing (screws and rivets and zippers), and now we are losing cream-of-the-crop, world-class manufacturing (Intel vs. TSMC).
If we cannot manufacture then we likely cannot win the next war. That's the politics at play. The last major war between industrialized nations shows that technology and manufacturing was the key to success. Now I don't think USA has to manufacture all by itself, but it needs a reasonable plan to get every critical component in our supply chain.
In WW2, that pretty much all came down to ball bearings. The future is hard to predict but maybe it's chips next time.
Maybe we give up on the cheapest of screws or nails. But we need to hold onto elite status on some item.
> The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic.
That's the really damning thing about all of this: maybe all this capital could have been invested into actually growing the economy, instead of fueling a speculation bubble that will burst sooner or later and take any illusion of growth down with it.
"Amen. And amen. And amen. You have to forgive me. I'm not familiar with the local custom. Where I come from, you always say "Amen" after you hear a prayer. Because that's what you just heard - a prayer."
At this point, everyone is just praying that AI ends up a net positive, rather than bursting and plunging the world into a 5+ year recession.
2. If people think they can get an abnormally high return, they will invest more than otherwise.
3. Whatever other money would've got invested would've gone wherever it could've gotten the highest returns, which is unlikely to have the same ratio as US AI investments - the big tech companies did share repurchases for a decade because they didn't have any more R&D to invest in (according to their shareholders).
So while it's unlikely the US would've had $0 investment if not for AI, it's probably even less likely we would've had just as much investment.
The big US software firms have the cash and they would invest in whatever the market fad is, and thus, bring it into the US economy.
I doubt it. Investors aren't going to just sit on money and let it lose value to inflation.
On the other hand, you could claim non-AI companies wouldn't start a new bubble, so there'd be fewer returns to reinvest, and that might be true, but it's kind of circular.
This doesn't seem to align with the behavior I've observed in modern VCs. It truly amazes me the kind of money that gets deployed into silly things that are long shots at best.
This is very common, and this happens in literally every country.
But their CAPEX would be much smaller; if you look at current CAPEX from Big Tech, most of it is NVidia GPUs.
If a bubble is happening, then when it pops, the depreciation applied to all that NVidia hardware will absolutely melt the balance sheets and earnings of all the cloud companies, and of companies building their own data centers like Meta and x.ai.
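A rough illustration of that effect, with entirely made-up numbers (not from any actual filing): if GPU purchases are capitalized and a post-bubble reassessment shortens their assumed useful life, the annual depreciation charge against earnings jumps.

    # Toy straight-line depreciation example; all figures are hypothetical.
    capex = 40e9                    # $40B of GPUs capitalized
    assumed_life_years = 6          # optimistic useful life
    revised_life_years = 3          # life after a post-bubble reassessment

    annual_before = capex / assumed_life_years   # ~$6.7B/year hits earnings
    annual_after = capex / revised_life_years    # ~$13.3B/year hits earnings
    print(annual_before, annual_after)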
Like the Internet boom, it's both. The rosy predictions of the dotcom era eventually came true. But they did not come true fast enough to avoid the dotcom bust. And so it will be with AI.
More people subscribe to/play with a $20/m service than own/admin state-of-the-art machines?! Say it ain't so /s
The problem is, $20/m isn't going to be profitable without better hardware, or more optimized models. Even the $200/month plan isn't making money for OpenAI. These companies are still in the "sell at a loss to capture marketshare" stage.
We don't even know if being an "AI Company" is viable in the first place - just developing models and selling access. Models will become a commodity, and if hardware costs ever come down, open models will win.
What happens when OpenAI, Anthropic, etc. can't be profitable without charging a price that consumers won't/can't afford to pay?
This chart is extremely sparse and very confusing. Why not just plot a random sample of firms from both industries?
I'd be curious to see the shape of the annualized revenue distribution after a fixed time duration for SaaS and AI firms. Then I could judge whether it's fair to filter by the top 100. Maybe AI has a rapid decay rate at low annualized revenue values but a slower decay rate at higher values, when compared to SaaS. Considering that AI has higher marginal costs and thus a larger price of entry, this seems plausible. If this is the case, this chart is cherry picking.
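To illustrate that selection effect with a toy simulation (the distributions and parameters here are invented purely for illustration, not real Stripe data):

    import numpy as np

    rng = np.random.default_rng(0)
    # SaaS: better median, thinner tail. AI: worse median, fatter tail.
    saas = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)
    ai = rng.lognormal(mean=0.5, sigma=1.6, size=10_000)

    print("median, all firms:  SaaS", np.median(saas), " AI", np.median(ai))
    print("mean of top 100:    SaaS", np.sort(saas)[-100:].mean(),
          " AI", np.sort(ai)[-100:].mean())

In this toy setup the typical AI firm does worse, but the top-100 comparison flips in AI's favor, which is exactly the cherry-picking concern.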
> AI is propping up the US economy

There, summed it up for you.
Shouldn't the customers' revenue also rise if AI fulfills its productivity promises?
Seems like the only ones getting rich in this gold rush are the shovel sellers. Business as usual.
Not necessarily, see the Jevons paradox.