I wonder if this is a thing the U.S. should be worrying about with regard to China taking the lead. As long as the U.S. is … idling … it seems it could catch up—if in fact there is any there there with AI.
But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even. Or, it is already being demonstrated in the lab, but we just don't know about it yet.
The fact that "scaling laws" didn't scale? Go open your favorite LLM in a hex editor, oftentimes half the larger tensors are just null bytes.
LLMs would always bottleneck on one of those two, as computing demand grows crazy quickly with the amount of data, and data is necessarily limited. Turns out people threw crazy amounts of compute into it, so we hit the other limit.
Billions of users allowing them to continually fund their models
Hell by then your phone might be the OpenAI 1. The world's first AI powered phone (tm)
It's been tried before, it generally ends in a crater.
I've been traveling in a country where I don't speak the language or know the customs, and I found LLMs useful.
But I see almost zero difference between paid and unpaid plans, and I doubt I'd pay much or often for this privilege.
All of the tools I use get increasingly better every quarter at the very least (coding tools, research, image generation, etc).
I'm not expressing any judgement on the economics of it.
I'd much rather live in a world of tolerable good and bad opposing each other in moderate ways.
If we produced ASI, things would become truly unpredictable. There are some obvious things on the table: fusion, synthetic meat, actual VR, immortality, an end to hunger, global warming, or war, etc. We probably get these if they can be gotten. And then it's into unknown unknowns.
Perfectly reasonable to believe ASI is impossible or that LLMs don't lead to AGI, but there is not much room to question how impactful these would be.
Seriously though, there's a part of me that hopes the technology can help with technological advancement. Fusion, room-temperature superconductors, working solid-state batteries, ... all of which would help in leaping ahead and making sure everyone on the planet has a good life. Is the risk worth it? I don't know, but that's my reason for wanting AGI.
https://aimagazine.com/articles/openai-ceo-chatgpt-would-hav...
Edit: this was serious, if I read the Wikipedia definition of AGI, ChatGPT meets the historical definition at least. Why have we moved the goal posts?
GPT-5 is nowhere close to this. What are you talking about?
1. Functional Definition of AGI
If AGI is defined functionally — as a system that can perform most cognitive tasks a human can, across diverse domains, without retraining — then GPT-4/5 arguably qualifies:
- It can write code, poetry, academic papers, and legal briefs.
- It can reason through complex problems, explain them, and even teach new skills.
- It can adapt to new domains using only language (without retraining), which is analogous to human learning via reading or conversation.
In this view, GPT-5 isn’t just a language model — it’s a general cognitive engine expressed through text.
Again, I think the common argument is more a religious argument than a practical one. Yes, I acknowledge this doesn't meet the frontier definition of AGI, but that's because it would be sad if it did, not because there's any actual practical sense that we'll get to the sci-fi definition. The view that ChatGPT is already performing most tasks reasonably, at or beyond the edge of human ability, holds up.
But I also think it's natural to move the goal posts.
We try to peer into the future and ask what would convince us of machine intelligence. Academia finally delivers, and we have to revise what we mean by intelligence.
If one, settling a pillow by her head,
Should say: "That is not what I meant at all;
That is not it, at all."
And stock holders realized this last week, all at the same time?
https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
I'm not saying this triggered a sell off, but it is indicative of perception changes.
It was this time last year we were told “2025 will be the year of the agent”, with suggestions that the general population would be booking their vacations and managing their tax returns via Agents.
We’re 7 weeks from the end of the year, and although there are a few notable use cases in coding and math research, agents haven’t proven to be meaningfully disruptive to most people’s economic activity.
Something most people agree is AGI might arrive in the near future, but there’s still a huge effort required to diffuse that technology & its benefits throughout the economy.
We’ve had GPT2 since 2019, almost 6 years now. Even then, OpenAI was claiming it was too dangerous to release or whatever.
It’s been 6 years since the path started. We’ve gone from hundreds of thousands -> millions -> billions -> tens of billions -> now possibly trillions in infrastructure cost.
But the value created from it has not been proportional along the way. It’s lagging behind by a few orders of magnitude.
The biggest value add of AI is that it can now help software engineers write some greenfield code +40% faster, and help people save 30 seconds on a Google search -> reading a website.
This is valuable, but it’s not transformational.
The value returned has to be a lot higher than that to justify these astronomical infrastructure costs, and I think people are realizing that they’re not materializing and don’t see a path to them materializing.
Now, with rates falling, they can pivot the story: call it an AI bubble, let it crash, then use the crash as justification for renewed, open money printing.
July 2024, https://x.com/stealthqe4/status/1818782094316712148
> We’ve all been wondering where all of this liquidity is coming from in the markets. Stealth QE was being done somehow. Now we have the answer! It’s all in the Treasury increased t-bill issuance. QE has now been replaced by ATI.
I personally just keep investing in cheap total world market funds and let the market do its thing.
Market cap is mostly a useless number. It's the current stock price multiplied by the number of outstanding shares. But only a small % of shares are bought and sold in a given day, so the current stock price is mostly irrelevant to the shares that aren't moving.
If you hold some stock, and the current stock price goes down, but you don't sell your stock, then you haven't lost any actual money. The so-called "value" of your stock may have dropped, but that's just a theoretical value. Unless you're desperate to sell now, you can wait out the downturn in price.
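The arithmetic behind both points is simple enough to sketch in a few lines; the share count and prices below are made up purely for illustration:

```python
# Minimal sketch of the "market cap" arithmetic; all numbers are made up.
shares_outstanding = 2_500_000_000   # hypothetical share count
price_yesterday = 100.00             # hypothetical closing price
price_today = 95.00                  # hypothetical price after a 5% dip

market_cap_before = shares_outstanding * price_yesterday
market_cap_after = shares_outstanding * price_today

# The "lost" market cap is just every outstanding share repriced at the new
# quote, including the vast majority of shares that never traded today.
paper_loss = market_cap_before - market_cap_after
print(f"Market cap change: ${paper_loss / 1e9:.1f}B")   # prints $12.5B

# A holder who doesn't sell realizes nothing; only shares actually sold lock in a loss.
shares_held = 100
unrealized_change = shares_held * (price_today - price_yesterday)
print(f"Unrealized change on 100 held shares: ${unrealized_change:.2f}")   # prints $-500.00
```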
If it moves enough, shares that aren't moving might become shares that are though. Unless a company's stock is all held by Die Hard True Believers who will HODL through the apocalypse and beyond, the market price can matter.
We'd also have to run the same argument on the upside too. Does the current stock price matter to those who aren't selling when it goes 2x in a year?
I didn't say that stock price is totally irrelevant, but if you're investing for the long term, short-term fluctuations mostly shouldn't change your strategy.
In any case, the headline is inaccurate. Unsold stock losing market value is not the same as stock sold off.
Tech stock market capitalization declined by $1T.
Every share of stock sold by one party was purchased by another party, as always.
For the price of shares to fall, selling pressure in the market has to outweigh buying pressure. The fact that the price dropped is how we know this is a selloff and not a buyoff.
I just checked Oracle, Palantir, and Nvidia stock and they don't seem particularly down. Only Meta seems down, from $750 to $620, which is about a 17% drop (back to the value it had in April 2025), or a drop of roughly $277 billion in market cap.
Is there any data supporting the article's claim of a $1T drop in stock value?
- Nvidia -11%
- Palantir -16%
- Oracle -11%
- Meta -5%
With some very quick and extremely cursory napkin maths I do get into the 800 billion range, which the original article mentioned. I guess the linked article rounded it up to make it more sensational.
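For reference, the napkin math is just "assumed pre-drop market cap × percentage drop, summed over the names above". The market caps in this sketch are rough, assumed ballpark figures rather than checked quotes:

```python
# Napkin estimate of the combined market-cap decline implied by the
# percentage drops listed above. Pre-drop market caps (in $B) are rough,
# assumed ballpark figures, not checked against any source.
approx_cap_billions = {"Nvidia": 4500, "Palantir": 450, "Oracle": 700, "Meta": 1600}
drop_pct = {"Nvidia": 0.11, "Palantir": 0.16, "Oracle": 0.11, "Meta": 0.05}

total_drop = sum(approx_cap_billions[name] * pct for name, pct in drop_pct.items())
print(f"~${total_drop:.0f}B")   # ~$724B with these assumptions, i.e. the same ballpark
```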
I am also getting annoyed at AI. In the last few days, more and more total-garbage AI videos have been swarming YouTube. This is a waste of my time, because what I see is no longer real but a fabrication. It never happened.
This is a weekly chart of Nvidia from 2023 to 2024. During that period, the stock dropped from $95 to $75 in just two weeks. How would you defend the idea that a major correction wouldn’t have happened back in 2023–2024? Would you have expected a correction at that time? After all, given a long enough timescale, corrections are inevitable.
Nvidia’s stock price is not the start and end of AI investments. OpenAI is losing over $11bn a quarter, more than they were losing in 2023, and debt accumulates over time. Reality will set in eventually when investors realize their promised future isn’t coming any time soon. Nvidia’s valuation is in large part due to the money OpenAI and others are giving it right now. What do you think will happen when that money goes away?
Okay?
Last I heard they are bent on mass firings, outsourcing for cheap labor, cutting costs and enriching themselves.
Unless there is strong regulation that forces them to actually contribute or be punished, they will do whatever they can to profit.
Expanding assets can mean building new factories, ordering more raw materials, or entering new markets. Each of these steps involves third-party vendors: construction firms to build facilities, delivery companies to transport materials, mining companies to extract resources, suppliers, logistics providers, marketers, and contractors.
All of this spending creates jobs. Maybe not directly within their own company, but across the many other businesses that support their growth.
> There are also companies like Sweetgreen, the salad company that has tried to position itself as an automation company that serves salads on the side. Indeed, Sweetgreen has tried to dabble in a variety of tech, including AI and robots
Please just make me a good salad.
Anything after that doesn't matter.
They do the robotics part, and then remotely operate them (though on paper it is officially "hybrid").
No, I’m not paying $15+ for a salad. But plenty of people do.
(and fwiw lots of people will pay a lot for a good salad…)
The unpredictability of salad composition is what makes our products so unique and loved by people all around the world!*
*While on average it's a very good salad, there's a non-zero chance that the salad may contain asbestos, plutonium, chalk, antimony, rubber, NaN, or steel rods.
Could you imagine being offered a ticket to Arbyfest or Jambacon?
Like you said, please just make a good salad.
I got Google’s DPA update email, which included a number of Uber’s model-training side gigs and analytics products. I’m guessing this all came out of the self-driving car project, but it’s another - albeit less goofy - data point of “We’re an AI Company but we do X for money.”
I feel like I see a few of these every month.
A few years ago Foursquare realized their business model was less profitable than that of the data aggregator they used, so they bought the aggregator and basically became that company, given the other hooks they have. I sort of wonder if that’s what is running through some of these companies’ C-suite meetings.
I find it strange and upsetting when articles talk about the "evaporation" of "market valuation". Market value is already meaningless vapor - it's not like real money was created when the stock price went up, nor has anything of concrete value disappeared.
No matter what, LLMs are here to stay. Companies are making huge investments so that they can get ahead early on.
Will it need to be more efficient? Yes.
But a lot of the money for repetitive tasks is going to LLMs. Additionally, most companies are constrained by capacity at the moment.
I really can't imagine what could revert this trend.
Yes, body shops like Palantir are hugely overrated.
But the big tech? No. They can carry those infra investments. Just curious who will come out on top.
OpenAI loses billions, that's true. That doesn't mean we're in a bubble. They also make billions. If their losses continue, they will keep losing more and more control. Microsoft already owns 30% of OpenAI. Big tech companies have too much cash on their hands and they cannot acquire their competitors, so instead they invest money in them. Either way, it's called consolidation.
Sam Altman said they have revenue, but didn't say how much, did he?
We've heard people saying Google is making a profit on their AI offering, but I don't think anyone else has their infrastructure with TPUs etc.
Gizmodo is just regurgitating this Financial Times article into a poor quality opinion piece. Journalism is preferred to someone ranting from an armchair IMO.
That's what people are grousing about.
I wouldn't use it for investment decisions, however.
a) it's $800B
b) this is the largest such selloff since April
https://archive.ph/bzr5G
What actually happened is that market cap declined by that amount, where market cap of course is just latest share price multiplied by shares outstanding.
Nobody should be surprised or care that this number fluctuates, which is why certain people try really hard to make it seem more interesting than it really is. Otherwise they'd be out of a job.
There is really nothing dumber than finance news.
We will never see another 1929 crash in which rich people had to sell off their cars.
Do you have this data out to 2025?
Reminds me of Enron, really.
After spring 2023, Nvidia stock seems to follow a pattern. It has a run-up prior to earnings, it beats the forecast, with the future forecast replaced with an even more amazing forecast, and then the stock goes down for a bit. It also has runs - it went up in the first half of 2024, as well as from April to now.
Who knows how much longer it can go on, but I remember 1999 and things were crazier then. In some ways things were crazier three years ago with FAANG salaries etc. There is a lot of capital spending, the question is are these LLMs with some tweaking worth the capital spending, and it's too early to tell that fully. Of course a big theoretical breakthrough like the utility of deep learning, or transformers or the like would help, but those only come along every few years (if at all).