It’s reasonably obvious that there are some very high expectations baked into certain equity valuations.
I'll leave it to the reader to take a view on whether that makes sense.
https://www.stlouisfed.org/publications/regional-economist/a...
https://www.theguardian.com/business/1999/dec/20/nasdaq.efin...
This time, "AI" has been hyped up more than tech in 1999 by the media. The media has just reversed course in 2025 because they found out that most people hate "AI". in 2023-2024 it was mainly hype.
Yet, the markets continued rapidly upward for another FOUR years. Shorting the high-flying stocks with negligible income in 1997 or 1998 would have been completely sensible. And it would have wiped you out, as you would have been years too early.
It just proves the adage: "The markets can remain irrational longer than you can remain solvent."
Today, the levels of (over-)investment relative to income are even more extreme. But when is the time to call it?
"The notion that Amazon.com will be allowed to corner the market in on-line book sales is wholly implausible."
What does Ja Rule think about bubbles? Ask him why markets can remain irrational longer than you can remain solvent.
If you could simply "value the market" in some analytical way, then a simple neural network could approximate that valuation (universal approximation). Instead you see the vast majority of machine-learning quant funds fail. Here are the specific details as to why: https://www.youtube.com/watch?v=BRUlSm4gdQ4
That's called "anecdotal gambling success", and a lot of smart people suffer from this fallacy.
Apply formal causal inference procedures (propensity score matching, doubly robust estimators, causal forests, targeted maximum likelihood estimation (TMLE)) to test whether standard fundamental variables like P/E, P/B, EV/EBITDA, ROE, ROA, gross profitability, or free cash flow yield causally influence forward equity returns, and you consistently find the same thing: these metrics exhibit negligible average treatment effects once confounders and colliders are controlled for.
In other words, across modern causal inference frameworks, the estimated causal impact of common fundamental signals on subsequent returns is statistically indistinguishable from zero, which indicates that most traditional fundamentals have no meaningful predictive or causal power for future price performance. There's no alternative opinion to be had. You're just wrong. You can continue gambling if you like, but you're not doing any kind of predictive analysis.
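For anyone curious what such a test looks like mechanically, here is a minimal doubly robust (AIPW) sketch on synthetic data. Everything in it is a stand-in I'm assuming for illustration: the confounders, the data-generating process, and the "treatment" (being cheap on P/E). It is not anyone's production methodology.

    # Minimal AIPW (doubly robust) sketch on synthetic data.
    # All names and the data-generating process here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    size = rng.normal(size=n)        # hypothetical confounder: market cap
    momentum = rng.normal(size=n)    # hypothetical confounder: trailing return
    # "Cheap on P/E" is more common among small, low-momentum names (confounding).
    treated = rng.binomial(1, 1 / (1 + np.exp(size + momentum)))
    # Forward return depends only on the confounders: the true effect is zero.
    y = 0.5 * size + 0.3 * momentum + rng.normal(size=n)

    X = np.column_stack([size, momentum])
    e = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)       # avoid extreme propensity weights
    mu1 = LinearRegression().fit(X[treated == 1], y[treated == 1]).predict(X)
    mu0 = LinearRegression().fit(X[treated == 0], y[treated == 0]).predict(X)

    # AIPW estimate of the average treatment effect of "cheap P/E" on returns.
    ate = np.mean(mu1 - mu0
                  + treated * (y - mu1) / e
                  - (1 - treated) * (y - mu0) / (1 - e))
    print(f"Estimated ATE: {ate:.4f}")  # should come out near zero

A naive comparison of mean returns between the two groups would show a spurious effect here; the AIPW estimator recovers roughly zero because it adjusts for the confounders through both the propensity and outcome models.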
He does this because he understands how money works and how Bitcoin works.
This isn't rocket science. Does anybody sane believe that OpenAI will actually spend the $1.5 trillion they project? Quite a big chunk of Oracle's, Nvidia's, and others' projections are based on this $1.5 trillion. They are losing money, and their revenue is 1% of this figure.
And is it really overvalued if AGI is achieved? Sounds like the risk/reward profile of the gamble is already priced in, and the valuation is appropriate... But I guess if you take it to the logical conclusion: if AGI is achieved, everyone will be out of a job, so scarcity-based economics, built on scarce labor input, will itself have to be redone. Wild speculation there.
What? Who's profitable except Nvidia, which is selling shovels (increasingly to itself)?
Edit: profitable on AI, that is. If by profitable you mean from other sources, then yes.
Is this the same crash that happened when tariffs were announced? What about the 2022 crash? What sort of crash are we talking about?
IMO the AI incumbents want to provoke smaller pullbacks on the way up to A) kill competitors who can't handle it and B) prevent a catastrophic crash that would actually hurt them.
That's why we see stuff like Thiel selling his NVDA holdings. He's just going to buy back in later.
The top is overperforming, so the concentration is real, but it's not the only growth driver:
https://www.fool.com/research/magnificent-seven-sp-500/
The return/valuation concentration is probably not as titillating as the P/E run-up:
https://finance.yahoo.com/news/surprisingly-excellent-run-st...
Whether that's true is hard to say without better definitions of terms, but the current profile doesn't seem especially common in history; still, it's not a bunch of losers getting diluted out.
Do you have a test for this?
Or is it based on the presumption that reasoning skills cannot evolve, that they can only be the result of "intelligent design"?
> I don’t know how to address the evolve part. LLMs don’t directly mutate and have selective pressure like living organisms.
Sorry, that was poorly worded. I meant "can reasoning skills not be evolved through the neural net training phase?"
Sure, once you deploy an LLM, it does not evolve any more.
But let's say you have a person, Tom, with 5-minute short-term memory loss, meaning he can never remember more than 5 minutes back. His reasoning skills are completely static, based only on his education before the memory-loss accident plus the last 5 minutes.
Is "5-minute Tom" incapable of reasoning because he can't learn new things?
> They appear completely brain dead at times
Yes, definitely. But they also manage to produce what looks like actual reasoning in other cases. Meaning, "reasoning, not pattern matching".
So if a thing can reason at some times and in some cases, but not in others, what do we call that?
An LLM is a lot like a regular CPU. A CPU basically operates step by step: it takes inputs, a state memory, and stored read-only data, and puts those through combinational logic to compute new outputs and updates to the state memory.
An LLM does the same thing. It runs step by step: it takes the user input plus its own previous output tokens and stored read-only data, and puts those through a huge neural network to generate the next output token.
The "state memory" in an LLM (the context window) is a lot more limited than a CPU's RAM plus disk, but it's still a state memory.
So I don't have a problem imagining that an LLM can perform some level of reasoning. Limited and flawed for sure, but still a different creature than "pattern matching".
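To make the analogy concrete, here is a minimal sketch of that step loop. `model` is a hypothetical pure function from the current token sequence to the next token, standing in for the frozen network; the point is that the only thing that changes between steps is the state.

    # Minimal sketch of the "LLM as step machine" analogy.
    # `model` is a hypothetical pure function: token sequence -> next token.
    def generate(model, prompt_tokens, max_steps=256, eos=0):
        state = list(prompt_tokens)    # "state memory" = the context window
        for _ in range(max_steps):     # step by step, like CPU clock cycles
            next_token = model(state)  # "combinational logic" = frozen network
            state.append(next_token)   # only the state memory is updated
            if next_token == eos:
                break
        return state

Nothing in the loop mutates the network itself, which is the sense in which a deployed LLM is "read-only" like a CPU's stored program.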
This was very true of the dotcom bubble. The entire "web" was new, and the promise was everything you use it for today.
Pets.com was a laughing stock for years as an example of dotcom excess, and now we have chewy.com, successfully running the same model.
Webvan.com was a similar example of "excess", and now we have Instacart and others.
I looked up Webvan just now; the postmortem seems relevant:
"Webvan failed due to a combination of overspending on infrastructure, rapid and unproven expansion, and an unsustainable business model that prioritized growth over profitability."
I understand training is still costly, but it's not unimaginable for it to turn profitable as well, if you believe they'll generate trillions in value by eliminating millions of jobs.
https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e...
So, to get a trillion in value, you'd have to eliminate many tens or even hundreds of millions of jobs.
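Rough arithmetic, with per-job numbers I'm assuming rather than taking from the article: at $100,000 of net savings per eliminated job, $1 trillion / $100,000 ≈ 10 million jobs; at $10,000 per job, it takes 100 million. The whole estimate hinges on what you assume each eliminated job is worth.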
I don't believe this has been the case or claim at all. At best they have recognized some limited use cases in certain models where API tokens have generated a gross profit.
Probably not, but the numbers they've released are too opaque to tell.
The problem with dotcom was that we needed a cultural shift. I had my first internet date during the dot-com bubble, and I remember we would lie to people about how we met because the idea sounded so insane at the time to basically everyone. In 1999 it seemed kind of crazy to even use your real name online, let alone put your credit card into the web browser.
Put your credit card into the internet browser, and then a stranger brings you items in their van? Completely insane culturally in 1999. It would have sounded like the start of an Unsolved Mysteries episode to the average person. There was no market for that back then.
The lesson I take from dotcom is that we had this massive bubble and bust over technology that already existed, worked flawlessly, and largely just needed time for the culture to adapt to it.
The main difference this time is we are pricing in technology that doesn't actually exist.
I can't think of another bubble that was based on something that doesn't exist. The closest analogy I can come up with is the railroad bubble, but with the trains not existing outside of some vague theoretical idea we don't yet know how to realize: a bubble in laying down rail because of how big it will be once we figure out how to build the trains.
The only way you would get a bubble that stupid would be to have 50-100 years of art, stories and movies priming the entire population on the inevitability of the train.
Nobody blinks twice nowadays at getting into a car with a total stranger.
I think this is a very large overstatement. Many large problems still exist in robotics which cannot be papered over with LLMs. I'm familiar with problems in manipulation and affordable sensing, which will not be solved via LLMs and which are fundamentally necessary for reliable and safe interaction with the real world; I'm confident there are many others. LLMs probably will help with high-level planning. But that's a subset of the many problems that keep robots from becoming mass-market products.
I think people are excited about robotics because of the many demo videos that have been coming out of various companies. However, almost all of these videos are smoke and mirrors. If you ask somebody who works on those demos, they will unashamedly tell you all the ways they worked around their technical limitations to get just the right footage. The PR departments are less upfront with that info.
OpenAI, NVIDIA, Microsoft, Apple, Amazon, etc. obviously won’t collapse.
The money being thrown around is mind boggling. However, we’ve been throwing this type of money around for a handful of years now.
Tons of layoffs, homelessness, corruption, unemployment, difficulty for everyone in finding a job, the incoming SNAP meltdown, the government shutdown and the mess it's going to cause for a while. None of it makes sense. It's pure crazy, because everything should have imploded by now. The tech layoffs and government layoffs alone should be causing a shitstorm of misery out there, but it's hidden somehow.
AI isn’t going away. It’s here to stay. It has already become embedded in so many core things we do every day, and so many jobs are affected by it: marketing, graphic design, writing, many jobs in Hollywood like storyboarding and voiceover work, and the creative process behind so many things. So many scenes in movies today are CGI and it’s hard to tell, like 3-second shots or CGI overlays. All of that will be created with a prompt in the next couple of years. Sure, some editing will be needed for the generated scenes, but with far less staff. The key takeaway is that this equates to millions of jobs vanishing rather quickly, core jobs that people of all ages built their careers on.
Don’t get lost in the details of AI generating garbage or not. The remaining companies that survive will continue to make it better. Don’t think for a second there will be a resurgence in these jobs coming back because everyone thinks a human can do it better.
All those data center GPU buildouts will not go to waste. We’re headed for a dystopia that’s even worse than the one we’re living in right now.
Wait until a few people are killed by police for stealing food from a grocery store because they are starving and need to feed their families. That will be the first time we get close to a civil war becoming a reality.
I'm amazed at the sheer volume of resources allocated to all this so quickly, more than in any other boom I've seen. Society can raise trillions quickly if it means not employing people, I guess? Knowing basic economics, I don't buy the utopia case of AI for the majority of people, particularly the middle class. To be clear, most people I meet anecdotally (especially outside the tech space) are net negative on the changes AI is making to their lives, even if they work in jobs like the trades.
It has had a profound impact, probably much more (no matter which camp wins the argument, AI bulls or bears) than its inventors ever thought it would. It makes me wonder whether the people who invented this, once they see the end result, will be happy with their invention and the changes it will create in the world. If they aren't happy in hindsight, it says something about the all-too-common naivete of techies about the impact of their work on society and economics until that impact actually occurs.