Michael Burry is betting against AI growth translating into real profits as a whole, not against the circular funding specifically.
It's about keeping the Wall Street bubble momentum going, not the financials. And about how inflation and interest are accounted for.
[0] Consider PagerDuty: incredibly profitable with little revenue growth, trading at 1.5x revenue, while high-revenue-growth, unprofitable companies are trading at 10x revenue.
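To make the multiple concrete, here's a quick price-to-sales sketch; the dollar figures are hypothetical placeholders chosen only to illustrate the 1.5x vs 10x gap, not PagerDuty's actual financials:

```python
# Price-to-sales (P/S): market cap divided by trailing annual revenue.
# All figures below are hypothetical, for illustration only.

def price_to_sales(market_cap: float, annual_revenue: float) -> float:
    return market_cap / annual_revenue

# Profitable, slow-growing company priced at roughly 1.5x revenue
slow_grower = price_to_sales(market_cap=0.75e9, annual_revenue=0.5e9)

# Unprofitable, fast-growing company priced at roughly 10x revenue
fast_grower = price_to_sales(market_cap=5.0e9, annual_revenue=0.5e9)

print(f"Profitable, low-growth   : {slow_grower:.1f}x revenue")
print(f"Unprofitable, high-growth: {fast_grower:.1f}x revenue")
```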
The real story is that Nvidia is accepting equity in their customers as payment for their hardware. "What, you don't have cash to buy our chips? That's OK, you can pay by giving us 10% of everything you earn in perpetuity."
This has happened before; let's call it the "selling the goose that lays golden eggs" scam. You can buy our machine that converts electricity into cash, but we'll only take preorders; after all, it's such a good deal. Then, after building the machines with that preorder money, they of course plugged the machines in themselves instead of shipping them, claiming various "delays" in production. I'm talking about bitcoin mining hardware when it first appeared.
Nvidia is doing something similar, just instead of doing it 100% themselves, they're 10% in by acquiring equity in their customers.
Their shares have been tanking for a month, even after a very good earnings report, so perhaps the market seeks a little more diversity?
Money circulates; it's what it does. The real question is to what extent circulation among a small group of firms is either collusion in disguise (i.e. decisionmaking by only one actual entity falsely measured as multiple independent entities) or a fragile ecosystem masquerading as a healthy one (i.e. an "island economy" where things look great in the current status quo, but the moment the fish go away the entire cycle instantly collapses).
If I wanted to read Gemini's opinion on this issue in the voice of a crank technical analyst, I would have just asked Gemini myself:
- **The Cash Flow Mystery**: ...
- **The Inventory Balloon**: ...
- **The "Paper" Chase**: ...
More AI slop.

It's true SRAM comes with your logic: you get a TSMC N3 (or N6 or whatever) wafer, you've got SRAM. Unfortunately, SRAM just doesn't have the capacity, so you have to augment with DRAM, which you see companies like D-Matrix and Cerebras doing. Perhaps you can use cheaper/more available LPDDR or GDDR (Nvidia have done this themselves with Rubin CPX), but that also has supply issues.
Note it's not really parameter storage (which you can amortize over multiple users) that gets you; it's KV cache storage, and that scales with the user count (rough numbers in the sketch below).
Now Groq does appear to be going for a pure-SRAM play, but if the easily available pure-SRAM option comes at some multiple of the capital cost of the DRAM option, it's not a simple escape hatch from DRAM availability.
It'll be interesting to see if we get any kind of non-NAND persistent memory in the near future, that might beat some performance metrics of both DRAM and NAND flash.
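To put rough numbers on the KV-cache point above, here's a minimal back-of-envelope sketch. It assumes a generic 70B-class model with grouped-query attention; none of these dimensions come from the thread, they're just illustrative defaults.

```python
# Why KV cache, not weights, is the memory cost that scales with users.
# Model dimensions are assumed (70B-class, GQA); adjust for your own model.

def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int, head_dim: int,
                             bytes_per_elem: int = 2) -> int:
    # 2x for keys and values, fp16 elements by default
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128   # hypothetical 70B-class config
CONTEXT_TOKENS = 8192
CONCURRENT_USERS = 100

per_user = kv_cache_bytes_per_token(LAYERS, KV_HEADS, HEAD_DIM) * CONTEXT_TOKENS
total_kv = per_user * CONCURRENT_USERS
weights = 70e9 * 2  # fp16 weights: a fixed cost amortized across all users

print(f"KV cache per user ({CONTEXT_TOKENS} tokens): {per_user / 2**30:.1f} GiB")
print(f"KV cache for {CONCURRENT_USERS} users: {total_kv / 2**30:.0f} GiB")
print(f"fp16 weights (shared): {weights / 2**30:.0f} GiB")
```

Even with these conservative assumptions, a hundred concurrent long-context users need hundreds of GiB of KV cache, which is why on-chip SRAM alone doesn't cover it and the DRAM (or LPDDR/GDDR) supply question matters.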
- Strange paragraph-lists with bolded first words. e.g. "The Cash Flow Mystery"
- The 'It's not just X; it's Y' meme: "Buying Groq wouldn't just [...], it could give them a chip that is actually [...]. It’s a supply chain hedge."
Tells like:
- "My personal read? NVIDIA is [...]"
- "[...]. Now I'm looking at Groq, [...]"
However, even if these parts were AI generated, it's simultaneously riddled with typos and weird phrases:
- "it looks like they are squeezing each other [sic] balls."
- Stylization of OpenAI as 'Openai'.
Not sure what to make of this low-quality prose.
Even if the conclusion is broadly correct, that doesn't mean the reasoning used to get there is consistent.
I do, at least, appreciate that the author was honest up-front with respect to use of Gemini and other AI tools.
Final grade: D+.
Here is a possible roadmap for the coming correction:
1. The Timeline: We are looking at a winter. A very dark and cold winter. Whether it hits before Christmas or mid-Q1 is a rounding error; the gap between valuations and fundamentals has widened enough to be physically uncomfortable.
The Burry thesis—focused on depreciation schedules and circular revenue—is likely just the mechanical trigger for a sentiment cascade.
2. The Big Players:
Google: Likely takes the smallest hit. A merger between DeepMind and Anthropic is not far-fetched (unless Satya goes all the way).
By consolidating the most capable models under one roof, Google insulates itself from the hardware crash better than anyone else.
OpenAI: They look "half naked." It is becoming impossible to ignore the leadership vacuum. It’s hard to find people who’ve worked closely with Altman who speak well of his integrity, and the exits of Sutskever, Schulman, and others tell the real story.
For a company at that valuation, leadership credibility isn’t a soft factor—it’s a structural risk.
3. The "Pre-Product" Unicorns: We are going to see a reality check for the ex-OpenAI, pre-product, multi-billion valuation labs like SSI and Thinking Machines.
These are prime candidates for "acqui-hires" once capital tightens. They are built on assumptions of infinite capital availability that are about to evaporate.
4. The Downstream Impact:
The second and third tier—specifically recent YC batches built on API wrappers and hype—will suffer the most from this catastrophic twister.
When the tide goes out, the "Yes" men who got carried away by the wave will be shouting the loudest, pretending they saw it coming all along.
> leadership credibility isn’t a soft factor—it’s a structural risk.
> The Timeline/The Big Players/The "Pre-Product" Unicorns/The Downstream Impact
If you really do just write like this entirely naturally, then I feel bad, but unfortunately I think this writing style is just tainted.
That, combined with some cooling as the AI hype bubble bursts (see separate articles about companies missing quota because folks aren't buying as much AI as the hype hoped), points to a potentially ugly future where headline demand plummets on top of idle chips waiting to be powered on. Suddenly the market is flooded with chips nobody wants.
I can see how you could make an argument that this particular ouroboros has an insufficient loop area to sustain itself, or more significantly, lacks connection to the rest of the economy, but money has to flow in circles/cycles or it doesn't work at all.
I don't see why. Graphcore bet on SRAM and that backfired because unless you go for insane wafer scale integration like Cerebras, you don't remotely get enough memory for modern LLMs. Graphcore's chip only got to 900MB (which is both a crazy amount and not remotely enough). They've pivoted to DRAM.
You could make an argument for buying Cerebras, I guess, but even at 3x the price, DRAM is just so much more cost-effective than SRAM that I don't see how it can make any sense for LLMs.
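To put the 900MB figure in perspective, here's a rough scale comparison of on-die SRAM against LLM weight footprints; the model sizes and quantization levels are generic assumptions, not claims about any specific chip:

```python
# How far ~0.9 GB of on-die SRAM gets you against modern LLM weights.
# Model sizes/quantization below are generic assumptions, for illustration.

SRAM_BYTES = 0.9e9  # roughly the on-chip SRAM mentioned for Graphcore's part

models = {
    "7B params,  fp16 ": 7e9 * 2,
    "70B params, fp16 ": 70e9 * 2,
    "70B params, 4-bit": 70e9 * 0.5,
}

for name, weight_bytes in models.items():
    print(f"{name}: {weight_bytes / 1e9:6.1f} GB of weights "
          f"= {weight_bytes / SRAM_BYTES:4.0f}x the on-die SRAM")
```

Even aggressive quantization leaves an order-of-magnitude gap before a single token of KV cache is stored, which is the capacity argument for DRAM (or, in Cerebras's case, wafer-scale integration) despite SRAM's bandwidth advantage.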
Dead internet much?