(Although I think the utility of server farms will not be high after the bubble bursts: even if cheap, they will quickly become outdated. In that respect, things are different from railway tracks.)
•On March 16, 2020, all publicly-traded companies together were worth ~$78T; at present, ~$129T.
•Back then, the largest publicly-traded company in the world was ~$2T (Saudi Aramco, not even in the top ten anymore).
•Nvidia (the current largest, at $4.3T) was "only" ~$0.6T.
•The top seven public tech companies are where the predominant gains have accrued and held.
•Gold has doubled over the same period.
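A quick back-of-the-envelope check of those multiples in Python (using the approximate dollar figures quoted above, not live data):

    # Approximate market caps quoted above, in USD (not live data)
    total_2020, total_now = 78e12, 129e12   # all publicly-traded companies
    nvda_2020, nvda_now = 0.6e12, 4.3e12    # Nvidia

    print(f"total market: {total_now / total_2020:.2f}x")  # ~1.65x
    print(f"Nvidia:       {nvda_now / nvda_2020:.2f}x")    # ~7.17x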
>what kind of effects are we going to see
•Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stock-holders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.
•UBI & conscription, if only to avoid the previous bullet point.
¢¢, hoping I'm wrong.
The article mentions
> This is a bubble driven by vibes not returns ...
I think this indicates some investors are seeing a return. I know AI is expensive to train and somewhat expensive to run, though, so I'm not really sure what the reality is.
In my uninformed opinion, companies that spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026 I think a lot of execs will be getting antsy if they can't defend their investments.
Each cycle filters out the people who are not actually interested in AI: the grifters and shysters trying to make money.
I have a private list of them, running from 2006 to today.
LLMs ≠ AI, and if you don’t know this then you should be worried, because you are going to get left behind: you don’t actually understand the world of AI.
Those of us who are “forever AI” people are the cockroaches of the tech world, and eventually we’ll be all that is left.
Every former “expert systems scientist”, “Bayesian probability engineer”, “computer vision expert”, “big data analyst”, and “LSTM guru” is having no trouble implementing LLMs.
We’ll be fine
LLMs are inappropriately hyped, and surrounded by shady practices to make them a reality. I understand why so many people are anti-LLM.
But empty hype? I just can't disagree more.
They are generalized approximation functions that can approximate all manner of modalities, surprisingly quickly.
That's incredibly powerful.
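To make "generalized approximation function" concrete, here is a toy sketch in Python (the vocabulary and logits are made up, not from any real model): the model's entire interface is a learned score per candidate next token, and softmax sampling turns those scores into p(next token | context).

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
        # Softmax turns the model's raw scores into a probability
        # distribution over the vocabulary, which we then sample.
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()                     # numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return rng.choice(len(p), p=p)

    # Hypothetical vocabulary and logits for the context "the cat sat on the"
    vocab = ["mat", "dog", "moon", "roof"]
    logits = [3.1, 0.2, -1.0, 1.5]       # made-up model outputs
    print(vocab[sample_next_token(logits, temperature=0.7)])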
They can be horribly abused, their failure modes are unintuitive, using them can open entirely new classes of security vulnerabilities, and we don't have proper observability tooling to deeply understand what's going on under the hood.
But empty hype?
Maybe we'll move away from them and adopt something closer to world models, or use RL / something more like Sutton's OaK architecture, or replace backprop with something like forward-forward (a sketch follows below), but it's hard to believe HAL-style AI is going anywhere.
They are just too useful.
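For the curious, the forward-forward idea mentioned above replaces a global backpropagated error with a purely local objective per layer. Here's a minimal numpy sketch of one such layer, my own toy reading of Hinton's 2022 proposal rather than a faithful implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    class FFLayer:
        """One forward-forward layer: push the 'goodness' (sum of squared
        activations) above a threshold for real data and below it for
        negative data, using only local information; no backprop."""
        def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
            self.W = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out))
            self.lr, self.theta = lr, theta

        def forward(self, x):
            # Normalize so the previous layer's goodness can't leak through
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            return np.maximum(0.0, xn @ self.W)   # ReLU activations

        def train_step(self, x, is_positive):
            y = 1.0 if is_positive else 0.0
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            h = np.maximum(0.0, xn @ self.W)
            g = (h ** 2).sum(axis=1)                     # goodness
            p = 1.0 / (1.0 + np.exp(-(g - self.theta)))  # P(sample is real)
            # Logistic-loss gradient ascent, local to this layer
            self.W += self.lr * (xn.T @ (2.0 * h * (y - p)[:, None])) / len(x)
            return h  # activations feed the next layer; no gradients flow back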
Programming and the internet were overhyped too and had many of the same classes of problems.
We have a rough draft of AI we've only seen in sci-fi. Pandora's box is open and I don't see us closing it.
I work for a major research lab. There is so much headroom, so much left on the table with every project, so many obvious directions to go to tackle major problems. These last 3 years have been chaotic sprints: Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data. Insane progress that somehow just gets continually denigrated by this community.
There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.
It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know a ton of it will be waste; none of these articles are surprising anyone, and they are just not very insightful. The only silver lining to reading the articles and the comments is the hope that all of you are investing optimally for your beliefs.
The thing to remember about the HN crowd is it can be a bit cynical. At the same time, realize that everyone's judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing, external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, the output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.
toss1•1h ago
As we learn more about the capabilities and limits of LLMs, I see no serious arguments that scaling up LLMs with increasingly massive data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue that fundamental breakthroughs in different areas will be needed to yield orders-of-magnitude greater utility, never mind AGI (not that further refinement won't yield useful results, only that it won't break out).
So one question is timing — When will the crash come?
The next is: how can we collect the best available models in an open and preferably independent/distributed/locally-usable way, so we retain access to the tech when the VC-funded data centers shut down?
jbreckmckye•43m ago
If GenAI really were just a "glorified autocorrect" or a "stochastic parrot", it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.
Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.