To me, we're clearly not at peak AI exuberance. AI agents are just getting started and getting so freaking good. Just the other day, I used Vercel's v0 to build a small business website for a relative in 10 minutes. It looked fantastic and was very mobile friendly. I fed the website to ChatGPT 5.1 and asked it to improve the marketing text, then fed those improvements back to v0. Finished in 15 minutes. In the past it would have taken me at least a week to do a design, code it, test it for desktop/mobile, and write the copy.
The way AI has disrupted software building in 3 short years is astonishing. Yes, code is uniquely great for LLM training due to open source code and documentation but as other industries catch up on LLM training, they will change profoundly too.
Yes, it is even one of the necessary components. Everybody is twitchily afraid of the pop, but the immediate returns are too tempting, so they keep their money in. The bubble pops when something happens and they all start panicking at the same time. They all need to be sufficiently stressed for that mass run to happen.
In other words, do you think we're in 1995 of the dotcom or 2000?
So what if it's subsidized and companies are in market share grab? Is it going to cost $40 instead of $20 that I paid? Big deal. It still beats the hell out of $2k - $3k that it would have taken before and weeks in waiting time.
100x cheaper, 1000x faster delivery. Furthermore, v0 and ChatGPT together surely did much better than the average web designer and copywriter.
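For what it's worth, the multiples roughly check out as back-of-envelope arithmetic. This is just a sketch using the dollar and time figures quoted above (the "week" is counted in wall-clock minutes), not measured data:

```python
# Back-of-envelope check of the "100x cheaper, 1000x faster" claim.
# All inputs are the commenter's own estimates from the thread.
old_cost_usd = 2500          # midpoint of the $2k-$3k quoted
new_cost_usd = 20            # subscription price actually paid
old_time_min = 7 * 24 * 60   # "at least one week", in wall-clock minutes
new_time_min = 15            # time actually spent

cost_multiple = old_cost_usd / new_cost_usd    # ~125x cheaper
speed_multiple = old_time_min / new_time_min   # ~672x faster

print(f"~{cost_multiple:.0f}x cheaper, ~{speed_multiple:.0f}x faster")
```

So "100x cheaper" is in the right ballpark, and "1000x faster" is close if you count the weeks of waiting rather than hours of work.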
Lastly, OpenAI has already stated a few times that they are "very profitable" in inference. There was an analysis posted on HN showing that inference for open source models like Deepseek are also profitable on a per token basis.
Think about the pricing. OpenAI fixed everyone's prices to free and/or roughly the cost of a Netflix subscription, which in turn was pinned (originally) to the cost of a cable TV subscription. These prices were made up to sound good; they weren't chosen based on sane business modelling.
Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
If the numbers leaked to Ed Zitron are true then they aren't profitable on inference. But even if that were true, so what? It's a meaningless statement, just another way of saying they're still under-pricing their models. Inferencing and model licensing are their only revenue streams! That has to cover everything including training, staff costs, data licensing, lawsuits, support, office costs etc.
Maybe OpenAI can launch an ad network soon. That's their only hope of salvation but it's risky because if they botch it users might just migrate to Grok or Gemini or Claude.
> Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
Maybe it was because demand was so high that they didn't have enough GPUs to serve it? Hence the insane GPU demand? The question is: is the value generated by AI aligned with the market value currently priced into AI companies' valuations? That's what's more difficult to assess.
The gap between fundamental financial data and valuations is very large. The risk is a brutal reassessment of these prices. That's what people call a bubble bursting and it doesn't mean the underlying technology has no value. The internet bubble burst yet the internet is probably the most significant innovation of the past twenty years.
The problem is that no one attained that position: price expectations are set, and it turns out the wishful thinking about reducing the cost of running the models by orders of magnitude wasn't fruitful.
Is AI useful? Of course.
Are the real costs of it justified? In most cases, no.
> The question is: is the value generated by AI aligned with the market projected value as currently priced in AI companies valuation? That's what's more difficult to assess.
I agree it is difficult to assess. Right now, competitive pressure is causing big players to go all in or get left behind. That said, I don't think the bubble is done growing, nor do I think it is about to burst.
I personally think we are in 1995 of the dotcom bubble equivalent. When it bursts, it will still be much bigger than in November 2025.
It's how much money is being poured into it, how much of that money is just changing hands between the big players, the revenue, and the valuations.
If hyperscalers keep buying GPUs, and Chinese companies keep saying they don't have enough GPUs, especially advanced ones, why should we believe someone saying it's a bubble based on "feel"?
The vast majority of AI doomers in the mass media have never used tools like v0 or Cursor. How would they know that AI is overvalued?
Startups and other unprofitable companies however...
But unlike the '08 crisis, we're getting a heads-up to bring out the lube.
Oracle will likely fail. It funded its AI pivot with debt: its debt-to-revenue ratio is 1.77, its debt-to-equity ratio is 520%, and it has a free-cash-flow problem.
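For clarity, this is how those two leverage ratios are computed. The balance-sheet inputs below are hypothetical placeholders chosen only to reproduce the quoted ratios, not figures from Oracle's actual filings:

```python
# How the two leverage ratios above are defined.
# Inputs are illustrative placeholders, not real balance-sheet data.
total_debt = 104.0   # $B, hypothetical
revenue    = 58.8    # $B, hypothetical
equity     = 20.0    # $B, hypothetical

debt_to_revenue = total_debt / revenue       # ~1.77: debt vs. annual revenue
debt_to_equity  = total_debt / equity * 100  # ~520: debt as a % of equity

print(round(debt_to_revenue, 2), round(debt_to_equity))
```

A D/E above 100% means the company owes more than its shareholders nominally own, which is why a figure in the hundreds reads as heavy leverage.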
OpenAI, Anthropic, and others will be bought for cents on the dollar.
They are one of the few companies actually making money with AI as they have intelligently leveraged the position of Office 365 in companies to sell Copilot. Their AI investment plans are, well, plans which could be scaled down easily. Worst case scenario for them is their investment in OpenAI becoming worthless.
It would hurt, but it's hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT, and they are pretty much untouchable there.
And even then, if that happens when the bubble pops, they'll likely just acquire OpenAI on the cheap. Thanks to the current agreement, it already runs on Azure, they already have access to OpenAI's IP, and Microsoft has already developed all their Copilots on top of it. It would be near-zero cost for Microsoft at that point to just absorb them and continue on as they are today.
Microsoft isn't going anywhere, for better or for worse.
Despite them pissing off users with Windows, what HN forgets is that those users aren't Microsoft's customer. The individual user/consumer never was. We may not want what MS is selling, but their enterprise customers definitely do.
Azure is a product all right, but there’s nothing particularly better there than anywhere else.
Tesla (P/E: 273, PEG: 16.3) is a car maker without robots, and robotaxis account for less than 15% of Tesla's valuation at best. When the AI hype dies, the selloff starts, and negative sentiment hits, we'll be looking at a sub-$200B market cap company.
It will hurt Elon mentally. He will need a hug.
> OpenAI, Anthropic, and others will be bought for cents on the dollar.
OpenAI is an existential threat to all of big tech, including Meta, Google, Microsoft, and Apple. Hence, they're all spending lavishly right now to not get left behind.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ads, and ChatGPT already has 800 million daily active users.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
OpenAI does not expect to be cash-flow positive until 2029. When no new capital comes in, it can't continue.
OpenAI can't survive any kind of price competition.
They have infrastructure that serves 800 million monthly active users.
Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
There's price competition right now. They're still surviving. If there is price competition, they're the most likely to survive.
Your premise is that there is no bubble. We are talking about what happens when the bubble bursts. Without investor money drying up, there is no burst.
Yeah... No they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently that was a simulated computer using LLMs, and it was unusable.
Ah yes, PromptOS will go down in the history books for sure.
I seriously doubt it. If this bubble pops, the best OpenAI can hope for is they just get absorbed into Microsoft.
Survive, yes. I don't think anybody ever questioned this.
I wonder if they will be able to remain "growth stocks", however. These companies are allergic to being seen as mature companies, with more modest growth profiles, profit sharing, etc.
Not sure how the situation is in Europe and Asia, but I would guess about the same.
Makes one think that this was the plan all along. I think they saw how SVB went down and realized that if they're reckless and irresponsible at a big enough scale, they can get the government to just transfer money to them. It's almost like their new business model is "we're selling exposure to the $XX trillion bailout industry."
Not really. Sundar is still pretty bullish on GenAI, just not the investor excitement around it (bubble).
Pichai described AI as "the most profound technology" humankind has worked on. "We will have to work through societal disruptions," he said, adding that the technology would "create new opportunities" and "evolve and transition certain jobs." He said people who adapt to AI tools "will do better" in their professions, whatever field they work in.

The current admin really, really wants the number going up, and is also incapable of considering, or is ignorant of, any notion of consequence for any actions of any kind.
To pile on, there's hardly a product being developed that doesn't integrate "AI" in some way. I was trying to figure out why my brand new laptop was running slowly, and (among other things) noticed 3 different services running: Microsoft Copilot, Microsoft 365 Copilot (not the same as the first, naturally), and the laptop manufacturer's "chat" service. That same day, I had no fewer than 5 other programs all begging me to try their AI integrations.
Job boards for startups are all filled with "using AI" fluff because that's the only thing investors seem to want to put money into.
We really are all dirty here.
I guess but is it better for an investor to own 2 shares of Google or 1 share of OpenAI and 1 share of TSMC?
Like, I have no doubt that being vertically integrated as a single company has a lot of benefits, but one can also create a trust that invests vertically as well.
https://en.wikipedia.org/wiki/Double_marginalization?wprov=s...
Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have a backlog to grow into.
Nvidia earnings tomorrow will be the litmus test for whether things are going to topple over.
That's a reduction of complexity, of course, but the core of the lesson is there. We have actually kept on with all the practices that led to the housing crash (MBS, predatory lending, Mixing investment and traditional banking).
I know financially it will be bad because number not go up and number need go up.
But do we actually depend on generative/agentic AI at all in meaningful ways? I'm pretty sure all LLMs could be Thanos-snapped away and there would be near-zero material impact. If the studies are at all reliable, all the programmers would be more efficient. Maybe we'd be better off because there wouldn't be so much AI slop.
It is very far from clear that there is any real value being extracted from this technology.
The government should let it burn.
Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.
Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:
- search replacement (beats google almost all of the time)
- having code-capable agents means my pet projects are getting along a lot more than they used to. I check in with them in moments of free time and give them large projects to tackle that will take a while (I've found that having them do these in Rust works best, because it has the most guardrails)
- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
Economic value? I won't make an assessment. Value to me (and I'm sure others)? Definitely would miss them if they disappeared tomorrow. I should note that given the state of things (large AI companies with the same shareholder problems as MAANG) I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.
Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's not any value to be had.
Personally, I had the exact opposite experience: Wrong, deceitful responses, hallucinations, arbitrary pointless changes to code... It's like that one junior I requested to be removed from the team after they peed in the codebase one too many times.
On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also lots of slop = worse software on probably most things, impacting not just me but also friends, family, and the rest of humanity. At least it's not only a downside :/
I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be let to fail if it comes to that.
Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
I also think they should be let to fail, but there's no way the US GOV ever allows them to.
It all depends on whether MAGA survives as a single community. One of the few things MAGA understands correctly is that AI is a job-killer.
Trump going all out to rescue OpenAI or Anthropic doesn't feel likely. Who actually needs it, as a dependency? Who can't live without it? Why bail out entities you can afford to let go to the wall (and maybe then corruptly buy out in a fire sale)?
Similarly, can you actually see him agreeing to bail out Microsoft without taking an absurd stake in the business? MAGA won't like it. But MS could be broken up and sold; every single piece of that business has potential buyers.
Nvidia, now that I can see. Because Trump is surrounded by crypto grifters and is dependent on crypto for his wealth. GPUs are at least real solid products and Nvidia still, I think, make the ones the crypto guys want.
Google, you can see, are getting themselves ready to not be bailed out.
Trump (and by extension MAGA) has the worst job growth of any President in the past 50 years. I don't think that's their brand at all. They put a bunch of concessions to AI companies in the Big Beautiful Bill, and Trump is not running again. He would completely bail them out, and MAGA will believe whatever he says, and congress will follow whatever wind is blowing.
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality, there is no SMS thing backing it).
Most people don’t even use texting anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
The only reason WhatsApp is so popular is that so many people are on it, but you have all you need (their phone number) to contact them elsewhere anyway.
Hard disagree
I am shocked at the part they know it is a bubble and they are doing nothing to amortize it. Which means they expect the government to step in and save their butts.
... Well, not that shocked.
Finally, some rational thought about the AI insanity. The entire "fake it till you make it" aspect of this is ridiculous. Sadly, in the world we live in, you can't build a product and hold its release until it works. You have to be first to release even if it's not working as advertised. You can keep brushing off critiques with "it's on the roadmap". Those who are not as tuned in will just think it is working and nothing nefarious is going on. Given how long we've had paid-for LLM apps, I'm still amazed at the number of people who do not know that the output is still not 100% accurate. There are also people who use words like "thinking" when referring to getting a response, and misleading terms like "searching the web..." when on this forum we all know it's not a live search.
This time they'll be gifted 70 trillion to make up for the shortfall, and life shall continue on for the rich.
It's win-win for them, there's no risk at all
That's what I'm personally hoping for anyway, would rather the economy avoid a big recession.
But as I try to sort out the narrative behind bubbles and bursts, one thing I realize is that for a bubble to burst, people essentially have to want it to burst (or, put the other way, have to stop wanting to keep it going).
Bernie Madoff got caught because he couldn't keep paying dividends in his Ponzi scheme, and people started withdrawing money. But in theory, even if everyone knew, if no one withdrew their money (or told the SEC) and he was able to use current deposits to pay dividends for a few more years, the Ponzi scheme didn't _have_ to end; the bubble didn't have to pop.
So I've been wondering: if everyone knows AI is a bubble, what has to happen for it to collapse? If a price is what people are willing to pay, then for Tesla to collapse, people have to decide they no longer want to pay $400 for Tesla shares. If they keep paying $400 for Tesla shares, then they will continue to be worth $400.
So I've been trying to think, in the most simple terms, what would have to happen to have the AI bubble pop, and basically, as long as people perceive AI companies to have the biggest returns, and they don't want to move their money to another place with higher returns (similar to TSLA bulls) then the bubble won't pop.
And I guess that can keep happening as long as the economy keeps growing. And if circular deals are causing the stock market to keep rising, can they just go on like this forever?
The downside of course being, the starvation of investments in other parts of the economy, and giving up what may be better gains. It's game theory, as long as no one decides to stop playing the game, and say pull out all their money and put it into I dunno, bonds or GME, the music keeps playing?
Imagine if interest rates go up and you can get 5% from a savings account. One big player pulls out cash triggering a minor drop in AI stocks. Panic sells happen trying to not be the last one out of the door, margin calls etc.
You're assuming cash will never stop flowing in, driving up prices. It will. The only way it goes on forever is if the companies end up being wildly profitable.
This one? When China commits to subsidising and releasing cutting-edge open-source models. What BYD did to Tesla's FSD fee dreams, Beijing could do to American AI's export ambitions.
Economically, AI is a bubble, and lots of startups whose current business model is "UI in front of the OpenAI API" are likely doomed. That's just economic reality - you can't run on investor money forever. Eventually you need actual revenue, and many of these companies aren't generating very much of it.
That being said, most of these companies aren't publicly traded right now, and their demise would currently be unlikely to significantly affect the stock market. Conversely, the publicly traded companies who are currently investing a lot in AI (Google, Apple, Microsoft, etc) aren't dependent on AI, and certainly wouldn't go out of business over it.
The problem with the dotcom bubble was that there were a lot of publicly traded companies that went bankrupt. This wiped out trillions of dollars in value from regular investors. Doesn't matter how much you may irrationally want a bubble to continue - you simply can't stay invested in a company that doesn't exist anymore.
On the other hand, the AI bubble bursting is probably going to cost private equity a lot of money, but not so much regular investors unless/until AI startups (startups dependent on AI for their core business model) start to go public in large numbers.
Plus the information they can provide to the State on the sentiment of users is also going to be greatly valued
They can't, not forever. Bubbles pop.
The comparison made to the dotcom bubble is apt. It was a bubble, but that didn't mean that all the internet and e-commerce ideas were wrong, it was more a matter of investing too much too early. When the AI bubble pops or deflates, progress on AI models will continue on.
Nvidia also makes up ~7% of the S&P 500 so if their stock price falls substantially, that's a big chunk of capital just... gone for a lot of people.
Even Vanguard's Total World Index, VT, is roughly 15% MAG 7.
That's not even getting into who's financing whom for what and to whom that debt may be sold to.
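To make the concentration risk concrete, here is a small sketch. The index weights are the approximate figures quoted above; the drop sizes are hypothetical scenarios, not predictions:

```python
# Sketch: index-level loss when one heavyweight constituent falls,
# holding every other constituent flat. Weights are approximate figures
# from the thread; the drops are hypothetical what-ifs.
nvda_weight_sp500 = 0.07   # Nvidia at ~7% of the S&P 500
mag7_weight_vt    = 0.15   # "Magnificent 7" at ~15% of Vanguard's VT

def index_hit(weight: float, drop: float) -> float:
    """Loss at the index level from one constituent's decline."""
    return weight * drop

print(f"NVDA -50% -> S&P 500 takes a {index_hit(nvda_weight_sp500, 0.50):.1%} hit")
print(f"MAG7 -30% -> VT takes a {index_hit(mag7_weight_vt, 0.30):.1%} hit")
```

Even "diversified" total-market funds are exposed: a 50% Nvidia drawdown alone would shave roughly 3.5% off the S&P 500 before any second-order effects.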
If they’ve securitized and sold their data center buildout, will the big clouds and AI labs actually face any severe impact? While the sums are huge, most of these companies have the cash on hand to pay down the debt. The big AI labs have said their models earn enough to cover the cost to train themselves, just not the next one. This means they could at any time walk away from the compute spend for training.
With the heavy securitization of all these deals, will the “bubble pop” just hurt the financial industry?
If a company like CoreWeave sees their SPV for a Microsoft-specific data center go bankrupt, that means MSFT decided to walk away from the deal. A red flag for the industry, but also a sign of fiscal restraint. Someone else can swoop in and buy the DC for cheap, while MSFT avoids the opex hit. Seems like the losers will be whoever bought that SPV debt, which probably isn’t a tech company.
What is?
I wonder who’s writing the script.
My biggest worry is that what will be left standing after all of this is the organizations quietly churning out all the AI slop everywhere, be it on the normal web or YouTube.
If you're young and invested for the long term, just leave all your junk in broad index securities. You can't do better than that, you just have to ride the bumps.
On the other hand, I'm approaching retirement and looking seriously at when to pull the trigger. The aggregate downside to me of a large market drop or whatever is much higher than it is to a 20-something, because losing out on (to make a number up) an extra 30% of net worth is minor when compared to "now you have to work another three years before retiring" (or alternate framings like "you have to retire in Houston and not Miami", etc...).
So most of my assets are moving out of volatiles entirely.
Personally speaking, as somebody who was 100% in equities until earlier this year (I'm in my early 40s and had most of my wealth in VOO), I shifted to a 60-40 portfolio; there are ETFs that maintain the balance for you. I did this knowing full well that it could attenuate my upside, but I figured it was worth it compared to being so concentrated in a single part of an industry (AI within tech), and so much upside had already accrued up to that point. Also, I figured the 2nd Trump term wasn't going to help temper volatility. On top of that, my income is tied to tech, so diversifying further away from it is sensible (especially the equity parts of my compensation).
But if you're in your 20s, your nest egg is likely small enough that I'd just continue plugging away in automatic contributions. Investing at all is far more important than anything else at that stage.
If the stock market crashes, New York property probably sings. Stock market crash means ZIRP. And ZIRP means lots of money sloshing through New York.
That's kinda the problem, I'd expect it to be a bit… volatile. I guess it's a valid target to gamble on if that matches your risk profile.
Technically yes, but only because something monotonically increasing in price is volatile.
1. AI companies manage to build AGI and achieve takeoff. I have no idea on how to hedge against that.
2. The market is not allowed to crash. There will likely be some lag between economic weakness and money printing. Safer option is probably to buy split 50% SPY and 50% bonds. A riskier option is trying to time the market.
3. The market is allowed to crash. Bonds, cash, etc.
Depending on what you believe will happen and risk appetite you can blend between the strategies or add a short component. I am holding #2 with no short positions in post-tax accounts and full SPY in tax advantaged accounts.
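The "blend between the strategies" idea above can be made concrete as a probability-weighted mix. This is a minimal sketch; the scenario allocations and the subjective probabilities are made-up assumptions, not advice:

```python
# Blend the three scenario stances above by subjective probability.
# Asset mix per scenario is (equities, bonds, cash); all numbers are
# illustrative assumptions, not recommendations.
allocations = {
    "agi_takeoff":   (1.0, 0.0, 0.0),   # no known hedge; stay fully long
    "no_crash":      (0.5, 0.5, 0.0),   # 50% SPY / 50% bonds
    "crash_allowed": (0.0, 0.5, 0.5),   # bonds and cash
}
beliefs = {"agi_takeoff": 0.1, "no_crash": 0.6, "crash_allowed": 0.3}

blended = tuple(
    sum(beliefs[s] * allocations[s][i] for s in allocations)
    for i in range(3)
)
print(blended)  # probability-weighted (equities, bonds, cash) mix
```

With these example beliefs, the blend lands at roughly 40% equities, 45% bonds, and 15% cash; changing the beliefs shifts the mix accordingly.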
For example, stock from war profiteering companies (lockheed, raytheon).
Note that investing in war profiteers is a proven way to build wealth. I just don't want to do that.
This argument applies not only to evil companies but also to dumb ones. For example, I have no interest in investing in IBM or Oracle, even though both of those are also money makers.
https://www.fidelity.com/learning-center/smart-money/magnifi...
There are other index funds which are equal weighted rather than market weighted. Those have underperformed lately but might be less volatile if the AI bubble pops.
I'm not able to predict what the overall market is going to do short or medium term
What makes you think your guess is better than the rest of the money in the market, most of it acting with better information than you?
It reminds me of the time that everyone said the economy was going to tank and somehow everyone had it wrong a couple years ago.
It feels implausible that it isn't overbuilt, but it also feels really strange for everyone to be pushing this narrative that it's a bubble, with people taking very public short bets. It feels like the contrarian bet is that it's going to keep running hot. Nvidia earnings tomorrow are the big litmus test.
However, it seems more like the people pumping billions into AI are all still "this is going to the moon" gung-ho, and unless they are investing billions of CASH, then I guess they are borrowing to do so ...
I don't know how this financing works - maybe no fear of having it pulled like a foreclosure on a subprime mortgage holder, or a broker margin call, but it's not going to end well if these investments start to fail and the investors start running for the door.
[0] https://www.proshares.com/our-etfs/strategic/spxt (S&P minus tech stocks)
[1] https://www.defianceetfs.com/xmag/ (S&P minus "Magnificent 7")
And the sentiment that goes around is more: reduce the amount of people needed to do the same amount of work:
https://www.theregister.com/2025/10/09/mckinsey_ai_monetizat...
> McKinsey says, while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
The problem becomes that eventually all these people who are laid off are not going to find new roles.
Who is going to be buying the products and services if no-one has money to throw around?
Maybe I can make things more efficient by getting rid of you and replacing you with AI, but how long until my boss has the same idea?
The same people who are buying products and services right now. Just 10% of the US population is responsible for nearly 50% of consumption.
We are just going to bifurcate even more into the haves and have-nots. Maybe that 10% now becomes responsible for 70+% of consumption and everyone else is fighting for scraps.
It won't be sustainable and we need UBI. A bunch of unemployed, hungry citizens with nothing left to lose is a combo that equals violent revolution.
If all jobs evaporate, what does the economy look like when just based on interest and dividend payments?
If all jobs evaporate, then only asset owners will have money to spend, everyone else is left to fight for scraps so we either all die off or we get mad max.
The middle class have financially benefited very little from the past 20+ years of productivity gains.
Social media is driving society apart, making people selfish, jealous, and angry.
Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite. It will just be used to stomp on ordinary people and create even more inequality.
What's that about the falcon and the falconer? The center cannot hold..
I know people raising a family of 4 on 1 income well below the median wage, without a college degree. They do get significant help from government assistance programs for healthcare, but their lifestyle is way better than what was typical in the 1960s.
Granted, they aren’t doing this in an ultra-expensive US city, but on the flip side they’re living in a 3-bedroom house that’s huge by 1960s standards, with a massive backyard.
With that view, many things oscillate over time, including game theory patterns (average interaction intentions of win-win, win-lose, lose-lose) and integration/mitosis (unions, international treaties, civil wars), etc.
So my optimistic view is that inevitably we will get more tech whether we want it or not, and it will probably make things worse for many for a while, but then it will simultaneously enable and force a restructuring at some level that starts a new cycle of prosperity. On the other side it will be clear that all this tech directly enables a better (more free, more diverse, more rewarding, more sustainable) way of life.
I believe this because from studying history it seems this pattern plays out over and over and over again to varying degrees.
Could go either way.
Every kind of a man, or woman?
> Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite.
Well, this is probably why statistics exist.
The short period of boom in the '50s/'60s in the US and Canada was driven by WW2 devastation everywhere else. We can see the economic crises in the US first in the '70s/'80s as Europe and Japan rebounded, then again in the '90s/'00s as China and East Asia grew, and now again as the rest of the world grows (especially Latin America, India, Indonesia, Nigeria, the Philippines, etc.). Unless the US physically invades and devastates China, India, or Brazil, the competition will keep getting exponentially tougher. It's a shame the US didn't invest all that prosperity into social capital that could have helped create high-value jobs.
In short, it's easier to have high standards of living on your secure, isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
What does this sentence mean?
The US has pushed a shit ton of money into education. I mean an unreasonable amount of it went to administrators. But the goal and the intent was certainly there.
Things that let workers focus on innovation. IT workers in cheaper countries have it much easier, while we have to juggle rising cost of living and cyclical layoffs here. And ever since companies started hiring workers directly and paying 30-50% (compared to 10-15% during the GCC era), the quality is almost on par with the US.
> In short, its easier to have high standards of living in your secure isolated island when the rest of the world (including historical industrial powers) are completely decimated by war.
So, what's your point? That the plebs shouldn't expect that much comfort?
Why do so many people miss the point on this?
Instead of making this dream true for all the people who were previously excluded, we have pursued equality by making this dream accessible to NO ONE.
> Well, this probably why statistics exist.
Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
Or maybe you're saying that's always how these initiatives turn out? It can't be helped?
> Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
In the '60s, suicide rates went UP. They peaked around 1970, and we have not reached those levels since.
Long-term statistics about alcoholism rates and drug use are also a real existing thing. We know the cirrhosis death rate was going up from the '60s into the '70s, then peaked and went down. That was when drinking-and-driving campaigns started.
Current drug use is nowhere near what it was a generation ago.
"The good ol' days" ... yeah, but good for who?
At least one sci-fi author has gamed this out:
We have no basis for seriously considering this hypothetical when it comes to LLMs.
Taken loosely, we have seen previous developments which make a large fraction of a population redundant in short periods, and it goes really badly, even though the examples I know of are nowhere near the entire population.
I'm not at all sure how much LLMs or other GenAI are "it" for any given profession: while they impress me a lot, they are fragile and weird, and I assume that if all development stopped today, the current shininess would tarnish fast enough.
But on the other hand, I just vibe-coded 95% of a game in a few days, using free credit, that would have taken me a few months back when I was a fresh graduate. Similar for the art assets.
How much money can I keep earning for that last 5% the LLM sucks at?
AGI succeeds and there are mass layoffs, money is concentrated further in the hands of those who own compute.
OR
AI bubble pops and there are mass layoffs, with bailouts going to the largest players to prevent a larger collapse, which drives inflation and further marginalizes asset-less people.
I honestly don't see a third option unless there is government intervention, which seems extremely far fetched given it's owned by the group of people who would benefit from either scenario presented above.
In the AGI Succeeds scenario, the situation is unprecedented and it's not clear how it ever gets better.
Of course he’s nervous - what else would you expect him to say?
Not immune, maybe, but pretty well off if they didn't buy in.