> PC market to contract by 4.9%, compared with a 2.4% year-on-year decline in the November forecast. Under a more pessimistic scenario, the decline could deepen to 8.9%.
The promised AI metaverse is still a long way off and in the meantime people still want the best smartphone.
And if you think that somebody buys an iPhone because they compare the specs with Android :)))))
"What do you mean my status flagship iPhone costs only half as much as a flagship Android???"
Nah. The marginal utility of more smartphone RAM is near zero at this point. The vast majority of people wouldn't even notice if the memory in their phone tripled overnight.
How scarce does memory have to get before it makes health care half as expensive?
> the tendency for wages in jobs that have experienced little or no increase in labor productivity to rise in response to rising wages in other jobs that did experience high productivity growth
Specifically, manufacturing sectors have increased productivity and service sectors haven't.
Every functionality will be subscription-based. You'll own nothing and you'll be happy.
The economy says nothing about requiring humans to exist.
Or consuming 2 GB of RAM to have Teams running in the background doing nothing?
Yeah, if we got rid of that as a result of RAM shortages, that’d be great.
The deal was inked on October 1, 2025, and rumors of it started swirling in September. Take a look at the RAM price charts. Anyone who attributes this just to "AI growth" has no idea what they're talking about. AI has been growing rapidly for three years and yet this price increase just happened exactly when Altman signed this deal.
https://pcpartpicker.com/trends/price/memory/
It's also worth noting that IDC, which published this report, is wholly owned by Blackstone, which is also heavily invested in OpenAI. It would be prudent to be cautious about who you believe.
The wafers are not DRAM. This is more like burning oil wells so your enemy can't use them. Wafers are to chips what steel blanks are to engines. You basically need clean rooms just to accept delivery and entire fabs to do anything with them. Someone who doesn't own a fab buying the wafers is essentially buying them to destroy them.
https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...
It would be nice if it were creeping up generation to generation. But if this keeps up I fear the opposite.
There are plenty of workloads where I’d love to double the memory and halve the cores compared to what the memory-optimised R instances offer, or where I could further double the cores and halve the RAM from what the compute-optimised C instances can do.
“Serverless” options can provide that to an extent, but it’s no free lunch, especially in situations where performance is a large consideration. I’ve found some use cases where it was better to avoid AWS entirely and opt for dedicated options elsewhere. AWS is remarkably uncompetitive in some use cases.
c*: 2GB per vCPU
m*: 4GB per vCPU
r*: 8GB per vCPU
x2idn/x8g: 16GB per vCPU (!)
x2iedn/x2iezn/x8aedz: 32GB per vCPU (!)
Well, except IBM. Maybe Yahoo.
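If you want to check those per-family ratios yourself rather than trust a list from memory, here's a rough sketch against the EC2 API, assuming boto3 and working AWS credentials (families and exact ratios vary by region and generation):

    import boto3
    from collections import defaultdict

    # Group current-generation instance types by family and record their
    # memory-to-vCPU ratios.
    ec2 = boto3.client("ec2")
    ratios = defaultdict(set)

    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(
        Filters=[{"Name": "current-generation", "Values": ["true"]}]
    )
    for page in pages:
        for it in page["InstanceTypes"]:
            family = it["InstanceType"].split(".")[0]      # e.g. "r6i" from "r6i.xlarge"
            gib = it["MemoryInfo"]["SizeInMiB"] / 1024
            vcpus = it["VCpuInfo"]["DefaultVCpus"]
            ratios[family].add(round(gib / vcpus, 1))

    for family, r in sorted(ratios.items()):
        print(f"{family:12s} {sorted(r)} GiB/vCPU")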
I wonder if this will result in writing more memory-efficient software? The trend for the last couple of decades has been that nearly all consumer software outside of gaming has moved to browsers or browser-based runtimes like Electron. There's been a vicious cycle of heavier software -> more RAM -> heavier software but if this RAM shortage is permanent, the cycle can't continue.
Apple and Google seem to be working on local AI models as well. Will they have to scale that back due to a lack of RAM on the devices? Or perhaps they think users will pay the premium for more RAM if it means they get AI?
Or is this all a temporary problem due to OpenAI's buying something like 40% of the wafers?
(Source: I maintain an app integrated with llama.cpp. In practice, no one likes the 1 tkn/s generation speeds you get from swapping, and honestly MoE makes the RAM situation worse: model developers have servers, batched inference, and multiple GPUs wired together, so they are more than happy to grow the resting RAM budget and use even more parameters. Seen from that lens, limiting the active experts is about inference speed, not anything else.)
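To put rough numbers on that last point: all of an MoE model's weights have to be resident even though only a few experts fire per token. A back-of-the-envelope sketch (model sizes and bytes-per-weight are hypothetical and approximate, not any particular release):

    # Approximate bytes per weight for common llama.cpp quantizations
    # (rounded; exact values depend on the format).
    BYTES_PER_PARAM = {"f16": 2.0, "q8_0": 1.07, "q4_k_m": 0.60}

    def resident_gib(total_params_billions, quant):
        # Every weight has to sit in RAM (or get swapped), no matter how
        # few experts are active on a given token.
        return total_params_billions * 1e9 * BYTES_PER_PARAM[quant] / 2**30

    # Hypothetical models: a dense 13B vs. an MoE with ~8x the total
    # parameters but a similar count of *active* parameters per token.
    print(f"dense 13B @ ~4-bit: {resident_gib(13, 'q4_k_m'):6.1f} GiB resident")
    print(f"MoE 104B  @ ~4-bit: {resident_gib(104, 'q4_k_m'):6.1f} GiB resident (similar tok/s)")

So the resident footprint tracks total parameters while speed tracks active parameters, which is exactly why a bigger resting RAM budget is so tempting for model developers.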
What do you mean it can't continue? You'll just have to deal with worse performance is all.
Revolutionary consumer-side performance gains like multi-core CPUs and switching to SSDs will be a thing of distant past. Enjoy your 2 second animations, peasant.
If the consumer market can't get cheap RAM anymore, the natural result is a pivot back to server-heavy technology (where all the RAM is anyway) with things like server-side rendering and thin clients. Developers are far too lazy to suddenly become efficient programmers and there's plenty of network bandwidth.
However, the customers do not care and will not pay more, so the business cannot justify it most of the time.
Who will pay twice (or five times) as much for software written in C instead of Python? Not many.
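For what it's worth, the raw memory gap is easy to demonstrate. A minimal sketch comparing a million integers stored the Python way (boxed objects in a list) with a packed C-style array (exact numbers vary by interpreter and platform):

    import tracemalloc
    from array import array

    def allocated_bytes(make):
        # Measure how much Python-level memory the object graph needs.
        tracemalloc.start()
        obj = make()
        size, _peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        del obj
        return size

    n = 1_000_000
    list_bytes  = allocated_bytes(lambda: list(range(n)))        # boxed int objects + pointer array
    array_bytes = allocated_bytes(lambda: array("q", range(n)))  # packed 8-byte machine ints

    print(f"list of ints : {list_bytes / 2**20:5.1f} MiB")
    print(f"array('q')   : {array_bytes / 2**20:5.1f} MiB")

On a typical CPython build the list version comes out several times larger; the development-cost gap the parent is talking about is a separate matter.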
It hasn't gotten 100x harder to display hypermedia than it was 20 years ago. Yet applications use 10x-100x more memory and CPU than they used to. That's not good software, that's lazy software.
I just loaded "aol.com" in Firefox private browsing. It transferred 25MB, the tab is using 307MB of RAM, and the javascript console shows about 100 errors. Back when I actually used AOL, that'd be nearly 10x more RAM than my system had, and would be one of the largest applications on my machine. Aside from the one video, the entire page is just formatted text and image thumbnails.
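If anyone wants to reproduce the transfer-size half of that measurement, here's a rough sketch using Playwright for Python (my assumption, not what the parent used); note it sums decoded response bodies, so it will read somewhat higher than the on-wire transfer:

    from playwright.sync_api import sync_playwright

    responses = []
    with sync_playwright() as p:
        browser = p.firefox.launch()
        page = browser.new_page()
        page.on("response", lambda r: responses.append(r))
        page.goto("https://www.aol.com", wait_until="networkidle")

        total = 0
        for r in responses:
            try:
                total += len(r.body())   # decoded bytes, larger than compressed transfer
            except Exception:
                pass                     # redirects and some cached responses have no body
        browser.close()

    print(f"~{total / 1e6:.1f} MB of response bodies")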
I do not think it is surprising that there is a Jevons paradox-like phenomenon with computer memory, and, as with other instances of it, it does not necessarily follow that this must be the result of a corresponding decline in resource-usage efficiency.
Ideally, LLMs should be able to translate code from memory-inefficient languages into memory-efficient ones, and maybe even optimize the underlying algorithms' memory use along the way.
But I'm not going to hold my breath.
From what I see in other comments, if you can confidently assert “AI bubble; no one will want GPUs soon” it makes sense, but the COVID stuff is a head scratcher.
DRAM is a notoriously cyclical market, though, and wise investors are leery of jumping into a frothy top. So, it’ll take a while before anyone decides the price is right to stand up a new competitor.
> As a result, IDC expects 2026 DRAM and NAND supply growth to be below historical norms, at 16% year-on-year and 17% year-on-year, respectively.
This is an odd claim. It’s like saying that car companies historically produced more coupes than sedans, but suddenly there are new enormous orders for millions of sedans. All cars get massively more expensive as a result — car makers charge 50-200% more than before. Sure, they need to retool a little bit and buy more doors, but somehow the article claims that “limited … capital expenditure” means that overall production will grow more slowly than historical rates?
This only makes sense either on extremely short timescales (as retooling distracts from expansion) or if the car makers decide not to try to compete with each other. Otherwise some of those immediately available profits would turn into increased capital expenditure and more RAM would be produced. (Heck, if RAM makers think the new demand is sustainable, they should be happy to increase production to sell more units at current prices.)
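To make the incentive concrete, a toy payback calculation; every number is a made-up placeholder, not real fab economics:

    # Every figure here is a hypothetical placeholder to illustrate the
    # incentive, not real DRAM fab economics.
    fab_capex        = 15e9    # assumed cost of a new fab, USD
    monthly_revenue  = 400e6   # assumed fab revenue at pre-spike prices, USD/month
    gross_margin     = 0.30    # assumed margin at pre-spike prices
    price_multiplier = 2.0     # prices roughly double, costs stay ~flat

    profit_old = monthly_revenue * gross_margin
    profit_new = profit_old + monthly_revenue * (price_multiplier - 1)  # extra revenue is pure margin

    def payback_years(monthly_profit):
        return fab_capex / (monthly_profit * 12)

    print(f"payback at old prices: {payback_years(profit_old):4.1f} years")
    print(f"payback at new prices: {payback_years(profit_new):4.1f} years")

If a price spike like this one is believed to be durable, the payback on new capacity shrinks dramatically, which is the whole argument for expecting capex and supply growth above historical norms rather than below.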
I mean, the lack of affordable consumer hardware may end up further reducing the need for AI.
On the other hand, it may end up shifting workloads to the cloud instead.
Heck, time will tell.
I find it very odd when people proudly proclaim they used, say, Grok to answer a question. Their identity is so tied up in it that if you start talking about the quality of the information they get incredibly defensive. In contrast: I have never felt protective of my Google search results, which is basically the same thing given how most people use these tools currently.
It’s kind of wild how hostile some people get if you attempt to open the discussion up at all.
They also don’t care about the communities they are impacting in the slightest. https://lailluminator.com/2025/11/22/meta-data-center-crashe...
Edit: It's similarly frustrating with the zoomers. Parents are derelict in their duty by not defending their kids and preparing them for the world they are in.
It is, though. We're just in the part leading up to WWIII.
You want to be born into the utopia, not before.
Just wait until the next great collapse, a disaster big enough to force change. Hopefully we'll have the right ideas lying around at the time to restructure our social communication system.
Until then, it's slow decline. Embrace it.
https://news.ycombinator.com/item?id=46413716
Boomers might be out there consuming those AI YouTube videos that are just a TikTok voiceover with a generated slideshow, but Millennials think that because they can identify this as slop, they are not affected. That is incorrect, and just as bad.
It's shocking how quickly my family normalized consuming obvious AI slop short-form videos, one after the other, for hours. It's horrifying.