Companies (generally) build things with an expectation of a return on their investment: what "regular" data centre usage would necessitate this kind of build-out?
To sell more Postgres or WordPress VMs/instances? Is that being used to justify the spending in shareholder conference calls and regulatory filings?
Talk to anyone in the space for more than 30 minutes and nuclear will come up.
I very much hope the hype cycle lasts long enough for some of this capital raining down from the sky to get these reactors deployed in the field, because those will be a lasting positive from this hype cycle - much like laying railroad infrastructure and fiber optic cables came from other hype cycles.
I've often said that the robber barons sucked, but at least they left massive amounts of physical infrastructure for the average person to benefit from. The hype cycles of late leave very little in comparison.
Microsoft actually has a design for mini datacenters that stay cool in the ocean and collect tidal energy. But it's way more fun to have states trying to court you into building datacenters because it'll bring some jobs.
Already the case in Europe. And in the US, most of the biggest players are doing this: Google, Microsoft, Meta, AWS. By now those 4 are the largest buyers of renewable power purchase agreements in the entire world. MS alone has invested something like $20B in renewable purchases.
But the issue is that installation of renewables in the US is not bottlenecked by lack of demand; it's bottlenecked by permitting, zoning issues, etc. The queue for power deployment right now is something like 100GW (i.e. production that is paid to be built but not yet built), which is around 10% of the current total US power capacity. So it's not really clear to me whether buying more renewables helps make their deployment faster through economies of scale, or whether the purchase order just sits in a queue for years and years.
One notable exception is xAI/Grok, which has one of the biggest clusters, is powering it 100% with gas, and afaik did not offset it by buying the equivalent in renewables. Having built the cluster in what was a washing machine factory that does not have adequate power supply or cooling tech, they have been rolling in 35 mobile gas turbines (large trailer trucks that you connect to gas pipes) and 50+ refrigeration trucks. IMHO, it should be illegal to build such an energy-consuming system with such poor efficiency, but well.
Data centers and energy infrastructure investments go hand in hand. A lot of that money is actually being allocated for energy infrastructure, because a data center is useless if you can't power it. That is also why companies like Amazon and MS are investing in nuclear. They need large amounts of cheap power to lower their energy bills long term, and they need to secure access to energy supply. They see nuclear as a way of getting that. Coal and gas are just too expensive and undesirable: there's not enough of them and they're expensive to operate. Renewables and clean power (including nuclear here) are what they really want.
What would actually help is making the permitting around renewables and nuclear faster and more efficient. The energy demand is there. And that includes from data centers. And the capital is there. It's just that energy projects are bottle-necked on bureaucracy. Many countries have loads of viable energy projects stuck in their planning pipelines. For example, it takes years to get a grid connection for new wind or solar plants that can otherwise be built and delivered in less than a year. And likewise it takes years to get construction projects approved. This also affects investments in grid infrastructure. That's nuts. These countries have energy shortages that are slowing down their economic growth. And the capital to fix that. It's just that they are blocked on their own bureaucracy.
Unlike data centers, investments in energy infrastructure provide decades of return on investment. These are long-term investments. Even if AI flops (which I think is hard to argue given how useful it is already), we get to keep the energy infrastructure. And we'll find a way to repurpose those data centers. I don't see that as a write-off either.
Using sus statistics to draw weird conclusions.
Surely you only get one of the two, because for diverted investments the multiplier applies equally on both sides of the equation.
He is making two arguments. One is that AI capex is starving other industries. The other is that AI capex is causing major GDP growth, attributed both to the direct investments themselves and to the multiplier effects.
One of those could be true. But I assert that both cannot be true at the same time. If these direct investments were going to happen elsewhere if they weren't happening for AI infrastructure then that counterfactual spending would show up in the GDP instead, as would the multiplier effect from that spending.
Increasing demand does increase the price in most markets.
But I don't hear many people worrying about the massive power consumption, even though there's no clear indication whether this is a net positive for our society.
Maybe that's something that can only be determined looking back. There are so many unknown unknowns.
We're yet to see whether it's going to be a winner-takes-all market or whether a Linux equivalent will pop up and destroy all the investment from the big players because programmers are too tight to pay for software.
The 1880s 6% on railroads is an interesting number, I didn't know it was that much.
- Apollo program: 4%
- Railroads: 6% (mentioned by the author)
- Covid stimulus: 27%
- WW2 defense: 40%
- 40% of long-distance ton miles travel by rail in the US. This represents a VAST part of the economic activity within the country.
- A literal plague, and the cessation of much economic activity, with the goal of avoiding a total collapse.
- ...Come on.
So we're comparing these earth-shaking changes and reactions to crisis with "AI"? Other than the people selling pickaxes and hookers to the prospectors, who is getting rich here exactly? What essential economic activity is AI crucial to? What war is it fighting? It mostly seems to be a toy that costs FAR more than it could ever hope to make, subsidized by some obscenely wealthy speculators, executives fantasizing about savings that don't materialize, and a product looking for a purpose commensurate to the resources it eats.
The continued devaluing of skilled labor and making smaller pools of workers able to produce at higher levels, if not their automation entirely.
And yeah AI generated code blows. It's verbose and inefficient. So what? The state of mainstream platform web development has been a complete shit show since roughly 2010. Websites for a decade plus just... don't load sometimes. Links don't load right, you get in a never-ending spinning loading wheel, stuff just doesn't render or it crashes the browser tab entirely. That's been status quo for Facebook, YouTube, Instagram, fuck knows how many desktop apps which are just web wrappers around websites, for just.. like I said, over a decade at this point. Nobody even bats an eye.
I don't see how ChatGPT generating all the code is going to make anything substantively worse than hundreds of junior devs educated at StackOverflow university with zero oversight already have.
Literally every profession around me is radically changing due to AI. Legal, tech, marketing etc have adopted AI faster than any technology I have ever witnessed.
I’m gobsmacked you’re in denial.
I mean, companies are trying to force it onto us, but it's not ready for any real work, so the "adoption" is artificial.
Each of the 15 charts would have been a page of boilerplate + Python, and frankly there was a huge amount of interdisciplinary work that went into the hundreds of thought steps in the deep reasoning model. It would have taken days to fill in the gaps and finish the analysis. The new crop of deep reasoning models that can do iteration is powerful.
The gap between previous "scratch work" of poking around a spreadsheet, and getting pages of advanced data analytics tabula rasa, is a gap so large I almost don't have words for it. It often seems larger than the gap between pen and paper and a computer.
And then later, after work, I wanted to show real average post-inflation returns for housing areas that gentrify and compare them with non-gentrifying areas. Within a minute all of the hard data was pulled in and summed up. It then coded up a graph for the typical "shape of gentrification", which I didn't even need to clarify to get a good answer. Again, this is as large a jump as moving from an encyclopedia to an internet search engine.
I know it's used all over finance though. At Jane Street (upper echelon proprietary trading) they have it baked into their code development in multiple layers. In actual useful ways, not "auto completion" like mass market tools. Well it is integrated into the editor and can generate code, but there is also AI that screens all of the code that is submitted, and an AI "director" tracks all of the code changes from all of the developers, so if a program starts failing an edge case that wasn't apparent earlier, the director will be able to reverse engineer all of the code commits, find out where the dev went wrong, and explain it.
Then that data generated from all of the engineers and AI agents is fed back into in-house AI model training, which then feeds back into improvements in the systems above.
All of the dismissiveness reminds me of the early days of the internet. On that note, this suite of technologies seems large. Somewhere in-between the introduction of the digital office suite (word/excel/etc) and perhaps the Internet itself. In some respects, when it comes to the iterative nature of it all (which often degrades to noise if mindlessly fed back into itself, but in time will be honed to, say, test thousands of changes to an engineering Digital Twin) it seems like something that may be more powerful than both.
But then we saw the same thing with Crypto, tons of money poured into that, the Metaverse was going to be the next big thing! People who didn't see and accept that must not understand the obvious appeal...
I refuse to believe this will not have long term consequences.
I WISH that after this, companies will put up quality guardrails to basically offer the same product 60% cheaper at better quality, but I don't trust companies.
Like 1.2% isn’t a big percentage, but neither is 3.4% - our total military expenditures this year.
I will never understand people who use tiny European countries as meaningful comparisons to continent sized ones.
Queens and Brooklyn are among the most densely populated areas on the planet. I will never understand people who use massively outlier-sized cities as meaningful comparisons to nations.
Slightly off-topic, but ~9% of GDP is generated by "financial services" in the US. Personally I think it's a more alarming data point.
Financial services make the unrealistic consumption of rich countries possible. That's worth 9%.
The finance industry's ability to teleport value across time and space is a massive boon for quality of life across the world.
The comment is an uninformed take.
Trivially verifiable by Visa's revenue being $35B, which works out to roughly 0.1% of US GDP (about $30T) - not even close to 1%.
Visa is saving the country a lot of time/money.
OpenAI isn’t putting two billion in GPUs on their corporate credit card.
Or your retirement account. Everyone is mad about investors and companies making money. Sure, there are ultra wealthy people (mostly founders) that benefit disproportionately. However, most people who hope to retire some day rely on a 401k, pension, etc which is dependent on stocks. Retirement accounts have about $36T in the US, mostly in equities and corporate bonds.
The richest 1% own half the wealth in the world and the gap is getting wider. Since 2020, for every dollar of new global wealth gained by someone in the bottom 90%, one of the world’s billionaires has gained $1.7 million. (Source: https://www.globalcitizen.org/en/content/wealth-inequality-o...)
So yes, some of the wealth is going to your retirement account. But for every penny going to a middle-class professional worker's retirement, there's about a thousand dollars going to some hedge fund manager or the trust fund of the grandson of some robber baron who got rich a hundred years ago.
Generational inheritance is a cancer, and many American ills are a direct result of allowing it to fester unsolved.
Buffett was right — enough to get a good start, but no more.
And the industry itself greases the wheels of other industries. In other words without financial services like lending and payment processing there would be less spending and investment overall, so other industries would shrink along with it.
Central planning is drastically more efficient, for example. It’s why large companies use it internally.
If it were, why do we have more than one company?
> I take your point that companies themselves are usually centrally planned internally
Well, sort of. It is true that companies exist solely for the reason of exploiting efficiencies in central planning. If central planning was always inefficient, companies wouldn't exist! But, as I alluded to earlier, no company has found central planning to be efficient in all cases. Not even the largest company in the world centrally plans everything. Not even close.
As with most things in life, a bit of balance will serve you well.
Typically, central planning does not imply micromanagement. The "broad direction" you speak of is the central planning.
> Companies are reorganizing for efficiency all the time.
But, of course, companies wouldn't exist if markets were perfectly efficient. The sole reason for companies is to exploit the efficiencies of central planning. But, of course, just as if markets were perfectly efficient there would be no companies, if central planning was perfectly efficient there would only be one company, so... Like always, there are tradeoffs that we have to find balance in.
Banking used to really suck. Walk into an old bank building and it looks empty, with spaces for a dozen tellers never actually used. This is a good thing, as nobody actually wants to stand in line at a bank. People have largely stopped using cash because swiping a card is just more pleasant.
Meanwhile payment networks (Visa, Mastercard) have over a 50% profit margin; that's a huge loss for the US economy. Financial services dropping to 1% of the overall economy would represent a vast improvement over the current system.
The Retail Bank's main function isn't providing cash either, it's keeping deposits which they loan out for profits. Whether you use cards or cash won't affect those margins.
While LLMs are nowhere near this capacity today, it's likely future AI systems will be able to handle such complexities just fine. Competition + automation means the financial sector really is on a long-term decline. Some things aren't automated due to customer preference, but preferences change over time.
> The Retail Bank's main function isn't providing cash either, it's keeping deposits which they loan out for profits. Whether you use cards or cash won't affect those margins.
The margins on loans have decreased significantly as shown by much lower effective interest rates relative to inflation.
The effort associated with loans has been reduced significantly as credit checks, automated repayment, etc. have reduced the risks and overhead. Competition between banks means their profits are a function of costs, so driving down costs has reduced the overhead on loans.
Singapore: 5.6%, 82.9
Israel: 7.2%, 83.2
Estonia: 6.9%, 78.5
Poland: 6.7%, 78.5
Luxembourg: 5.7%, 83.4
Czech Republic: 8.1%, 79.9
and a couple which spend a bit more, though again, this includes private spending:
France: 11.9%, 82.9
Japan: 11.5%, 84
Portugal: 10.5%, 82.3
Spain: 10.7%, 83.9
So it seems like we could have universal coverage and higher life expectancy if the US government simply spent exactly what it is currently spending, but on everyone, rather than just the old, poor, and veterans.

Interesting to see countries like Spain and Italy, where the spend is one third of the US but the life expectancy is significantly higher.
This drives an enormous amount of innovation, and the near complete dominance of US healthcare companies in the west reflects that.
The US moving to a universal healthcare model would likely kill the lucrative US market, and while it would provide cheaper healthcare, it likely wouldn't make it dramatically cheaper, while also having the effect of driving up costs in other western countries.
A bit like a balloon, where the profits are swelled in the US and limp elsewhere: squeezing the US will have global effects.
More concerning to me is that these visualizations are not so trivial to find. Here's one:
https://www.bea.gov/system/files/gdp1q25-3rd-chart-03_0.png
Health care is growing but not as much as real estate
He was convicted of fraud a few years later.
* Movement of capital from other fields to "AI".
* Duration of asset value (e.g., AI in months/years vs railroads in decades/centuries).
* "Without AI datacenter investment, Q1 GDP contraction could have been closer to –2.1%".
The Q1 GDP comment is stunning because what it says is that if the same Q1 had happened just two years ago there’s a very good chance we’d be looking at a modestly sized recession. Now of course things aren’t zero-sum and it’s impossible to really make a useful claim like that but it’s still striking.
More than a decade long. The technology and industry here were broadly shared. They did things like hijack bra manufacturers to make space suits.
> Railroads: 6% (mentioned by the author)
We're still using this investment today.
> Covid stimulus: 27%
The virus that was killing us then fizzled is probably not the best example... Only arguments will ensue if I even attempt to point things out in this one.
> WW2 defense: 40%
I mean, Russia made its last lend-lease payment in 2006. It led to American dominance of the globe. It looks like an investment that paid itself off.
How much of the hardware spend on AI is going to be usable in 5 years?
There are some deep fundamental questions one should be asking if they pay attention to the hardware space. Who is going to solve the power density problem? Does their solution mean we're moving to exotic cooling (hint: yes)? Have we hit another Moore's-law-style wall (IPC is flat and we don't have a lot of growth left in clock, back to that pesky power and cooling problem)? If a lot of it is usable in 5 years that's great, but then the industry isn't going to get any help from the hardware side, and that's a bad omen for "scaling".
Meanwhile capex does not include power, data, consumables or people. It may include training, but we know that can't be amortized (how long does a trained system last before you need another, or before you need a continuation/update?).
Everyone is going after AI under the assumption that they can market capture, or build some sort of moat, or... The problem is that no one has come up with the killer app where the tech will pay for itself. And many in the industry are smart enough not to try to build their product on someone else's platform (cause rug pulls are a thing).
"AI" could go the way of 3d tv's, VR, metaverse, where the hype never meets up with hope. That doesn't mean we wont get a bunch of great tooling out of it. It is going to need less academics and more engineering (and for that hardware costs have to drop...)
All these investments are chump change for Big Tech.
So presumably, the people spending that money have considered the opportunity cost and reckoned it to be worth it.
Unless you propose some alternative which would've been better, you cannot say that the spending was bad because of "opportunity cost".
Do you think WW3 defense should be on the charts yet?
For white-collar job replacement, we can always evolve up the knowledge/skills/value chain. It is in blue-collar jobs where the bloodbath, with all the robotics, is coming.
I'm not so sure about this one. I partially agree with the statement, but less-abled colleagues might have trouble with this :( Ultimately there will be less stuff to do for a plain human being.
Once you recognize that all ML techniques, including LLMs, are fundamentally compression techniques you should be able to come up with some estimates of the minimum feasible size of an LLM based on: information that can be encoded in a given parameter size, relationship between loss of information and model performance, and information contained in the original data set.
I simultaneously believe LLMs are bigger than they need to be, but suspect they need to be larger than most people think, given that you are trying to store a fantastically large amount of information. Even given lossy compression (which ironically is what makes LLMs "generalize"), we're still talking about an enormous corpus of data we're trying to represent.
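To make that concrete, here's a rough back-of-envelope sketch in Python. Every number is an assumption for illustration (the bits-per-parameter figure is the kind of value capacity-scaling studies throw around, not a measurement):

    # Back-of-envelope sketch of the "LLMs as compression" framing above.
    # All numbers are assumptions for illustration, not measurements.
    corpus_tokens = 15e12      # hypothetical web-scale training corpus, ~15T tokens
    bits_per_token = 2.0       # assumed irreducible information per token after dedup/noise
    bits_per_param = 2.0       # assumed knowledge capacity per parameter

    corpus_bits = corpus_tokens * bits_per_token
    min_params = corpus_bits / bits_per_param

    print(f"corpus information: {corpus_bits:.1e} bits")
    print(f"parameter floor to represent it all: {min_params:.1e}")
    # Lossy compression (generalization) lowers this floor, but it stays enormous.

Tweak the assumptions however you like; the floor stays huge, which is the point.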
HBM is also very expensive.
That’s an interesting perspective. It does feel a bit like we’re setting money on fire.
Q1*4 is highly likely to be a better estimate of their eventual 2025 calendar revenue than their current trailing 12 months revenue would be. Probably still a bit conservative, but easier to justify than projecting that growth continues at exactly the same pace.
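A toy illustration of why, with made-up quarterly numbers for a hypothetical fast grower (not any real company's financials):

    # Toy numbers for a hypothetical fast grower; purely illustrative.
    prior_year = [1.0, 1.3, 1.7, 2.2]   # hypothetical quarterly revenue last year
    q1 = 2.8                            # hypothetical Q1 this year

    trailing_12m = sum(prior_year[1:]) + q1   # last four reported quarters
    annualized_q1 = 4 * q1

    print(f"trailing 12 months: {trailing_12m:.1f}")   # 8.0
    print(f"Q1 x 4:             {annualized_q1:.1f}")  # 11.2
    # If growth continues at all, the calendar-year total lands above Q1*4,
    # and well above the trailing-12-month figure.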
AI is being built out today precisely because the value returned is expected to be massive. I would argue this value will be far bigger than railroads ever could be.
Overspending will happen, for sure, in certain geographies or for specialty hardware, maybe even capacity will outpace demand for a while, but I don’t think the author makes a good case that we are there yet.
Be cautious making assessments as to compounding effects; while it remains the critical attribute, the compounding nature of a system is not always obvious. For example, the author is correct that financing for AI CapEx is starving other fields of investment at least in the short term.
The modern internet came from folks getting connected over-exuberantly based on near-term returns (with a lot of investors losing their shirts), but then humans figured out what the actual best use of the technology was.
Highly recommend this book for more, Carlota Perez is very insightful: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
No offense, but you sound naive here. This is exactly how dry powder manifests in PE/VC, and it is even predictable under cash-rich corporate regimes via M&A.
Trends like this are ripe for acceleration under favorable environmental conditions like high interest rates. Not to mention, a lot of it develops out of peer pressure.
Sometimes when your job is to deploy capital, you just deploy it. Of course you try to put it in the best possible places. But when those are few, well yeah... This happens.
If 30% of the work is done 10% faster, that leaves a 3% gain for other economic activities. If that is true, the CapEx is justified.
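Sketching that arithmetic out (same assumed numbers; the figure comes out just under 3% once you account for the speedup only applying to the affected share):

    # Amdahl's-law-style sketch of the claim above; both inputs are assumptions.
    affected_share = 0.30   # fraction of all work touched by AI
    speedup = 1.10          # that share gets done 10% faster

    new_time = (1 - affected_share) + affected_share / speedup
    gain = 1 - new_time
    print(f"overall time freed up: {gain:.1%}")   # ~2.7%, roughly the 3% quoted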
For example at one point nails were 0.5% of the economy and today owning a nail factory is a low margin business that has no social status.
Similarly, frontend software dev will get automated, and the share of the economy and the social status associated with it will shrink.
Since social status is a zero-sum game, people will increase spending in other areas where social status can be gained.
So you believe in a zero-sum economy? I think new capabilities lead to demand expansion; they mobilize latent demand that was sleeping. There is no limit to desires; not even AI automation could outrun them.
Social status is zero-sum, and as more of the economy gets automated, it becomes all that is left that is scarce.
Potentially all of those, and more, become smaller employers in relative terms.
A better nail-making machine is a single-purpose technology. It's not going to affect productivity much in unrelated industries such as healthcare, for example.
AI is a general-purpose technology like electric light or electric motors. It has the potential to improve productivity in a great many productive activities.
As another person said here, even if progress in AI stopped now, we have twenty years' sustained productivity growth ahead of us in adapting processes to use AI more effectively across the whole economy.
Whether the economy as a whole grows or shrinks depends mainly on whether households will buy more entertainment, legal services, financial services, or all the rest because they are now cheaper, and to a lesser extent on whether we can discover new things that households want to buy.
That said, even if we somehow reach A5 in 2035, we are only at about a 12x density increase. Including system packaging, chiplet, and interconnect advancements might push this to 30 to 40x. This is still a far cry from the 1000 to 10000x compute demands from a lot of AI companies. And that is assuming memory bandwidth could scale with it.
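For what it's worth, a rough compounding sketch of where a number like 12x comes from; the node cadence and per-node gain are assumptions, and recent per-node gains have been shrinking:

    # Rough compounding sketch; cadence and per-node gain are assumptions.
    years = 10                   # roughly now until 2035
    years_per_node = 2.0         # assumed cadence of a full node transition
    density_gain_per_node = 1.6  # assumed logic density gain per node

    nodes = years / years_per_node
    total_gain = density_gain_per_node ** nodes
    print(f"~{total_gain:.0f}x transistor density over {years} years")
    # Comes out around 10x, the same order as the ~12x above, nowhere near 1000-10000x.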
AI leads to a concentration of capital in the hands of those who already have money and might lead to a long-term reduction of wealth for the middle class.
Less purchasing power in the population is usually not good for economic development, so I have my doubts with respect to a boom.
Does it? I recall that railroads were monopolies (the Vanderbilts). The gov't had to pass an act to break them up, because there was price collusion and farmers were forced to pay a higher price for the transport of their foodstuffs.
> AI leads to a concentration of capital in the hands of those who already have money
That is true for a lot of other capital-intensive ventures. Why pick AI specifically?
And AI is less monopolistic - at least it's not a natural monopoly. There is competition, and there are alternatives.
AI does not, and seems to suffer from the "resource curse" https://www.lesswrong.com/posts/Mak2kZuTq8Hpnqyzb/the-intell...
They were giant infrastructure projects built by thousands of humans.
AI is not.
Feel free to skip the next section about the culture-aware interpretation of that quote. Xi's warning is more about the so-called "political tournament". Usually the central government sets some industrial policy goal (AI and EVs now, but it was other things like real estate, chips, or drones before). Local governments would collude with companies to start "projects". The goal is to get brownie points for the former and a subsidy package for the latter. Of course, nothing usually comes out of these "projects"; most just stop at the factory-building phase. Such practices have always been a real headache for the central government. Xi is warning about them showing up in the AI and EV industries now.
[^1]: https://paper.people.com.cn/rmrb/pc/content/202507/17/conten...
This reminds me of the .com bubble, when bandwidth, especially transatlantic bandwidth, was in dire shortage, but then all of a sudden it wasn't and we had huge overcapacity for years.
I wonder if this will happen with compute?
oytis•6mo ago
te_chris•6mo ago
We should be so lucky
midnightclubbed•6mo ago
Can’t believe I have to state the obvious and say that is only a potential gain if the power/cooling is from renewable sources. But I do
intended•6mo ago
sour-taste•6mo ago
toomuchtodo•6mo ago
teaearlgraycold•6mo ago
Bluestein•6mo ago
dgfitz•6mo ago
Retr0id•6mo ago
apwell23•6mo ago
actionfromafar•6mo ago
lenerdenator•6mo ago
esseph•6mo ago
> From 2013 to 2020, cloud infrastructure capex rose methodically—from $32 billion to $119 billion. That's significant, but manageable. Post-2020? The curve steepens. By 2024, we hit $285 billion. And in 2025 alone, the top 11 cloud providers are forecasted to deploy a staggering $392 billion—MORE than the entire previous two years combined.
https://www.wisdomtree.com/investments/blog/2025/05/21/this-...
armchairhacker•6mo ago
lenerdenator•6mo ago
"What do you mean the women in this game have proportions roughly equivalent to what's actually possible in nature?!?!"
drdaeman•6mo ago
intended•6mo ago
Dota, League, hell - Roblox, Twitch, Discord - have some of the most data on how angry humans are when they play vidya.
tuatoru•6mo ago
Imagine the reception that studies of female aggression get.
TheBicPen•6mo ago
lm28469•6mo ago
smokel•6mo ago
amelius•6mo ago
https://en.wikipedia.org/wiki/Fermi_paradox#Hypothetical_exp...
"Alien species may isolate themselves in virtual worlds"
etherlord•6mo ago
827a•6mo ago
If you destroy the GPU, you can write it off as a loss, which reduces your taxable income.
It's possible you could come out ahead by selling everything off, but then you'd have to pay expensive people to manage the sell-off, logistics, etc. What a mess. Easier to just destroy everything and take the write-off.
phil21•6mo ago
It is a giant pain to sell off this gear if you are using in-house folks to do so. Usually not worth it, and why things end up trashed as you state. If I have a dozen 10 year old servers to get rid of - it's usually not worth anyone's time or energy to list them for $200 on ebay and figure out shipping logistics.
However, at scale the situation and numbers change - you can call in an equipment liquidator who can wheel out 500 racks full of gear at a time and you get paid for their disposal on top of it. Usually a win/win situation since you no longer have expensive people trying to figure out who to call to get rid of it, how to do data destruction properly, etc. This usually is a help to the bottom line in almost all cases I've seen, on top of it saving internal man-hours.
If you're in "failed startup being liquidated for asset value" territory, then the receiver/those in charge typically have a fiduciary duty to find the best reasonable outcome for the investors. It's rarely throwing gear with residual value in the trash. See: used Aeron chair market.
triceratops•6mo ago
Unless GPUs are like post-Covid used cars, you're going to sell them at a loss, which can be written off. Write-offs don't have to involve destroying the asset. I don't know where you got that idea.
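A toy version of the math, with made-up prices, an assumed 21% corporate tax rate, and no prior depreciation (all assumptions, not anyone's actual books):

    # Toy write-off arithmetic; prices, tax rate, and zero prior depreciation are assumptions.
    purchase_price = 30_000   # hypothetical GPU cost (also its remaining book value here)
    resale_price = 8_000      # hypothetical liquidation price
    tax_rate = 0.21           # assumed corporate tax rate

    loss_on_sale = purchase_price - resale_price
    recovered_by_selling = resale_price + tax_rate * loss_on_sale
    recovered_by_scrapping = tax_rate * purchase_price  # write off the whole basis, sell nothing

    print(f"sell at a loss:    {recovered_by_selling:,.0f}")    # 12,620
    print(f"scrap + write-off: {recovered_by_scrapping:,.0f}")  # 6,300
    # Selling recovers more per unit; the real question is the overhead of actually selling.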
barbazoo•6mo ago
lm28469•6mo ago
We have much better things to do with these billions
ribosometronome•6mo ago
lm28469•6mo ago
What's easier: educating your people and feeding them well to build a strong and healthy nation, OR letting them rot and shoveling billions to pharma corps in the hope of finding a magic cure?
astrange•6mo ago
> shovel billions to pharma corps in the hope of finding a magic cure?
What do you mean finding? We already found it (GLP-1 agonists). Ozempic is even owned by a nonprofit (Novo Nordisk). See, everything's fine.
ribosometronome•6mo ago
A number of them seem to have skyrocketed with quality of life and personal wealth. I suspect my ancestors were skinny not because they were educated on eating well but because they lacked the same access to food we have in modern society, especially super caloric ones. I don't super want to go back to an ice cream scarce world. Things like meat consumption are linked to colon cancer and most folk are unwilling to give that up or practice meat-light diets. People generally like smoking! Education campaigns got that down briefly but it was generally not because people didn't want to smoke, it's because they didn't want cancer. Vaping is clearly popular nowadays. Alcohol, too! The WHO says there is no safe amount of alcohol consumption and attributes lots of cancer to even light drinking. I suspect people would enjoy being able to regularly have a glass of wine or beer and not have it cost them their life.
logicchains•6mo ago
Humans have so far completely failed to develop any drug with minimal side effects that cures lifestyle diseases; it's magical thinking to assume AI can definitely do it.
astrange•6mo ago
Oh, in this case GP seems to be including sunscreen as a treatment for lifestyle diseases. Pretty sure those don't have side effects, but Americans don't get the good ones.
ip26•6mo ago
HN, where "going outside" is considered a lifestyle.
nerevarthelame•6mo ago
schmidtleonard•6mo ago
We would have to 100x medical research spending before it was clearly overdone.
lm28469•6mo ago
You're not going to fix lifestyle diseases with drugs, and lifestyle diseases are the leading cause of death
schmidtleonard•6mo ago
bdangubic•6mo ago
schmidtleonard•6mo ago
conception•6mo ago
alphazard•6mo ago
Terr_•6mo ago
alphazard•6mo ago
e.g. if OpenAI is responsible for any damages caused by ChatGPT then the service shuts down until you waive liability and then it's back up. Similarly if companies are responsible for the chat bots they deploy then they can buy insurance or put up guard rails around the chat bot, or not use it.
Terr_•6mo ago
In a reality with perfect knowledge, complete laws always applied, and populated by un-bankrupt-able immortals with infinite lines of credit, yes. :P
sterlind•6mo ago
whereas my experience describing my problem and actually asking the AI is much, much smoother.
I'm not convinced the "LLM+scaffolding" paradigm will work all that well. Sanity degrades with context length, and even the models with huge context windows don't seem to use it all that effectively. RAG searches often give lackluster results. The models fundamentally seem to do poorly with using commands to accomplish tasks.
I think fundamental model advances are needed to make most things more than superficially automatable: better planning/goal-directed behavior, a more organic connection to RAG context, automatic gym synthesis, and RL-based fine tuning (that holds up to distribution shift.)
I think that will come, but I think if LLMs plateau here they won't have much more impact than Google Search did in the '90s.
break_the_bank•6mo ago
I'd give building with Sonnet 4 a fair shot. It's really good - not accurate all the time, but pretty good.
fragmede•6mo ago
Given that Google launched in '98, and is one of the biggest tech companies in the world, I'm not sure what you mean by that.
Winsaucerer•6mo ago
I've said the same thing as you, that there is a LOT left to be done with current AI capabilities, and we've barely scratched the surface.
baxtr•6mo ago
To me, this all sounds like an “end-of-the-world” nihilistic wet dream, and I don’t buy the hype.
Is it just me?
ToucanLoucan•6mo ago
Because the only thing that gets the executive class hornier than new iPhone-tier products is getting to lay off tons of staff. It sends the stock price through the roof.
It follows from there that an iPhone-tier product that also lets them lay off tons of staff would be like fucking catnip to them.
astrange•6mo ago
There's no such thing as taking people's jobs, nobody and nothing is going to take your job except for Jay Powell, and productivity improvements cause employment to increase not decrease.
ivape•6mo ago
noitpmeder•6mo ago
kulahan•6mo ago
Your response doesn’t explain why so many people are hyped about it, just why CEOs are.
fnimick•6mo ago
linotype•6mo ago
JumpCrisscross•6mo ago
You're correct. But it doesn't matter. Remember the San Francisco protests against tech? People will kill a golden goose if it's shinier than their own.
oytis•6mo ago
JumpCrisscross•6mo ago
It's self-defeating but predictable. (Hence why the protests were tolerated, if not outright backed, by NIMBY interests.)
My point is the same nonsense can be applied to someone not earning a tech wage celebrating tech workers getting replaced by AI. It makes them poorer, ceteris paribus. But they may not understand that. And the few that do may not care (or may have a way to profit off it, directly or indirectly, such that it's acceptable).
oytis•6mo ago
rcpt•6mo ago
https://sloanreview.mit.edu/article/the-multiplier-effect-of...
daedrdev•6mo ago
satyrun•6mo ago
The reason to be excited economically about this is that, if it happens, it will be massively deflationary. Pretending CEOs are just going to pocket the money is economically stupid.
Being able to use a superintelligence has been a long-time dream too.
What is depressing is the amount of tech workers who have no interest in technological advancement.
overgard•6mo ago
And since when do business executives NOT pocket the money? Pretty much the only exception is when they reinvest the savings into the business for more growth, but that reinvestment and growth is usually only something the rest of us care about if it involves hiring.
crystal_revenge•6mo ago
This doesn't even require any "conspiracy" among CEOs, just people with a vested interest in AI hype who act in that interest, shaping the type of content their organizations will produce. We saw something lesser with the "return to office" frenzy, just because many CEOs realized a large chunk of their investment portfolio was in commercial real estate. That was only less hyped because I suspect there were larger numbers of CEOs with an interest in remaining remote.
Outside of the tech scene, AI is far less hyped and in places where CEOs tend to have little impact on the media it tends to be resisted rather than hyped.
citrin_ru•6mo ago
mulmen•6mo ago
rpcope1•6mo ago
phil21•6mo ago
It's difficult to have much empathy for the "learn to code" crowd who seemingly almost got a sense of joy out of watching those jobs and lifestyles get destroyed. Almost some form of childhood high school revenge fantasy style stuff - the nerd finally gets one up on the prom king. Otherwise I'm not sure where the vitriol came from. Way too many private conversations and overheard discussion in the office to make me think these were isolated opinions.
That said, it's not everyone in tech. Just a much larger percentage than I ever thought, which is depressing to think about.
It's certainly been interesting to watch some folks who a decade ago were all about "only skills matter, if you can be outcompeted by a robot you deserve to lose your job" make a 180 on the whole topic.
rightbyte•6mo ago
ModernMech•6mo ago
eastbound•6mo ago
I'm paid about 16x what an electronics engineer makes. Salaries in IT are completely unrelated to the person's effort compared to other white-collar jobs. It would take an entire career for some manager to reach what I made after 5 years. I may be 140 IQ, but I'm also a dumbass in social terms!
ivape•6mo ago
bcrosby95•6mo ago
MyOutfitIsVague•6mo ago
throw1235435•6mo ago
lttlrck•6mo ago
eastbound•6mo ago
phil21•6mo ago
I had the same thought you did back then. If I could build a company with 3 people pulling a couple million of revenue per year, what did that mean to society when the average before that was maybe a couple dozen folks?
Technology concentrates gains to those that can deploy it - either through knowledge, skill, or pure brute force deployment of capital.
2944015603•6mo ago
For the same reason people are obsessed with replacing all blue-collar jobs. Every cent that a company doesn't have to spend on its employees is another cent that can enrich the company's owners.
oytis•6mo ago
Maybe it's my post-communist background though and not relevant for the rest of the world
detourdog•6mo ago
I’m skeptical that this is a good use of resources or energy consumption.
overgard•6mo ago
Ekaros•6mo ago
That is what allowed our current lifestyles. It is a good thing. Now it is just coming to the next area.
rightbyte•6mo ago
Ekaros•6mo ago
With AI it is white collar work.
rpdillon•6mo ago
And I was explaining that I work in tech, so I live in the future to some degree, but that ultimately, even with HIPAA and other regulations, there's too much of a gain here for it not to be deployed eventually. And those people's time is going to be used differently when that happens. I was speculating that it could be used for interviews as well, but I think I'm less confident there.
LinXitoW•6mo ago
We're all far closer to poor than we are to having enough capital to live off of efficiency increases. AI is the last thing the capitalist class requires to finally throw off the shackles of humanity, of keeping around the filthy masses for their labor.
AuryGlenz•6mo ago
overgard•6mo ago
Producing things cheaper sounds great, but just because its produced cheaper doesn't mean it is cheaper for people to buy.
And it doesn't matter if things are cheap if a massive number of people don't have incomes at all (or even a reasonable way to find an income - what exactly are white collar professionals supposed to do when their profession is automated away, if all the other professions are also being automated away?)
Sidenote btw, but I do think it's funny that the investor class doesn't think AI will come for their role..
To me the silver lining is that I don't think most of this comes to pass, because I don't think current approaches to AGI are good enough. But it sure shows some massive structural issues we will eventually face
chii•6mo ago
Investors don't perform work (labour); they take capital risk. An AI does not own capital, and thus cannot "take" that role.
If you're asking about the role of a manager of investment, that's not an investor - that's just a worker, which can and would be automated eventually. Robo-advisors are already quite common. The owner of capital can use AI to "think" for them in choosing what capital risk to take.
And as for the massive number of people who don't have income - I don't think that will come to pass either (just as you don't think AGI will come to pass). Mostly because the speed of this automation will decline, as it's not that trivial to do: the low-hanging fruit would've been picked asap, and the difficult stuff left will take ages to automate.
imtringued•6mo ago
If a bootstrapped startup has immediate access to the equivalent of 20 administrative employees via AI, then what purpose does the investor have?
chii•6mo ago
And if a single bootstrapped "investor" can support such a company, that's an even better world than today, isn't it? It means everyone has a chance at breaking out with a successful company/product.
rightbyte•6mo ago
oytis•6mo ago
Like you have a brilliant idea, but unfortunately don't have any hard skills. Now you don't have to pay enormous sums of money to geeks and have to suffer them to make it come true. Truly a dream!
jahewson•6mo ago
olalonde•6mo ago
jdietrich•6mo ago
If you want to understand our current moment, I would urge you to study that history.
https://en.wikipedia.org/wiki/Swing_Riots
noncoml•6mo ago
cheschire•6mo ago
Legend2440•6mo ago
cheschire•6mo ago
noncoml•6mo ago
olddustytrail•6mo ago
Calculators didn't replace mathematicians, they replaced Computers (as an occupation). To the point that most people don't even know it used to be a job for people.
I say calculators but there is a blurry line between early electronic computers and calculators. Portable electronic calculators also replaced the slide rule, around the late 1970s, which had been the instrument of choice for engineers for around 350 years!
noncoml•6mo ago
Imho you are underestimating the work of programmers if you compare them to “Computers”
olddustytrail•6mo ago
In fact the first programmers were mainly women because they came from a Computer background.
detourdog•6mo ago
blibble•6mo ago
the AI parallel is quite apt actually
crop_rotation•6mo ago
> They seem to be totally convinced that this will happen.
The two groups of people are not the same. I, for example, belong to the 2nd but not the 1st. If you have used the current gen LLM coding tools, you will realize they have gotten scary good.
throw310822•6mo ago
Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently. At that point I'd rather get the money anyway and spend the day at the beach.
throw1235435•6mo ago
Believe it or not, most SWEs and white-collar workers in general don't get these perks, especially outside the US, where most firms have made sure tech workers in general are paid "standard wages" even if they are "good".
throw310822•6mo ago
lelanthran•6mo ago
That's true for many jobs. The only reason many people have a job is because of a variety of regulations preventing that job from being outsourced.
> At that point I'd rather get the money anyway and spend the day at the beach.
You won't get the money and spend the day at the beach; you'll starve to death.
throw310822•6mo ago
In any case, there's also a difference between the idea that it can be me or another person doing the same job, and maybe that person can be paid less because of their lower cost of living, but in the end they will put in the same effort as I do; and the idea that a tool can do the job effortlessly and the only reason I have to suffer over it is to justify a salary that has no reason to exist. Then, again, just force the company to pay me while allowing them to use whatever tool they want to get the job done.
miki123211•6mo ago
If you replace lawyers with AI, poor people will be able to take big companies to court and defend themselves against frivolous lawsuits, instead of giving in and settling. If you replace doctors, the cost of medicine will go down dramatically, and so will waiting times. If you replace financial advisors, everybody will have their money managed in an optimal way, making them richer and less likely to make bad financial decisions. If you replace creative workers, everybody will have access to the exact kind of music, books, movies and video games they want, instead of having to settle for what is available. If you automate away delivery and drivers (particularly with drones), the price of prepared food will fall dramatically.
mgraczyk•6mo ago
kadushka•6mo ago
sensanaty•6mo ago
whydoyoucare•6mo ago
jahewson•6mo ago
charleshn•6mo ago
Just looking at what happened with chess, go, strategy games, protein folding etc, it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms etc - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
oytis•6mo ago
I don't mind if software jobs move from writing software to verifying software either if it makes the whole process more efficient and the software becomes better as a result. Again, not what is happening here.
What is happening, at least in AI optimist CEO minds is "disruption". Drop the quality while cutting costs dramatically.
charleshn•6mo ago
But the next step is obviously increased formalism via formal methods, deterministic simulators, etc., basically so that one could define an environment for an RL agent.
bigyabai•6mo ago
puchatek•6mo ago
yeasku•6mo ago
bigyabai•6mo ago
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
TheBicPen•6mo ago
mvieira38•6mo ago
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need that much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs
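For anyone wondering what "a Python scraper using only the stdlib" looks like in practice, here's a minimal sketch (the URL is a placeholder):

    # Minimal stdlib-only scraper: fetch a page and collect its links.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    print(collector.links)

Nothing fancy, but exactly the kind of thing that used to require someone to already know Python.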
dontlaugh•6mo ago
tim333•6mo ago
overgard•6mo ago
kadushka•6mo ago
charleshn•6mo ago
At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
[0] https://x.com/alexwei_/status/1946477742855532918
bwfan123•6mo ago
Based on the past history with FrontierMath & AIME 2025 [1],[2] I would not trust announcements which can't be independently verified. I am excited to try it out though.
Also, the performance of LLMs was not even bronze [3].
Finally, this article shows that LLMs were just mostly bluffing [4].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
[2] https://x.com/DimitrisPapail/status/1888325914603516214
[3] https://matharena.ai/imo/
[4] https://arxiv.org/pdf/2503.21934
yeasku•6mo ago
kadushka•6mo ago
yeasku•6mo ago
It took us about 5 seconds.
overgard•6mo ago
But even if they can appear to reason, if it's not reliable, it doesn't matter. You wouldn't trust a tax advisor that makes things up 1/10 times, or even 1/100 times. If you're going to replace humans, "reliable" and "reproducible" are the most important things.
kadushka•6mo ago
mvieira38•6mo ago
It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
throw310822•6mo ago
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore; maybe they're not even questions anymore - and almost from one day to the next.
jpc0•6mo ago
There's this paper[1] you should read; it sparked an entire new AI dawn, and it might answer your question.
1. https://arxiv.org/abs/1706.03762
mvieira38•6mo ago
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
throw310822•6mo ago
I think they have the capability to do it, yes. Maybe it's not the best tool you can use- too expensive, or too flexible to focus with high accuracy on that single task- but yes you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text I'm sure they can do jobs that BERT can't.
> they still can't fit anywhere in production
Not sure what do you mean for "production" but there's an enormous amount of people using them for work.
charcircuit•6mo ago
Why would it play like the average? LLMs pick tokens to try and maximize a reward function, they don't just pick the most common word from the training data set.
rcpt•6mo ago
bwfan123•6mo ago
Many of us have been through previous hype-cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing bust (reinforcement learning). A few claims in your note like "it's only a matter of time before we have domain-specific ASI" are jarring - as you are "assuming the sale". LLMs are great as a tool for some usecases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.
charleshn•6mo ago
[0] https://x.com/alexwei_/status/1946477742855532918
bwfan123•6mo ago
If the questions were given as-is (without a human formalizing them), and the LLM didn't need domain solvers, and the LLM was not trained on them already (which happened with FrontierMath), I would be impressed.
Based on the past history with FrontierMath [1][2], I remain skeptical. The skeptic in me says that this happens prior to big announcements (GPT-5) to create hype.
Finally, this article shows that LLMs were just bluffing on the 2025 USAMO [3].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
[2] https://x.com/DimitrisPapail/status/1888325914603516214
[3] https://arxiv.org/pdf/2503.21934
Tainnor•6mo ago
It can already be "cheaply verified" in the sense that if you write a proof in, say, Lean, the compiler will tell you if it's valid. The hard part is coming up with the proof.
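To make the point concrete, a trivial Lean example: the checker accepts this line mechanically, but someone still had to know (or find) the lemma that closes it.

    -- The proof term is checked mechanically; finding it is the hard part.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b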
It may be possible that some sort of AI at some stage becomes as good, or even better than, research mathematicians in coming up with novel proofs. But so far it doesn't look like it - LLMs seem to be able to help a little bit with finding theorems (e.g. stuff like https://leansearch.net/), but to my understanding they are rather poor beyond that.
tim333•6mo ago
I guess maybe it isn't that obvious - I've read quite a lot in the area. People saying LLMs aren't very good are a bit like people long ago saying chess programs aren't very good. It was true but there was an inevitable advance as the hardware got better and then that led to enthusiasm to improve the software and computers became better than humans in a rather predictable way. It's driven in the end by hardware improvements. Whether the software is LLM or some other algo is kind of unimportant.
tootie•6mo ago
jandrewrogers•6mo ago
After the dotcom crash, much of this infrastructure became distressed assets that could be picked up for peanuts. This fueled a large number of new startups in the aftermath that built business models figuring out how to effectively leverage all of this dead fiber when you don't have to pay the huge capital costs of building it out yourself. At the time, you could essentially build a nationwide fiber network for a few million dollars if you were clever, and people did.
These new data centers will find a use, even if it ends up being by some startup who picks it up for nothing after a crash. This has been a pattern in US tech for a long time. The carcass of the previous boom's whale becomes cheap fuel for the next generation of companies.
segmondy•6mo ago
cute_boi•6mo ago
suddengunter•6mo ago
When I moved a few years ago from Eastern Europe (where I had 1Gbit/s to my apartment for years) to the UK, I was surprised that "the best" internet connection I could get was about a 40Mbit/s phone line. But it's a small town, and over the past years even we have gotten fiber, up to 2Gbit/s now.
I'm surprised US still has issues that you mentioned. Have you considered Starlink(fuck Musk, but the product is decent)/alternatives?
Fade_Dance•6mo ago
One is, of course, the size of the country, but that's hardly an "excuse." It does contribute though.
The other big reason is lack of competition in the ISP space, and this is compounded by a distinctly American captured system where the owners/operators of the "public" utility poles shut out new entrants and have no incentive to improve the situation.
Meanwhile the nationwide regulatory agencies have been stripped down and courts have de-toothed them, reducing likelihood of top-down reform, and often these sorts of problems inevitably end up running into the local and state government vs national government split that is far more severe in the US.
So it's one of those problems that is surprising to some degree, but when you read about things like public utility telephone poles captured by corporate interests, it's also distinctly ridiculous and American, and not surprising at all.
mad0•6mo ago
soared•6mo ago
alexey-salmin•6mo ago
ruined•6mo ago
numpad0•6mo ago
For one entire rented or owned house, it's just a call and a drill away.
9rx•6mo ago
La ti da. My 50Mbps in an urban area doesn't even provide 10Mbps up.
> In an urban area too.
Funnily enough, my farmland has gigabit service.
But I, unfortunately, don't live there. Maybe some day I'll figure out how to afford to build a house on that land. But, until then, shitty urban internet it is, I guess.
mycall•6mo ago
How often are they the same players in different costumes?
afiodorov•6mo ago
Aeolun•6mo ago
So it’s still being used now. That’s good right?
tim333•6mo ago
casey2•6mo ago