Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

https://github.com/KittenML/KittenTTS
116•divamgupta•1h ago•74 comments

Open models by OpenAI

https://openai.com/open-models/
1640•lackoftactics•13h ago•621 comments

Marines now have an official drone-fighting handbook

https://www.marinecorpstimes.com/news/your-marine-corps/2025/08/04/the-marines-now-have-an-official-drone-fighting-handbook/
70•Gaishan•3h ago•48 comments

Software Rot

https://permacomputing.net/software_rot/
61•pabs3•3h ago•18 comments

The Amaranth hardware description language

https://amaranth-lang.org/docs/amaranth/latest/intro.html#the-amaranth-language
36•pabs3•2h ago•7 comments

I'm Archiving Picocrypt

https://github.com/Picocrypt/Picocrypt/issues/134
74•jaden•3h ago•7 comments

Genie 3: A new frontier for world models

https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
1239•bradleyg223•16h ago•427 comments

Spotting base64 encoded JSON, certificates, and private keys

https://ergaster.org/til/base64-encoded-json/
263•jandeboevrie•10h ago•111 comments

Ollama Turbo

https://ollama.com/turbo
329•amram_art•11h ago•180 comments

Create personal illustrated storybooks in the Gemini app

https://blog.google/products/gemini/storybooks/
138•xnx•9h ago•42 comments

Ozempic shows anti-aging effects in trial

https://trial.medpath.com/news/5c43f09ebb6d0f8e/ozempic-shows-anti-aging-effects-in-first-clinical-trial-reversing-biological-age-by-3-1-years
230•amichail•15h ago•336 comments

Scientific fraud has become an 'industry,' analysis finds

https://www.science.org/content/article/scientific-fraud-has-become-industry-alarming-analysis-finds
338•pseudolus•19h ago•281 comments

Consider using Zstandard and/or LZ4 instead of Deflate

https://github.com/w3c/png/issues/39
155•marklit•12h ago•87 comments

Things that helped me get out of the AI 10x engineer imposter syndrome

https://colton.dev/blog/curing-your-ai-10x-engineer-imposter-syndrome/
781•coltonv•16h ago•569 comments

Claude Opus 4.1

https://www.anthropic.com/news/claude-opus-4-1
728•meetpateltech•13h ago•271 comments

Ask HN: Have you ever regretted open-sourcing something?

174•paulwilsonn•3d ago•230 comments

Bourdain, My Camera, and Me (2021)

https://www.melaniedunea.com/essays/blog-post-title-one-phd62
4•NaOH•2d ago•0 comments

uBlock Origin Lite now available for Safari

https://apps.apple.com/app/ublock-origin-lite/id6745342698
1017•Jiahang•21h ago•398 comments

Build Your Own Lisp

https://www.buildyourownlisp.com/
240•lemonberry•18h ago•63 comments

The first widespread cure for HIV could be in children

https://www.wired.com/story/the-first-widespread-cure-for-hiv-could-be-in-children/
88•sohkamyung•3d ago•15 comments

US tech rules the European market

https://proton.me/blog/us-tech-rules-europe
61•devonnull•3h ago•34 comments

Kyber (YC W23) is hiring enterprise account executives

https://www.ycombinator.com/companies/kyber/jobs/6RvaAVR-enterprise-account-executive-ae
1•asontha•9h ago

AI is propping up the US economy

https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping
207•mempko•10h ago•213 comments

Show HN: Stagewise (YC S25) – Front end coding agent for existing codebases

https://github.com/stagewise-io/stagewise
40•juliangoetze•15h ago•46 comments

Tell HN: Anthropic expires paid credits after a year

231•maytc•1d ago•98 comments

Show HN: Whittle – A shrinking word game

https://playwhittle.com/
89•babel16•12h ago•45 comments

US reportedly forcing TSMC to buy 49% stake in Intel to secure tariff relief

https://www.notebookcheck.net/Desperate-measures-to-save-Intel-US-reportedly-forcing-TSMC-to-buy-49-stake-in-Intel-to-secure-tariff-relief-for-Taiwan.1079424.0.html
369•voxadam•12h ago•411 comments

Los Alamos is capturing images of explosions at 7 millionths of a second

https://www.lanl.gov/media/publications/1663/dynamics-of-dynamic-imaging
120•LAsteNERD•15h ago•92 comments

When Disney Went Digital

https://animationobsessive.substack.com/p/when-disney-went-digital
14•zdw•2d ago•2 comments

Under the Hood of AFD.sys Part 1: Investigating Undocumented Interfaces

https://leftarcode.com/posts/afd-reverse-engineering-part1/
33•omegadev•2d ago•7 comments

AI is propping up the US economy

https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping
205•mempko•10h ago

Comments

0cf8612b2e1e•10h ago

  Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined. You can just pull any of those quotes out—spending on IT for AI is so big it might be making up for economic losses from the tariffs, serving as a private sector stimulus program.
Wow.
bravetraveler•10h ago
Tepidly socially-acceptable welfare
intended•9h ago
Yes, wow. When I heard that data point I was floored.
electrondood•9h ago
For context though, consumer spending has contracted significantly.
gruez•9h ago
It's not as bad as the alarmist phrasing would suggest. Consider a toy example: suppose consumer spending was $100 and grew by $1, but AI spending was $10 and grew by $1.5. Then you can rightly claim that "AI added more to the growth of the US economy than all consumer spending combined"[1]. But it's not as if the economy consists mostly of AI, or that if AI spending stopped the economy would collapse. It just means AI is a major contributor to the economy's growth right now. It's not even certain that the AI bubble popping would lead to all of that growth evaporating. Much of the AI boom involves infrastructure build-out for data centers. That can be reallocated to building houses if datacenters are no longer needed.

[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
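To make the toy example concrete, here is a minimal sketch (the numbers are the hypothetical ones from the comment above) showing that "contribution to growth" compares absolute changes, not sector sizes:

    # Toy numbers from the comment above (hypothetical, in dollars).
    consumer_prev, consumer_now = 100.0, 101.0   # consumer spending grew by $1
    ai_prev, ai_now = 10.0, 11.5                 # AI spending grew by $1.5

    consumer_growth = consumer_now - consumer_prev
    ai_growth = ai_now - ai_prev

    print(f"Consumer contribution to growth: {consumer_growth:+.2f}")
    print(f"AI contribution to growth:       {ai_growth:+.2f}")
    print(f"AI share of the total economy:   {ai_now / (ai_now + consumer_now):.1%}")
    # AI adds more to *growth* ($1.5 vs $1) while remaining roughly a tenth of the total.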

agent_turtle•8h ago
[flagged]
gruez•7h ago
>Have you heard of the Dunning-Kruger effect?

Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.

https://paulgraham.com/disagree.html

agent_turtle•7h ago
One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.

Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.

While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.

gruez•6h ago
>One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.

Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed up would be allocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.

>A resilient economy has multiple growth areas; an unstable one has one or two.

>[...] it would undoubtedly get worse for the reasons I listed above,

No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.

>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.

That's a fair point, although the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short-lived.

dang•6h ago
Please don't cross into personal attack. We ban accounts that do that.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.

troyastorino•8h ago
I've seen this quote in a couple places and it's misleading.

Using non-seasonally adjusted St. Louis FRED data (https://fred.stlouisfed.org/series/NA000349Q), and the AI CapEx spending for Meta, Alphabet, Microsoft, and Amazon from the WSJ article (https://www.wsj.com/tech/ai/silicon-valley-ai-infrastructure...):

  Quarter    Consumer spending   AI CapEx
  -------    -----------------   ------------
  Q4 2024    ~$5.2 trillion      ~$75 billion
  Q1 2025    ~$5.0 trillion      ~$75 billion
  Q2 2025    ~$5.2 trillion      ~$100 billion

So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.

If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which likely outstrips total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)

If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196), it's very unclear what it's showing... it only shows percentage change, and it doesn't even show anything about consumer spending.

So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
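A minimal sketch of the same comparison, using only the rounded figures quoted above (approximations of the FRED/WSJ numbers, not the underlying data):

    # Rounded figures from the comment above, in billions of USD.
    consumer = {"Q4 2024": 5200, "Q1 2025": 5000, "Q2 2025": 5200}
    ai_capex = {"Q4 2024": 75,   "Q1 2025": 75,   "Q2 2025": 100}

    consumer_delta = consumer["Q2 2025"] - consumer["Q4 2024"]  # ~0: flat (not seasonally adjusted)
    ai_delta = ai_capex["Q2 2025"] - ai_capex["Q4 2024"]        # +25

    print(f"Consumer spending change: {consumer_delta:+d} B")
    print(f"AI CapEx change:          {ai_delta:+d} B")
    # With flat consumer spending, any sector with positive growth "contributes more
    # to growth", no matter how small it is relative to the total.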

raincole•4h ago
> growth

Is the keyword here. US consumers have been spending so much already that of course that sector doesn't have much room to grow.

lisbbb•3h ago
That's bad because you just know at some point the bell is getting rung and then the bubble bursts. It was the same thing with office space in the late 1990s--they overbuilt like crazy predicting huge demand that never appeared and then the dot-com bubble burst and that was that.
lenerdenator•10h ago
And that's why there's a desire to make interest rates lower: cheap money is good for propping up bubbles.

Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.

thrance•9h ago
Trump and his administration harassing the Fed and Powell over interest rates is like a swarm of locusts salivating at ripened wheat fields. They want a quick feast at the expense of everything and everyone else, including themselves over the long term.
dylan604•9h ago
Trump knows that the next POTUS can just reverse his decisions much like he's done in both of his at bats. Only thing is there is no next at bat for Trump (without major changes that would be quite devastating), so he's got to get them in now. The sooner the better to take as much advantage of being in control.
pessimizer•9h ago
The left is almost completely unanimous in their support for lowering interest rates, and have been screaming about it for years, since the first moment they started being raised again. And for the same reasons that Trump wants it, except without the negative connotations for some reason.

Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.

I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:

> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."

"...the Fed shouldn't be independent and they should lower interest rates now."

mason_mpls•8h ago
I have not heard a single left wing pundit demand interest rates go down
rockemsockem•8h ago
Elizabeth Warren has gone on several talk shows insisting interest rates should be lowered. If you look at video from the last time Powell was being questioned by Congress there were many other Democratic congress-people asking him why he wouldn't lower rates.

Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.

mason_mpls•8h ago
Jerome Powell belongs on Mt Rushmore if you ask me
no_wizard•5h ago
He's entirely too cozy with big banks. He's one of their biggest advocates when it comes to policy. I think Elizabeth Warren had a point here[0]

[0]: https://www.bloomberg.com/news/articles/2024-07-03/senator-w...

rockemsockem•4h ago
Coziness with banks can certainly be an issue. I don't know the specifics, and that article is paywalled for me, but it sounds very believable.

That doesn't really change what I said regarding interest rates though.

thrance•6h ago
Who cares? Even if it were true, why is your first reflex to point the finger at progressives when they're absolutely irrelevant to the current government?
skinnymuch•5h ago
People are upset at the tariffs as taxes because they hurt poorer people more. That's how it works when everyone pays the same amount of taxes.
decimalenough•9h ago
You really think the AI bubble can be sustained for another three years?
dylan604•9h ago
15 months. Mid-terms are next November. After that, legacy cannot be changed by election. If POTUS loses control of either/both chambers, he might have some 'splanin to do. If POTUS keeps control and/or makes further gains, there might not be an election in 3 years.
Hikikomori•9h ago
With gerrymandering in Texas and elsewhere they might stay in power, and if they do, it's unlikely to change. Basically speed-running a fascist takeover.
dylan604•8h ago
Interesting to see if California follows suit. Governor Newsom has his eye on the 2028 prize it seems. If the Dems do not wake up and start playing the same game the GOP is playing, they will never win. Taking the higher ground is such a nice concept, but it's also what losers say to feel good about not winning. Meanwhile, those willing to break/bend/change rules to ensure they continue to win will, well, continue to win.
SpicyLemonZest•4h ago
I think it's important to remember how California got here. In the 2000 redistricting, the state legislature agreed to conduct an extreme bipartisan gerrymander, drawing every seat to be as safe as possible so that no incumbent could be voted out except by losing a primary. This was widely understood to be a conspiracy of politicians against democratic accountability, and thus voters decided (with the support of many advocacy orgs and every major newspaper in the state) to put an end to it.

That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.

smackeyacky•8h ago
It's not really a speed run.

The seeds were planted after Nixon resigned, when it was decided to re-shape the media landscape and move the Overton window rightwards in the 1970s, dismantling social democracy across the West and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).

It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.

mathiaspoint•8h ago
The way most of you define "fascism", America has always been fascist, with a brief perturbation where we tried democracy and some communism.

If you see it that way this is just a reversion to the mean.

smackeyacky•7h ago
True. We have collectively forgotten segregation was a thing in the US. Perhaps it has always been a right wing country that flirts with fascism.
dylan604•6h ago
The Constitution was clearly written with rich, land-owning white men foremost in mind, with everyone else left out or counted only in fractions. They added some checks and balances as a hand-wavy idea of trying to stay away from autocracy, but they kind of made them toothless. I'd guess they just didn't have the imagination that people would willingly allow someone to go back towards autocracy, since they were fighting so hard to leave it.
fzeroracer•5h ago
It's been an unfortunate truth that the US has long been a country that's flirted with fascism. Ultimately, Thaddeus Stevens was right in his conviction that after the civil war the southern states should've been completely crushed and the land given to the freedmen.
tharmas•7h ago
Excellent post.

You could say that was when things reverted back to "normal". The FDR social reconstruction and post-WW2 economic boom were the exception, an anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some sizable problems (Sweden in particular), but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.

skinnymuch•5h ago
Another difference is neoliberalism ramping up in that time period, the 80s. The concept of privatizing anything and everything, and bullshit like "public-private partnerships", is fairly recent.
tick_tock_tick•8h ago
> he might have some 'splanin to do

About what? Like, seriously, what would they even do other than try to lame-duck him?

The big issue is Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?

dylan604•7h ago
Gerrymandering helps. Just look at Texas
chasd00•4h ago
And California
icedchai•9h ago
Possibly. For comparison, how long did the dot-com bubble last? From roughly 1995 to early 2000.
tick_tock_tick•8h ago
Honestly, it's not really bubbling like we expected: revenues are growing way too fast, and income from AI investment is coming back to these companies way sooner than anyone thought possible. At this rate it would take another couple of 20+% years in the stock market for there to be anything left of a "bubble".

Nvidia, the poster child of this "bubble", has been getting effectively cheaper every day.

bluecalm•9h ago
I am curious, why do you think lower interest rates are bad for an average person?
mason_mpls•9h ago
We want interest rates as close to zero as possible. However they’re also the only reliable tool available to stop inflation.

You're implying the country exerting financial responsibility to control inflation isn't good.

Not using interest rates to control inflation caused the stagflation crisis of the 70s, which ended when Volcker set rates to 20%.

verdverm•9h ago
more money in the economy drives inflation, which largely affects those with less disposable income

This is why in a hot economy we raise rates, and in a cool economy we lower them

(oversimplification, but it is a commonly provided explanation)

tharmas•7h ago
>more money in the economy drives inflation

Not necessarily. Sure, if that money is chasing fixed assets like housing, but if that money was invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.

verdverm•6h ago
> if that money was invested into production of things to consume its not necessarily inflation inducing is it

People are paid salaries to work at these production facilities, which means they have more money to spend, and the competition drives people to be willing to spend more to get the outputs. Not all outputs will be scaled; those that aren't experience inflation, like food and housing today.

827a•1h ago
The issue with this theory in the post-internet economy is: it's only true if that money is spent chasing a limited amount of scarce goods and services. But the majority of the US economy today is spent on goods and services that are no longer scarce (more accurately, whose unit costs are so low that they might as well be unlimited). We are in a very different world than the one Volcker presided over, and this is the core axiom as to why: the economists who correctly invented this central-bank interest rate lever could never have foreseen a world so supply-unconstrained.

Another way to look at this: Low interest rates can induce demand and drive inflation. But they also control the rates when financing supply-side production; so they can also ramp up supply to meet increased demand.

1. Not all goods and services are like this, obviously. Real estate is the big one that low interest rates will continue to inflate. We need legislative-side solutions to this, ideally focused at the state and local levels.

2. None of this applies if you have an economy culturally resistant to consumerism, like Japan. Everything flips on its head and things get weird. But that's not the US.

micromacrofoot•8h ago
Low interest rates make borrowing cheap, so companies flood money into real estate and stocks, inflating prices. This also drives up costs for regular people, fuels risky lending (remember subprime mortgages?), and when the bubble bursts... guess who gets hit the hardest when companies start scaling back and lenders come calling?
nr378•8h ago
Interest rates set the exchange rate between future cashflows (i.e. assets) and cash today. Lower interest rates mean higher asset values, higher interest rates mean lower asset values. Higher asset values generally disproportionately benefit those that own assets (wealthy people) over those that don't (average people).

Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
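As a rough illustration of that exchange rate, here is a minimal sketch (hypothetical cashflows, purely for illustration) of how the present value of the same asset changes with the discount rate:

    # Present value of a fixed stream of future cashflows at two discount rates,
    # showing why lower rates mean higher asset values.
    def present_value(cashflow: float, years: int, rate: float) -> float:
        return sum(cashflow / (1 + rate) ** t for t in range(1, years + 1))

    asset_cashflow = 100.0  # hypothetical: $100 per year for 30 years
    for rate in (0.05, 0.02):
        print(f"rate={rate:.0%}: PV = {present_value(asset_cashflow, 30, rate):,.0f}")
    # rate=5%: PV ~ 1,537; rate=2%: PV ~ 2,240 -- the identical cashflows are
    # "worth" roughly 45% more at the lower rate.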

tharmas•7h ago
> it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.

Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.

throwmeaway222•9h ago
- Microsoft’s AI-fueled $4 trillion valuation

As someone at an AI company right now: almost every company we work with is using Azure-wrapped OpenAI. We're not sure why, but that is the case.

hnuser123456•9h ago
Nobody gets fired for choosing Microsoft
ElevenLathe•9h ago
MS salespeople presumably already have weekly or monthly meetings with all the people with check-cutting authority, and OpenAI doesn't. They're already an approved vendor, and what's more the Azure bill is already really really big, so a few more AI charges barely register.

It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.

This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.

guidedlight•9h ago
It’s because most companies already have a lot of confidence with Microsoft contracts, and are generally very comfortable storing and processing highly sensitive data on Microsoft’s SaaS platforms. It’s a significant advantage.

Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.

chung8123•8h ago
All of their files are likely in Microsoft storage already, too.
edaemon•4h ago
Lots of AI things are features masquerading as products. Microsoft already has the products, so they just have to add the AI features. Customers can either start using a new and incomplete product just for one new feature, or they can stick with the mature Microsoft suite of products they're already using and get that same feature.
micromacrofoot•9h ago
This is going to be an absolute disaster; the government is afraid of regulating AI because it's so embedded in our economy now, too.
jcgrillo•7h ago
I think we're already starting to see the cracks with OpenAI drastically tightening their belt across various cloud services. Depends how long it takes to set in, but seems like it could be starting this quarter.
jackcosgrove•9h ago
I'm not sure the comparison is apples to apples, but this article claims the current AI investment boom pales compared to the railroad investment boom in the 19th century.

https://wccftech.com/ai-capex-might-equal-2-percent-of-us-gd...

> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!

Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of legroom to run before it reaches parity with the rail road boom of that bygone era.

decimalenough•9h ago
There is obvious utility to railroads, especially in a world with no cars.

The net utility of AI is far more debatable.

gruez•9h ago
>The net utility of AI is far more debatable.

I'm sure if you asked the Luddites about the utility of mechanized textile production you'd get a negative response as well.

shadowgovt•8h ago
With this generation of AI, it's too early to tell whether it's the next railroad, the next textile machine, or the next way to lock your exclusive ownership of an ugly JPG of a multicolored ape into a globally-referenceable, immutable datastore backed by a blockchain.
decimalenough•8h ago
Railroads move people and cargo quickly and cheaply from point A to point B. Mechanized textile production made clothing, a huge sink of time and resources before the industrial age, affordable to everybody.

What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).

azeirah•8h ago
For learning with self-study it has been amazing.
gamblor956•8h ago
Until you dive deeper and discover that most of what the AI agents provided you was completely wrong...

There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.

Gud•8h ago
That has not been the case for me. I use LLMs to study German; so far they've been an excellent teacher.

I also use them to help me write code, which they do pretty well.

rockemsockem•6h ago
I almost always validate what I get back from LLMs and it's usually right. Even when it isn't, it still usually gets me closer to my goal (e.g. some UX has changed and a setting I'm looking for in an app has moved, etc.).

IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.

Btw, if you think ChatGPT is only slightly better than what we had a decade ago, then I do not believe that you have used any chatbots at all, either 10 years ago or recently, because that's actually a completely insane take.

simonw•5h ago
> So many people are using the "slightly better chatbots" every single day.

To back that up, here's a rare update on stats from OpenAI: https://x.com/nickaturley/status/1952385556664520875

> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.

simonw•5h ago
"Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."

Oddly enough, I don't think that actually matters too much to the dedicated autodidact.

Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.

If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.

As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.

Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.

JimDabell•5h ago
> > "Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."

> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.

I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.

Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.

Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.

fc417fc802•5h ago
At a minimum, presumably once it arrives it will provide the consumer custom software solutions, which were clearly a huge sink of time and resources prior to the AI age.

You're looking at the prototype while complaining about an end product that isn't here yet.

osigurdson•4h ago
I don't have that negative of a take but agree to some extent. The internet, mobile, AI have all been useful but not in the same way as earlier advancements like electricity, cars, aircraft and even basic appliances. Outside of things that you can do on screens, most people live exactly the same way as they did in the 70s and 80s. For instance, it still takes 30-45 minutes to clean up after dinner - using the same kind of appliances that people used 50 years ago. The same goes for washing clothes, sorting socks and other boring things that even fairly rich people still do. Basically, the things people dreamed about in the 50s - more wealth, more leisure time, robots and flying cars really were the right dream.
bgwalter•8h ago
The mechanical loom produced a tangible good. That kind of automation was supposed to free people from menial work. Now they are trying to replace interesting work with human supervised slop, which is a stolen derivative work in the first place.

The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.

Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.

trod1234•7h ago
Apples to oranges.

Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.

Prices are ratios in the currency between factors and producers.

What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.

You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded, great destruction occurs.

Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.

harimau777•5h ago
I mean, for them it probably was.
no_wizard•5h ago
Luddites weren’t anti technology at all[0] in fact they were quite adept at using technology. It was a labor movement that fought for worker rights in the face of new technologies.

[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...

rockemsockem•8h ago
I'm continually amazed to find takes like this. Can you explain how you don't find clear utility, at the personal level, from LLMs?

I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.

agent_turtle•8h ago
There was a study recently showing that not only did devs overestimate the time saved using AI, but they were net negative compared to the control group.

Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.

keeda•8h ago
That study gets mentioned all the time, somehow this one and many of the others it cites don't get much airtime: https://www.stlouisfed.org/on-the-economy/2024/sep/rapid-ado...

>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)

Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.

To be clear, they are surmising that GenAI is already having a productivity gain.

agent_turtle•7h ago
The article you gave is derived from a poll, not a study.

As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o

keeda•5h ago
The quote is from the paper linked in the article: https://s3.amazonaws.com/real.stlouisfed.org/wp/2024/2024-02...

It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25%-33% productivity gains do match the gains shown by at least 3 other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.

foolswisdom•3h ago
It's worth noting that the METR paper that found decreased productivity also found that many of the developers thought the work was being sped up.
rockemsockem•6h ago
I'm not talking about time saving. AI seems to speed up my searching a bit since I can get results quicker without having to find the right query then find a site that actually answers my question, but that's minor, as nice as it is.

I use AI in my personal life to learn about things I never would have without it because it makes the cost of finding any basic knowledge basically 0. Diet improvement ideas based on several quick questions about gut functioning, etc, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.

How much have you actually tried using LLMs, and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code artisanally.

fzeroracer•5h ago
Why not just look up the information directly instead of asking a machine that you can never truly validate?

If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.

If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.

rockemsockem•5h ago
See my previous statement

> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.

You're practically saying that looking at an index in the back of a book is a meaningless step.

It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.

Edit:

Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.

fzeroracer•5h ago
Again, why would you just not use Wikipedia as your index? I'm saying why would you use the index that lies and hallucinates to you instead of another perfectly good index elsewhere.

You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?

rockemsockem•5h ago
Because the middleman is faster and practically never lies/hallucinates for simple queries, the middleman can handle vague queries that Google and Wikipedia cannot.

The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.

lisbbb•3h ago
A lot of formerly useful search tools, particularly Google, are just trash now, absolute trash.
flkiwi•5h ago
This is kind of how I use it:

1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable

It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.

lisbbb•3h ago
How much of that is junk knowledge, though? I mean, sure, I love looking up obscure information, particularly about cosmology and astronomy, but in reality, it's not making me better or smarter, it's just kind of "science junk food." It feels good, though. I feel smarter. I don't think I am, though, because the things I really need to work on about myself are getting pushed aside.
agent_turtle•35m ago
OpenAI is currently being valued in the hundreds of billions. That's an insane number for creating a product that "speeds up searching a bit".
ares623•5h ago
Are the "here and there" tasks ones that were previously of so little value that they were always stuck in the backlog? I.e., the parts where it helps have very little value in the first place.
decimalenough•5h ago
I actually do get clear utility, with major caveats, namely that I only ask things where the answer is both well known and verifiable.

I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.

harimau777•5h ago
I think that there's a strong argument to be made that the negatives of having to wade through AI slop outweigh the benefits that AI may provide. I also suspect that AI could contribute to enshittification of society; e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
rockemsockem•5h ago
What is this AI slop that you're wading through and where is it?

Spam emails are not any worse for being verbose, I don't recognize the sender, I send it straight to spam. The volume seems to be the same.

You don't want an AI therapist? Go get a normal therapist.

I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.

Like are these real things you are personally experiencing?

fc417fc802•5h ago
> e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.

That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.

Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.

There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)

kazinator•5h ago
Gemini wasted my time today assuring me that if I want a git bundle that only has the top N commits, yet is cleanly clone-able, I can just make a --depth N clone of the original repo, and then do a git bundle create ... --all.

Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.

So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)

Obviously, I wasn't in the "the right mindset" today.

This mindset is one of two things:

- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.

- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query which zeroes in on the right nugget of information that made an appearance in the training data.

It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically you do most of the work, while it looks as if it solved the problem.

I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.

The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.

Let's try something else:

Q: "What modes of C major are their own reflection?"

A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."

Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
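This is easy to check mechanically; a minimal sketch that rotates the major-scale step pattern and reverses it:

    # Which diatonic modes equal their own reflection (palindromic step pattern)?
    IONIAN = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps of the major scale
    MODES = ["Ionian", "Dorian", "Phrygian", "Lydian", "Mixolydian", "Aeolian", "Locrian"]

    patterns = {name: IONIAN[i:] + IONIAN[:i] for i, name in enumerate(MODES)}

    for name, steps in patterns.items():
        mirror = next(m for m, p in patterns.items() if p == list(reversed(steps)))
        print(f"{name:<10} -> reflection: {mirror}")
    # Only Dorian maps to itself; the mirror pairs are Ionian/Phrygian,
    # Lydian/Locrian, and Mixolydian/Aeolian.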

I once loaded a NotebookLM with materials about George Russell's concept of the Lydian Chromatic, and Tonal Gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.

AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.

lisbbb•3h ago
Thank you! This is what I've been trying to tell people about LLMs. They don't hold up. They're like those Western movie set towns that look normal from the front, but when you walk around behind them, you see it is all just scaffolding with false fronts.
ishyaboibro•3h ago
What model did you ask? Here's the exact reply I received from Claude Sonnet, which appears to be exactly the answer you were expecting:

"Among the seven modes of C major, only Dorian is its own reflection.

Understanding Mode Reflections

When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:

Ionian: W-W-H-W-W-W-H

Dorian: W-H-W-W-W-H-W

Phrygian: H-W-W-W-H-W-W

Lydian: W-W-W-H-W-W-H

Mixolydian: W-W-H-W-W-H-W

Aeolian: W-H-W-W-H-W-W

Locrian: H-W-W-H-W-W-W

The Palindromic Nature of Dorian

Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.

Mirror Pairs Among the Other Modes

The remaining modes form mirror pairs with each other:

Ionian-Phrygian: Mirror pair

Lydian-Locrian: Mirror pair

Mixolydian-Aeolian: Mirror pair

For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.

This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes."

kazinator•3h ago
> what model did you ask?

Are you hoping to disprove my point by cherry-picking the AI that gets the right answer?

I used Gemini 2.5 Flash.

Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?

Then before deciding to consult with AI, I can consult the list?

simianwords•2h ago
2.5 Flash is particularly cheap and fast; I think 2.5 Pro would have gotten all the answers correct. At least it gets this one correct.
kazinator•1h ago
Why doesn't Flash get it correct, yet comes up with plausible sounding nonsense? That means it is trained on some texts in the area.

What would make 2.5 Pro (or anything else) categorically better would be if it could say "I don't know".

There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.

simianwords•15m ago
Model accuracy goes up as you use heavier models. Accuracy is always preferable and the jump from Flash to Pro is considerable.

You must rely on your own internal model in your head to verify the answers it gives.

On hallucination: it is a problem but again, it reduces as you use heavier models.

iusewindows•1h ago
Today I read a stupid Hackernews comment about how AI is useless. Therefore Hackernews is stupid. Oh, I need a filtered list of which comments to read?

Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?

kazinator•1h ago
> Oh, I need a filtered list of which comments to read?

If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?

> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?

We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)

bluefirebrand•4h ago
> Can you explain how you don't find clear utility, at the personal level, from LLMs?

Sure. They don't meaningfully improve anything in my life personally.

They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either

rockemsockem•4h ago
Have you even tried using them though? Like in earnest? Or do you see yourself as a conscientious objector of sorts?
bluefirebrand•3h ago
I have tried using them frequently. I've tried many things for years now, and while I am impressed I'm not impressed enough to replace any substantial part of my workflow with them

At this point I am somewhat of a conscientious objector though

Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"

ruszki•1h ago
This whole topic reminds me of the argument for vi and fast typing. I was always baffled, because in the 25 years that I've been coding, typing was never such a huge block of my time that it would matter.

I have the same feeling with AI.

It clearly cannot produce the quality of code, architecture, and features which I require from myself. And I also want to understand what's written, not say "it works, it's fine <insert dog with coffee image here>", and not copy-paste a terrible StackOverflow answer which doesn't need half of its code in reality, and which clearly nobody who answered sat down and tried to understand.

Of course, not everybody wants these, and I've seen several people who were fine with not understanding what they were doing. Even before AI. Now they are happy AI users. But it's clear to me that it's not beneficial salary-, promotion-, and political-power-wise.

So what’s left is that it types faster… but that was never an issue.

It can get better, however. Just about a month ago there was the first case where one of them could answer a problem better than anything else I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.

knowitnone2•48m ago
So you never read the summary at the top of Google search results to get the answer? It provides the answer to most of my searches. "They don't improve my work experience": that's fair, but perhaps you haven't really given it a try? "They don't improve the quality of my online interactions": but how do you know? LLMs are being used to create websites, generate logos, images, memes, art videos, stories - you've already been entertained by them without even knowing it. "I don't think they improve the quality of the society I live in either": that's a feeling, not a fact.
bluefirebrand•40m ago
> so you never read the summary at the top of Google search results to get the answer because it provides the answer to most of my searches

Unfortunately yes I do, because it is placed in a way to immediately hijack my attention

Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me

> That's a feeling, not a fact

So? I'm allowed to navigate my life by how I feel

ryao•6m ago
If you find it annoying, why not configure a custom blocking rule in an adblocker to remove it?
tikhonj•4h ago
Can't speak for anyone else, but for me, AI/LLMs have been firmly in the "nice but forgettable" camp. Like, sometimes it's marginally more convenient to use an LLM than to do a proper web search or to figure out how to write some code—but that's a small time saving at best; it's less of a net impact than Stack Overflow was.

I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning, which, ultimately, is an O(1) cost for an O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...

So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.

On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.

And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.

And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.

knowitnone2•54m ago
No matter how good or fast you are, you will never beat the LLM. What you're saying is akin to "your math is faster than a calculator", and I'm willing to bet it's not. LLMs are not perfect and will require intervention and fixing, but if they can get you 90% of the way there, that's pretty good. In the coming years, you'll find your peers performing much faster than you (assuming you program for a living) and you will have no choice. But you do you.
tikhonj•19m ago
Fun story: when I interned at Jane Street, they gave out worksheets full of put-call parity calculations to do in your head because, when you're trading, being able to do that sort of calculation at a glance is far faster and more fluid than using a calculator or computer.

So for some professionals, mental math really is faster.

Make of that what you will.

WD-42•17m ago
LLMs do not work the same way calculators do, not even close.
roncesvalles•3h ago
As a dev, I find that the personal utility of LLMs is still very limited.

Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.

Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.

You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.

Other elephants in the room:

- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?

- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.

The emperor has no clothes.

knowitnone2•46m ago
Do cars enable something that was impossible before? Bikes? Shoes? Clothing? Your answer would be no.
roncesvalles•34m ago
If your implication is that LLM-assisted coding to non-LLM-assisted coding is like motorcar to horse buggy, that is just not the case.
ryao•4m ago
> Are LLMs enabling something that was impossible before?

I would say yes. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results. An LLM with function calling can do this.

shusaku•2h ago
The concerning thing is that AI contrarianism is becoming left-wing coded. Imagine you're fighting a war and one side decides "guns are overhyped, let's stick with swords." While there is a lot of hype about AI, even the pessimistic take has to admit it's game-changing tech. If it isn't doing anything useful for you, that's because you need to get off your butt and start building tools on top of it.

People on the left especially need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the sand saying "bubble!", that's no problem. But if this is here to stay, then you need to get involved.

mopsi•46m ago
Last night, I asked an LLM to produce an /etc/fstab entry for connecting to a network share with specific options. I was too lazy to look up the options in the manual. It gave me the options separated by semicolons, which is invalid because the config file requires commas as separators.
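For the record, /etc/fstab columns are whitespace-separated and the options column is comma-separated; a valid CIFS entry, with made-up server, share, and option values, looks roughly like this:

  //fileserver/media  /mnt/media  cifs  credentials=/etc/cifs-creds,uid=1000,gid=1000,vers=3.0,_netdev  0  0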

I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.

peab•8h ago
The goal of the major AI labs is to create AGI. The net utility of AGI is at least on the level of electricity or the steam engine. It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.
jcgrillo•7h ago
what? crashing the economy for a psychotic sci-fi delusion "makes sense"? how?
rockemsockem•6h ago
How exactly is AI crashing the economy....? Do you walk around with these beliefs every day?
jcgrillo•4h ago
when bubbles burst crashes follow. this is a colossal bubble. i do walk around with that belief every day, because every day that passes is yet another day when this overblown AI hype bullshit fails to deliver the goods.
Fomite•53m ago
'It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.'

The first clause of that sentence negates the second.

The investment only makes sense if the investment < the probability of success * the payoff of that goal.

If I don't think the major AI labs will succeed, then it's not justified.
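To put made-up numbers on it: if AGI has a 10% chance of being achieved and would be worth $10 trillion, then up to $1 trillion of investment is justified in expectation; at 1% odds, only about $100 billion is.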

skybrian•5h ago
Computing is fairly general-purpose, so I suspect that the data centers at least will be used for something. Reusing so many GPUs might be harder, but not as bad as ASICs. There are a lot of other calculations they could do.
blibble•5h ago
a data centre is a big warehouse

the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations

when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand

BobaFloutist•5h ago
Maybe they can use it all to mine crypto.
fc417fc802•4h ago
Most scientific HPC workloads are designed to utilize GPU-equipped nodes. If AI completely flops, scientific modeling will see huge benefits. It's a win-win (except for the investors, I guess).
skybrian•1h ago
I don't think we're too worried about wasting sand, though? What are the major costs of producing a GPU? Which of those are we worried about wasting?

I'm not going to do the homework for a Hacker News comment, but here are a few guesses:

I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)

Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.

But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.

kazinator•5h ago
> Reusing so many GPU's might be harder

It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.

eru•4h ago
> There is obvious utility to railroads, especially in a world with no cars.

> The net utility of AI is far more debatable.

As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?

In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.

tharmas•7h ago
Isn't the US economy far more varied than it was in the 19th century? More dense? And therefore wouldn't it be more difficult for one industry to dominate the US economy today than it was in the 19th century?
tripletao•6h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.

Has anyone found the source for that 20%? Here's a paper I found:

> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.

https://economics.wm.edu/wp/cwm_wp153.pdf

The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.

I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:

> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.

https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf

That would be more believable, but the comparison with AI spending in a single year would not be meaningful.

jefftk•5h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.

When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)

eru•4h ago
Yes, that was a problem back then, and is also a problem today, but in different ways.

First, GDP still doesn't count you making your own meals. Second, when e.g. free Wikipedia replaces paid-for encyclopedias, this makes society better off but technically decreases GDP.

However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.

jefftk•3h ago
This has always been an issue with GDP, but it's a much larger issue the farther back you go.

GDP correlates reasonably well, but imagine, very roughly, what it would look like if measured GDP growth averaged 3% annually while the overall economy grew at 2%. The correlation would still be good, yet if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
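Very roughly: (1.03/1.02)^200 ≈ 7, so a measured share of 80% today would imply about 80% / 7 ≈ 11% two centuries ago (illustrative numbers only, but it shows how fast the coverage gap compounds).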

epicureanideal•1h ago
It would be great if there was a "GDP + non-transactional economy" metric. Does one exist, or is there a relatively straightforward way to construct one?
onlyrealcuzzo•5h ago
I don't know if the economy could ever be accurately reduced to "good" or "bad".

What's good for one class is often bad for another.

Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?

For some people that's great. For others, not so great.

Maybe some economies are great for everyone, but this is definitely not one of those.

This economy is great for some people and bad for others.

fc417fc802•5h ago
> Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?

In today's US? Debatable, but on the whole probably not.

In a hypothetical country with sane health care and social safety net policies? Yes, that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs, making it a much more straightforward collective optimization problem.

eru•4h ago
The US spends more per capita on their social safety net than almost all other countries, including France and the UK.

The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.

For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.

(The data is for 2022.)

Though I realise you asked for sane policies. I can't comment on that.

I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.

lossolo•3h ago
It’s not just how much you spend on healthcare, but what that spending actually delivers. How much does an emergency room visit cost in the U.S. compared to the UK or France? How do prescription drug prices in the U.S. compare to those in the EU? When you look at what Americans pay relative to outcomes, the U.S. has one of the most inefficient healthcare systems among OECD countries.
eru•3h ago
If you want to see an efficient healthcare system in a rich country, have a look at Singapore. They spend far less than eg the UK.
bugglebeetle•3h ago
> The US spends more per capita on their social safety net than almost all other countries, including France and the UK.

To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!

Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…

dash2•33m ago
It's easier to see your own society's faults. The NHS also has waste, most obviously the deadweight loss caused by queuing. I know someone who went back to her own country to get treated. Not remarkable, except that country was Ukraine.
esseph•59m ago
You're looking at costs, not outcomes. Our outcomes aren't in alignment with our costs.

(Too many people getting their metaphorical pound of flesh, and bad incentives.)

fc417fc802•39m ago
This. My intent was to refer to outcomes. My hypothetical country was one where being unemployed might lose you various luxuries but would still see you with guaranteed food on the table and a roof over your head. Under such conditions there's no need to consider a rise in the unemployment metric to be a major downside except for the inevitable ballooning cost to the tax base.
marcus_holmes•4h ago
Agree completely. The idea that an increasing GDP or stock market is always good has taken a beating recently. Mostly because it seems that the beneficiaries of that number increase are the same few who already have more than enough, and everyone else continues to decline.

We need new metrics.

eru•4h ago
What's a class?
esseph•1h ago
Another interesting claim I have come across is that AI investment is now larger than consumer spending:

https://sherwood.news/markets/the-ai-spending-boom-is-eating...

troyastorino•52m ago
I put a comment on this below, but the claim is highly misleading: consumer spending is ~$5 trillion, AI investment is ~$100 billion. The graph is looking at something like contribution to GDP growth (not contribution to GDP), but even that is misleading because if you don't adjust for seasonality, H1 consumer spending is almost always lower than H2 consumer spending of the previous year (Q4 always has a higher level of consumer spending).

(comment below: https://news.ycombinator.com/item?id=44804528 )

zzleeper•36m ago
To clarify, AI investment has contributed more to GDP GROWTH than consumer spending.

So they are talking about changes not levels.

asdev•9h ago
look at the S&P 500 chart when ChatGPT came out. We were just on our way to flushing out the Covid excess money and then the AI narrative saved the market. AI narrative + inflation that is definitely way more than reported is propping up this market.
rglover•9h ago
> There could be a crash that exceeds the dot com bust, at a time when the political situation through which such a crash would be navigated would be nightmarish.

If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.

If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.

The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).

Hikikomori•9h ago
AI bubble, economy in the trash already, inflation from tariffs. The dollar might get real cheap when big holders start selling stocks and exchanging it; nobody wants to be left holding the bag, and they have a lot of dollars.

Which is their (Thiel, Project 2025, etc.) plan: federal land will be sold for cheap.

decimalenough•9h ago
Selling stocks for what? If the dollar is going down the toilet, the last thing you want to have is piles of rapidly evaporating cash.
marcusestes•8h ago
Never totally discount _deflationary_ scenarios.
heathrow83829•5h ago
based on my understanding of what all the financial pros are saying: they'll never let that happen. they'll inflate away to the moon before they allow for a deflationary bust. that's why everyone's in equities in the first place. it's almost insured, at this point.
margalabargala•9h ago
> Likely one that makes the first look like a sideshow.

The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.

agent_turtle•8h ago
Some of the variables that made the Great Depression what it was included very high tariff rates and lack of quality federal oversight.

Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.

Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.

margalabargala•7h ago
Sure, you can make the case that things could get pretty ugly. You could even make the case that things could get about as bad as the Great Depression.

But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.

ronald_raygun•1h ago
Throw some nukes and a war over Taiwan into the mix?
BLKNSLVR•1h ago
How much worse could it be if the President was likely to fire the individual holding the position responsible for announcing "it's official, this is a recession"? And so on in that head-in-the-sand direction for as long as their loyalists are willing and able to defend the President's proclamations of fake news?

How long will the foot stay on the accelerator after (almost literally) everyone else knows we might be in a bit of strife here?

If the US can put off the depression for the next three years, then it has a much better chance of working its way out gracefully.

Gabriel_Martin•4h ago
MMT is just a description of the monetary reality we're in. If everything changed, the new reality would be MMT.
BLKNSLVR•1h ago
I'm currently reading The Mandibles[0], which is feeling increasingly inevitably prophetic.

[0]: https://en.wikipedia.org/wiki/The_Mandibles

Animats•9h ago
"Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined."

If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.

Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.

rockemsockem•8h ago
Tbf I think most would say that the VR/AR boom is still ongoing, just with less glitz.

Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application

keeda•8h ago
I posted a comment yesterday regarding this with links to a couple relevant studies: https://news.ycombinator.com/item?id=44793392 -- briefly:

* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.

* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.

The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.

The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.

With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.

Animats•7h ago
> Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.

Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.

keeda•5h ago
Good question. I don't believe they break out their workloads into training versus inference; in fact, they don't even break out any numbers in any useful detail. But anecdotally, the public clouds did seem to be most GPU-constrained whenever Sam Altman was making the rounds asking for trillions in infra for training.

However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.

That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.

brotchie•5h ago
Look at the induced demand due to Claude Code. I mean, they wildly underestimated average token usage by users. There's high willingness to pay. There's literally not enough inference infra available.

I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.

My adoption rate of AI has been rapid, not for toy tasks, but for meaningful, complex work. I easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.

That's where AI is different from the dot-com bubble (not enough folks materially transacting on the web at the time), or the crypto mania (speculation and not utility).

Could I use a smarter model today? Yes, I would love that and use the hell out of it. Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.

sothatsit•4h ago
Claude Code was the tipping point for me from "that's neat" to "wow, that's really useful". Suddenly, paying $200/month for an AI service made sense. Before that, I didn't want to pay $20/month for access to Claude, as I already had my $20/month subscription to ChatGPT.

I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.

lisbbb•2h ago
Everything I have worked on as a fullstack developer for multiple large companies over the past 25 years tells me that AI isn't just going to replace a bunch of workers. The complexity of those places is crazy and it takes teamwork to keep them running. Just look at what happens internally over a long holiday weekend at most big companies: they are often just barely meeting their uptime guarantees.

I was recently at a big, three-letter pharmacy company and I can't be specific, but just let me say this: they're always on the edge of having the main websites go down for this or that reason. It's a constant battle.

How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?

You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.

If AI can't start doing things with accuracy and cleverness, then it's not useful.

cheevly•1h ago
You have it so backwards. The complexity of those places is exactly why AI will replace it.
827a•1h ago
So, to give a tactile example that helped me recently: We have a frontend web application that was having some issues with a specific feature. This feature makes a complex chain of maybe a dozen API requests when a resource is created, conditionally based on certain things, and there's a similar process that happens when editing this resource. But there was a difference in behavior between the creating and editing routes, when a user expected that the behavior would be the same.

This is crusty, horrible, old, complex code. Nothing is in one place. The entire editing experience was copy-pasted from the create resource experience (not even reusable components; literally copy-pasted). As the principal on the team, with the best understanding of anyone about it, even my understanding was basically just "yeah I think these ten or so things should happen in both cases because that's how the last guy explained it to me and it vibes with how I've seen it behave when I use it".

I asked Cursor (Opus Max) something along the lines of: Compare and contrast the differences in how the application behaves when creating this resource versus updating it. Focus on the API calls it's making. It responded in short order with a great summary, and without really being specifically prompted to generate this insight, it ended the message by saying: It looks like editing this resource doesn't make the API call to send a notification to affected users, even though the text on the page suggests that it should and it does when creating the resource.

I suspect I could have just said "fix it" and it could have handled it. But, as with anything, as you say: It's more complicated than that. Because while we imply we want the app to do this, it's a human's job (not the AI's) to read into what's happening here: The user was confused because they expected the app to do this, but do they actually want the app to do this? Or were they just confused because text on the page (which was probably just copy-pasted from the create resource flow) implied that it would?

So instead I say: Summarize this finding into a couple sentences I can send to the affected customer to get his take on it. Well, that's bread and butter for even AIs three years ago right there, so off it goes. The current behavior is correct; we just need to update the language to manage expectations better. AI could also do that, but it's faster for me to just click the hyperlink in Claude's output, jump right to the file, and make the update.

Opus Max is expensive. According to Cursor's dashboard, this back-and-forth cost ~$1.50. But let's say it would have taken me just an hour to arrive at the same insight it did (in a fifth of the time): that's easily over $100. That's a net win for the business, and it's a net win for me, because I now understand the code better than I did before and I was able to focus my time on the components of the problem that humans are good at.

827a•1h ago
I honestly disagree (mostly). Sure, we might see some adjustments to valuations to better account for the expected profit margins; those might have been overblown. But if you had access to any dashboard inside these companies ([1]) all you'd see is numbers going up and to the right. Every day is a mad struggle to find capacity to serve people who want what they're selling.

The average response to that is "it's just fake demand from other businesses also trying to make AI work". Then why are the same trends all but certainly happening at Cursor, for Claude Code, Midjourney, entities that generally serve customers outside of the fake-money bubble? Talk to anyone under the age of 21 and ask them when they used Chat last. McDonald's wants to deploy Gemini in 43,000 US locations to help "enhance" employees (and you know they won't stop there) [2]. Students use it to cheat at school, while their professors use it to grade their generated papers. Developers on /r/ClaudeAI are funding triple $200/mo Claude Max subscriptions and swapping between them because the limits aren't high enough.

You can not like the world that this technology is hurtling us toward, but you need to separate that from the recognition that this is real, everyone wants this, today it's the worst it'll ever be, and people still really want it. This isn't like the metaverse.

[1] https://openrouter.ai/rankings

[2] https://nypost.com/2025/03/06/lifestyle/mcdonalds-to-employ-...

kogasa240p•9h ago
Surprised the SVB collapse wasn't mentioned, the LLM boom gained a huge amount of steam right after that happened.
johng•8h ago
There has to be give and take to this as well. The AI increase is going to cost jobs. I see it in my workflow and our company. We used to pay artists to do artwork and editors to post content. Now we use AI to generate the artwork and AI to write the content. It's verified by a human, but it's still done by AI and saves a ton of time and money.

These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx

I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes.... in tech mostly but also in anything that can be replaced by AI?

I'm worried. Does this mean there will be a boom in needing people for tradeskills and stuff? I honestly don't know what to think about the prospects moving forward.

gamblor956•8h ago
This is backwards.

The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.

The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.

agent_turtle•7h ago
There’s no evidence of AI causing layoffs. There’s lots of evidence of CEOs using AI as a scapegoat to entice investors: https://apnews.com/article/ai-layoffs-tech-industry-jobs-ece...
add-sub-mul-div•7h ago
Discounting the evidence of it being explicitly cited as a reason for layoffs, and the fact that its purpose to business is to replace human labor, there's no evidence that it's replacing human labor. Got it.
rockemsockem•5h ago
Citing AI for layoffs is great cover for "we over hired during Covid".

There probably are a few nuts out there that actually fired people to be replaced with AI, I feel like that won't go well for them

There really is no evidence.

no_wizard•5h ago
There's a strong sentiment bubbling up that AI-driven layoffs are going to happen or are already happening[0].

I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months and then we'll know for sure. Until then, I see no reason not to believe there is a culture forming in the boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to be replaced by AI.

[0]: https://gizmodo.com/the-end-of-work-as-we-know-it-2000635294

lispisok•4h ago
I don't think the comment is saying AI was able to replace the work people were doing; rather, people are getting fired and their salaries are being redirected into funding AI development.
BigglesB•8h ago
I also wonder the extent to which "pseudo-black-box AI" is potentially driving some of these crazy valuations now, due to it actually being used in a lot of algorithmic trading itself... It seems like a prevalence of over-corrected models, all expecting "line go up" from recent historical data, would be the perfect way to cook up a really "big beautiful bubble", so to speak...
xg15•8h ago
So what will happen to all those massive data centers when the bubble bursts? Back to crypto?
GianFabien•7h ago
After the dot bomb of 2000, the market got flooded with Cisco and Sun gear for pennies on the dollar. Lots of post-2000 startups got their gear from those auctions and were able to massively extend their runway. The same could happen again.
dboreham•1h ago
Aeron chairs too.
GianFabien•7h ago
There are only two reasons to buy stocks:

  1) for future cashflows (aka dividends) derived from net profits.

  2) to on-sell to somebody willing to pay even more.
When option (2) is no longer feasible, the bubble pops and (1) resets the prices to some multiple of dividends. Economics 101.
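For what it's worth, the textbook version of (1) is the Gordon growth formula: price ≈ next year's dividend / (required return − dividend growth). With made-up numbers, a $5 dividend discounted at 8% with 3% expected growth prices the stock at $5 / (0.08 − 0.03) = $100.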
Nition•5h ago
Just like land :)
heathrow83829•5h ago
yes, but there will always be a #2 with QE being normalized now.
HocusLocus•5h ago
If AI gets us into orbit ( https://news.ycombinator.com/item?id=44800051#44804687 ) or revitalizes nuclear, I'm fine with those things. It's true that AI usage can scale with availability better than most things but that's not a path to world domination.
BriggyDwiggs42•7m ago
Is the linked post referring to literal orbit? Hell will freeze over before orbital datacenters make sense (assuming no antigrav etc gets invented tomorrow).
m0llusk•4h ago
Interesting piece, but the idea that this guy understands how oligarchs think seems way off. Jack Welch took General Electric from a global leader to a sad bag holder and he and his fans cheered progress with every positive quarterly report.
throw0101d•4h ago
From a few weeks ago, see "Honey, AI Capex is Eating the Economy" / "AI capex is so big that it's affecting economic statistics" (365 points by throw0101c, 18 days ago, 355 comments):

* https://paulkedrosky.com/honey-ai-capex-ate-the-economy/

* https://news.ycombinator.com/item?id=44609130

diogenescynic•4h ago
They've been saying the same thing about whatever the trend of the moment is for years. Before this it was Magnificent 7 and before that it was FANG, and before that it was something else. Isn't this just sort of fundamental to how the economy works?
morpheos137•3h ago
Imagine, the world's biggest economy propped up by hopes and dreams. Has anyone successfully monetized "AI" at a scale that generates a reasonable return on investment?
jus3sixty•3h ago
The article's comparison to the 19th century railroad boom is pretty spot on for how big it all feels, but maybe not so much for what actually happened.

Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.

That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.

johncole•3h ago
While some of the investment in railroads (and canals before it, and shipping before that) was going into physical assets of economic value, there were widespread instances of speculation, land "rights" fraud, and straight up fraud without any economic value added.
poopiokaka•3h ago
You lost me at 404 wanna be event. 13 viewers looking ass
akomtu•3h ago
More like the corpos are really excited about the post-human AI future, so they are pouring truckloads of money into it, and this raises the GDP number. The well-being of the average folk is in decline.
BriggyDwiggs42•6m ago
I think the corpos are excited for a new source of meaningful revenue growth.
tharmas•2h ago
Here's a comprehensive explanation of the coming employment apocalypse as a result of AI:

Post-Labor Economics Lecture 01 - "Better, Faster, Cheaper, Safer" (2025 update) https://www.youtube.com/watch?v=UzJ_HZ9qw14

neuroelectron•1h ago
As designed
IAmGraydon•1h ago
Seems like the title should be “AI hype is propping up US company valuations.”
nowittyusername•1h ago
Correction... Nvidia is propping up the economy. It's like 24% of the tech sector and is the only source of GPUs for most companies. This is really, really bad. Talk about all your eggs in one basket. If that company were to take a shit, the domino effect would cripple the whole sector and have unimaginable ramifications for the US economy.
knowitnone2•1h ago
Yes and no. AI may be propping up some tech companies that make products in the AI space, but the number of jobs lost due to AI is pretty overwhelming. I spent the last week using LLMs to build an app using libraries I am unfamiliar with. It was amazing, to the point where I won't have to write a line of code anymore. These developers probably have some money saved up, but that'll dry up. Plus the new grads are now competing against people with some experience. Tariffs + inflation + a presidential grifter = bad times.
jaimex2•55m ago
It's all fun and games till the LLM can't fix the issue it created. It can regurgitate and mash up all the blog tutorials that exist online, but eventually it'll come across a new problem and it'll be sweet out of luck.

Vibe coding is great for shanty-town software, and the aftermath of the storms is equally entertaining to watch.

jaredcwhite•21m ago
I'm genuinely scared of what the crash will do to society. As much as I loathe AI boosterism, I'm starting to think my personal desire for schadenfreude may not outweigh the fear that I and everyone else I care about will get swept under the tsunami of the bubble burst.
rapsey•12m ago
If you are worried about a stock market crash, the current AI boom is absolutely nothing compared to the dotcom boom.
antman•6m ago
Although we know that there is no empirical evidence for trickle-down economics, even in the worst case some of the profit had to go to the cost of labor, and through great economic expansion, and despite rising inequality and some reskilling, it was somewhat positive for everybody.

This will not be the case anymore. There is no labor restructuring to be made; the lists of future "safe" jobs are humorous, to say the least. Companies have had difficulty finding skilled labor at wages they consider sustainable, and that has been highlighted as a key blocker for growth; the economy will rise because AI removes this blocker. A rise of the economy due to AI invalidates the old models and the spurious trickle-down correlations. A rise of the economy through AI directly enables the most extreme inequality, and no reflexes or economic experience exist to manage it.

There have been many theories of revolutions: social, financial, ideological, and others. I will not comment on those, but I will make a practical observation: it boils down to the ratio of controllers to controlled. AI also enables an extremely small number of controllers, first through AI management of the flow of information and later through a large number of drones keeping everyone at bay. Cheaply, so good for the economy.