Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these positions are the opposite of the left's central views of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
[0]: https://www.bloomberg.com/news/articles/2024-07-03/senator-w...
That doesn't really change what I said regarding interest rates though.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
The seeds were planted after Nixon resigned, when it was decided to re-shape the media landscape and move the Overton window rightwards in the 1970s, dismantling social democracy across the West and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
If you see it that way this is just a reversion to the mean.
You could say that was when things reverted back to "normal". The FDR social reconstruction and post-WW2 economic boom were the exception, an anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some sizable problems (Sweden in particular), but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.
About what? Like seriously, what would they even do other than try and lame-duck him?
The big issue is Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
Nvidia the poster-child of this "bubble" has been getting effectively cheaper every day.
You're implying that the country exercising financial responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the 70s, and ended when Volcker set rates to 20%.
This is why in a hot economy we raise rates, and in a cold economy we lower them
(oversimplification, but it is a commonly provided explanation)
Not necessarily. Sure, if that money is chasing fixed assets like housing, but if that money was invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.
People are paid salaries to work at these production facilities, which means they have more money to spend, and the competition drives people to be willing to spend more to get the outputs. Not all outputs will scale; those that don't will experience inflation, like food and housing today.
Another way to look at this: Low interest rates can induce demand and drive inflation. But they also lower the cost of financing supply-side production, so they can ramp up supply to meet the increased demand.
1. Not all goods and services are like this, obviously. Real estate is the big one that low interest rates will continue to inflate. We need legislative-side solutions to this, ideally focused at the state and local levels.
2. None of this applies if you have an economy culturally resistant to consumerism, like Japan. Everything flips on its head and things get weird. But that's not the US.
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment, which helps to create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
As someone at an AI company right now: almost every company we work with is using Azure-wrapped OpenAI. We're not sure why, but that is the case.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.
https://wccftech.com/ai-capex-might-equal-2-percent-of-us-gd...
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
Do note that peak spending on railroads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of headroom to run before it reaches parity with the railroad boom of that bygone era.
The net utility of AI is far more debatable.
I'm sure if you asked the luddites the utility of mechanized textile production you'd get a negative response as well.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
I also use them to help me write code, which it does pretty well.
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw, if you only think ChatGPT is slightly better than what we had a decade ago, then I do not believe that you have used any chatbots at all, either 10 years ago or recently, because that's actually a completely insane take.
To back that up, here's a rare update on stats from OpenAI: https://x.com/nickaturley/status/1952385556664520875
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
You're looking at the prototype while complaining about an end product that isn't here yet.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded; great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
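A rough back-of-the-envelope reading of how those two figures could relate (my own arithmetic and assumption, not the paper's stated derivation): if each genAI-assisted hour is ~33% more productive, a ~1.1% aggregate gain implies genAI is assisting only a few percent of all hours worked.

    # Back-of-the-envelope only; assumes the aggregate figure is roughly
    # (per-hour gain) x (share of hours worked with genAI).
    per_hour_gain = 0.33
    aggregate_gain = 0.011
    implied_share_of_hours = aggregate_gain / per_hour_gain
    print(f"Implied share of genAI-assisted hours: {implied_share_of_hours:.1%}")  # ~3.3%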
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25% to 33% productivity gains do match the gains shown by at least 3 other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
I use AI in my personal life to learn about things I never would have without it because it makes the cost of finding any basic knowledge basically 0. Diet improvement ideas based on several quick questions about gut functioning, etc, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
How much have you actually tried using LLMs, and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code artisanally.
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
Spam emails are not any worse for being verbose; if I don't recognize the sender, I send them straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query which zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically, you do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong in every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
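If anyone wants to check those reflection claims themselves, here's a quick sketch (assuming the usual whole/half-step interval patterns for the modes of C major):

    # Reflect each mode by reversing its interval pattern and look up which
    # mode the reversed pattern belongs to. Only Dorian maps to itself.
    MODES = {
        "Ionian":     "WWHWWWH",
        "Dorian":     "WHWWWHW",
        "Phrygian":   "HWWWHWW",
        "Lydian":     "WWWHWWH",
        "Mixolydian": "WWHWWHW",
        "Aeolian":    "WHWWHWW",
        "Locrian":    "HWWHWWW",
    }
    by_pattern = {pattern: name for name, pattern in MODES.items()}
    for name, pattern in MODES.items():
        mirror = by_pattern[pattern[::-1]]
        label = "its own reflection" if mirror == name else f"reflects to {mirror}"
        print(f"{name:10s} {label}")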
I once loaded a NotebookLM with materials about George Russell's concept of the Lydian Chromatic, and Tonal Gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections
When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian
Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes
The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
What would make 2.5 Pro (or anything else) categorically better would be if it could say "I don't know".
There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.
You must rely on your own internal model in your head to verify the answers it gives.
On hallucination: it is a problem but again, it reduces as you use heavier models.
Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?
If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?
> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?
We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
I have the same feeling with AI.
It clearly cannot produce the quality of code, architecture, features which I require from myself. And I also want to understand what’s written, and not saying “it works, it’s fine <inserting dog with coffee image here>”, and not copy-pasting a terrible StackOverflow answer which doesn’t need half of the code in reality, and clearly nobody who answered sat down and tried to understand it.
Of course, not everybody wants these, and I've seen several people who were fine with not understanding what they were doing. Even before AI. Now they are happy AI users. But it's clear to me that it's not beneficial salary-, promotion-, and political-power-wise.
So what’s left is that it types faster… but that was never an issue.
It can be better, however. Just about a month ago I hit the first case where one of them answered a problem better than anything I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.
Unfortunately yes I do, because it is placed in a way to immediately hijack my attention
Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me
> That's a feeling, not a fact
So? I'm allowed to navigate my life by how I feel
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
So for some professionals, mental math really is faster.
Make of that what you will.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
I would say yes. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results. An LLM with function calling can do this.
Especially people on the left need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the dirt saying "bubble!" that's no problem. But if this is here to stay then you need to get involved.
I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.
The first clause of that sentence negates the second.
The investment only makes sense if the expected payoff exceeds it: the probability of success times the payoff of that goal has to be greater than the investment.
If I don't think the major AI labs will succeed, then it's not justified.
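In expected-value terms, a sketch with made-up numbers, just to illustrate the shape of the argument:

    # Illustrative only: invented probability, payoff, and investment figures.
    p_success  = 0.2     # subjective probability the labs hit the goal
    payoff     = 10e12   # assumed payoff if they do ($10T)
    investment = 1e12    # assumed cumulative investment ($1T)
    print(p_success * payoff > investment)  # True: $2T expected payoff > $1T invested
    # If you put p_success near zero, no plausible payoff justifies the spend.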
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
I'm not going to do the homework for a Hacker News comment, but here are a few guesses:
I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)
Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.
But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
https://economics.wm.edu/wp/cwm_wp153.pdf
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
First, GDP still doesn't count you making your own meals. Second, when e.g. free Wikipedia replaces paid-for encyclopedias, this makes society better off, but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
While GDP correlates reasonably well, imagine, very roughly, that measured GDP grew 3% annually while the overall economy grew at 2%. The correlation would still be good, but if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
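A quick check of that illustration (the 80% share counted today is an assumed figure):

    # If measured GDP grows 3%/yr while the whole economy grows 2%/yr,
    # the measured share shrinks as you go back in time.
    share_today = 0.80
    years = 200
    ratio = (1.03 / 1.02) ** years   # ~7x faster cumulative growth of the measured part
    print(f"Implied share counted {years} years ago: {share_today / ratio:.0%}")  # ~11%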
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those.
This economy is great for some people and bad for others.
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs making it a much more straightforward collective optimization problem.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
(Too many people getting their metaphorical pound of flesh, and bad incentives.)
We need new metrics.
https://sherwood.news/markets/the-ai-spending-boom-is-eating...
(comment below: https://news.ycombinator.com/item?id=44804528 )
So they are talking about changes not levels.
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).
Which is their (Thiel, Project 2025, etc.) plan: federal land will be sold for cheap.
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
How long will the foot stay on the accelerator after (almost literally) everyone else knows we might be in a bit of strife here?
If the US can put off the depression for the next three years then it has a much better chance of working its way out gracefully.
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot-com bubble (not enough folks materially transacting on the web at the time), or the crypto mania (speculation and not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it. Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
I was recently at a big, three-letter pharmacy company and I can't be specific, but just let me say this: They're always on the edge of having their main websites go down for this or that reason. It's a constant battle.
How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?
You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.
If AI can't start doing things with accuracy and cleverness, then it's not useful.
This is crusty, horrible, old, complex code. Nothing is in one place. The entire editing experience was copy-pasted from the create resource experience (not even reusable components; literally copy-pasted). As the principal on the team, with the best understanding of anyone about it, even my understanding was basically just "yeah I think these ten or so things should happen in both cases because that's how the last guy explained it to me and it vibes with how I've seen it behave when I use it".
I asked Cursor (Opus Max) something along the lines of: Compare and contrast the differences in how the application behaves when creating this resource versus updating it. Focus on the API calls it's making. It responded in short order with a great summary, and without really being specifically prompted to generate this insight it ended the message by saying: It looks like editing this resource doesn't make the API call to send a notification to affected users, even though the text on the page suggests that it should and it does when creating the resource.
I suspect I could have just said "fix it" and it could have handled it. But, as with anything, as you say: It's more complicated than that. Because while we imply we want the app to do this, it's a human's job (not the AI's) to read into what's happening here: The user was confused because they expected the app to do this, but do they actually want the app to do this? Or were they just confused because text on the page (which was probably just copy-pasted from the create resource flow) implied that it would?
So instead I say: Summarize this finding into a couple sentences I can send to the affected customer to get his take on it. Well, that's bread and butter for even AIs three years ago right there, so off it goes. The current behavior is correct; we just need to update the language to manage expectations better. AI could also do that, but it's faster for me to just click the hyperlink in Claude's output, jump right to the file, and make the update myself.
Opus Max is expensive. According to Cursor's dashboard, this back-and-forth cost ~$1.50. But let's say it would have taken me just an hour to arrive at the same insight it did (in a fifth the time): that's easily over $100. That's a net win for the business, and it's a net win for me, because I now understand the code better than I did before, and I was able to focus my time on the components of the problem that humans are good at.
The average response to that is "it's just fake demand from other businesses also trying to make AI work". Then why are the same trends all but certainly happening at Cursor, for Claude Code, Midjourney, entities that generally serve customers outside of the fake-money bubble? Talk to anyone under the age of 21 and ask them when they used Chat last. McDonald's wants to deploy Gemini in 43,000 US locations to help "enhance" employees (and you know they won't stop there) [2]. Students use it to cheat at school, while their professors use it to grade their generated papers. Developers on /r/ClaudeAI are funding triple $200/mo Claude Max subscriptions and swapping between them because the limits aren't high enough.
You can not like the world that this technology is hurtling us toward, but you need to separate that from the recognition that this is real, everyone wants this, today its the worst it'll ever be, and people still really want it. This isn't like the metaverse.
[1] https://openrouter.ai/rankings
[2] https://nypost.com/2025/03/06/lifestyle/mcdonalds-to-employ-...
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes.... in tech mostly but also in anything that can be replaced by AI?
I'm worried. Does this mean there will be a boom in needing people for tradeskills and stuff? I honestly don't know what to think about the prospects moving forward.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
There probably are a few nuts out there that actually fired people to be replaced with AI; I feel like that won't go well for them.
There really is no evidence.
I'll say its okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months, then we'll know for sure. Until then, I see no reason not to believe there is a culture in the boardrooms forming around AI that is driving closed door conversations about reducing headcount specifically to be replaced by AI.
[0]: https://gizmodo.com/the-end-of-work-as-we-know-it-2000635294
1) for future cashflows (aka dividends) derived from net profits.
2) to on-sell to somebody willing to pay even more.
When option (2) is no longer feasible, the bubble pops and (1) resets the prices to some multiple of dividends. Economics 101.

Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
Post-Labor Economics Lecture 01 - "Better, Faster, Cheaper, Safer" (2025 update) https://www.youtube.com/watch?v=UzJ_HZ9qw14
Vibe coding is great for Shanty town software and the aftermath from storms is equally entertaining to watch.
This will not be the case anymore. There is no labor restructuring to be made; the lists of supposedly future-safe jobs are humorous, to say the least. Companies have had difficulty finding skilled labor at wages they consider sustainable, and that has been highlighted as a key blocker for growth, so the economy will rise because AI removes this blocker. A rise of the economy due to AI invalidates the old models and trickle-down spurious correlations. A rise of the economy through AI directly enables the most extreme inequality, and no reflexes or economics experience exist to manage it.
There have been many theories of revolutions: social, financial, ideological and others. I will not comment on those, but I will make a practical observation: it boils down to the ratio of controllers vs. controlled. AI enables an extremely small number of controllers, first through AI management of the flow of information and later through a large number of drones keeping everyone at bay. Cheaply, so good for the economy.
gruez•9h ago
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
gruez•7h ago
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
https://paulgraham.com/disagree.html
agent_turtle•7h ago
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.
gruez•6h ago
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed would be allocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although to be fair the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short-lived.
dang•6h ago
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
troyastorino•8h ago
Using non-seasonally adjusted St. Louis FRED data (https://fred.stlouisfed.org/series/NA000349Q), and the AI CapEx spending for Meta, Alphabet, Microsoft, and Amazon from the WSJ article (https://www.wsj.com/tech/ai/silicon-valley-ai-infrastructure...):
-------------------------------------------------
Q4 2024 consumer spending: ~$5.2 trillion
Q4 2024 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q1 2025 consumer spending: ~$5 trillion
Q1 2025 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q2 2025 consumer spending: ~$5.2 trillion
Q2 2025 AI CapEx spending: ~$100 billion
-------------------------------------------------
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which likely outstrips total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196) it's very unclear what it's showing... it only shows percentage change, and it doesn't even show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
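To put the levels (rather than the changes) in perspective, using the rounded Q2 2025 figures above:

    # Levels vs. changes, with the rounded figures quoted above.
    consumer_spending_q2 = 5.2e12   # ~$5.2T consumer spending, non-seasonally adjusted
    ai_capex_q2          = 100e9    # ~$100B AI CapEx (Meta, Alphabet, Microsoft, Amazon)
    print(f"AI CapEx as a share of consumer spending: {ai_capex_q2 / consumer_spending_q2:.1%}")
    # ~1.9%: small in level terms, even if its growth outpaced consumer-spending growth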
raincole•4h ago
Is the keyword here. US consumers have been spending so much that of course that sector doesn't have much room to grow.