Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
Recently, I've heard many left-wingers, in response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
[0]: https://www.bloomberg.com/news/articles/2024-07-03/senator-w...
That doesn't really change what I said regarding interest rates though.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
This makes me feel dread. I just don't see him dragging moderates in the middle of the country to the polls, or getting people in the leftist part of the Democratic Party to not "but but but" their way out of voting against fascism again.
Oh well.
The seeds were planted after Nixon resigned and it was decided to re-shape the media landscape and move the Overton window rightwards in the 1970s, dismantling social democracy across the west and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
If you see it that way this is just a reversion to the mean.
You could say that was when things reverted back to "normal". The FDR social reconstruction and post-WW2 economic boom were the exception, an anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some big problems (Sweden in particular) but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.
About what? Like seriously, what would they even do other than try and lame-duck him?
The big issue is that Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
Add in the fact that everyone has been moving to red states for the last decade, and the seat re-balancing of the House is really going to favor Republicans.
Nvidia, the poster child of this "bubble", has been getting effectively cheaper every day.
You're implying the country exercising financial responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the 70s, which ended only when Volcker set rates to 20%.
This is why in a hot economy we raise rates, and in a cold economy we lower them
(oversimplification, but it is a commonly provided explanation)
Not necessarily. Sure, it is if that money is chasing fixed assets like housing, but if that money was invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.
People are paid salaries to work at these production facilities, which means they have more money to spend, and the competition drives people to be willing to spend more to get the outputs. Not all outputs will be scaled; those that aren't experience inflation, like food and housing today.
Another way to look at this: Low interest rates can induce demand and drive inflation. But they also lower the cost of financing supply-side production, so they can ramp up supply to meet the increased demand.
1. Not all goods and services are like this, obviously. Real estate is the big one that low interest rates will continue to inflate. We need legislative-side solutions to this, ideally focused at the state and local levels.
2. None of this applies if you have an economy culturally resistant to consumerism, like Japan. Everything flips on its head and things get weird. But that's not the US.
This means stocks will return less in a low-rate environment unless there is a lot of additional growth.
> Low interest rates make borrowing cheap, so companies flood money into real estate and stocks
also https://en.wikipedia.org/wiki/List_of_recessions_in_the_Unit...
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment, which helps to create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
People like my parents, who are both 65, could just park their money at a local bank and have an FDIC-insured savings instrument that roughly tracks inflation and helps invest in the local economy. They don't have to worry about cokeheads in lower Manhattan making bets that endanger their retirements like they have numerous times.
If they do that with lower interest rates, they're more likely to lose money instead of preserving it or slightly increasing it. Which, of course, gives the cokeheads more money to gamble with.
In the same way that UBI would disproportionately benefit poor people, but considered with its downstream effects could benefit rich people too.
As someone in an AI company right now - almost every company we work with is using Azure-wrapped OpenAI. We're not sure why, but that is the case.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.
These companies are left to choose between self-hosting models, or a vendor like MS who will rent them "their own AI running in their own Azure subscription", cut off from the outside world.
https://wccftech.com/ai-capex-might-equal-2-percent-of-us-gd...
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
Do note that peak spending on railroads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of room to run before it reaches parity with the railroad boom of that bygone era.
The net utility of AI is far more debatable.
I'm sure if you asked the luddites the utility of mechanized textile production you'd get a negative response as well.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
I also use them to help me write code, which it does pretty well.
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw if you only think ChatGPT is slightly better than what we had a decade ago, then I do not believe that you have used any chat bots at all, either 10 years ago or recently, because that's actually a completely insane take.
To back that up, here's a rare update on stats from OpenAI: https://x.com/nickaturley/status/1952385556664520875
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
You're looking at the prototype while complaining about an end product that isn't here yet.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
The Luddites weren't at a point where every industry sees individual capital formation and demand for labor trending towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historical parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded; great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
Getting people to think of the Luddites as anti-technology zealots rather than a pro-labor organization is one of the most successful pieces of propaganda in history.
Interestingly, the fact that the Luddites also called for unemployment compensation and retraining for workers displaced by the new machinery probably makes them among the most forward-thinking and progressive people of the 1800s.
Source? Skimming the Wikipedia article, it definitely sounds like most were former skilled textile workers who were upset that they were being replaced by unskilled workers operating the new machines.
> They had nothing against mechanized looms, they had everything against the business owners using their workers talents and knowledge to build an entire operation only to later undercut their wages and/or replace them with lesser paid unskilled workers and reduce the quality of life of their entire community.
Sounds a lot like the anti-AI sentiment today, eg. "I'm not against AI, I'm just against it being used by evil corporations so they don't have to hire human workers". The "AI slop" argument also resembles the Luddites objecting to the new machines on the grounds of "quality" (also from Wikipedia), although to be fair that was only a passing mention.
This sort of “other people were wrong once, so you might be too” comment is really pointless.
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
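A quick back-of-envelope check (my own arithmetic, not the paper's) shows those two figures are only consistent if genAI is used for a small fraction of total work hours:

    # My own back-of-envelope inference, not the paper's calculation:
    # a 33% boost during AI-assisted hours and a 1.1% aggregate gain imply
    # genAI is used for only a few percent of all work hours.
    per_hour_gain = 0.33
    aggregate_gain = 0.011
    print(f"{aggregate_gain / per_hour_gain:.1%}")  # ~3.3% of hours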
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25%-33% productivity gains do match the gains shown by at least 3 other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
However, what doesn't get discussed enough about the METR study is that there was a spike in overall idle time as they waited for the AI to finish. I haven't run the numbers so I don't know how much of the increased completion time it accounts for, but if your cognitive load drops almost to 0, it will of course feel like your work is sped up, even though calendar time has increased. I wonder if that is the more important finding of that paper.
I use AI in my personal life to learn about things I never would have without it because it makes the cost of finding any basic knowledge basically 0. Diet improvement ideas based on several quick questions about gut functioning, etc, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
How much have you actually tried using LLMs, and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code artisanally.
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
How do you KNOW it doesn't lie/hallucinate? In order to know that, you have to verify what it says. And in order to verify what it says, you need to check other outside sources, like Wikipedia. So what I'm saying is: Why bother wasting time with the middle man? 'Vague queries' can be distilled into simple keyword searches: If I want to know what a 'Tsunami' is I can simply just plug that keyword into a Wikipedia search and skim through the page or ctrl-f for the information I want instantly.
If you assume that it doesn't lie/hallucinate because it was right on previous requests then you fall into the exact trap that blows your foot off eventually, because sometimes it can and will hallucinate over even benign things.
For most questions it is so much faster to validate a correct answer than to figure out the answer to begin with. Vague queries CANNOT be distilled to simple keyword searches when you don't know where to start without significant time investment. Ctrl-f relies on you and the article having the exact same preferred vocabulary for the exact same concepts.
I do not assume that LLMs don't lie or hallucinate, I start with the assumption that they will be wrong. Which for the record is the same assumption I take with both websites and human beings.
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
That is pretty significant in my book.
How do you quantify such things? How can you say with a straight face that this magic box gives you more relevant information (which may be wrong!) and that will revolutionize the workforce?
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
Spam emails are not any worse for being verbose, I don't recognize the sender, I send it straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to in effect create a query which zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically, you do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong in every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
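That claim is easy to check mechanically. Here is a short sketch of my own (not from the thread) that reverses each mode's whole/half-step pattern and looks up which mode the reflection matches:

    # My own verification sketch: reflect each mode of C major by reversing
    # its interval pattern and find which mode the result corresponds to.
    MODES = {
        "Ionian":     "WWHWWWH",
        "Dorian":     "WHWWWHW",
        "Phrygian":   "HWWWHWW",
        "Lydian":     "WWWHWWH",
        "Mixolydian": "WWHWWHW",
        "Aeolian":    "WHWWHWW",
        "Locrian":    "HWWHWWW",
    }

    for name, pattern in MODES.items():
        mirror = next(m for m, p in MODES.items() if p == pattern[::-1])
        print(f"{name:10s} -> {mirror}")

    # Dorian maps to itself; Ionian<->Phrygian, Lydian<->Locrian, and
    # Mixolydian<->Aeolian come out as mirror pairs.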
I once loaded a NotebookLM with materials about George Russell's concept of the Lydian Chromatic, and Tonal Gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
What would make 2.5 Pro (or anything else) categorically better would be if it could say "I don't know".
There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.
You must rely on your own internal model in your head to verify the answers it gives.
On hallucination: it is a problem but again, it reduces as you use heavier models.
This is what significantly reduces the utility: if it can only be trusted to answer things I know the answer to, why would I ask it anything?
I have written about it here: https://news.ycombinator.com/item?id=44712300
Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?
If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?
> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?
We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)
Lately, I have been using Grok 4 and I have had very good results from it.
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
I have the same feeling with AI.
It clearly cannot produce the quality of code, architecture, features which I require from myself. And I also want to understand what’s written, and not saying “it works, it’s fine <inserting dog with coffee image here>”, and not copy-pasting a terrible StackOverflow answer which doesn’t need half of the code in reality, and clearly nobody who answered sat down and tried to understand it.
Of course, not everybody wants these, and I've seen several people who were fine with not understanding what they were doing. Even before AI. Now they are happy AI users. But it is clear to me that it's not beneficial salary-, promotion-, and political-power-wise.
So what’s left is that it types faster… but that was never an issue.
It can be better, however. Just about a month ago I hit the first case where one of them gave a better answer to a problem than anything I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.
Unfortunately yes I do, because it is placed in a way to immediately hijack my attention
Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me
> That's a feeling, not a fact
So? I'm allowed to navigate my life by how I feel
No.
Because I cannot trust it. (Especially when it gives no attributions).
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
So for some professionals, mental math really is faster.
Make of that what you will.
The math that isn't mathing is even more basic tho. This is a Concorde situation all over again. Yes, supersonic passenger jets would be amazing. And they did reach production. But the economics were not there.
Yeah, using GPU farms delivers some conveniences that are real. But after 1.6 trillion dollars it's not clear at all that they are a net gain.
Mathematicians are not calculators. Programmers are not typists.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
That said, I recently saw a colleague use a LLM to make a non-trivial UI for electron in HTML/CSS/JS, despite knowing nothing about any of those technologies, in less time than it would have taken me to do it. We had been in the process of devising a set of requirements, he fed his version of them into the LLM, did some back and forth with the LLM, showed me the result, got feedback, fed my feedback back into the LLM and got a good solution. I had suggested that he make a mockup (a drawing in kolourpaint for example) for further discussion, but he had surprised me by using a LLM to make a functional prototype in place of the mockup. It was a huge time saver.
Consider something like Shopify - someone with zero knowledge of programming can wow you with an incredible ecommerce site built through Shopify. It's probably like a 1000x efficiency improvement versus building one from scratch (or even using the popular lowcode tools of the era like Magento and Drupal). But it won't help you build Amazon.com, or even Nike.com. It won't even get you part of the way there.
And LLMs, while more general/expressive than Shopify, are inferior to Shopify at doing what Shopify does i.e. you're still better off using Shopify instead of trying to vibe-code an e-commerce website. I would say the same line of thinking extends to general software engineering.
Shopify is tangential to this. I will add that having had experience with similar platforms in the past (for building websites, not e-commerce), I can say that you must be either naive or a masochist to use them. They tend to be mediocre compared to what you can get from self hosted solutions and the vendor lock-in always will be used to bite those foolish enough to use them in the end.
I would say yes when the LLM is combined with function calling to allow it to do web searches and read web pages. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results (not just reading the list entries, but reading the actual HTML pages). I simply cannot read that fast. A LLM with function calling can do this.
The other day, I asked it to check the Linux kernel sources to tell me which TCP connection states for a closing connection would not return an error to send() with MSG_NOSIGNAL. It not only gave me the answer, but made citations that I could use to verify the answer. This happened in less than 2 minutes. Very few developers could find the answer that fast, unless they happen to already know it. I doubt very many know it offhand.
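For anyone unfamiliar with the call in question, here is a minimal sketch of my own (Linux-only, since MSG_NOSIGNAL is Linux-specific, and it does not answer the kernel-state question itself) showing what send() with MSG_NOSIGNAL looks like from Python:

    import errno
    import socket

    # MSG_NOSIGNAL suppresses SIGPIPE, so writing to a connection the peer
    # has already torn down surfaces as an EPIPE error instead of a signal.
    def send_nosignal(sock: socket.socket, data: bytes) -> bool:
        try:
            sock.send(data, socket.MSG_NOSIGNAL)
            return True
        except OSError as e:
            if e.errno == errno.EPIPE:
                return False  # the connection is in a state where send() errors out
            raise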
Beyond that, I am better informed than I have ever been since I have been offloading previously manual research to LLMs to do for me, allowing me to ask questions that I previously would not ask due to the amount of time it took to do the background research. What previously would be a rabbit hole that took hours can be done in minutes with minimal mental effort on my part. Note that I am careful to ask for citations so I can verify what the LLM says. Most of the time, the citations vouch for what the LLM said, but there are some instances where the LLM will provide citations that do not.
Especially people on the left need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the sand saying "bubble!" that's no problem. But if this is here to stay then you need to get involved.
I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.
In fact much automation, code or otherwise, benefits from or even requires explicit, concise rules.
It is far quicker for me to already know, and write, an SQL statement, than it is to explain what I need to an LLM.
It is also quite difficult to get LLMs into a lot of processes, and I think big enterprises are going to really struggle with this. I would absolutely love AI to manage some Windows servers that are in my care, but they are three VMs deep in a remote desktop stack that gets me into a DMZ/intranet. There's no interface, and how would an LLM help anyway? What I need is concise, discrete automations, not a chat bot interface to try and instruct every day.
To be clear I do try to use AI most days, I have Claude and I am a software developer so ideally it could be very helpful, but I have far less use for it than say people in the strategy or marketing departments for example. I do a lot of things, but not really all that much writing.
For me, LLMs are also the most useful thing ever, but I was a C student in all my classes. My programming is a joke. I have always been intellectually curious but I am quite lazy. I have always had tons of ideas to explore though, and LLMs let me explore these ideas that I either wouldn't be able to otherwise or would be too lazy to bother.
Are you saying that LLMs are most useful if you're not intellectually curious, and therefore most interested in immediate answers, but also that they're very useful if you're a really great programmer for an unstated reason?
I think we probably are in a bubble, but much like housing bubbles in major metro areas, the value is real and so the bubble is on top of that real value vs being 100% synthetic.
IMO it's also clearly wrong, because I think even if you believe most of AI is hype you must see the value that a lot of people are getting from it, like the housing market example I gave.
Appreciate you explaining your perspective!
The first clause of that sentence negates the second.
The investment only makes sense if the expectation of success * the payoff of that goal > the investment.
If I don't think the major AI labs will succeed, then it's not justified.
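As a toy version of that expected-value condition, with numbers invented purely for illustration:

    # Invented numbers, purely illustrative of the condition above.
    investment = 1.6e12        # rough scale of the AI capex discussed in the thread
    payoff_if_success = 10e12  # hypothetical payoff if the labs succeed
    p_success = 0.10           # hypothetical probability of success

    expected_payoff = p_success * payoff_if_success
    print(expected_payoff > investment)  # False for these particular guesses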
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
I'm not going to do the homework for a Hacker News comment, but here are a few guesses:
I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)
Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.
But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
You can still run a train on those old tracks. And it'll be competitive. Sure you could build all new tracks, but that's a lot more expensive and difficult. So they'll need to be a whole lot better to beat the established network.
But GPUs? And with how much tech has changed in the last decade or two and might in the next?
We saw cryptocurrency mining go from CPU to GPU to FPGA to ASICs in just a few years.
We can't yet tell where this fad is going. But there's fair reason to believe that, even if AI has tons of utility, the current economics of it might be problematic.
P/E is, after all, given in the implied unit of "years". (Same as other ratios like debt/GDP).
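A trivial illustration of the "years" reading, with invented numbers:

    # A P/E of 30 means roughly 30 years of current earnings to pay back
    # the share price, ignoring growth (numbers invented).
    price_per_share = 150.0
    earnings_per_share = 5.0
    print(price_per_share / earnings_per_share)  # 30.0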
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
https://economics.wm.edu/wp/cwm_wp153.pdf
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
In a majority agrarian economy where a lot of output doesn't go toward GDP (e.g. milking your own damn cow to feed milk to your own damn family won't show up) I would expect "new hotness" booms to look bigger than they actually are.
At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like the railroads.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
First, GDP still doesn't count you making your own meals. Second, when eg free Wikipedia replaces paid for encyclopedias, this makes society better off, but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
While GDP correlates reasonably well, imagine very roughly what it would be like if measured GDP growth averaged 3% annually while the overall economy grew at 2%. Correlation would still be good, but if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
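The compounding behind that guess works out roughly like this (my own illustrative arithmetic, matching the 80%/10% speculation above):

    # Measured GDP grows 3%/yr while the whole economy grows 2%/yr, so the
    # measured share of the economy compounds at roughly 1%/yr.
    share_today = 0.80   # assume 80% of activity is captured by GDP today
    years = 200
    growth_gap = 1.03 / 1.02
    share_then = share_today / growth_gap ** years
    print(f"{share_then:.0%}")  # ~11%, in line with the ~10% guess above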
If you wanted to, you could look at eg black market prices for kidneys to get an estimate for how much your kidney is worth. Or, less macabre, you can look at how much you'd have to pay a gardener to mow your lawn to see what the labour of your son is worth.
* Get a sample of 100 random people or so
* For each person, have them track their time
* For each thing you have them tracked doing, estimate how much it would cost to get someone else to do it
But it pretty quickly gets difficult around questions of entertainment. If I go dancing for fun, should you count how expensive it would be to hire a professional to dance in my place? If I do woodworking or knit for fun, but then I also give away the things I make to my friends as presents, should we count that at market value?
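A toy sketch of the imputation idea in that method, with all numbers invented:

    # Value each tracked hour of home production at a plausible market wage
    # for that task (all figures invented for illustration).
    hours_per_week = {"cooking": 7.0, "cleaning": 4.0, "childcare": 10.0, "lawn care": 1.0}
    market_wage   = {"cooking": 20.0, "cleaning": 18.0, "childcare": 16.0, "lawn care": 25.0}

    imputed_value = sum(h * market_wage[task] for task, h in hours_per_week.items())
    print(f"Imputed household production: ${imputed_value:.0f}/week")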
Not just a question of entertainment, but also personal hygiene.
(Not saying this is a good thing.)
Cache la poudre.
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those.
This economy is great for some people and bad for others.
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs making it a much more straightforward collective optimization problem.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
By the way, how is that the republican playbook? What does any of this have to do with the desire to remove King Charles as the head of government? https://en.wikipedia.org/wiki/Republicanism_in_the_United_Ki...
Neither the US, the UK, nor France is my own society.
I lived in the UK for a few years on and off. I agree that rationing by queuing is less efficient than rationing by money. Singapore does a much better job: they always have a co-payment (even if that's often just for symbolic/ideological reasons, and less so for rationing).
(Too many people getting their metaphorical pound of flesh, and bad incentives.)
I don't think you are disagreeing with them.
I think you're forgetting the Soviet Union, which looked great on paper until it turned out that it wasn't actually great...
Real GDP can go up, and it doesn't HAVE to mean you are producing more of anything valuable, and can - in fact - mean that you're not producing enough of what you need, and a bunch of what you don't need.
A very simple way to view this is: currently x% of GDP is waste. If Real GDP goes up 4% but the percentage of waste goes from 1% to 8% - you are clearly doing worse.
This is a reduction of what happened in the Soviet Union.
We need new metrics.
https://sherwood.news/markets/the-ai-spending-boom-is-eating...
(comment below: https://news.ycombinator.com/item?id=44804528 )
So they are talking about changes not levels.
For me, that’s enough of a thought experiment — as implausible as it might be to have AI in 1901 — to be skeptical that the difference is simply that the first tech step-change was a pre-war uplift to build the post-war US success story, and the latter builds on it.
Whereas with AI, who actually gets the investment? Nvidia? TSMC? Are the people employed ones who would have been employed anyway? Do they actually spend much more? Any Nvidia profits likely go straight back into the market, propping it up even higher.
How much has the efficiency from use of LLMs actually increased productivity?
More like apples to octopus.
People should keep in mind that there was no such thing as GDP before the 1980s.
All that has been back-calculated, and the further back you go the more ridiculous it gets.
Excuses sounded plausible at the time but killed two birds with one stone.
A less rapid increase in government benefits, which had become tied to GNP so recipients could cope with inflation, and a further obscuring of the ongoing poor economic performance of the 1980s onward compared to how things were, numerically, before 1970.
The people who were numerically smart before that and saw what things were like first hand were not fooled so easily.
Even using GDP back in the 1980s when it first came out, you couldn't get a good picture of the 1960s, which were not that much earlier.
Don't make me laugh trying for the 1860s :)
Often it comes down to arguing the “basket of goods” is wrong rather than the individual components, or perhaps that there are wider rates in specific areas.
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's superpower having $37T of debt and growing is totally normal if you'd just accept MMT).
Which is their (Thiel, project2025, etc) plan, federal land will be sold for cheap.
Free money to the poor would raise inflation but they'd never do that because the poor would have more money.
It's already happening, past 6 months USD has been losing value against EUR, CHF, GBP, even BRL and almost flat against the JPY which was losing a ton of value the past years.
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
How long will the foot stay on the accelerator after (almost literally) everyone else knows we might be in a bit of strife here?
If the US can put off the depression for the next three years then it has a much better chance of working its way out gracefully.
The Great Depression lasted a decade and caused a 30% reduction in US GDP. That's really really bad.
I think people are just using "worse than the Great Depression" as a rhetorical device to mean "it would be bad", without actually understanding what it would mean to be "worse than the Great Depression".
If my claim of this all leading to a greater depression is extraordinary (to the point of being easily dismissed), then someone will have to walk me through the math.
I think that, just like in the 1920's, we've gorged ourselves on debt, speculation, and hubristic thinking and the humbling is coming at us like a freight train. Instead of producing value, we produced inordinate amounts of bullshit and now the bill is coming due.
What I observe right now is that we are not, at this moment, in a depression despite federal debt being such a high percentage of GDP.
It is not at all clear to me that debt percent of GDP and badness of depression have a linear relationship.
The Great Depression lasted a decade and caused a 30% contraction of US GDP. You claim that this will be worse. Can you please walk me through the math?
Because the "experts" keep redefining what it means to be in a recession, depression, etc. Not to mention the fake job numbers/revisions and the delusion that the value of the stock market is anything near representative of the actual book value of the companies it trades.
So, to that end, you're right. If we continue to delude ourselves indefinitely while the wealth gap continues to expand (and nobody does anything about it), then we won't have a book-defined depression (i.e., no sudden GDP collapse or soaring unemployment per se).
Instead, we'll have a slow but sure move to a full blown oligarchy—implemented by hoovering up hard assets (like real estate) with cheap money—thanks to low interest rates—while the middle-to-lower class are struggling to survive (and media narratives often reinforce the idea that everything is fine, even when data on wages, asset ownership, and real inflation tell a different story). Frankly, I expect this to be far more likely than anything based on the people involved and recent evidence of this very thing taking place [1].
> It is not at all clear to me that debt percent of GDP and badness of depression have a linear relationship.
I didn't say it had a linear relationship, but it is a bellwether of a country that doesn't have its finances under control (and is rapidly approaching an inability to service its debt). Eventually, you run out of the capital (in this case, either monetary or geopolitical) to simultaneously service the debt—which we're on track to do by about ~2030—and stimulate growth.
Considering how embedded the USD is in global economics, if the above oligarchy scenario doesn't take place, the only thing that could happen is a global depression (because the thing propping up the whole system globally—dollar hegemony—has just collapsed/hyperinflated—imagine Venezuela, but globally).
Put simply: if debt outpaces revenue, interest spirals, and trust in the dollar fades, then whether we call it a "depression" (on paper) or not, the outcome is the same: widespread economic pain, asset consolidation, and long-term instability.
---
[1] https://www.cnbc.com/2023/02/21/how-wall-street-bought-singl...
I still don't see why it would be worse than the Great Depression.
The national debt being low during the Depression and high now doesn't seem relevant. The national debt was not a primary causative agent of the Great Depression, we're saying that it will be a primary causative agent of this one, so why would a Depression kicked off by national debt be worse than the one kicked off earlier by not-national-debt?
That said:
> > What I observe right now is that we are not, at this moment, in a depression despite federal debt being such a high percentage of GDP.
> Because the "experts" keep redefining what it means to be in a recession, depression, etc.
If you are claiming "actually we're totally in a depression right now but actually no because we're redefined our way out of it", now that is an extraordinary claim. "Depression" means something more than "bad economy vibes".
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
Don't get me wrong, VRChat and Beat Saber are neat, and all the money thrown at the space advanced the tech at a much faster rate than it would have organically done in the same time (or potentially ever). But you can see Horizon's attempt to be "VRChat but a larger more profitable business" to see how the things you would need to do to monetise it to that level will lose you the audience that you want to monetise.
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot com bubble (not enough folks were materially transacting on the web at the time), or the crypto mania (speculation and not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it. Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
See the dotcom bubble in the early 2000s for a perfect example. The Web is still useful, but the bubble bursting was painful.
I really don’t know where you got that impression.
I was recently at a big, three-letter pharmacy company and I can't be specific, but just let me say this: They're always on the edge of having the main websites going down for this or that reason. It's a constant battle.
How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?
You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.
If AI can't start doing things with accuracy and cleverness, then it's not useful.
This is crusty, horrible, old, complex code. Nothing is in one place. The entire editing experience was copy-pasted from the create resource experience (not even reusable components; literally copy-pasted). As the principal on the team, with the best understanding of anyone about it, even my understanding was basically just "yeah I think these ten or so things should happen in both cases because that's how the last guy explained it to me and it vibes with how I've seen it behave when I use it".
I asked Cursor (Opus Max) something along the lines of: Compare and contrast the differences in how the application behaves when creating this resource versus updating it. Focus on the API calls it's making. It responded in short order with a great summary, and without really being specifically prompted to generate this insight it ended the message by saying: It looks like editing this resource doesn't make the API call to send a notification to affected users, even though the text on the page suggests that it should and it does when creating the resource.
I suspect I could have just said "fix it" and it could have handled it. But, as with anything, as you say: It's more complicated than that. Because while we imply we want the app to do this, it's a human's job (not the AI's) to read into what's happening here: The user was confused because they expected the app to do this, but do they actually want the app to do this? Or were they just confused because text on the page (which was probably just copy-pasted from the create resource flow) implied that it would?
So instead I say: Summarize this finding into a couple sentences I can send to the affected customer to get his take on it. Well, that's bread and butter for even AIs three years ago right there, so off it goes. The current behavior is correct; we just need to update the language to manage expectations better. AI could also do that, but it's faster for me to just click the hyperlink in Claude's output, which jumps right to the file, and make the update.
Opus Max is expensive. According to Cursor's dashboard, this back-and-forth cost ~$1.50. But let's say it would have taken me just an hour to arrive at the same insight it did (in a fifth the time): that's easily over $100. That's a net win for the business, and it's a net win for me because I now understand the code better than I did before, and I was able to focus my time on the components of the problem that humans are good at.
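For anyone who wants to sanity-check that math, here is a minimal sketch. The $1.50 session cost is the Cursor dashboard figure from above; the $100/hour engineer cost and the one hour saved are my own rough assumptions, not billing data:

    # Back-of-the-envelope break-even for the investigation described above.
    ai_session_cost = 1.50         # approximate Opus Max spend reported by Cursor's dashboard
    engineer_hourly_cost = 100.00  # assumed loaded cost of an hour of engineer time
    hours_saved = 1.0              # assumed time the manual investigation would have taken

    net_saving = hours_saved * engineer_hourly_cost - ai_session_cost
    print(f"Net saving for this one investigation: ${net_saving:.2f}")
    # -> Net saving for this one investigation: $98.50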
Humans are not always accurate or clever. But we still consider them useful and employ them.
The average response to that is "it's just fake demand from other businesses also trying to make AI work". Then why are the same trends all but certainly happening at Cursor, with Claude Code, and at Midjourney, entities that generally serve customers outside of the fake money bubble? Talk to anyone under the age of 21 and ask them when they last used Chat. McDonald's wants to deploy Gemini in 43,000 US locations to help "enhance" employees (and you know they won't stop there) [2]. Students use it to cheat at school, while their professors use it to grade their generated papers. Developers on /r/ClaudeAI are paying for three $200/mo Claude Max subscriptions and swapping between them because the limits aren't high enough.
You can not like the world that this technology is hurtling us toward, but you need to separate that from the recognition that this is real, everyone wants this, today it's the worst it'll ever be, and people still really want it. This isn't like the metaverse.
[1] https://openrouter.ai/rankings
[2] https://nypost.com/2025/03/06/lifestyle/mcdonalds-to-employ-...
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes.... in tech mostly but also in anything that can be replaced by AI?
I'm worried. Does this mean there will be a boom in needing people for tradeskills and stuff? I honestly don't know what to think about the prospects moving forward.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
There probably are a few nuts out there that actually fired people to be replaced with AI; I feel like that won't go well for them.
There really is no evidence.
I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months, then we'll know for sure. Until then, I see no reason not to believe there is a culture forming in the boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to be replaced by AI.
[0]: https://gizmodo.com/the-end-of-work-as-we-know-it-2000635294
(I am an AI optimist, by the by. But that is not one of its success stories.)
1) for future cashflows (aka dividends) derived from net profits.
2) to on-sell to somebody willing to pay even more.
When option (2) is no longer feasible, the bubble pops and (1) resets the prices to some multiple of dividends. Economics 101.
But wouldn't you want to pay more for a company that has a history of revenue and income growth than one in a declining industry? And you have to look at assets on the company's books; you're not just buying a company, you're buying a share of what it owns. What if it has no income, but you think there's a 10% chance it'll be printing money in 5 years?
That's why prices won't naively reset to a multiple of ~~dividends~~ income (see the dividend irrelevance theory) across the board. Someone will always put a company's income in context.
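To make that concrete, here is a minimal sketch of the option (1) view (price as a multiple of expected payouts, Gordon-growth style) and of the probability-weighted "10% chance it prints money in 5 years" case described above; every number in it is an illustrative assumption, not something from this thread:

    # (1) Price anchored to expected future payouts.
    dividend_next_year = 5.00   # expected payout per share
    discount_rate = 0.08        # required return
    growth_rate = 0.03          # assumed long-run payout growth
    price = dividend_next_year / (discount_rate - growth_rate)
    print(price)                # 100.0 -> a 20x multiple of the payout

    # The "no income today, maybe huge income later" case is the same idea,
    # probability-weighted and discounted back 5 years.
    p_success = 0.10
    payout_if_success = 50.00   # hypothetical per-share payout once it works
    value_if_success = payout_if_success / (discount_rate - growth_rate)
    expected_price = p_success * value_if_success / (1 + discount_rate) ** 5
    print(round(expected_price, 2))  # ~68.06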
Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
Its limitations are well-documented, but cutting-edge AI right now is very much "real stuff."
The amount of speculation and fraud from this time period would make even the biggest shit coin fraud blush.
Try a biography of Jay Gould if you want more information.
Post-Labor Economics Lecture 01 - "Better, Faster, Cheaper, Safer" (2025 update) https://www.youtube.com/watch?v=UzJ_HZ9qw14
Vibe coding is great for shanty-town software, and the aftermath from storms is equally entertaining to watch.
The scary thing is that these tools are now part of the toolset of experienced developers as well. So those same issues can and will happen to apps and services that used to take security seriously.
It's depressing witnessing the average level of quality in the software industry go down. This is the same scenario that caused mass consumer dissatisfaction in 1983 and 2000, yet here we are again.
Also because so many companies are staying private, a crash in private markets is relatively irrelevant for the overall economy.
Simplifying an economic activity down to a single short formula leaves out a lot of important parameters. These kinds of rules tend to hold some truth at the time they are invented and often break soon after they are created, because money flows in the economy shift as a result of tax, regulation and technology changes.
Like the yield curve inversions and the Sahm rule and so on.
This will not be the case anymore. There is no labor restructuring to be made, and the lists of supposedly future-safe jobs are humorous, to say the least. Companies have had difficulty finding skilled labor at wages they consider sustainable, and that has been highlighted as a key blocker for growth. The economy will rise as AI removes this blocker. A rise in the economy due to AI invalidates the old models and the spurious trickle-down correlations. A rise in the economy through AI directly enables the most extreme inequality, and no reflexes or economic experience exist to manage it.
There have been many theories of revolutions: social, financial, ideological and others. I will not comment on those, but I will make a practical observation: it boils down to the ratio of controllers to controlled. AI enables an extremely small number of controllers through AI management of the flow of information, and later a large number of drones can keep everyone at bay. Cheaply, so good for the economy.
I usually avoid responding to remarks like this because they risk forays into politics, which I avoid, but the temptation to ask was too great here. What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
You're talking about this: https://ideas.repec.org/p/wrk/warwec/270.html
:)
Trickle down economics is supposed to make poorer people more wealthy. Not suppress their wage growth while offering a greater selection of affordable gadgets.
https://www.merriam-webster.com/dictionary/gadget
Among the many things that have become affordable for every day people because money had been present to fund the R&D are air conditioners, refrigerators, microwave ovens, dish washers, washing machines, clothes dryers, etcetera. When I was born in the 80s, my parents had only a refrigerator (and maybe a microwave oven). They could not afford more. Now they have all of these things.
I don’t expect either of us to be able to answer the questions posed. Nobody in the 80s was asking for any of these inventions. People were living their lives happily ignorant of a better future. For that reason, most of these things do amount to just gadgets. They have shaped our lives in a dramatic way and had huge commercial success by solving huge problems or increasing conveniences, but they are still nonessential. That’s the way I’m using the term; I don’t really care what Webster has to say about it, tbh, as I’m perhaps being dramatic precisely to highlight this point.
The continuation of R&D isn’t even a trickle down policy. If you’re a big manufacturer of CRT televisions, it’s in your interest to continue inventing better technology in that space just to remain competitive. If you’re really good at it, there’s a good chance you can steal market share. It’s good old fashioned business as usual in a competitive industry. I don’t see how they relate to one another. Not to mention that many things are invented in a garage somewhere and capital is infused later. Would this only happen if the rich uncles of the world benefited from economic policies aimed at making them rich? I think it would still find a way in most cases, good ideas typically always find a way. I don’t think a majority of gadgets can be linked to something like “brought to you by trickle down economics”.
Honestly, I have to say that I am relatively happy with the things that I have these days because of obscenely wealthy people’s investments. I have a heat pump air conditioner that would have been unthinkable when I was a child. I have food from Aldi and Lidl, whose prices relative to the competition also would have been unthinkable when I was a child. I have an electric car and solar panels, which were in the realm of fantasy when I was a child. Solar panels and electric cars existed, but solar panels were obscenely expensive and electric cars were considered a joke when I was young. I have a gigabit fiber internet connection at $64.99 per month, such internet connections were only available to the obscenely rich when I was a child. I am not sure if I would have any of these things if the money had not been there to fund them. I really do feel like things have trickled down to me.
I like electric cars and solar panels and gigabit fiber as much as the next person, but they aren’t wealth.
https://en.wikipedia.org/wiki/Aldi
If you shop there, you are enriching its owners. That is not a bad thing. The more money they have, the better they make things for people, so it is a win-win.
Note that Aldi is technically two companies since the family that founded it had some internal disagreement and split the company into two, but they are both privately owned.
That said, if wealthy people had not made investments, I would not have an electric car, solar panels or gigabit fiber. The solar panels also improve property values, so it very much is a form of wealth, although not a liquid one. Electric cars similarly are things that you can sell (although they are depreciating assets), so saying that they are not wealth is not quite correct. The internet connection is not wealth in a traditional sense, but it enables me to work remotely, so it more than pays for itself.
“Trickle down” isn’t just “rich people found companies” or “rich people buy gadgets when they’re still new and expensive.” It’s specifically about making ordinary people financially better off in a significant way by making rich people richer. It’s not about technology at all, and it’s not merely about rich people doing some things that benefit the rest of us. It’s a causal claim about rich people doing more things to benefit us, and it being a positive tradeoff, by making them richer.
> The trickle-down theory includes commonly debated policies associated with supply-side economics.
Products people buy with the money they earn. Not things that fall down from the tables of the ultra rich.
Their affordability comes from the economies of scale. If I can sell 100000 units of something as opposed to 100 units, the cost-per-unit goes down. Again, nothing to do with anything "trickling down".
Also, not all patents are monetizable.
Then they hope they can sell it at a profit.
Products becoming cheaper is a result of the processes getting more optimized ( on the production side and the supply side ) which is a function of the desire to increase the profit on a product.
Without any other player in the market this means the profit a company makes on that product increases over time.
With other players in that market that underprice your product it means that you have to reinvest parts of your profit into making the product cheaper ( or better ) for the consumer.
Not to increase scale, but to reduce the cost of the device while maintaining 99% of the previous version, IOW, enshittification of the product.
> how would the affordable versions exist today?
Not all "affordability" comes from the producer of the said stuff. Many things are made from commodity materials, and producers of these commodity materials want to increase their profits, hence trying to produce "cheaper" versions of them, not for the customers, but for themselves.
Affordability comes from this cost reduction, again enshittification. Only a few companies I see produce lower priced versions of their past items which also surpasses them in functionality and quality.
e.g. I have Sony WH-CH510 wireless headphones, which have way higher resolution than some wired headphones paired with decent-ish amps; this is because Sony is an audiovisual company and takes pride in what it does. On the other end of the spectrum are tons of other brands that don't sell for much cheaper but deliver way worse sound quality and feature sets, not because they can't do it as well as Sony, but because they want a small piece of the market and some easy money, basically.
https://cdn.britannica.com/93/172793-050-33278C86/Cell-phone...
As for your wireless headphones, if you compare them to early wireless headphones, you should find that prices have decreased, while quality has increased.
I can argue, from some aspects, yes. Given that you provide the infrastructure for these devices, they'll work exactly as they are designed today. On the other hand, a modern smartphone has a way shorter life span. OLED screens die, batteries swell, electronics degrade.
Ni-Cad batteries, while finicky and toxic, are much longer lasting than Li-ion and Li-Poly batteries. If we want to talk Li-Poly batteries, my old Sony power bank (advertising 1000 recharge cycles with a proprietary Sony battery tech) is keeping its promise, capacity and shape 11 years after its stamped manufacturing date.
Can you give me an example of another battery/power pack which is built today and can continue operating for 11 years without degrading?
As electronics shrink, the number of atoms per gate decreases, and this also reduces the life of the things. My 35 y/o amplifier works pretty well, even today, but modern processors visibly degrade. A processor degrading to a limit of losing performance and stability was unthinkable a decade ago.
> you will find that prices have decreased, while quality has increased.
This is not primarily driven by the desire to create better products. First, cheaper and worse ones come, and somebody decides to use the design headroom to improve things later on, and put a way higher price tag.
Today, in most cases, speakers' quality has not improved, but the signal processed by DSP makes them appear to sound better. This is cheaper, and OK for most people. IOW, enshittification, again. Psychoacoustics is what makes this possible, not better sounding drivers.
The last car I rented has a "sound focus mode" under its DSP settings. If you're the only one in the car, you can set it to focus to driver, and it "moves" the speakers around you. Otherwise, you select "everyone", and it "improves" sound stage. Digital (black) magic. In either case, that car does not sound better than my 25 year old car, made by the same manufacturer.
You want genuinely better sounding drivers, you'll pay top dollar in most cases.
I have LiFePo4 batteries from K2 Energy that will be 13 years old in a few months. They were designed as replacements for SLA batteries. Just the other day, I had put two of them into a UPS that needed a battery replacement. They had outlived the UPS units where I had them previously.
I have heard of Nickel Iron batteries around 100 years old that still work, although the only current modern manufacturers are in China. The last US manufacturer went out of business in 2023.
> You want genuinely better sounding drivers, you'll pay top dollar in most cases.
I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
Interesting, but they are being manufactured way less now, not more, as you can see. So quality doesn't drive the market. Money does.
> I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
Depends on the "improvement" you are looking for. If you are a casual listener hunting for an enjoyable pair while at a run or gym, you can argue that's an improvement.
But if you're looking for resolution increases, they're not there. I occasionally put one of my favorite albums on, get a tea, and listen to that album for the sake of listening to it. It's sadly not possible on all gear I have. You don't need to pay $1MM, but you need to select the parts correctly. You still need a good class AB or an exceptional class D amplifier to get good sound from a good pair of speakers.
This "apparent" improvement which is not there drives me nuts actually. Yes, we're better from some aspects (you can get hooked to feeds instead of drugs and get the same harm for free), but don't get distracted, the aim is to make numbers and line go up.
They were always really expensive, heavy and had low energy density (both by weight and by volume). Power density was lower than lead acid batteries. Furthermore, they would cause a hydrolysis reaction in their electrolyte, consuming water and producing a mix of oxygen and hydrogen gas, which could cause explosions if not properly vented. This required periodic addition of water to the electrolyte. They also had issues operating at lower temperatures.
They were only higher quality if you looked at longevity and nothing else. I had long thought about getting them for home energy storage, but I decided against them in favor of waiting for LiFePo4 based solutions to mature.
By the way, I did a bit more digging. It turns out that US production of NiFe batteries ended before 2023, as the company that was supposed to make them had outsourced production to China:
Sorry, I misread your comment. I thought you were talking about LiFePo4 production ending in 2023, not NiFe.
I know that NiFe batteries are not suitable (or possible to be precise) to be miniaturized. :)
I still wish market does research on longevity as much as charge speed and capacity, but it seems companies are happy to have batteries with shorter and shorter life spans to keep up with their version of the razor and blades model.
Also, this is why regulation is necessary in some areas.
> What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
That said, I see numerous things that exist solely because those with money funded R&D. Your capital markets theory for how the R&D was funded makes no sense because banks will not give loans for R&D. If any R&D funds came from capital markets, it was by using existing property as collateral. Funds for R&D typically come from profitable businesses and venture capitalists. Howard Hughes for example, obtained substantial funds for R&D from the Hughes Tool Company.
Just to name how the R&D for some things was funded:
- Microwave oven: Developed by Raytheon, using profits from work for the US military
- PC: Developed by IBM using profits from selling business equipment.
- Cellular phone: Developed by Motorola using profits from selling radio components.
- Air conditioner: Developed by Willis Carrier at Buffalo Forge Company using profits from the sale of blacksmith forges.
- Flat panel TV: Developed by Epson using profits from printers.
The capital markets are nowhere to be seen. I am at a startup where hardware is developed. Not a single cent that went into R&D or the business as a whole came from capital markets. My understanding is that the money came from an angel investor and income from early adopters. A hardware patent that had given people the idea for the business came from research in academia, and how that was funded is unknown to me, although I would not be surprised if it had been funded through an NSF grant. The business has been run on a shoestring budget and could grow much quicker with an injection of funding, yet the capital markets will not touch it.
https://www.sec.gov/resources-small-businesses/capital-raisi...
As for capital markets, I had misunderstood what the term meant when I replied, as your definition and the definition at wikipedia at a glance looked like it described the lending portion of fractional reserve banking and I never needed a term to discuss the individual "capital" markets collectively. Investopedia has a fairly good definition:
https://www.investopedia.com/terms/c/capitalmarkets.asp
I am going to assume that by capital markets, you really mean the stock market (as the others make even less sense for getting a new business off the ground to produce something new). Unfortunately, a business needs to be at a certain level of maturity before they can do an IPO on the stock market. VC exists for the time before an IPO can be done. Once they are at that size, the stock market can definitely inject funding and that funding could be used for R&D. However, share dilution to raise funds for R&D is not sustainable, so funding for R&D needs to eventually transition to revenue from sales. This would be why the various inventions I had listed had not been funded from capital markets. I imagine many other useful inventions had not been either.
That said, the stock market also is 90% owned by the wealthiest 10% of Americans, so the claim that "Capital markets exist by pooling in the savings even by poor and middle class households" is wrong:
https://seekingalpha.com/news/4464647-deeper-dive-the-wealth...
In any case, despite your insistence that money does not trickle down, your own example of capital markets shows money trickling down. The stock market in particular is not just 90% owned by the wealthiest 10% of Americans, but is minting new millionaires at a rapid pace, with plenty of rags-to-riches stories from employees at successful businesses following IPOs.
Let's assume you have a monopoly on something with a guarantee that no one else can sell the same product in your market. Then there is no direct incentive to make the product cheaper, even if you can produce it for cheaper. Adding more money on top of it that is supposed to trickle down in some way will not make that product cheaper, unless there is an incentive for that company to do so.
The real world is of course more complicated, let's say you have two companies that get the incentives and one of them is using it to make the product cheaper, then that will "trickle down" as a price decrease because the other company need to follow suit to stay competitive. But this again is driven by the market and not the incentives and would have happened without them just as well.
The first cellular phone in modern currency cost something like $15,000. At that price, the market for it would be orders of magnitude below the present cellular phone market size. Lower the price 1 to 2 orders of magnitude and we have the present cellular phone market, which is so much larger than what it would have been with the cellular phone at $15,000.
Interestingly, the cellular phone market also seems to be in a period where competition is driving prices upward through market segmentation. This is the opposite of what you described competition as doing. Your remark that the real world is more complicated could not be more true.
If you fix the specs and progress time, then prices go down considerably.
Take the first iPhone, which was $499 ($776.29 adjusted for inflation), and try to find a currently built phone with similar specs. I couldn't find any that go that low, but the cheapest one I could find was the ZTE Blade L9 (which still has higher specs overall); even then we are looking at over a 90% price reduction.
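A quick sanity check of that percentage, using the $499 and $776.29 figures above; the present-day budget-phone price is an assumed illustrative number, not the ZTE Blade L9's actual street price:

    launch_price = 499.00
    inflation_adjusted = 776.29   # launch price in today's dollars, per the comment above
    budget_phone_today = 70.00    # assumed price of a comparable budget phone now

    reduction = 1 - budget_phone_today / inflation_adjusted
    print(f"{reduction:.0%}")     # ~91%, consistent with "over 90%"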
Quite why we've persuaded ourselves we need to do this through a remote & deaf middleman is anyone's guess, when governments we elect could just direct money through policies we can all argue about and nudge in our own small ways.
I ask hyperbolically: are they economic enablers or financial traps?
(My hunch is that fridges are net-enablers, but TVs are net-traps. I say this as someone with a TV habit I would like to kick.)
Permeation of technology due to early adopters paying high costs leading to lower costs is not what trickle down generally means. Being an early adopter of cellphones, AC, flat screen TVs or computers required the wealth level of your average accountant of that era - it didn't require being a millionaire.
Except for the fact that it wasn't right around the corner???
Modern AI is not an intelligence. Wonder what crap they are calling AGI.
Real AGI would be alive and would be capable of art and music and suffering and community, of course. So there would really be no more need for humans except to shovel the coal (or the bodies of other humans) into the furnace that powers the Truly Valuable members of society, the superintelligent AIs, which all other aspects of our society will be structured towards serving.
Real AGI might realistically decide to go to war with us, if we've learned anything from current LLMs and their penchant for blackmail.
No, AGI isn't a good thing. We should expect it to go badly, because there are so many ways it could be catastrophic. Bad outcomes might even be the default without intervention. We have virtually no idea how to drive good outcomes of AGI.
AGI isn't being pursued because it will be good, it's being pursued because it is believed to be more-or-less inevitable, and everyone wants to be the one holding the reins for the best odds of survival and/or being crowned god-emperor (this is pretty obviously sam altman's angle for example)
I'd hope that they'd keep a few of us around, but it's hard to see the logic in them keeping all of us and allowing us the freedom to live and breed the way we do right now.
No. With the kind of wealth ASI can generate, keeping 10 billion humans alive with a very good standard of living is like a human owning a cat.
https://www.pewresearch.org/short-reads/2022/12/08/about-fou...
Thankfully education about physics, first principles, and critical thinking got me out from under it. Hopefully they can do the same for the rest--if we get them young enough.
Other than government, is there anybody else who can loosen the purse strings a little bit and have it not act as a temporary stimulant as long as it lasts?
Whether they wish it would last, or even provide any benefit to the average person, is another question; it seems like there are plenty who wouldn't wish more prosperity on anyone who doesn't already have it :\
The only real way for long-term growth would be to plant seeds rather than dispense mere artificial stimulants.
Unless AI makes the general public way more than capitalists have spent, it wouldn't be worth any increase in cost whatsoever for things like energy or hardware. Even non-AI software could become unaffordable if its labor costs go up enough to keep top people from being poached by AI companies flush with cash.
I bet even the real estate near the data centers gets more unaffordable, while at the same time clocking a win for the local economy due to the increased cash flow and tax revenue. Except all that additional cash is flowing out of people's pockets, not in :\
It feels like the AI economy is being propped up by the idea that we're right around the corner from a fabled C-suite promised land where "labor" is unnecessary, but we've already been there for like a year now if you don't need a machine to explain your own business model to you, and it's basically free.
As long as that remains true, I don't see how this bubble will be popped.
I don’t really have a strong preference, so I just use any service where I’m currently not rate limited. There are many of them and I don’t see much difference between them for day to day use. My company pays for Cursor but I burned through my monthly quota in a day working on a proof of concept that mirrored their SDK in a different language. Was it nice that I could develop a proof of concept? Yes. Would I pay 500 dollars for it from my own pocket? No, I don’t think so.
It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… of course everyone was using it. Then, they started to run out of VC money, they had to raise prices, then suddenly nobody used them anymore and they went bankrupt. There was demand, but only because of the suppressed prices fueled by VC money.
It doesn’t cover the free users, but that’s normal startup strategy. They are using investor cash to grow as quickly as possible. This involves providing as much free service as they can afford so that some of those free users convert to paid customers. At any point they can decide not to give so much free service away to rebalance in favour of profitability over growth. The cost of inference is plummeting, so it’s getting cheaper to service those free users all the time.
> It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… […] they started to run out of VC money, they had to raise prices
That’s not the same situation because that makes the product more expensive for the customers, which will hit sales. This isn’t the same as cutting back on a free tier, because in that situation you’re not doing anything to harm your customers’ service.
That's what everybody was saying in February 2000.
Fortunately, this time around, I have a partner who is gainfully employed and has valuable skills that could be used in other companies. I am somewhat successful at consulting, plus a small pension. So the chances of going completely bankrupt are small, but not zero.
But oh fuck, I am having nightmares.
I was college age at the time, but yeah, this is how I recall it feeling.
There's seemingly a universal malaise around that comes up even in casual conversations.
Seemingly everyone is looking for a job with no success, every third story is about some sort of layoffs, and now even the press is starting to discuss the housing bubble.
I've been waiting for the proverbial other shoe to drop for some time now. Clearly, something's gotta give and the cracks are getting too large to ignore now.
Capacity just means there is currently more demand than supply, and there might be a number of negative factors driving that: users with no ROI (free users), too-rapid growth, poor efficiency, etc.
I have spoken with many companies and nearly all of them, when speaking about AI, have gotten to the point where they don't even make any sense. A common theme is 'we need AI', but nobody can articulate 'why', and in fact they get defensive when questioned. That isn't to say those are not useful technologies, but the rapid rise, steep decline, then gradual rise is a theme in tech.
Others have observed and pointed out his prescience before:
Teachers are demanding not to do the work that is teaching. Lawyers are demanding not to do the work of lawyering. Engineers don't want to do coding and leaders don't want to steer the ship anymore unless it's towards AI.
Alllll the "value" is bullshit. Either AGI arrives and all jobs are over and it's eternal orgy time, or at some point the lazy ai-using losers will get fired and everyone else will go back to doing work like usual
Make teachers great again.
Eternal orgy time is not possible, will never happen. And if AI is useful, which it is, it will never be abandoned. Somewhere in the middle is the real prognosis
It may "balance" around the middle, and expect it to be noticably different than now, even without much of an actual middle.
Or the middle could end up being just another group, maybe or maybe not the most prominent one.
No, history is a web of lies written by the winners, just like your daily news.
This statement is unrelated to the funding in the space, which is not going to be misplaced; it's only a question of how much.
If anything, this might be a realer version of a dot com boom.
It’s basically similar to biking on rolling hills with each subsequent hill being a lot taller than the previous one. You come down the hill hard, but just before you crash to the ground, another hill comes up, and you let the momentum carry you through, and you also get a gush of wind in your back.
That’s what’s been happening since 2012 - it’s a great 13-year run now. I’ve stopped betting against it.
Neglects the most important benefit of large semiconductor spending: we are riding the Learning Curve up Moore's Law. We are not much better at building railroads today than we were in 1950. We are way better at building computers today. The GPUs may depreciate but the knowledge of how to build, connect, and use them does not - that knowledge compounds over time. Where else do you see decades of exponential efficiency improvements?
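For anyone unfamiliar with the learning-curve framing, here is a minimal sketch using Wright's law (unit cost falls by a fixed fraction each time cumulative production doubles); the 20% learning rate is an assumed illustrative figure, not a measured one for GPUs:

    import math

    def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
        # Cost multiplies by (1 - learning_rate) with every doubling of cumulative output.
        b = math.log2(1 / (1 - learning_rate))
        return first_unit_cost * cumulative_units ** (-b)

    # Ten doublings of cumulative output -> unit cost falls to roughly 11% of the original.
    print(wright_unit_cost(100.0, 2 ** 10))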
I can’t think of one reason anyone really wants this right now. I prefer to deal with a human in 99% of my interactions.
Anyone who is actually in the industry knows that this is the opposite of the truth. There are more layoffs than ever. Yea maybe it's being propped up for the richest of the rich taking advantage of leverage over speculation. Everyone else? It's just getting worse and worse. And that's what actually matters.
Would another order of magnitude decrease in the amount of compute for a model do it?
Would the cost of compute falling by an order of magnitude in a black swan event do it?
Or, perhaps Jevons paradox[1] kicks in, and we just eat up the extra capacity in new uses.
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
https://paulgraham.com/disagree.html
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed up would be allocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although to be fair the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short lived.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
Using non-seasonally adjusted St. Louis FRED data (https://fred.stlouisfed.org/series/NA000349Q), and the AI CapEx spending for Meta, Alphabet, Microsoft, and Amazon from the WSJ article (https://www.wsj.com/tech/ai/silicon-valley-ai-infrastructure...):
-------------------------------------------------
Q4 2024 consumer spending: ~$5.2 trillion
Q4 2024 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q1 2025 consumer spending: ~$5 trillion
Q1 2025 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q2 2025 consumer spending: ~$5.2 trillion
Q2 2025 AI CapEx spending: ~$100 billion
-------------------------------------------------
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which might outstrip total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)
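For reference, a sketch of that quarter-over-quarter comparison using only the rounded figures already quoted above (FRED consumer spending, WSJ AI CapEx for Meta/Alphabet/Microsoft/Amazon), in billions of dollars:

    consumer_spending = {"Q4 2024": 5_200, "Q1 2025": 5_000, "Q2 2025": 5_200}  # non-seasonally adjusted
    ai_capex          = {"Q4 2024": 75,    "Q1 2025": 75,    "Q2 2025": 100}

    consumer_growth = consumer_spending["Q2 2025"] - consumer_spending["Q4 2024"]  # 0 -> flat
    ai_capex_growth = ai_capex["Q2 2025"] - ai_capex["Q4 2024"]                    # +25

    # On these non-seasonally-adjusted numbers, any sector whose spend rose at all
    # "contributed more to GDP growth" than consumer spending did.
    print(consumer_growth, ai_capex_growth)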
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196), it's very unclear what it's showing... it only shows percentage change, and it doesn't even show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
Is the keyword here. US consumers have been spending so much, so of course that sector doesn't have that much room to grow.