People will go to alternative models, but it likely will be as popular as Linux.
People who are cut out to be software developers can afford the means of production.
To be sitting that far out on a limb of software development while sawing at the branches of others is quite an interesting choice.
AI is only going to work if enough people can actually meaningfully use it.
Therefore, the monetisation model will have to adapt in ways that make it sustainable. OpenAI is experimenting with ads. Other companies will just subsidise the living daylights out of their solutions...and a few people will indeed run this stuff locally.
Look at how slow the adoption of VR has been, and how badly Meta's gamble on the metaverse went. It's still too expensive for most people. Yes, a small elite can afford the necessary equipment, but that's not a petri dish in which one can grow a paradigm shift.
If only a few thousand people could afford [insert any invention here], that invention wouldn't be common-place nowadays.
Now, the pyramid has sort of been turned on its head, in the sense that things nowadays don't start expensive and then become cheaper, but instead start cheap and then become...something else, be that more expensive or riddled with ads. But there are limits to this.
> People who are cut out to be software developers
You mean the people AI is going to replace? What's the definition of 'cut out to be' here?
I'm not sure today's Claude Code could ask for that, but I don't think it would be a crazy goal for them to work towards.
Also a 25% boost per individual doesn’t necessarily equal a 25% boost to the final output if there are other bottlenecks.
I would expect that they could sell something like AI computers with a lot of GPU power, built from recycled GPU clusters similar to those used today.
Cloud services have a head-start for quite a few reasons, but I really think we could see local LLMs coming into their own over the next 3-5 years.
The future is here and it's time to stop ignoring it.
Your analog 1x productivity is worthless in comparison to my AI backed 10x productivity.
https://news.ycombinator.com/newsguidelines.html
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Honestly, if you made your profile a day ago just to yell overconfident, meaningless statements into the void, like a mandrill in the jungle trying to shout over all the others, go back to LinkedIn; they like that kind of stuff there.
I even agree that AI has a place in our world and can greatly increase productivity. But we should talk about the how and why, instead of attacking others ad hominem and just stopping any discourse with absolutist nonsense.
They desperately need LLMs to remain a rentier business, and hardware advances are a direct attack on their model.
1. Models become commodities and immensely cheaper to operate for inference as a result of some future innovation. This would presumably be very bad for the handful of companies who have invested that $1T and want to recoup that, but great for those of us who love cheap inference.
2. #1 doesn't happen and the model providers begin to feel empowered to pass the true cost of training + inference down to the model consumer. We start paying thousands of dollars per month for model usage, and the price gate blocks most people from reaping the benefits of bleeding-edge AI, instead locking them into cheaper models that are just there to extract cash by selling them things.
Personally I'm leaning toward #1. Future models nearly as good as the absolute best will get far cheaper to train, and new techniques and specialized inference chips will make them much cheaper to use. It isn't hard for me to imagine another Deepseek moment in the not-so-distant future. Perhaps Anthropic is thinking the same thing, given the rumors that they are pushing toward an IPO as early as this year.
This also fits with OpenAI's announced advertising plans, and is something most consumers can stomach.
If you want the best, then pay.
A $1T investment needs to produce on the order of $100B in yearly earnings to be a good investment.
Global GDP is about $100T.
So one way for things to work out for the AI companies would be if AI raises GDP by 1% and the AI companies capture 10% of the created value.
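A quick back-of-envelope sketch of that arithmetic (all figures are the round numbers from the comments above, not real data):

```python
# Back-of-envelope check: does a 1% GDP uplift with 10% value capture
# cover a 10%/yr return on $1T of AI capex?
investment = 1e12        # $1T total AI investment
required_return = 0.10   # ~10%/yr to count as a good investment
global_gdp = 100e12      # ~$100T global GDP

needed_earnings = investment * required_return   # $100B/yr needed

gdp_uplift = 0.01        # AI raises GDP by 1%
capture_share = 0.10     # AI companies capture 10% of the created value
captured = global_gdp * gdp_uplift * capture_share

print(f"needed: ${needed_earnings/1e9:.0f}B/yr, captured: ${captured/1e9:.0f}B/yr")
# prints "needed: $100B/yr, captured: $100B/yr"
```

The two sides come out equal only because the round numbers were chosen to balance; halve either the uplift or the capture share and the bet no longer pays.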
I’m not certain of the conclusion - I think a lot depends on amortization schedules - if data centers are fully booked right now, then we don’t need very long amortization schedules at the reported 60+% margin on inference to see this capex fully paid off.
My prior is that we are currently fulfilling something like 1/10,000th of the world's plausible inference demand. There's a note in the analysis that might back this: it says we are seeing one of the only times ever that hardware prices have risen over time. Combined with spot prices at Lambda Labs (still quite high, I'd say), it doesn't look like we're seeing a drop in inference demand.
Under those circumstances, the first phases of this bet, cross-industry, look like they will pay off. If that’s true, as an investment strategy, I’d just buy the basket - oAI, Anthropic, GOOG, META, SpaceX, MSFT, probably even Oracle, and wait. We’ll either get the rotating state of the art frontier capacity we’ve gotten in the last 18 months, or one of those will have lift off.
Of those, I think MSFT is the value play: they're down something like 20% in the last six months. Satya's strategy seems very sensible to me: slow hyperscale buildouts in the US (lots of competition) and build them everywhere else in the world (still not much competition). For countries that can't build their own frontier models, the next best thing is running them in local datacenters; MSFT has long-standing operational bases everywhere in the world, which is arguably one of their differentiators compared to GOOG/META.
Perhaps all of these data centers won't be needed. At least not by some of the current AI companies that won't keep up. If that happens to OpenAI, that would be quite a shock to the financial system (and GDP).
Microsoft's changes to Windows have alienated some of their userbase. Copilot is poor compared to its rivals. There's a reason they are down 20%. Linux adoption is accelerating (still too low!).
And don't forget AI on device. When it becomes "good enough" for most tasks, data centre use will reduce.
With the talk of Nvidia backtracking and saying they won't invest $100 billion in OpenAI, and Oracle in a poor financial position, with the loans for its upcoming data centres becoming more expensive and dubious (they could fail to repay them), the picture isn't as positive as you make it out to be. Which makes me think that you have an ulterior motive.
xyst•1h ago
This country is so awful. Great if you are rich. Awful if you are not in this top 0.01-1%.
A massive $79T has been transferred from the bottom 90% to the top 1% since the 1970s. [1]
[1] https://www.rand.org/pubs/working_papers/WRA516-2.html
sQL_inject•1h ago
What's the alternative ?
webdoodle•1h ago
But WE BUILT IT, and we can take back the internet when we finally realize it's not Dems vs. Reps, but rich vs. poor. It's always been a class war; they're just much better at keeping us distracted.
sarchertech•42m ago
But the French Revolution is nothing to emulate. If you've read the history of the French Revolution, you know that it quickly moved on from rich parasites to murdering and imprisoning people over minor philosophical differences and real or perceived lack of enthusiasm for continued murder. And it eventually led to global war and attempted global conquest.
ericmay•1h ago
One thing would-be revolutionaries don't appreciate is that, much like Mr. Putin's experience today, revolutions (and wars) are much easier to start than to control. One day you're chopping off the leader's head; the next, you're pressed into military service and your Constitution is gone. I personally would rather be patient and work on reforming institutions, even if it takes much longer. Oftentimes when we get rid of them, it's not that something better fills the void, as anarchists (communists and libertarians alike) like to claim; instead it's nothing, and that capability is gone until some calamity restores the need.
BosunoB•1h ago
You know why they don't share the fruits of capital with us now? Because Americans hate getting taxed to pay for welfare, and so they've been voting against taxes for 50 years. This whole political landscape changes when people lose their jobs to AI, a thing that everyone thinks should be taxed. In fact, the entire ideological underpinning behind extreme wealth accumulation is gone when AI runs everything.
coffeemug•1h ago
1. Awful compared to what?
2. Was there an equivalent transfer outside America?
3. What is the cause? What is the ratio of rent-seeking/shady activity vs. natural forces (e.g. technological change)?
throwmeaway820•1h ago
This assertion is based on comparing reality with a counterfactual where income distributions remained static from 1975 to the present. Real median personal income roughly doubled over this time period.
The use of the word "transferred" seems a little intellectually dishonest here. The counterfactual seems to suggest that income distribution has no relationship with growth in total income, and that total income would have been exactly the same regardless of income distribution. I see no reason to assume that to be the case.
throwmeaway820•39m ago
Do you think I'm talking about my own, personal income?
I'm talking about mean personal income in the United States, because the figures I found for household income only go back to 1985.