A way to drum up a sense of urgency without mentioning that it's the patience of the investors (and _not_ the public) that will be the limiting factor here?
This isn't one of those times.
I’m spending $400/mo on AI subscriptions at this point. Probably the best money I spend.
that $400 will go up by at least a factor of 10 once the bubble pops
would you be prepared to pay $4000/month?
If a country/state has the choice of giving power to data center A or B, it makes sense for Satya to make statements about how Microsoft provides the most AI value
I guess you could always just use a fraction of the billions in investments and whip up a few new power plants. [1]
What the hell is going on in this type of argument anyway? Utilities are normally private businesses, so what does the state have to do with it?
He's blaming customers because his product isn't hitting the valuation he wants.
When non-techie friends/family bring up AI there are two major topics: 1) the amount of slop is off the charts, and 2) said slop is getting harder to recognize, which is scary. Sometimes they mention a bit of help in daily tasks at work, but nothing major.
They don't find AI useful, just a toy. Is it their fault? Maybe.
But they aren't stupid. You sound like a tech bro.
Copilot Notepad.
Copilot MS Paint.
Copilot Shoes.
Copilot Ice Cream.
LOL. "Looks like you're trying to tie those laces - would you like me to order you velcro?"
> Copilot Ice Cream.
Too late, parody is dead.
https://fortune.com/2023/02/18/shift-robotics-a-i-powered-mo...
https://www.unilever.com/news/news-search/2025/how-ai-is-tra...
[0]: https://www.bloomberg.com/graphics/2025-ai-data-centers-elec...
And yet studies show the opposite [0].
[0] https://www.media.mit.edu/publications/your-brain-on-chatgpt...
If you're using ChatGPT like you use Google then I agree with you. But IMO comparing ChatGPT to Google means you haven't had the "aha" moment yet.
As a concrete example, a lot of my work these days involves asking ChatGPT to produce me an obscure micro-app to process my custom data. Which it usually does and renders in one shot. This app could not exist before I asked for it. The productivity gains over coding this myself are immense. And the experience is nothing like using Google.
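To give a flavor, here's a sketch of the kind of one-shot micro-app I mean (the "key=value" log format here is made up purely for illustration):

    # Hypothetical one-shot micro-app: turn a custom "key=value"
    # log format (invented here) into CSV for further processing.
    import csv
    import sys

    def log_to_csv(lines, out):
        # Each non-empty line becomes a row of key/value pairs
        rows = [dict(pair.split("=", 1) for pair in line.split())
                for line in lines if line.strip()]
        # Union of all keys seen, so ragged rows still work
        fields = sorted({key for row in rows for key in row})
        writer = csv.DictWriter(out, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)

    if __name__ == "__main__":
        log_to_csv(sys.stdin, sys.stdout)

The point isn't that this is hard to write; it's that the model produces something like it in seconds, for a data format nobody else has.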
It might seem quaint today, but one example is fact-checking a piece of text.
Google effectively has a pretty good internal representation of whether any particular document agrees with other documents on the internet, on account of massive crawling and indexing over decades. But LLMs let you run the same process nearly instantly on your own data, and that's the difference.
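As a rough sketch of what that looks like in practice, using the OpenAI Python client (the model name, prompt, and file are placeholders, not a recommendation):

    # Sketch: ask an LLM whether a claim is supported by your own document.
    # Assumes the official OpenAI Python client and an OPENAI_API_KEY env var.
    from openai import OpenAI

    client = OpenAI()

    def check_claim(claim: str, document: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "Judge only from the document provided. "
                            "Answer SUPPORTED, CONTRADICTED, or NOT FOUND."},
                {"role": "user",
                 "content": f"Document:\n{document}\n\nClaim:\n{claim}"},
            ],
        )
        return response.choices[0].message.content

    # "notes.txt" stands in for whatever private data you want checked
    print(check_claim("The meeting moved to Friday.",
                      open("notes.txt").read()))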
If you ask for a link, it may hallucinate the link.
And unlike a search engine, where someone had to previously think of and then make some page with the fake content on it, it will happily make it up on the fly, so you'll end up with a new/unique bit of fake documentation/URL!
At that point, you would have been way better off just... using a search engine?
It’s for queries that are unlikely to be satisfied in a single search. I don’t think it would be a negligible amount of time if you did it yourself.
I think we all understand that at this point, so I question deeply why anyone acts like they don’t.
More convenient than traditional search? Maybe. Quicker than traditional search? Maybe not.
Asking random questions is exactly where you run into time-wasting hallucinations since the models don't seem to be very good at deciding when to use a search tool and when just to rely on their training data.
For example, just now I was asking Gemini how to fix a bunch of Ubuntu/Xfce annoyances after a major upgrade, and it was a very mixed bag. One example: the default date and time display is in an unreadably small "date stacked over time" format (using a few-pixel-high font so this fits into the menu bar). Gemini's advice was to enable the "Display date and time on single line" option, but there is no such option (it just hallucinated it), and it also hallucinated a bunch of other suggestions until I finally figured out that what you need to do is configure it to display "Time only" rather than "Date and Time", then change the "Time" format to display both date and time! Just to experiment, I then told Gemini about this fix, and amusingly the response was basically "Good to know - this'll be useful for anyone reading this later"!
More examples, from yesterday (these are not rare exceptions):
1) I asked Gemini (generally considered one of the smartest models - better than ChatGPT, and rapidly taking market share from it, with a 20% shift in the last month or so) to look at the GitHub codebase for an Anthropic optimization challenge and to summarize and discuss it. It appeared to have looked at the codebase, until I got into the weeds and asked where it got certain details from (which file). It then became apparent that it had some (search-based?) knowledge of the problem, but seemingly hadn't actually looked at the code (wasn't able to?).
2) I was asking Gemini about chemically fingerprinting (via impurities and isotopes) Roman silver coins to the mines that produced the silver. It confidently (as always) came up with a bunch of academic references that it claimed made the connection, but none of the references (which did at least exist) actually contained what it claimed (just partial information), and when I pointed this out it just kept throwing out different references.
So, it's convenient to be able to chat with your "search engine" to drill down and clarify, etc., but it's a big time waste if a lot of what you get is hallucination.
Search vs. chat has really become a distinction without a difference anyway, since Google now gives you the "AI Overview" (a diving-off point into "AI Mode"), or you can just click on "AI Mode" in the first place - which is Gemini.
Though it's a use case people like Satya will want to avoid for reasons.
That’s courageous from the CEO of a US company, given that the current government doesn’t see burning more oil as bad for the planet and is willing to punish everyone who thinks otherwise.
I think there are business reasons why they wouldn’t do that, and that makes me sad.
Every time it hallucinates visits to Starbucks.
I never go to Starbucks; it’s just a probable finding given the words in the question.
This should work. I want it to work. But until it can do this correctly all analysis capabilities should be suspect.
Even a year ago I had success giving Claude a photo of my credit card bill and asking it for repeating category subtotals; it flawlessly OCR'd the bill and wrote a Python program to do as asked, giving me the output.
I'd imagine if you asked it to do a comparison to something else it'd also write code to do it, and so get it right (and it certainly would if you explicitly asked).
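For context, the generated program was along these lines (a sketch; the transactions here are illustrative, and in reality they came from the OCR'd bill):

    # Repeating category subtotals from an already-extracted statement.
    # The transaction data below is illustrative, not from a real bill.
    from collections import defaultdict

    transactions = [
        ("Groceries", 54.20),
        ("Coffee", 4.75),
        ("Groceries", 23.10),
        ("Transport", 12.00),
        ("Coffee", 4.75),
    ]

    subtotals = defaultdict(float)
    for category, amount in transactions:
        subtotals[category] += amount

    for category, total in sorted(subtotals.items()):
        print(f"{category}: ${total:.2f}")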
I’ve been predicting for a while: free or cheap AI will enshittify and become an addictive ad medium with nerfed capabilities. If you want actually good AI you will have to pay for it, either a much heftier fee or buying or renting compute to run your own. In other words you’ll be paying what it actually costs, so this is really just the disappearance of the bubble subsidy.
Hi there, friends from another dimension! In my reality, there's a cold front coming from the north. Healthcare is expensive and politics are a mess. But AI? It hallucinates sometimes but it's so much better for searching, ad hoc consultation and as a code assistant than anything I've ever seen. It's not perfect, but it saved me SO much time I decided to pay for it. I'm a penny pincher, so I wouldn't be paying for it otherwise.
I think Satya is talking about cost/benefit. AI is incredibly useful but also incredibly expensive. I think we still need to find the right balance (perhaps slower model releases), but there's no way we'll put the genie back in the bottle.
I hope your AI gets better! Talk to you later!
I have access to all the popular AI tools from work for free, I use them for the same cases you mentioned like search, consultation, a better StackOverflow, and autocomplete. It’s definitely useful but I would describe that as incrementally useful, not revolutionary.
Satya is saying that AI needs to start doing more than vibe coding and autocomplete; there’s probably half a trillion dollars invested in the technology worldwide now, and it’s not enough for AI to be a good coding assistant. It needs to replace customer support, radiologists, and many other professions to justify the unprecedented level of investment it’s garnered.
There are plenty of uses for AI. Right now, the industry is heavily spending on training new models, improving performance of existing software and hardware, and trying to create niche products.
Power usage for inference will drop dramatically over the next decade, and more models are going to run on-device rather than in the cloud. AI is only going to become more ubiquitous, there's 0% chance it 'fails' and we return to 2020.
Only because companies have been cutting costs for decades here. This is not a good argument for AI.
> writing software
If you mean typing characters quickly, yes. Otherwise, there are still a lot of employed devs, with many AI companies hiring.
> writing docs about software
The most useful docs are there because they contain info you cannot determine from the code. AI is not able to do this.
> computer graphics (animation, images)
If you are producing slop, yes.
> driving cars
True, but only because of its improved physical awareness, i.e. it’s a mechanical gain (better eyes, ears, etc.), not an intellectual one (interpreting that information). Self-driving cars aren’t LLMs and aren’t really applicable here. Entirely different field.
> AI is only going to become more ubiquitous, there's 0% chance it 'fails' and we return to 2020
Absolutely true. But not for the reasons you think.
With all this useless slop, he’s literally arguing against his own point.
And no, I'm not saying the technology is bad. The business isn't going swimmingly, though.
If they mean "machine learning", then sure there are application in cancer detection and the like, but development there has been moving at a steady pace for decades and has nothing to do with the current hype wave of GenAI, so there's no reason to assume it's suddenly going to go exponential. I used to work in that field and I'm confident it's not going to change overnight: progress there is slow not because of the models, but because data is sparse and noisy, labels are even sparser and noisier, deployment procedures are rigid and legal compliance is a nightmare.
If they mean "generative AI", then how is that supposed to work exactly? Asking LLMs for medical diagnosis is no better than asking "the Internet at large". They only return the most statistically likely output given their training corpus (that corpus being the Internet as a whole), so it's more likely your diagnosis will be based on a random Reddit comment that the LLMs has ingested somewhere, than an actual medical paper.
The only plausible applications I can think of are tasks such as summarizing papers, acting as augmented search engines for datasets and papers, or maybe automating some menial administrative tasks. Useful, for sure, but not revolutionary.
This from a huge LLM skeptic in general. It doesn't have to be right all the time if, in aggregate, it saves time that doctors can spend diagnosing you.
There obviously are some compelling use cases for "AI", but it's certainly questionable if any of those are really making people's lives any better, especially if you take "AI" to mean LLMs and fake videos, not more bespoke uses like AlphaFold which is not only beneficial, but also not a resource hog.