By the way, have any of you ever tried to delete and disable Siri’s iCloud backup? You can’t do it.
- setting a timer
- dictating a title to search on Apple TV
A feature set that has remained unchanged since Siri’s launch…
Siri needs faster and more flexible handling of Spotify, Google Maps and third-party messaging apps, not a slop generator.
That's the Internet Explorer of chatbots.
Also, I have never turned on Apple "Intelligence".
This work is to turn it into something else, presumably more like a chatbot.
On iPhone, Settings → iCloud → Storage → Siri → Disable and Delete
Edit: Tried it. It works for me. Takes a minute though.
That will be their contract writing AI.
If it suddenly got better, as they teased with Apple Intelligence (some would say lied about its capabilities), that would fit pretty well. That they now delegate this to Gemini is a defeat.
To be clear, I'd much rather have my personal cloud data private than have good AI integration on my devices. But strictly from an AI-centric perspective, Apple painted themselves into a corner.
Why does a MacBook seem better than PC laptops? Because Apple makes so few designs. When you make so few things, you can spend more time refining the design. When you're churning out a dozen designs a year, can you optimize the fan as well for each one? You hit a certain point where you say "eh, good enough." Apple's aluminum unibody MacBook Pro was largely the same design 2008-2021. They certainly iterated on it, but it wasn't "look at my flashy new case" every year. PC laptop makers come out with new designs with new materials so frequently.
With iPhones, Apple often keeps a design for 3 years. It looks like Samsung has churned out over 25 phone models in the past year while Apple has 5 (iPhone, iPhone Plus, iPhone Pro, iPhone Pro Max, iPhone 16e).
It's easy to look so good at things when you do fewer things. I think this is one of Apple's great strengths - knowing where to concentrate its effort.
Hell, they can’t even make a TV this year that’s less shit than last year’s version, and all that requires is doing literally nothing.
Their image classification happens on-device; Google Photos, by comparison, does it server-side, so Google already has the ML infrastructure.
They aren't.
"liquid ass" is how most of my friends describe it
Maybe someday they'll build their own, the way they eventually replaced Google Maps with Apple Maps. But I think they recognize that that will be years away.
With OpenAI, will it even be around 3 years from now, without going bankrupt? What will its ownership structure look like? Plus, as you say, the MS aspect.
So why not Google? It's very common for large corporations to compete in some areas and cooperate in others.
https://support.apple.com/guide/iphone/use-chatgpt-with-appl...
So I'm guessing a future update will use Gemini instead. I hope there will be an option to choose between the two.
Apple weighs using Anthropic or OpenAI to power Siri
Apple has the best edge inference silicon in the world (neural engine), but they have effectively zero presence in a training datacenter. They simply do not have the TPU pods or the H100 clusters to train a frontier model like Gemini 2.5 or 3.0 from scratch without burning 10 years of cash flow.
To me, this deal is about the bill of materials for intelligence. Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own. Seems like they are pivoting to becoming the premium "last mile" delivery network for someone else's intelligence. Am I missing the elephant in the room?
It's a smart move. Let Google burn the gigawatts training the trillion parameter model. Apple will just optimize the quantization and run the distilled version on the private cloud compute nodes. I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
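For anyone unfamiliar with what "optimize the quantization" means in practice: a toy sketch of symmetric int8 weight quantization, the kind of trick used to shrink a distilled model for edge hardware (this is an illustrative example, not Apple's actual pipeline):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store weights as int8 plus one float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Round-to-nearest bounds the per-weight error by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Storage drops 4x (int8 vs float32), and the neural engine runs int8 math far faster than float, at the cost of that bounded rounding error.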
[0] https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
Then you learn that every modern CPU has a built-in backdoor, a dedicated processor core, running a closed-source operating system, with direct access to the entire system RAM, and network access. [a][b][c][d].
You can not trust any modern hardware.
https://en.wikipedia.org/wiki/Intel_Management_Engine
https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...
https://en.wikipedia.org/wiki/ARM_architecture_family#Securi...
Mullvad requires nothing but an envelope with cash in it and a hash code and stores nothing. Apple owns you.
I'm curious if this officially turns the foundation model providers into the new "dumb pipes" of the tech stack?
It is their strength to take commodity products and scale them well.
Wasn't Apple sitting on a pile of cash and having no good ideas what to spend it on?
Edit: especially given that Apple doesn’t do b2b so all the spend would be just to make consumer products
They still generate ~$100 billion in free cash flow per year, which is plowed into buybacks.
They could spend more cash than every other industry competitor. It's ludicrous to say they would have to burn 10 years of cash flow on what is, relatively speaking, a trivial investment in model development and training. That statement reflects a poor understanding of Apple's cash flow.
Don't they have the highest market cap of any company in existence?
My money is still on Apple and Google to be the winners from LLMs.
This sort of thing didn't work out great for Mozilla. Apple, thankfully, has other business bringing in the revenue, but it's still a bit wild to put a core bit of the product in the hands of the only other major competitor in the smartphone OS space!
Down the road Apple has an advantage here: a super large training data set that includes messages, mail, photos, calendar, health, app usage, location, purchases, voice, biometrics, and your behaviour over YEARS.
Let's check back in 5 years and see if Apple is still using Gemini or if Apple distills, trains and specializes until they have completed building a model-agnostic intelligence substrate.
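For the curious, "distills" here means training a smaller student model to match a larger teacher's output distribution. A toy sketch of the standard distillation loss (KL divergence on temperature-softened softmax outputs; the logits are made up for illustration):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass out."""
    z = np.asarray(logits, dtype=np.float64) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) between the two softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Zero when the student reproduces the teacher exactly...
assert abs(distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])) < 1e-9
# ...and positive when it does not.
assert distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0
```

Minimizing this over the teacher's outputs is how a Gemini-sized model could, in principle, be compressed into something that fits Apple's on-device constraints.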
The Allen Institute (a non-profit) just released the Molmo 2 and Olmo 3 models. They trained these from scratch using public datasets, and they are performance-competitive with Gemini in several benchmarks [0] [1].
AMD was also able to successfully train an older version of OLMo on their hardware using the published code, data, and recipe [2].
If a non-profit and a chip vendor (training for marketing purposes) can do this, it clearly doesn't require "burning 10 years of cash flow" or a Google-scale TPU farm.
[0]: https://allenai.org/blog/molmo2
Setting aside the obligatory HN dig at the end, LLMs are now commodities and the least important component of the intelligence system Apple is building. The hidden-in-plain-sight thing Apple is doing is exposing all app data as context and all app capabilities as skills.
Anyone with an understanding of Apple's rabid aversion to being bound by a single supplier understands that they've tested this integration with all foundation models, that they can swap Google out for another vendor's solution at any time, and that they have a long-term plan to eliminate this dependency as well.
> Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own.
I'd be interested in a citation for this (Apple introduced two multilingual, multimodal foundation language models in 2025), but in any case anything you hear from Apple publicly is what they want you to think for the next few quarters, vs. an indicator of what their actual 5-, 10-, and 20-year plans are.
The moat is talent, culture, and compute. Apple doesn't have any of these 3 for SOTA AI.
Going with Anthropic or OpenAI, despite having that clean Apple smell and feel on the surface, carries a lot of risk on Apple's part. Both companies are far underwater, liable to take risks, and liable to drown if they fall even a bit behind.
Anthropic doesn't have a single data centre, they rent from AWS/Microsoft/Google.
> The U.S. government said Apple Chief Executive Officer Tim Cook and Google CEO Sundar Pichai met in 2018 to discuss the deal. After that, an unidentified senior Apple employee wrote to a Google counterpart that “our vision is that we work as if we are one company.”
https://www.bloomberg.com/news/articles/2020-10-20/apple-goo...
If you re-watch the original iPhone announcement there’s a funny moment where Steve hypes up Google products for a few minutes and brings Schmidt on stage to promote their apps.
Jobs gave that somewhat ominous sounding summary/threat of the hundreds of “inventions” (patents) that went into the iPhone. Which placed a target on Google and Samsung as primary offenders when they (unsurprisingly) decided to compete in the same space.
Of course they still worked together and each made a lot of money off their search deal so even Steve wouldn’t let hurt feelings get in the way of making profitable strategic moves.
1. The first issue is that there is significant momentum in calling Siri bad, so even if Apple released a higher-quality version, it would still be labelled bad. It can enhance the user's life and make their device easier to use, but the overall press will be cherrypicked examples where it did something silly.
2. Basing Siri on Google's Gemini can help to alleviate some of that bad press, since a non-zero share of that doomer commentary comes from brand-loyalists and astroturfing.
3. The final issue is that on-device Siri will never perform like server-based ChatGPT. So in a way it's already going to disappoint some users who don't realise that running something on mobile device hardware is going to have compromises which aren't present on a server farm. To help illustrate that point: we even have the likes of John Gruber making stony-faced comparisons between Apple's on-device image generator toy (one that produces about an image per second) and OpenAI's server-farm-based generator, which takes about 1-2 minutes per image. So if a long-running tech blogger can't find charity in those technical limitations, I don't expect users to.
> The final issue is that on-device Siri will never perform like server-based ChatGPT. So in a way it's already going to disappoint some users who don't realise that running something on mobile device hardware is going to have compromises which aren't present on a server farm.
For many years, Siri requests were sent to an external server. It still sucked.
I don't expect the current US government to do anything about it though.
I admit I don't see the issue here. Companies are free to select their service providers, and free to dominate a market (as long as they don't abuse such dominant position).
It also lends credence to the DOJ's allegation that Apple is insulated from competition - the result of failing to produce their own winning AI service is an exclusive deal to use Google while all competing services are disadvantaged, which is probably not the outcome a healthy and competitive playing field would produce.
This feels a little squishy... At what size of each company does this stop being an antitrust issue? It always just feels like a vibe check, people cite market cap or marketshare numbers but there's no hard criteria (at least that I've seen) that actually defines it (legally, not just someones opinion).
The result of that is that it's sort of just up to whoever happens to be in charge of the governing body overseeing the case, and that's just a bad system for anyone (or any company) to be subjected to. It's bad when actual monopolistic abuse is happening and the governing body decides to let it slide, and it's bad when the governing body has a vendetta or directive to just hinder certain companies/industries regardless of actual monopolistic abuse.
I can't wait for Gemini to lecture me on why I should throw away my Android
Even "Play the album XY" leads to Siri only playing the single song. It's hilariously bad.
The non-hardware AI industry is currently in an R&D race to establish and maintain market share, but with their existing iPhone, iPad, and Mac ecosystem, Apple already controls a market, so they can wait until the AI market stabilizes before investing heavily in their own solutions.
For now, Apple can partner with solid AI providers to deliver AI services to their customers in the short term, and later acquire established AI companies to jumpstart their own AI platform once the technology reaches more long-term consistency and standardization.
The biggest thing Apple has to do is get a generic pipeline up and running, that can support both cloud and non-cloud models down the road, and integrate with a bunch of local tools for agent-style workloads (e.g. "restart", "audio volume", "take screenshot" as tools that agents via different cloud/local models can call on-device).
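A minimal sketch of what such a backend-agnostic tool layer could look like: device capabilities registered by name, invocable identically whether the model doing the calling is in the cloud or on-device. All names here ("audio_volume", "take_screenshot") are hypothetical examples, not any real Apple API:

```python
from typing import Callable, Dict

class ToolRegistry:
    """Registry of device capabilities that any model backend can invoke by name."""
    def __init__(self):
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs) -> str:
        # A real implementation would validate arguments against a per-tool schema.
        if name not in self._tools:
            return f"unknown tool: {name}"
        return self._tools[name](**kwargs)

registry = ToolRegistry()
registry.register("audio_volume", lambda level: f"volume set to {level}%")
registry.register("take_screenshot", lambda: "screenshot saved")

# Whether the tool call came from Gemini in the cloud or a local model,
# it resolves through the same interface.
print(registry.call("audio_volume", level=40))  # volume set to 40%
print(registry.call("take_screenshot"))         # screenshot saved
```

The point of the indirection is exactly the vendor flexibility discussed above: swapping the model that emits the tool calls doesn't touch the device-side capabilities at all.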
> Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.
... https://blog.google/company-news/inside-google/company-annou...
I'm really curious how Apple is bridging the gap between consumer silicon and the datacenter scale stack they must have to run a customized Gemini model for millions of users.
RDMA over Thunderbolt is cool for small lab clusters but they must be using something else in the datacenter, right?
Apple explicitly acknowledged that they were using OpenAI’s GPT models before this, and now they’re quite easily switching to Google’s Gemini
Surely research money is not the problem. Can't be lack of competence either, I think.
First, they touted features that no one actually built and then fired their AI figurehead “leader” who had no coherent execution plan—also, there appears to have been territorial squabbling going on, about who would build what.
How on earth did Apple Senior Management allow this to unravel? Too much focus on Services, yet ignoring their absolute failures with Siri and the bullshit that was Apple Intelligence, when AI spending is in the trillions?
Apple is competent at timing when to step into a market and I would guess they are waiting for AI to evolve beyond being considered untrustworthy slop.
Maybe I'm weird but mobile assistants have never been useful for me. I tried Siri a couple of times and it didn't work. I haven't tried it since because even if it worked perfectly I'm not sure I'd have any use for it.
I see it more like the Vision Pro. Doesn't matter how good the product ends up being, I just don't think it's something most people are going to have a use for.
As far as I'm concerned no one has proved the utility of these mobile assistants yet.
But it's a whole lot easier to switch from Gemini to Claude or Gemini to a hypothetical good proprietary LLM if it's white label instead of "iOS with Gemini"
Depends on where you are. In my experience here in Sweden Google Maps is still better, Apple maps sent us for a loop in Stockholm (literally {{{(>_<)}}} )
Google wanted to shove ads in it. Apple refused and switched.
Their hand was forced by that refusal.
Apple announced last year they are putting their own ads in Maps so if that was the real problem the corporate leadership has done a complete 180 on user experience.
Apple is a very VERY different company than they were back then.
Back then they didn’t have all sorts of services that they advertised to you constantly. They didn’t have search ads in the App Store. They weren’t trying to squeeze every penny out of every customer all the time no matter how annoying.
Yes, Apple is acknowledging that Google's Gemini will be powering Siri and that is a big deal, but are they going to be acknowledging it in the product or is this just an acknowledgment to investors?
Apple doesn't hide where many of their components come from, but that doesn't mean that those brands are credited in the product. There's no "fab by TSMC" or "camera sensors by Sony" or "display by Samsung" on an iPhone box.
It's possible that Apple will credit Gemini within the UI, but that isn't contained in the article or video. If Apple uses a Gemini-based model anonymously, it would be easy to switch away from it in the future - just as Apple had used both Samsung and TSMC fabs, or how Apple has used both Samsung and Japan Display. Heck, we know that Apple has bought cloud services from AWS and Google, but we don't have "iCloud by AWS and GCP."
Yes, this is a more public announcement than Apple's display and camera part suppliers, but those aren't really hidden. Apple's dealings with Qualcomm have been extremely public. Apple's use of TSMC is extremely public. To me, this is Apple saying "hey CNBC/investors, we've settled on using Gemini to get next-gen Siri happening so you all can feel safe that we aren't rudderless on next-gen Siri."