By the way, have any of you ever tried to delete and disable Siri’s iCloud backup? You can’t do it.
- setting a timer
- dictating a title to search on Apple TV
A feature set that has remained unchanged since Siri’s launch…
Siri needs faster and more flexible handling of Spotify, Google Maps and third-party messaging apps, not a slop generator.
That's the Internet Explorer of chatbots.
Also, I have never turned on Apple "Intelligence".
This work is to turn it into something else, more like a chatbot, presumably
In iOS 18.1 (on iPhone 15+) Siri is part intent-based, part on-device "Apple Intelligence" small LLM, and in iOS 18.2 it also supports off-device ChatGPT.
This year Siri 2.0 is expected to ditch the legacy intent-based system and instead use just the small on-device Apple Intelligence LLM plus (opt-in) off-device Gemini (running in some private cloud).
On iPhone, Settings → iCloud → Storage → Siri → Disable and Delete
Edit: Tried it. It works for me. Takes a minute though.
That will be their contract writing AI.
If it would suddenly get better, like they teased (some would say lied about the capabilities) with Apple Intelligence, that would fit pretty well. That they delegate that to Gemini now is a defeat.
To be clear, I'd much rather have my personal cloud data private than have good AI integration on my devices. But strictly from an AI-centric perspective, Apple painted themselves into a corner.
Why does a MacBook seem better than PC laptops? Because Apple makes so few designs. When you make so few things, you can spend more time refining the design. When you're churning out a dozen designs a year, can you optimize the fan as well for each one? You hit a certain point where you say "eh, good enough." Apple's aluminum unibody MacBook Pro was largely the same design 2008-2021. They certainly iterated on it, but it wasn't "look at my flashy new case" every year. PC laptop makers come out with new designs with new materials so frequently.
With iPhones, Apple often keeps a design for 3 years. It looks like Samsung has churned out over 25 phone models over the past year while Apple has 5 (iPhone, iPhone Plus, iPhone Pro, iPhone Pro Max, iPhone 16e).
It's easy to look so good at things when you do fewer things. I think this is one of Apple's great strengths - knowing where to concentrate its effort.
Hell, they can’t even make a TV this year that’s less shit than last year’s version of it, and all that requires is doing literally nothing.
Their image classification happens on-device; in comparison, Google Photos does that server-side, so they already have the ML infra.
Apple definitely has software expertise; maybe it's not as specialized in AI as it is in optimizing video or music editors, but to suggest they'd be at the same starting point as an agriculture endeavor feels dishonest.
They aren't.
"liquid ass" is how most of my friends describe it
Maybe someday they'll build their own, the way they eventually replaced Google Maps with Apple Maps. But I think they recognize that that will be years away.
With OpenAI, will it even be around 3 years from now, without going bankrupt? What will its ownership structure look like? Plus, as you say, the MS aspect.
So why not Google? It's very common for large corporations to compete in some areas and cooperate in others.
I didn't see your 41-day-old reply to me until it was too late to comment on it. So here's a sarcastic "thanks" for ignoring what I wrote and for telling me that exactly what I was complaining about is the solution to the problem I was complaining about.
https://news.ycombinator.com/item?id=46114935
1) I told you my household can't use Target or Amazon for unscented products without costly remediation measures, BECAUSE EVEN SCENT-FREE ITEMS ARRIVE SMELLING OF PERFUME FROM CROSS-CONTAMINATION, THANKS TO CLEANING, STORAGE, AND TRANSPORTATION CONDITIONS. SOMETIMES REALLY BADLY.
FFS. If you are going to respond, first read.
I also mentioned something other than "government intervention to dictate how products are made" as a solution to this issue, namely adequate segregation between perfumed and non-perfumed products.
And I care less about my wallet than I do about my time and actual ability to acquire products that are either truly scent free, or like yesteryear, don't have everlasting fragrance fixatives.
For people in my position, who make up a small percentage of the population (that still numbers in the millions), the free market has failed. We are a specialized niche that trades tips on how to make things tolerable.
SORRY TO EVERYONE ELSE FOR GOING OFF TOPIC.
I’m wondering if this is a way to shift blame for issues. It was mentioned in an interview that what they built internally wasn’t good enough, presumably due to hallucinations… but every AI does that. They know customers have a low tolerance for mistakes and any issues will quickly become a meme (see the Apple Maps launch). If the technology is inherently flawed, where it will never live up to their standards, if they outsource it, they can point to Google as the source of the failings. If things get better down the road and they can improve by pivoting away from Google, they’ll look better and it will make Google look bad. This could be the long game.
They may also save a fortune in training their own models, if they don’t plan to directly try to monetize the AI, and simply have it as a value add for existing customers. Not to mention staying out of hot water related to stealing art for training data, as a company heavily used by artists.
For who? Regular people are quite famously not clamouring for more AI features in software. A Siri that is not so stupendously dumb would be nice, but I doubt it would even be a consideration for the vast majority of people choosing a phone.
https://support.apple.com/guide/iphone/use-chatgpt-with-appl...
So I'm guessing in a future update it will be Gemini instead. I hope it's going to be more of an option to choose between the 2.
Apple weighs using Anthropic or OpenAI to power Siri
Apple has the best edge inference silicon in the world (neural engine), but they have effectively zero presence in a training datacenter. They simply do not have the TPU pods or the H100 clusters to train a frontier model like Gemini 2.5 or 3.0 from scratch without burning 10 years of cash flow.
To me, this deal is about the bill of materials for intelligence. Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own. Seems like they are pivoting to becoming the premium "last mile" delivery network for someone else's intelligence. Am I missing the elephant in the room?
It's a smart move. Let Google burn the gigawatts training the trillion parameter model. Apple will just optimize the quantization and run the distilled version on the private cloud compute nodes. I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
[0] https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
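To make the quantization point above concrete, here is a minimal sketch of symmetric int8 post-training quantization in plain Swift. Everything here is illustrative; it is not anything Apple or Google has published about how Gemini would actually be compressed for Private Cloud Compute.

```swift
import Foundation

// Symmetric int8 quantization: keep one scale per tensor, map each float
// into [-127, 127], and dequantize on the fly at inference time.
// Weights shrink from 4 bytes to 1 byte each, at the cost of rounding error.
func quantizeInt8(_ weights: [Float]) -> (values: [Int8], scale: Float) {
    let maxAbs = weights.map { abs($0) }.max() ?? 1.0
    let scale = max(maxAbs, 1e-8) / 127.0
    let values = weights.map { Int8(clamping: Int(($0 / scale).rounded())) }
    return (values, scale)
}

func dequantizeInt8(_ values: [Int8], scale: Float) -> [Float] {
    values.map { Float($0) * scale }
}

// Toy "layer" of weights, round-tripped through the quantizer.
let layer: [Float] = [0.12, -0.87, 0.45, 0.003, -0.3]
let (q, scale) = quantizeInt8(layer)
print(q, scale, dequantizeInt8(q, scale: scale))
```

Real deployments layer per-channel scales, lower bit widths, and calibration on top of this, but the basic trade (memory and bandwidth for a little precision) is the same.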
* Certain types of Android
Then you learn that every modern CPU has a built-in backdoor, a dedicated processor core, running a closed-source operating system, with direct access to the entire system RAM, and network access. [a][b][c][d].
You can not trust any modern hardware.
https://en.wikipedia.org/wiki/Intel_Management_Engine
https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...
https://en.wikipedia.org/wiki/ARM_architecture_family#Securi...
Why did they change their wording from:
Nobody can read your data, not even Apple
to:
Apple cannot read your data.
You know why.
Mullvad requires nothing but an envelope with cash in it and a hash code and stores nothing. Apple owns you.
Works great for me with NextDNS.
Orion browser - while also based on WebKit - is also awesome and has great built in Adblock and supposedly privacy respecting ideals.
And it works until it's made illegal in your country and removed from the app store. You have no guarantees that anything that works today will work tomorrow with Apple.
Apple is setting us up to be under a dictator's thumb one conversion at a time.
If you want to use the App Store on these devices, you do need to have an email address.
They also were deceptive about iCloud encryption where they claimed that nobody but you can read your iCloud data. But then it came out after all their fanfare that if you do iCloud backups Apple CAN read your data. But they aren’t in a hurry to retract the lie they promoted.
Also if someone in another country messages you, if that country’s laws require that Apple provide the name, email, phone number, and content of the local users, guess what. Since they messaged you, now not only their name and information, but also your name and private information and message content is shared with that country’s government as well. By Apple. Do they tell you? No. Even if your own country respects privacy. Does Apple have a help article explaining this? No.
I don't mean to sound like an Apple fanboy, but is this true just for SMS or iMessage as well? It's my understanding that for SMS, Apple is at the mercy of governments and service providers, while iMessage gives them some wiggle room.
Anecdotal, but when my messages were subpoenaed, it was only the SMS messages. US citizen fwiw
It's something a smart niece or nephew could handle in terms of managing risk, but the implications could mean getting locked out of your device which you might've been using as the doorway to everything, and Apple cannot help you.
I'm curious if this officially turns the foundation model providers into the new "dumb pipes" of the tech stack?
It is their strength to take commodity products and scale them well.
Wasn't Apple sitting on a pile of cash and having no good ideas what to spend it on?
Edit: especially given that Apple doesn’t do b2b so all the spend would be just to make consumer products
They still generate ~$100 billion in free cash per year, which is plowed into the buybacks.
They could spend more cash than every other industry competitor. It's ludicrous to say that they would have to burn 10 years of cash flow on trivial (relative) investment in model development and training. That statement reflects a poor understanding of Apple's cash flow.
Don't they have the highest market cap of any company in existence?
My money is still on Apple and Google to be the winners from LLMs.
I can see them eventually training their own models (especially smaller and more targeted / niche ones) but at their scale they can probably negotiate a pretty damn good deal renting Google TPUs and expertise.
It's also a race-to-the-bottom type of scenario. Apple would never have been able to keep up with server release schedules.
Was an interesting but ultimately odd moment of history for servers.
There used to be a time when IBM actually meant quality (that's where "no one ever got fired for buying IBM" came from, after all), but nowadays? A loooot of stuff is either sold (Thinkpad went to Lenovo, Lotus Notes to HCL), faded into irrelevancy outside of extremely few niche markets (anything mainframe), got left for dead (the PC - it used to be called "IBM compatible personal computer"!) or got spun off (Kyndryl).
According to Wikipedia, IBM has 282,000 employees worldwide. What the fuck are all of these people doing?
Many want-to-be founders here on HN don’t get that. Even if your product is better and cheaper, there is too much reputational risk in signing a contract for a B2B SaaS product with an unknown vendor.
On a completely unrelated note, for the love of all that is holy don’t try to do B2B SaaS without SSO support.
You're right, and this is proven. Apple has fumbled a whole release cycle on AI and severely curbed expectations, and they still sell 200m iPhones a year and lead the market [0]
[0] https://www.reuters.com/business/media-telecom/apple-leads-g...
Why are we even talking about 'AI'? When I heat up food in a microwave, I don't care about the technology - I care about whether it heats up the food or not.
For some bizarre reason people keep talking about the technology (LLMs); the consumers/buyers in the market for the most part don't give a hoot about it. They want to know how the thing fits into their life and, most importantly, what the benefits are.
I've unfortunately been exposed to some Google Ads re. Gemini and let me tell you - their marketing capabilities are god awful.
So I'm glad Apple is not trying to get too much into a bidding war. As for how well orgs are run, Meta has its issues as well (cf the fiasco with its eponymous product), while Google steadily seems to erode its core products.
This sort of thing didn't work out great for Mozilla. Apple, thankfully, has other business bringing in the revenue, but it's still a bit wild to put a core bit of the product in the hands of the only other major competitor in the smartphone OS space!
Down the road Apple has an advantage here in a super large training data set that includes messages, mail, photos, calendar, health, app usage, location, purchases, voice, biometrics, and your behaviour over YEARS.
Let's check back in 5 years and see if Apple is still using Gemini or if Apple distills, trains and specializes until they have completed building a model-agnostic intelligence substrate.
The Allen Institute (a non-profit) just released the Molmo 2 and Olmo 3 models. They trained these from scratch using public datasets, and they are performance-competitive with Gemini in several benchmarks [0] [1].
AMD was also able to successfully train an older version of OLMo on their hardware using the published code, data, and recipe [2].
If a non-profit and a chip vendor (training for marketing purposes) can do this, it clearly doesn't require "burning 10 years of cash flow" or a Google-scale TPU farm.
[0]: https://allenai.org/blog/molmo2
I was under the impression that all these GPUs and such were needed to run the AI, not only ingest the data.
Because almost every example of previous cases of things like this eventually leveled out.
Google really couldn't care less about Android being good. It is a client for Google search and Google services - just like the iPhone is a client for Google search and apps.
Setting aside the obligatory HN dig at the end, LLMs are now commodities and the least important component of the intelligence system Apple is building. The hidden-in-plain-sight thing Apple is doing is exposing all app data as context and all app capabilities as skills. (See App Intents, Core Spotlight, Siri Shortcuts, etc.)
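For anyone who hasn't looked at App Intents, this is roughly what "exposing an app capability as a skill" looks like. A minimal sketch with a made-up intent and parameter; treat it as illustrative of the framework's shape rather than anything Siri-specific.

```swift
import AppIntents

// A toy intent that exposes one app capability ("start a focus timer") to the
// system, so Siri / Shortcuts / Spotlight can discover and invoke it.
struct StartFocusTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Focus Timer"

    @Parameter(title: "Minutes")
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call into its own timer model here.
        return .result(dialog: "Started a \(minutes)-minute focus timer.")
    }
}
```

Once a device is full of intents like this, whatever model sits behind Siri has a catalog of callable actions, plus the donated context to decide when to use them.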
Anyone with an understanding of Apple's rabid aversion to being bound by a single supplier understands that they've tested this integration with all foundation models, that they can swap Google out for another vendor at any time, and that they have a long-term plan to eliminate this dependency as well.
> Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own.
I'd be interested in a citation for this (Apple introduced two multilingual, multimodal foundation language models in 2025), but in any case anything you hear from Apple publicly is what they want you to think for the next few quarters, vs. an indicator of what their actual 5-, 10-, and 20-year plans are.
Seems like they are waiting for the "slope of enlightenment" on the Gartner hype curve to flatten out. Given you can just lease or buy a SOTA model from leading vendors, there's no advantage to training your own right now. My guess is that the LLM/AI landscape will look entirely different by 2030 and any 5-year plan won't be in the same zip code, let alone playing field. Leasing an LLM from Google with a support contract seems like a pretty smart short-term play as things continue to evolve over the next 2-3 years.
[1] https://emp.lbl.gov/news/new-study-refocuses-learning-curve
[2] https://ourworldindata.org/grapher/solar-pv-prices-vs-cumula...
The deceleration of pace is visible to anyone capable of using Google.
If it's reasonable, then reason it out, because it is a highly apples-to-oranges comparison you are making.
1) the dearth of new (novel) training data. Hence the mad scramble to hoover up, buy, steal, any potentially plausible new sources.
2) diminishing returns of embiggening compute clusters for training LLMs and size of their foundation models.
(As you know) you're referring to Wright's Law, aka the experience/learning curve. So there's a tension.
Some concerns that we're nearing the ceiling for training.
While the cost of applications using foundation models (implementing inference engines) is decreasing.
Someone smarter than me will have to provide the slopes of the (misc) learning curves.
Lots of benchmarks exist where everyone agrees that higher scores are better, but there's no sense in which going from a score of 400 to 500 is the same progress as going from 600 to 700, or less, or more. They only really have directional validity.
I mean, the scores might correspond to real-world productivity rates in some specific domain, but that just begs the question -- productivity rates on a specific task are not intelligence.
Apple won't switch Google out as a provider for the same reason Google is your default search provider. They don't give a shit about how many advertisements you're shown. You are actually detached from 2026 software trends if you think Apple is going to give users significant backend choices. They're perfectly fine selling your attention to the highest bidder.
Which is why privacy theatre was an excellent way to put it
Guess who's the bestie of Twitter's owner? Any clues? Could that be a vindictive old man with unlimited power and no checks and balances to temper his tantrums?
Of course they both WANT Twitter the fuck out of the store, but there are very very powerful people addicted to the app and what they can do with it.
The US, for all intents and purposes, is now a kleptocracy. Rule of law, freedom of speech, even court orders, all of that doesn't matter any more in practice. There will always be some way for the federal government to strong-arm anyone into submission.
When Trump started his campaign, circa 2011 with the birth certificate, he did not know whether he would win or not, but he made it his life's mission.
Countering him will take the same zeal. I know we have a precedent of presidents retiring, but unless Obama (and Hillary and Biden and Kamala) hits the streets as the leader of the resistance, the resistance will be quelled easily by constant distraction. Yeah, maybe AOC, maybe Bernie, maybe someone else, but no ... Trump is smart and dedicated (despite the useful-idiot role he plays); he cannot be countered by mid-term and full-term campaigns. We are not in Kansas any more. Been a while. The opposition needs a named resistance leader whose full-time job is to engage Trump.
Newsom (or his PR team) knows how to play the troll game correctly, hitting low blows and not sticking to the fucking high ground.
AOC on the other hand will make the MAGA base _so_ irrationally angry they might do something actually stupid. She's also got Bernie's views, which might make America a place I want to visit some day in the next decade again. I've literally turned down all expenses paid company trips to USA a few times because I just don't want to risk either not getting into the country or not getting back.
Well, ICE just executed a woman in broad daylight with multiple cameras filming, and a day later you got Kristi Noem standing on a podium with a slogan referencing an OG Nazi massacre [1][2] and half the US government gaslighting the country, spreading outright lies [3] without consequences so far.
When they can get away with this level of lies, they can get away with anything. Trump's infamous "I Could ... Shoot Somebody, And I Wouldn't Lose Any Voters" quote [4] wasn't a joke - it was a clear prediction of what he intended to enable eventually.
[1] https://www.billboard.com/music/music-news/tom-morello-trump...
[2] https://www.deutschlandfunk.de/80-jahre-massaker-lidice-100....
[3] https://www.abc.net.au/news/2026-01-08/what-happened-in-minn...
[4] https://www.npr.org/sections/thetwo-way/2016/01/23/464129029...
Google and Apple together will posttrain Gemini to Apple's specification. Google has the know-how as well as infra and will happily do this (for free ish) to continue the mutually beneficial relationship - as well as lock out competitors that asked for more money (Anthropic)
Once this goes live, provided Siri improves meaningfully, it is quite an expensive experiment to then switch to a different provider.
For any single user, the switching costs to a different LLM are next to nothing. But at Apple's scale they need to be extremely careful and confident that the switch is an actual improvement
I was in the map search, so I just said "Costco" and it said "I can't help with that right now, please try again later" or something of the sort. I tried a couple more times until I changed up to saying "Navigate me to Costco" where it finally did the search in the textbox and found it for me.
Obviously this isn't the same thing as Gemini but the experience with Android Auto becomes more and more garbage as time passes and I'm concerned that now we're going to have 2 google product voice assistants.
Also, tbh, Gemini was great a month ago but since then it's become total garbage. Maybe it passes benchmarks or whatever but interacting with it is awful. It takes more time to interact with than to just do stuff yourself at this point.
I tried Google Maps AI last night and, wow. The experience was about as garbage as you can imagine.
‘Sorry, I don’t know anyone called ‘your girlfriend’.’ The kids find it hilarious
Are there licensing issues regarding commercial use at scale or something?
Same reason they switched to Intel chips in the 2000s. They were better. Then Cupertino watched. And it learned. And it leapfrogged.
If I were Google, my fear would be Apple launching and then cutting the line at TSMC to mass produce custom silicon in the 2030s.
Siri’s functionality and OS integration could be exposed in a similar, industry-standard way via tools provided to the model.
Then any other model can be swapped in quite easily. Of course, they may still want to do fine tuning, quantization, performance optimization for Apple’s hardware, etc.
But I don’t see why the actual software integration part needs to be difficult.
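As a rough sketch of that idea, under the assumption of a generic tool-calling interface (all names and types below are invented for illustration, not any Apple or Gemini API): the OS capabilities live behind provider-agnostic tool specs, and only a thin adapter layer is model-specific.

```swift
import Foundation

// A provider-agnostic tool: the model (Gemini, an on-device LLM, anything else)
// only ever sees the name, description, and parameter schema.
struct ToolSpec {
    let name: String
    let description: String
    let parameters: [String: String]        // parameter name -> type hint
    let run: ([String: String]) -> String   // OS-side implementation
}

// Hypothetical OS capabilities exposed as tools.
let tools: [ToolSpec] = [
    ToolSpec(name: "set_timer",
             description: "Start a countdown timer",
             parameters: ["minutes": "integer"],
             run: { args in "Timer set for \(args["minutes"] ?? "?") minutes" }),
    ToolSpec(name: "create_event",
             description: "Add a calendar event",
             parameters: ["title": "string", "start": "ISO-8601 date"],
             run: { args in "Created event: \(args["title"] ?? "?")" })
]

// The only model-specific code is the adapter that serializes `tools` into a
// given provider's tool-calling format and parses the chosen call back out.
// Swapping providers means replacing that adapter, not the tools themselves.
func dispatch(toolName: String, arguments: [String: String]) -> String {
    guard let tool = tools.first(where: { $0.name == toolName }) else {
        return "Unknown tool: \(toolName)"
    }
    return tool.run(arguments)
}

print(dispatch(toolName: "set_timer", arguments: ["minutes": "10"]))
```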
That’s not the issue. The issue is that once Gemini is in place as the intelligence behind Siri, the bar is now much higher than today and so you have to be more careful if you consider replacing Gemini, because you’re as likely as not to make Siri worse. Maybe more likely to make it worse.
That gives them plenty of runway to test and optimize new models internally before release and not feel like they need to rush them out because Siri sucks.
Why do you say it doesn't solve loads of things?
Because I'm sitting here twiddling my thumbs waiting for random pages to go through their anti-LLM bot crap. LLMs create more problems than they solve.
Um, if I ask an LLM about a fake band, it will literally say "I couldn't find any songs by that band, did you type it correctly?", and it's about a million times more likely to guess correctly.
Um, if Apple wrote proper error handling in the first place, the issue would be solved without LLM baggage. Apple made a conscious decision to handle "unknown" artists this way; LLMs don't change that. Not a high bar…
That said, Apple is likely to end up training their own model, sooner or later. They are already in the process of building out a bunch of data centers, and I think they have even designed in-house servers.
Remember when iPhone maps were Google Maps? Apple Maps have been steadily improving, to the point they are as good as, if not better than, Google Maps, in many areas (like around here. I recently had a friend send me a GM link to a destination, and the phone used GM for directions. It was much worse than Apple Maps. After a few wrong turns, I pulled over, fed the destination into Apple Maps, and completed the journey).
LLMs are now commodities and the least important component of the intelligence system Apple is building
If that was even remotely true, Apple, Meta, and Amazon would have SOTA foundation models. The moat is talent, culture, and compute. Apple doesn't have any of these 3 for SOTA AI.
Source: I worked there
Culture is overrated. Money talks.
They did things far more complicated from an engineering perspective. I am far more impressed by what they accomplished alongside TSMC with Apple Silicon than by what AI labs do.
Google invented the transformer architecture, the backbone of modern LLMs.
"Google" did? Or humans who worked there and one who didn't?
https://www.wired.com/story/eight-google-employees-invented-...
In any case, see the section on Jakob Uszkoreit, for example, or Noam Shazeer. And then…
> In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit.
Worth noting the value of “bosses” who leave people alone to try nutty things in a place where research has patronage. Places like universities, Xerox, or Apple and Google deserve credit for providing the petri dish.
Apple has no control over the most important change in tech. They have ceded control to Google.
Yes I forgot xAI. So 4 left. I’m betting that there will be one or two dominant ones in next 10 years. Apple won’t be one of them.
No one can outpace them in improving the SOTA, everyone can catch up to them. Why are open-weight models perpetually 6 months behind the SOTA? Given enough data harvested from SOTA models you can eventually distill them.
The biggest differentiator when training better models are not some new fancy architectural improvements (even the current SOTA transformer architectures are very similar to e.g. the ancient GPT-2), but high quality training data. And if your shiny new SOTA model is hooked into a publicly available API, guess what - you've just exposed a training data generator for everyone to use. (That's one of the reasons why SOTA labs hide their reasoning chains, even though those are genuinely useful for users - they don't want others to distill their models.)
Frankly, a lot of times I prefer using GLM 4.6 running on Cerebras Inference, than having to deal with the performance hiccups from Claude. For most practical purposes, I've seen no big penalty in using it compared to Opus 4.5, even the biggest qwen-coder models are pretty much competitive.
Between me and the company I work for, I spend some serious money on AI. I use it extensively in my main job, on two side projects that I have paying customers for, and for graduate school work. I can tell you that there are quite a few more SOTA models around than what the benchmarks tell you.
(And even if you do believe it, they also aren't licensing the IP they're training on, unlike american firms who are now paying quite a lot for it)
It sounds like the value of these very time-consuming, resource-intensive, and large scale operations is entirely self-contained in the weights produced at the end, right?
Given that we have a lot of other players enabling this in other ways, like Open Sourcing weights (West vs East AI race), and even leaks, this play by Apple sounds really smart and the only opportunity window they are giving away here is "first to market" right?
Is it safe to assume that eventually the weights will be out in the open for everyone?
Yes, and the knowledge gained along the way. For example, the new TPUv4 that Google uses requires rack and DC aware technologies (like optical switching fabric) for them to even work at all. The weights are important, and there is open weights, but only Google and the like are getting the experience and SOTA tech needed to operate cheaply at scale.
A lot of the hype in LLM economics is driven by speculation that eventually training these LLMs is going to lead to AGI and the first to get there will reap huge benefits.
So if you believe that, being "first to market" is a pretty big deal.
But in the real world there's no reason to believe LLMs lead to AGI, and given the fairly lock-step nature of the competition, there's also not really a reason to believe that even if LLMs did somehow lead to AGI that the same result wouldn't be achieved by everyone currently building "State of the Art" models at roughly the same time (like within days/months of each other).
So... yeah, what Apple is doing is actually pretty smart, and I'm not particularly an Apple fan.
I can see a future where LLM research stalls and stagnates, at which point the ROI on building/maintaining their own commodity LLM might become tolerable. Apple has had Siri as a product/feature and they've proven for the better part of a decade that voice assistants are not something they're willing to build a proficiency in. My wife still has an apple iPhone for at least a decade now, and I've heard her use Siri perhaps twice in that time.
They have always been a premium "last mile" delivery network for someone else's intelligence, except that "intelligence" was always IP until now. They have always polished existing (i.e., not theirs) ideas and made them bulletproof and accessible to the masses. Seems like they intend to just do more of the same for AI "intelligence". And good for them, as it is their specialty and it works.
It goes back much further than that - up until 2016, Apple wouldn't let its ML researchers add author names to published research papers. You can't attract world-class talent in research with a culture built around paranoid secrecy.
Would giving more money/shares help?
It also lets Apple stand by while the dust settles on who will out innovate in the AI war - they could easily enter the game on a big way much later on.
> without burning 10 years of cash flow.
Sorry to nitpick but Apple’s free cash flow is 100B/yr. Training a model to power Siri would not cost more than a trillion dollars. They are the only ones who do not have large debts off (or on) the balance sheet or aggressive long-term contracts with model providers, and their product demand / cash flow is the least dependent on AI industry performance.
They will still be affected by general economic downturn but not be impacted as deeply as AI charged companies in big tech.
There is no intelligence
I don't know how close to that ideal they've gotten, but especially given this announcement is partly based on an arrangement with Google that allows them to run Gemini on-device and in Private Cloud Compute, without using Google's more direct Gemini services/cloud, I'm excited that they are trying and I'm interested in how this plays out.
Maybe private in the sense that it isn’t funneled into your ad profile, but not private in the sense that nobody else can access it.
I just think it is useful that Apple is trying something along those lines and wishful the guarantees work half as well as they claim they do, because that's a good goal to have in theory even when it fails in practice against dedicated threat actors.
And yes, to be fair my personal day-to-day threat model currently is much more concerned with the evil advertising company known as Google than it is with government actors. Even if Apple's Private Cloud Compute only means "private from Google" that's still a win for me (and most of the information I was looking for when I saw this headline, because my first fear was that the advertising company Google was involved).
I would think for the vast majority of users out there this is not a concern at all.
Apple until now failed to even get the basics done and make Siri smart, despite marketing "Apple Intelligence" as the core feature of 2024's iPhone.
I'm excited about the attempt at privacy because I'm on "Team Keep Siri Dumb". I like dumb Siri. It reliably meets most of my needs, setting timers and managing house lights. I'd rather Siri stay dumb and I would never opt-in to ChatGPT Siri as some of my family has, but if Siri "has to" get smart to survive, I will celebrate whatever privacy wins are still available as my only hope that smarter Siri is not something I need to just disable entirely (and lose my "friend" in charge of my timers and house lights in the process).
apple to some users "are you leaving for android because of their ai assistant? don’t leave we are bringing it to iphone"
So what does it take? How many actual commitments to privacy does Apple have to make before the HN crowd stops crowing about "theater"?
So while the letter of your claim is technically true, it's also very misleading.
Apple isn't suddenly private just because they have enough data about you that they don't need to link to 3rd-party data. They do exactly what 3rd-party sites that are considered privacy-invasive do. They serve you ads based on your private data, like what you watch, what you read, and what things you do on your device.
It doesn't say that they only store all this information on device. Apple is only using a random identifier when it's sharing information about your habits and personal data on its ad platform; that info, btw, is shared with 3rd parties. But don't worry, that data suddenly becomes non-personal because they used a random identifier.
https://machinelearning.apple.com/research/apple-intelligenc...
Probably not missing the elephant. They certainly have the money to invest and they do like vertical integration, but putting massive investment into a bubble that can pop or flatline at any point seems pointless if they can just pay to use the current best, and in the future they can just switch to something cheaper or buy some of the smaller AI companies that survive the purge.
Given how AI-capable their hardware is, they might just move most of it locally too.
Can you cite this claim? The Qualcomm Hexagon NPU seems to be superior in the benchmarks I've seen.
Apple is flush with cash and other assets, they have always been. They most likely plan to ride out the AI boom with Google's models and buy up scraps for pennies on the dollar once the bubble pops and a bunch of the startups go bust.
It wouldn't be the first time they went for full vertical integration.
Why does Apple need to build its own training cluster to train a frontier model, anyway?
Why couldn't the deal we're reading about have been "Apple pays Google $200bn to lease exclusive-use timeslots on Google's AI training cluster"?
AAPL has approximately $35 billion of cash equivalents on hand. What other use may they have for this trove? Buy back more stocks?
Everyone using Siri is going to have their personality data emulated and simulated as a ”digital twin” in some computing hell-hole.
I feel like people probably said this when Google became the default search engine for everyone...
Going with Anthropic or OpenAI, despite on the surface having that clean Apple smell and feel, carries a lot of risk on Apple's part. Both companies are far underwater, liable to take risks, and liable to drown if they fall even a bit behind.
Anthropic doesn't have a single data centre, they rent from AWS/Microsoft/Google.
Definitely. At this point, Apple just needs to get anything out the door. It was nearly two years ago that they sold a phone with features that still haven't shipped and the promise that Apple Intelligence would come in two months.
I'm chuckling at the idea of pirating software in 1996.
iirc even in 1999, I couldn't figure out why Windows update required me to use internet exploder. It would take forever to download updates over dialup.
Talk about being there when the deep magic was written.
1. Have a user interface. Sometimes I'll ask a question and Siri actually provides a good enough answer, and while I'm reading it, the Siri response window just disappears. Siri is this modal popup with no history, no App, and no UI at all really. Siri doesn't have a user interface, and it should have one so that I can go back to sessions and resume them or reference them later and interact with Siri in more meaningful ways.
2. Answer questions like a modern LLM does. Siri often responds with very terse web links. I find this useful when I'm sitting with friends and we don't remember if Liam Neeson is alive or not - for basic fact-checking. This is the only use case I've found where it's useful, when I want to peel my attention away for the shortest period of time. If ChatGPT could be bound to a power-button long-press, then I'd cease to use Siri for this use case. Otherwise Siri isn't good for long questions because it doesn't have the intelligence and, as mentioned before, has no user interface.
3. Be able to do things conversationally, based on my context. Today, when I "Add to my calendar Games at Dave's house" it creates a calendar entry called "Games" and sets the location to a restaurant called "Dave's House" in a different country. My baseline expectation is that I should be able to work with Siri, build its memory and my context, and over time it becomes smarter about the things I like to do. The day Siri responds with "Do you mean Dave's House the restaurant in another country, or Dave, from your contacts?" I'll be happy.
If you ask for a website it should open a browser.
Edit: everything else spot on
Yeah it’s an interesting idea, but visuals are required sometimes. Even the simple task of “List the highest rated Mexican restaurants near me” works perfectly well enough with old crappy Siri. You’ll get a list of the highest rated Mexican restaurants near you. But as soon as you open the first restaurant, Siri closes and the list is gone. You can’t view the second restaurant. To get the list back you need to ask Siri again.
There’s no world in which that user experience makes a viable product. It’s a completely broken user experience no matter how smart the Gemini model is.
This should be possible, go to Settings->Action Button->Controls and search for ChatGPT
btw: I hope you will visit Dave's House someday in the future.
Also, Liam Neeson just catching strays over here
Siri's current architecture now provides context into the prompt, such as the app/window that has focus and the content loaded into it. In that sense, Siri is more like the MacOS menu bar than an app. A consolidated view of Siri history may look disjointed, in that there is a lot of context hidden if all it shows is a query like "when was this building built?".
Even more so, it might not provide the functionality desired if you go look at historic chats and ask "who was the architect?", unless all that context was actually captured. However, that context was never formatted in a way that was intended to be clearly displayed to the user. That in itself creates a lot of challenges around things like user consent since Siri can farm off queries to other (online) tools and world-knowledge AI services.
There is at least a UX paradigm for this - clipboard history. Coincidentally, Tahoe built clipboard history into Spotlight. But clipboard history lends itself to perhaps being more a complete and self contained snapshot. I'm not sure Siri is being built to work this way because of implicit context.
For 2, at a certain point this gets farmed off to other tools or other AI services. The Gemini agreement is for the foundational model, not large "world knowledge" models or backing databases. Today, Siri answers this question by providing bibliographical information inline from Wikipedia, using internal tools. The model itself just isn't able to answer the actual question (e.g. it will just say his birthday).
For 3, the model already has substantial personal context (as much as apps are willing to give it) and does have state in between requests. This is actually one of the issues with Siri today - that context changes the behavior of the command and control engine in interesting ways, phone to phone and sometimes moment to moment.
Unfortunately, I think stopping and asking for clarification is not something generative AI currently excels at.
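A hypothetical sketch of the "implicit context" problem being described: if the focused app and its visible content get folded into the prompt, then the stored query alone is close to meaningless. Nothing below is Apple's actual prompt format; the types and strings are purely illustrative.

```swift
// Invented types, for illustration only.
struct SiriContext {
    let focusedApp: String
    let visibleContent: String
}

func buildPrompt(query: String, context: SiriContext) -> String {
    """
    Focused app: \(context.focusedApp)
    Visible content: \(context.visibleContent)
    User request: \(query)
    """
}

// "when was this building built?" only makes sense with the context attached,
// which is exactly why a bare history of queries would look disjointed.
let prompt = buildPrompt(
    query: "when was this building built?",
    context: SiriContext(focusedApp: "Maps",
                         visibleContent: "Fallingwater, Mill Run, Pennsylvania")
)
print(prompt)
```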
Would you like to click this button which takes what you said and executes it as a Google search in Safari?
"Hi Siri, can you message Katrina on WhatsApp that Judy is staying 11-15th Feb and add it to the shared Calendar, confirm with me the message to Kat and the Calendar start and end times and message."
To the extent Cupertino fucked up, it's in having had this attitude when they rolled out Apple Intelligence.
There isn't currently a forcing function. Apple owns the iPhone, and that makes it an emperor among kings. Its wealth is also built on starting with user problems and then working backwards to the technology, versus embracing whatever's hot and trying to shove it down our throats.
Sorry but if there wasn’t a forcing function then “Apple Picks Gemini to Power Siri” wouldn’t be the headline
A pair of four-trillion-dollar companies striking a deal in the hottest technology space since the internet getting headline treatment is not evidence of a forcing function.
Or maybe you’re arguing that Apple never did intend to commit to those promises and it was all intentional and part of a well orchestrated plan from the outset? Seems like an odd strategy
Which is not how headlines work.
You may have an argument that Apple is under pressure. But your headline argument is bananas.
> maybe you’re arguing that Apple never did intend to commit to those promises
Where did you get that?
The attitude I called "fucked up" is precisely rushing to make promises and then to meet them. Apple's sales don't suggest customers are putting material pressure on Cupertino. Apple's share price doesn't suggest investors are panicking. The promises have already been broken. If Apple is pushing something out, again, because they feel they have to on the basis of those promises, it's, again, a fuckup.
That’s the joke. I assure you they are panicking.
Their naff image generation tool (Playground) is further evidence.
Apple are definitely panicking. And they should be too.
That's evidence they forced themselves. Not that there was a forcing function. That's the whole point of the top comment. (Mine.) They rushed where they didn't have to. And I still don't think they need to rush.
These services are threatened by the emerging competitive landscape. Apple is panicking, and there is a forcing function, because their users are spending more and more time with LLMs having the most personal experiences they’ve ever had with any software, and Apple isn’t getting a piece of that pie. They’re in a very high risk position because this is the heart of their brand and as data is slowly siphoned away into apps and services that are providing the experiences their users are growing to expect, that moat and stickiness is eroding.
That said, this is precisely why it's taken so long. While Apple is desperate, they are completely unwilling to disrespect their users' trust, mishandle their personal data, or compromise their privacy.
A company can be forced into a decision because they’re scared of losing market share, but that doesn’t count as them being forced because they chose they wanted to stay in business.
The crux of the problem is that people use Gemini on Android. In fact Google has been doubling down on AI features for years, and in a range of guises, from camera-enhancing vision models to smart personal assistants via Gemini.
Just as Apple heavily promoted Siri back when it was pioneering.
But Siri has stagnated for years. It’s basically the Internet Explorer 5 of the assistant domain. And the competition is so far ahead in capabilities that people are going to start questioning the innovation happening at Apple. In fact people already are.
I wouldn’t be surprised if the Liquid Glass misfire was a desperate attempt to make their technology feel futuristic again.
The fact is, Siri sucks. I almost never use it now. Apple knows this. We know it. Consumers know it. Apple know that they have to urgently fix it.
Liquid Glass is part of an ongoing strategy to get developers to target all the platforms equally - not to come out with a native iOS version, then poop out an electron app for the Mac and let it run in a zoomed window on iPad.
This is an initiative that started with MacOS 11:
1. Make the Mac feel closer to an iPad; strip away arbitrary differences like app icons.
2. Catalyst to make porting an iOS codebase easier
3. Swift UI to make native targeting of platforms easier with their differing UX/capabilities
4. Create iPad variants of MacOS UX features like mouse pointers, menus, and so on. Create API (typically under Swift UI) to support both variants with the same code
I don't think the designers had a goal with Liquid Glass to make everything feel more like AVP. Instead, I think thats what they had touched last, and they used that recent experience to revamp all the platforms.
But their goal is that everything works like an iPad. An iPhone is a mini iPad (which maybe in the future folds out to have a similar size and aspect ratio to the iPad Mini). AVP is the iPad you strap to your face. And a build targeting Mac now has icons and menus and controls which don't look out of place.
That could be a big differentiator, when for many companies iOS and Android are the _only_ platforms that currently get native experiences and integration, with everything else being web or electron based.
The choice of theme is the misstep. Not the strategy.
Since when?
> versus embracing whatever's hot and trying to shove it down our throats
I agree here, to a degree. It's just that Apple tells its customers what's hot and then shoves it down their throats.
I don't really understand this. Is it shoving when something is actually popular? The iPod was legitimately extremely popular. Did Apple decide it was hot and then somehow force people to buy 450 million of them?
I mean I'm just curious what products you're thinking of when you say "shoves it down their throats"
What's hot about fewer ports, no headphone jack, no SD card, a tax when buying apps for your phone, planned obsolescence, antagonistic behavior towards app and software developers, an unchanged aluminum rectangle, thinner devices that look cool at the cost of performance and efficiency, the heaviest laptops and phones on the market, phones made out of glass front and back, the touchbar, the notch, etc.?
> Its wealth is also built on starting with user problems and then working backwards to the technology, versus embracing whatever's hot and trying to shove it down our throats.
Then again, remember millimeterwave? But yes, as a general rule I think your point still stands.
Investors are the forcing function
It’s just how Apple does things: They still have no folding phone, under-screen finger print scanner, under-screen front-cam, etc.
> “The right info, right when you need it.” That’s how Google describes Magic Cue, one of the most prominent new AI features on the Pixel 10 series. Using the power of artificial intelligence, Magic Cue is supposed to automatically suggest helpful info in phone calls, text messages, and other apps without you having to lift a finger.
However, the keyword there is “supposed” to... even when going out of my way to prompt Magic Cue, it either doesn’t work or does so little that I’m amazed Google made as big a deal about the feature as it did.
https://www.androidauthority.com/google-pixel-10-magic-cue-o...
When was this part last true?
Always need to attach an adapter to my Anker chargers and powerbanks.
I hope they can forgive me for doubting their benevolent wisdom, I promise never to do it again.
It’s very confusing if you do that and are an idiot.
But I have a bunch of USB-C stuff and so when I go to charge my laptop it’s just easier to find that cable and use it.
Thats the real difference - it now easily lasts until I would want to take an extended break anyway.
Closest would be the SD card slot... if it was SD Express.
If they had released the M1 MBP in the old chassis I would have a real challenge upgrading to the current models.
Microsoft had tablets for a decade before the iPad came out. You rarely ever saw them in the wild. In fact, you still rarely see a Surface tablet. At least, I don't.
Apple tries extremely hard to be durably differentiated from products in the same category to avoid being dragged down in a price war to have cheap quality.
That in turn makes it hard for others to compete with them - you don't have differentiating features that would pull existing users off a mature product like iPad, and you can't come out with a cheaper product without discriminating consumers being concerned that it is fragile, clunky, and/or incomplete.
Many years later, I was working for a startup called kWhOURS in a little old house in East Cambridge, Massachusetts. Our target users were engineers used to paying thousands for the rugged and expensive Windows laptops we needed to deploy our Adobe AIR tablet app onto since they had a touchscreen. Still a clunky UI, but our software was usable. Then the iPad was released, and it was literally worlds apart, something people have long taken for granted. All of us, including Adobe, were taken by surprise, because all attempts at tablets prior to that were so far inferior to Apple's version, and competitors spent many years trying to catch up.
Oh yeah, that's been awesome for the consumer.
Tablet were pretty commonly used by delivery drivers and other employees of national corporations who came to my apartment building, but I don't know for sure that they ran Windows.
You can't use cool to argue against me. It was in the comment I replied to.
> but you know exactly what is being referenced
No, I don't, which is why I asked. Mind explaining instead of being coy?
Apple didn’t make the first MP3 player, but once they made the iPod, everyone wanted an iPod. It was cool. Most other players pivoted to be more iPod-like.
Apple didn’t make the first smart phone. Smart phones were semi-niche devices for businessmen and nerds. Once the iPhone came out, everyone wanted it and the whole market changed.
Apple didn’t make the first smart watch, but once they did, their smart watch was more capable and integrated than the others and went on to outsell Rolex.
Apple didn’t make the first tablet. Microsoft tried to push the idea repeatedly 10 years earlier. Apple waited and came out with the iPad once multitouch was a thing and they could build an OS around touch. 15 years after its launch, it’s still the only tablet anyone actually talks about.
Steve Jobs talked about putting the customer experience first and selecting technologies that will be around for the next 10+ years, rather than chasing the latest bleeding edge tech, just to say you’re using it and trying to find a way to shoehorn it in.
To know what tech is going to stick around and to find how to best implement it takes time for things to mature a little bit. This means sacrificing the bleeding edge for a more thoughtful and stable approach.
Tim Cook doesn’t have the same kind of vision as Jobs, so I think some of this has been lost, but this has been their history for a long time, and one of the reasons why they’ve been so successful.
In your examples market demand from existing customers of iMacs wasn't pointed at Apple to create the iPod. iPod customers weren't demanding that Apple create the iPhone. And iPhone customers weren't seething over the lack of a first-party watch option. Apple customers are looking across the landscape and can see every other phone manufacturer running circles around Siri, and this integration with Gemini really feels like they're throwing in the towel.
The thing is, Siri doesn’t need an LLM for Apple customers to use an LLM. The App Store exists and iPhone users can download ChatGPT, Gemini, Claude, Grok, etc, etc, etc. They can map their favorite one to the action button if they want quick access.
I don’t see a major need for Apple to rush something out the door that doesn’t live up to their quality standard. From my use of LLMs, I still don’t think it lives up to the standards needed to hand out to a billion people and say “use this, you can trust it”. Even if their internal models were as good as the best ones on the market, I still think the press would treat it as another Apple Maps situation. I’m saying that with LLMs of today, not even the ones from the GPT-3 days.
Cook is too eager to say stuff that will please the stockholders, so he teased the AI stuff and had a big AI phone release before they had a product that was viable to release. That’s a theme with him.
Huh, I always thought it was the other way around (whether people liked it or not): ditching floppy disks, ditching cdroms, prioritizing BT over wired earphones, etc. I am glad, though, that they were forced to stick with USB-C if I'm not mistaken.
Bluetooth sucks, needing to charge headphones sucks. I'm still bitter :p
> I am glad, though, that they were forced to stick with USB-C if I'm not mistaken.
Now I have a boatload of apple chargers which will all be made into landfill for the good of the planet when i next upgrade my phone. Thank you so much.
although Lightning was better-designed for being routinely used (pins on the outside of the wire end rather than inside the device, easy to clean and no protruding pieces in the device to damage/snap off), and the ideal scenario would have been making it an open standard
USB-A chargers are so brutally slow, but you can use a USB-A to C cable if you really want to spend 3+ hours charging a modern phone.
The switch prompted cables to go into the landfill. The USB-A chargers should have been there half a decade ago.
In actual fact, though, apple is a very effective fifth or sixth mover, and has been for a very long time. They watch everyone else fuck it up and get it wrong a bunch of times, and then throw scads of cash at threading the needle.
Apple is in the value extraction business these days: their devices are conduits for advertising Apple services. The Vision Pro flopped because they wanted to charge and arm and a leg for a platform that was actively hostile to developers. It's not 2008 anymore.
They don't though. Android is clearly ahead in AI integration (even Samsung is running TV ads mocking the iPhone's AI capability), yet iPhone sales are still breaking records - the majority of their phone buyers still prefer an iPhone over a more AI-capable alternative. [0]
They can take their time to develop AI integration that others can't deploy - 'secure/private', deep integration with iCloud, location services, processing on device etc. that will provide the product moat to increase sales.
They're not good enough for that usecase, currently - so almost all interactions make the UX worse, currently.
Might change in the future, I'm just taking about today in January 2026
I don’t think that’s true. People just use the LLM apps. What people don’t feel like they need right now is deep LLM integration across the whole OS. IMO, that’s more of just not showing people the killer product yet.
I don't often use voice assistants myself, but they're fully conversational these days and several billion times more useful than the old-school Alexa-style stuff with a limited set of integrations.
Will make it much easier to find those missing pictures from a few years ago...
Just under 16 months since the release of iOS 18. The phones they would have sold this with shipped alongside 18.
Also, the personalized Siri was indicated it would not be available until later and was expected in the spring release (March 2025).
I don't think the model is that much different if they thought Siri was half-decent enough for so long.
Judging from the past 10 years, I would say this is more likely driven by being part of a bigger package deal with Google Search placement and Google Cloud services, when everything else is roughly equal.
Instead of raising the price again and paying Apple even more per user: how about we pay less but throw in Gemini with it?
Apple has been very good, if not the best, at picking one side and letting the others fight for its contract. They don't want Microsoft to win the AI race; at the same time Apple is increasing its use of Azure just in case. Basically playing the game of leverage at its best. In hindsight they probably got so far into it that they forgot what all this leverage is really for: not cost savings, but ultimately a better quality product.
Can the DOJ and FTC look into this?
Google shouldn't be able to charge a fee on accessing every registered trademark in the world. They use Apple to get the last 30% of "URL bars", I mean Google Search middlemen.
Searching Anthropic gets me a bidding war, which I'm sure is bleeding Google's competition dry.
We need a "no bare trademark (plus edit distance) ads or auto suggest" law. It's made Google an unkillable OP monster. Any search monopoly or marketplace monopoly should be subject to not allowing ads to be sold against a registered trademark database.
I guess this ventures into politics more than anything else, and I am opinionated on the subject.
But other than that, the point worth centring on is that Apple no longer cares as much about being the best. They care much more about extracting the best business deals and money out of their current position. Which is very different from the Steve Jobs era; no amount of money could put crap on his plate.
Personally I wouldn't use it; it still belongs to an advertiser specialised in extracting user information. Not that I expect that other AI companies value privacy much higher. But clean smell also means bland smell.
Given my stance about AI, I'll definitely not use it, but I understand Apple's choice. Also this choice will give them enough time to develop their infrastructure and replace parts of it with their own, if they are planning to do it.
> Not that I expect that other AI companies value privacy much higher.
Breaching privacy and using it for their own benefit is the AI business model. There are no ethical players here, neither from a training perspective nor from the perspective of respecting their users' privacy. It's just the next iteration of what social media companies do.
I don't however like the idea of having Google deeply embedded in my machine and Siri will definitely be turned off when this happens. I only use Siri as an egg timer anyway.
This seems like an odd move for a company that sells privacy.
It ISN'T in this day and age. People don't switch back and forth between iOS and Android like it's still 2010. They use whatever they got locked into initially with their first smartphone, or whatever Apple's green/blue-bubble issue pushed them to, or what their family handed down, or what their close friend groups used to have.
People who've been using iOS for 6+ years will 98% stick to iOS for their next purchase and won't even bother to look at Android, no matter what features Android were to add.
The Android vs iOS war is as dead as the console war. There's no competition anymore, it's just picking one from a duopoly of vendor lock-ins.
Even if the EU were to break some of the lock-ins, people have familiarity bias and will stick with the inertia of what they're used to, so it won't move the market-share needle one bit.
Performance? We are many years past the point where anybody cared about performance. I am writing this on an iPhone 11 Pro and the experience is almost exactly the same as on current iOS.
You know what's not the same? Android has become a pretty great OS. I recently got an older Pixel to see how GrapheneOS works and was surprised by Android (which I hadn't seen for a decade). iOS, on the other hand, has recently gone through a very bad UI redesign for no reason.
Imho the main thing Apple has going for it is that Google is a spyware company and Apple is still mainly a hardware company. But if Apple decides to pull their users' data into Gemini… well, good luck.
I respect Google's engineering, and I'm aware that fundamental technologies such as Protocol Buffers and FlatBuffers are unavoidably integrated into the software fabric, but this is avoidable.
I'm surprised Google aren't paying Apple for this.
Second, I'm developing privacy-focused apps that were going to use the Foundation Models. Now I need to seriously reconsider this.
Unfortunately, it probably actually is a small number comparatively. Or at least I would need to see some sort of real data to say anything different.
I feel like people who distrust Google probably wouldn't trust Apple enough to give them their data either? Why would you distrust one but not the other?
Also, accounting optics mean Apple can show lower revenue but cleaner margins.
> The U.S. government said Apple Chief Executive Officer Tim Cook and Google CEO Sundar Pichai met in 2018 to discuss the deal. After that, an unidentified senior Apple employee wrote to a Google counterpart that “our vision is that we work as if we are one company.”
https://www.bloomberg.com/news/articles/2020-10-20/apple-goo...
1. The first issue is that there is significant momentum behind calling Siri bad, so even if Apple released a higher-quality version it would still be labelled bad. It can enhance the user's life and make their device easier to use, but the overall press coverage will be cherry-picked examples where it did something silly.
2. Basing Siri on Google's Gemini can help to alleviate some of that bad press, since a non-zero share of that doomer commentary comes from brand-loyalists and astroturfing.
3. The final issue is that on-device Siri will never perform like server-based ChatGPT. So in a way it's already going to disappoint some users who don't realise that running something on mobile device hardware is going to have compromises which aren't present on a server farm. To help illustrate that point: We even have the likes of John Gruber making stony-faced comparisons between Apple's on-device image generator toy (one that produces about an image per second) and OpenAI's server farm-based image generator, which makes a single image in about 1-2 minutes. So if a long-running tech blogger can't find charity in those technical limitations, I don't expect users to.
Apple and Google have said that Private Cloud Compute will be involved as well, around which Apple is trying to build a mystique of "on-device-like" trust. (Which, yes, if Private Cloud Compute is involved and is secure in the ways that Apple says it is, presumably implies that the announced deal with Google includes selling Apple the complete model weights.)
> The final issue is that on-device Siri will never perform like server-based ChatGPT. So in a way it's already going to disappoint some users who don't realise that running something on mobile device hardware is going to have compromises which aren't present on a server farm.
For many years, Siri requests were sent to an external server. It still sucked.
Their point is that if Apple totally scraps the current, bad, product called "Siri" and replaces it with an entirely different, much better product that is also named "Siri" but shares nothing but the name, people's perceptions of the current bad Siri will taint their impressions of the new one.
These models tend to have a "mind of their own", and I can totally, absolutely, see a current SOTA LLM convincing itself it needs to call 911 because you asked it how to disinfect a cut.
> Alright, from now on I will call you Anne Ambulance.
I don't expect the current US government to do anything about it though.
I admit I don't see the issue here. Companies are free to select their service providers, and free to dominate a market (as long as they don't abuse such dominant position).
It also lends credence to the DOJ's allegation that Apple is insulated from competition - the result of failing to produce their own winning AI service is an exclusive deal to use Google while all competing services are disadvantaged, which is probably not the outcome a healthy and competitive playing field would produce.
This feels a little squishy... At what size of each company does this stop being an antitrust issue? It always just feels like a vibe check; people cite market cap or market-share numbers, but there are no hard criteria (at least that I've seen) that actually define it (legally, not just someone's opinion).
The result of that is that it's sort of just up to whoever happens to be in charge of the governing body overseeing the case, and that's just a bad system for anyone (or any company) to be subjected to. It's bad when actual monopolistic abuse is happening and the governing body decides to let it slide, and it's bad when the governing body has a vendetta or directive to just hinder certain companies/industries regardless of actual monopolistic abuse.
No, they were already being sued for antitrust violations; it just mirrors what they are accused of doing to exploit their platform.
https://storage.courtlistener.com/recap/gov.uscourts.njd.544...
It's the line of thinking that I'm trying to dig into more, not the specifics of this case. Now it feels like you're saying "this is anti-trust because someone accused them of anti-trust before".
If that case was prosecuted and Apple was found guilty, I suppose you can point to it as precedent. But again, does it only serve as precedent when it's a deal between Apple and Google? Is it only a precedent when there's a case between two "large" companies?
Again, this is all really squishy. If companies aren't allowed to outsource development of a feature once they pass some threshold of "large", when does it apply? What about the $1T pharmaceutical company that wants to use AI modeling? They're a large, technically competent company; if Eli Lilly partnered with Gemini, would you be sitting here saying that they too are abusing a monopolistic position that prevents competition in the AI model space?
No it's antitrust because they have a failed product, but purely by virtue of shutting out competitors from their platform they have been able to turn three years of flailing around into a win-by-outsourcing. What would Siri's position be like today if they hadn't blocked default voice assistants? Would they be able to recover from their plight to dominate the market just by adopting Google's technology? How would that measure against OpenAI, Anthropic or just using Google directly? This is why it's an antitrust issue.
"it's antitrust because they have a failed product" is objectively hilarious
> What would Siri's position be like today if they hadn't blocked default voice assistants?
Probably pretty much the same. What would Gemini's position be like today if they hadn't blocked out default voice assistants? You only get Gemini when you use Gemini, just like you only got Siri when you use Siri (up until this deal takes effect). Also Siri has used ChatGPT already, so I'm not even convinced this is a valid criticism. They already didn't block OpenAI from being part of Siri.
> Would they be able to recover from their plight to dominate the market just by adopting Google's technology?
This is relevant how?
> How would that measure against OpenAI, Anthropic or just using Google directly?
How would what measure against other ai models? How would their ability to recover from a lack of investing in a better "homemade" AI model differ if they used OpenAI instead of Gemini? How does that have anything to do with antitrust? That's a business case study type of question. Also, shouldn't they be allowed to recover from their own lack of developing a model by using the best tool available to them?
Why only in Japan? Because Japan forced them to: https://9to5mac.com/2025/12/17/apple-announces-sweeping-app-...
The problem isn't that they used another company's model. It's that they are using a model made by the only company competing with them in the market of mobile OS.
Sorry if I'm missing the point but if Apple had picked OpenAI, couldn't you have made the same comment? "nobody else can be the default voice assistant or power Siri, so where does this leave eg Gemini/Claude?".
However I don't see the link, how they are "using their duopoly", and why "they" would be using it but only one of them benefits from it. Being a duopoly, or even a monopoly, is not against anti-trust law by itself.
I can't wait for gemini to lecture me why I should throw away my android
Even "Play the album XY" leads to Siri only playing the single song. It's hilariously bad.
Me: "Hey Siri, play <well known hit song from a studio album that sold 100m copies>"
Siri: "OK, here's <correct song but a live version nobody ever listens to, or some equally obscure remix>"
Given that these things are, at their core, probability machines... how? Why?
Is Siri a probability machine? I didn't think it was an LLM at all right now? I thought it was some horrendous tree of switch statements, hence the difficulty of improving it.
Apple search is comically bad, though. Type in some common feature or app, and it will yield the most obscure header file inside the build deps directory of some Xcode project you forgot existed.
Siri: Would you like to answer?
Me: Yes
Siri: ...
Me: No + more words
Siri: Ok (shuts off)
The non-hardware AI industry is currently in an R&D race to establish and maintain marketshare, but with Apple's existing iPhone, iPad and Mac ecosystem they already have a market share they control so they can wait until the AI market stabilizes before investing heavily in their own solutions.
For now, Apple can partner with solid AI providers to provide AI services and benefits to their customers in the short term and then later on they can acquire established AI companies to jumpstart their own AI platform once AI technology reaches more long term consistency and standardization.
The biggest thing Apple has to do is get a generic pipeline up and running, one that can support both cloud and non-cloud models down the road, and integrate with a bunch of local tools for agent-style workloads (e.g. "restart", "audio volume", "take screenshot" as tools that agents backed by different cloud/local models can call on-device).
But you may be right, maybe on-device won't be smart enough to decide it isn't smart enough. Though it does seem like the local LLMs have gotten awfully good.
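To make the "generic pipeline" idea above concrete, here's a rough sketch of what one could look like: a registry of local tools that either an on-device or a cloud model can dispatch into. Every name here is hypothetical and illustrative, not an actual Apple API.

    struct AgentTool {
        let name: String
        let description: String                    // what the model is shown
        let run: ([String: String]) -> String      // the deterministic action
    }

    protocol ModelBackend {
        // A local model and a cloud model can both conform; the agent layer
        // doesn't care which one produced the plan.
        func plan(request: String, tools: [AgentTool]) -> (tool: String, args: [String: String])?
    }

    struct Agent {
        let backend: ModelBackend
        let tools: [AgentTool]

        func handle(_ request: String) -> String {
            guard let step = backend.plan(request: request, tools: tools),
                  let tool = tools.first(where: { $0.name == step.tool }) else {
                return "Sorry, I can't do that."
            }
            return tool.run(step.args)
        }
    }

    // Local tools of the kind mentioned above.
    let localTools = [
        AgentTool(name: "set_volume", description: "Set audio volume, 0-100") { args in
            "Volume set to \(args["level"] ?? "50")"
        },
        AgentTool(name: "take_screenshot", description: "Capture the current screen") { _ in
            "Screenshot saved"
        },
    ]

The whole point of the abstraction is that swapping the backend (local small model, Private Cloud Compute, Gemini) doesn't touch the tools.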
> Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.
... https://blog.google/company-news/inside-google/company-annou...
I'm really curious how Apple is bridging the gap between consumer silicon and the datacenter scale stack they must have to run a customized Gemini model for millions of users.
RDMA over Thunderbolt is cool for small lab clusters but they must be using something else in the datacenter, right?
Last I heard most of their e2e storage for iCloud was on GCP.
They also use AWS.
Was this just a massive oversight at Apple? Were there not AI researchers at Apple sounding the alarm that they were way off with their technology and its capabilities? Wouldn't there be talk within the industry that this form of AI assistant would soon be looked at as useless?
Am I missing something?
Siri was never an “AI agent”. With intent-based systems, you give the system phrases to match on (intents), and to fulfill an intent all of its “slots” have to be filled. For instance, “I want to go from $source to $destination”, and then the system calls an API.
There is no AI understanding - it's a “1000 monkeys” implementation: you just start giving the system a bunch of variations and templates you want to match on, in every single language you care about, and match the intents to an API. That's how Google and Alexa also worked pre-LLM. They just had more monkeys dedicated to creating matching sentences.
Post-LLM, you tell the LLM what the underlying system is capable of and the parameters the API requires to fulfill an action, and the LLM can figure out the user's intentions and ask follow-up questions until it has enough info to call the API. You can specify the prompt in English and it works in all of the languages the LLM has been trained on.
Yes I’ve done both approaches
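For anyone who hasn't built one of these, a toy sketch of the slot-filling pattern described above (made-up types, not actual SiriKit code):

    struct Intent {
        let templates: [String]                     // hand-written phrase patterns
        let requiredSlots: [String]
        let fulfill: ([String: String]) -> String   // the API call behind the intent
    }

    // One intent, a handful of the endless hand-written variations ("monkeys"),
    // all of which then have to be repeated for every supported language.
    let directions = Intent(
        templates: [
            "i want to go from $source to $destination",
            "directions from $source to $destination",
            "how do i get from $source to $destination",
        ],
        requiredSlots: ["source", "destination"],
        fulfill: { slots in
            "Routing from \(slots["source"]!) to \(slots["destination"]!)"
        }
    )
    // If the user's phrasing doesn't match a template, or a slot stays empty,
    // the assistant has nothing to fall back on - hence "I didn't get that."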
I want to know why the executive leadership at Apple failed to see LLMs as the future of AI. ChatGPT and Gemini are what Siri should be at this point. Siri was one of the leading voice-automated assistants of the past decade, and now Apple's only options are to strap an existing solution onto their product's name or let it go defunct. So now Siri is just an added layer to access Gemini? Perhaps with a few hard-coded solutions to automate specific tasks on the iPhone, and that's their killer app into the world of AI? That's pathetic.
Is Apple already such a bloated corporation that it can no longer innovate fast enough to keep up with modern trends? It seems like only a few years ago they were super lean and able to innovate better than any major tech company around. LLMs were being researched in 2017. I guess three years was too short of a window to change the direction of Siri. They should have seen the writing on the wall here.
I don’t know why, but in my much smaller-scale experience, converting from the intent-based approach to an LLM “tools”-based approach is much more reliable.
Siri was behind pre LLM because Apple didn’t throw enough monkeys at the problem.
Everything that an assistant can do is “hardcoded” even when it is LLM based.
Old way: voice -> text -> pattern matching -> APIs to back end functionality.
New Way: voice -> text -> LLM -> APIs to back end functionality.
How often have you come across a case where Siri understood something and said “I can’t do that”? That’s not an AI problem. That’s Apple not putting people on the intent -> API mapping. An LLM won’t solve the issue of exposing the APIs to Siri.
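And the "new way" in miniature: instead of hand-written phrase templates per language, you describe the API to the model once and let it extract the parameters from whatever the user said. All names below are invented for illustration.

    import Foundation

    struct ToolSpec: Codable {
        let name: String
        let description: String
        let parameters: [String: String]   // parameter name -> description
    }

    let directionsTool = ToolSpec(
        name: "get_directions",
        description: "Plan a route between two places",
        parameters: [
            "source": "Where the trip starts",
            "destination": "Where the trip ends",
        ]
    )

    // The serialized spec goes into the model's context; the model replies
    // with the tool name plus arguments, asking follow-up questions while a
    // required parameter is still missing - no per-language templates needed.
    let toolJSON = try! JSONEncoder().encode(directionsTool)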
I don't think Apple didn't have enough people working on Siri; I think they had too many people working on the wrong problems. If they had kept any eye on the industry, like they did in their heyday when Jobs was at the helm, they would've been all over LLMs the way Sam Altman was with his OpenAI startup. This report of Siri using Gemini going forward is one of the biggest signs that Apple is failing to innovate, not to mention the constant rehashing of the iPhone and iOS. They haven't been innovative in years.
And yes, that's the point I was trying to make: AI assistants shouldn't be hardcoded to do certain things - that's not AI - but with Apple's marketing, they'd have you believe that Siri is what AI should be. Except now everyone's wiser; everyone and their grandmother has used ChatGPT, which is really what Siri should have been. Changes to the iOS API should roll out and an LLM-backed AI assistant should be able to pick up on those changes automatically. Siri should be an LLM trained on Apple data, its APIs, your personal data (emails, documents, etc.), and a whole host of publicly available data. That would actually make Siri useful going into the future.
Again, if Apple's marketing team were to be believed, Siri would be the most advanced LLM on the planet, but from a technical standpoint, they haven't even started training an LLM at all. It's nonsense.
And ChatGPT can’t really “do anything” without access to tools.
You don’t want an LLM to have access to your total system without deterministic guardrails and limiting the permissions of what the tools can do just like you wouldn’t expose your entire database with admin privileges to the web.
You also don't want to expose too many tools to the system. For every tool you expose you also have to include a description of what the tool does, the parameters it needs, etc. It will both blow up your context window and make the model start hallucinating. I suspect that's why Alexa and Google Assistant got worse when they became LLM-based, and why my narrow use cases don't suffer those problems in the LLM-based solutions I started implementing.
And I am purposefully yada yada yadaing some of the technical complexities and I hate the entire “appeal to authority” thing. But I worked at AWS for 3.5 years until 2 years ago and I was at one point the second highest contributor to a popular open source “AWS Solution” that almost everyone in the niche had heard of dealing with voice automation. I really do know about this space.
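As a rough illustration of the "don't expose too many tools" point, you can filter the registry down before the model call rather than dumping every description into the context. This reuses the ToolSpec shape from the sketch above, and the keyword match is a naive stand-in for whatever real relevance filtering would look like:

    // Score each tool's description against the request and only pass the
    // top few specs to the model, keeping the context window small.
    func relevantTools(for request: String, from all: [ToolSpec], limit: Int = 5) -> [ToolSpec] {
        let words = Set(request.lowercased().split(separator: " ").map(String.init))
        let scored = all.map { spec -> (ToolSpec, Int) in
            let hits = spec.description.lowercased().split(separator: " ")
                .filter { words.contains(String($0)) }
                .count
            return (spec, hits)
        }
        return scored.sorted { $0.1 > $1.1 }.prefix(limit).map { $0.0 }
    }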
I understand that AI assistants need access to tools in order to do anything on a computer. I've been working with AI-augmented development for a few months now, and every time a prompt needs to run a tool it asks for permission first, or just straight up gives me the command to paste into a terminal.
Ideally this would be abstracted away if Siri were an LLM, with Apple controlling which APIs Siri has access to and bypassing user confirmation altogether.
It would have been neat if I were able to say, "Hey Siri, send a text to John Smith with some playfully angry prose thanking him for not inviting me to the party", which would have the LLM automatically craft the message and send it upon confirmation, perhaps with a "made with AI" disclaimer at the bottom of the text or something along those lines.
"Hey, Siri: What's the weather in Los Angeles, California" would fallback to a web api endpoint.
"Hey, Siri: How do I compile my C# application without Visual Studio" would provide step-by-step instructions on working with MSBUILD.
Different prompts would fall back on different APIs that only Apple would expose - obviously not allowing the user to gain root access to the system, which is what you would expect from Apple.
I guess from a purely technical standpoint, you'd train two models, one as "Safe" and the other as "Unsafe". "Safe" is what would be used by the end-user, allowing them to access safe data, apis, messaging, web.. you name it. "Unsafe" would be used internally at Apple and would have system-wide access, access to unlimited user data, root privileges, perhaps unsafe image generation and web search... basically no limit to what an LLM could achieve.
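The confirmation gate described a couple of comments up could be as boring as this sketch - the model drafts the message, but a plain deterministic check (not the model) decides whether anything actually gets sent. Hypothetical names throughout:

    enum AssistantAction {
        case sendMessage(to: String, body: String)   // side effect: needs confirmation
        case webLookup(query: String)                // read-only: safe to run directly
    }

    func execute(_ action: AssistantAction, userConfirmed: Bool) -> String {
        switch action {
        case .sendMessage(let to, let body):
            guard userConfirmed else {
                return "Draft for \(to): \"\(body)\" - send it?"
            }
            return "Sent to \(to)."
        case .webLookup(let query):
            return "Looking that up: \(query)"
        }
    }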
Most of the differentiation is happening on the application/agent layer. Like Coworker.
The rest of it is happening in post-training. Incremental changes.
We are not talking about EUV lithography here. There are no substantial moats built from years of pure and applied research protected by patents.
SOTA AI models can have different architectures, vastly different compute in training, different ways of inferencing, different input data, different RL, and different systems around the model. Not to mention the significant personal user data that OpenAI is collecting.
Saying SOTA AI models are like sugar is insane.
Compute is a moat.
I understand other things like image recognition, Wikipedia information, etc. require external data sets, and transferring local data over for that can be a privacy breach. But the local stuff should be easy, at least in one or two languages.
In the original announcement of the Siri revamp a couple of years ago, they specifically talked about having the on-device model handle everything it can, and only using the cloud models for the harder or more open-ended questions.
The better the basic NLP tasks like named entity recognition, PoS tagging, Dependency Parsing, Semantic Role Labelling, Event Extraction, Constituency parsing, Classification/Categorization, Question Answering, etc, are implemented by the model layer, the farther you can go on implementing meaningful use-cases in your agent.
Apple can now concentrate on making Siri a really useful and powerful agent.
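Worth noting that Apple already ships some of those basic NLP building blocks on-device; for example, the NaturalLanguage framework does named-entity recognition locally with something along these lines (illustrative usage - worth double-checking against the current API docs):

    import NaturalLanguage

    let text = "Play the new album by Taylor Swift and text Anna I'm running late"
    let tagger = NLTagger(tagSchemes: [.nameType])
    tagger.string = text

    // Walk the words and pull out people, places, and organizations on-device.
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .nameType,
                         options: [.omitWhitespace, .omitPunctuation, .joinNames]) { tag, range in
        if let tag = tag, [NLTag.personalName, .placeName, .organizationName].contains(tag) {
            print("\(text[range]) -> \(tag.rawValue)")
        }
        return true
    }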
https://daringfireball.net/linked/2026/01/12/apple-google-fo...
"These models will help power future Apple Intelligence features, including a more personalized Siri coming this year."
"Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards."
Source: https://blog.google/company-news/inside-google/company-annou...
Beyond Siri, the Apple Foundation Models are available as an API; will Google's technologies thus also be available as an API? Will Apple reduce its own investment in building out the Foundation Models?
EDIT: Here you go, it's very barebones, but it's better than nothing (I hope): https://github.com/cockbrand/siri-gemini-shortcut
As you'd expect, the code is vibed by Gemini :)
In September, a judge ruled against a worst-case scenario outcome that could have forced Google to divest its Chrome browser business.
The decision also allowed Google to continue to make deals such as the one with Apple.
How much is Google paying Apple now?
If these anti-competitive agreements^1 were public,^2 headlines could be something like,
(A) "Apple agrees to use Google's Gemini for AI-powered Siri for $[payment amount]"
Instead, headlines are something like,
(B) "Apple picks Google's Gemini to run AI-powered Siri"
1. In other words, they are exclusive and have anticompetitive effects
2. Neither CNBC nor I are suggesting that there is any requirement for the parties to make these agreements public. I am presenting a hypothetical relating to headlines, (A) versus (B), as indicated by the words "If" and "could".
https://www.bloomberg.com/news/articles/2025-12-01/openai-ta...
Google pays 20 billion to Apple annually for search traffic
Apple allegedly pays Google about 1 billion per year for Gemini
Perhaps Gemini sends more search traffic to Google
The search traffic and data collection is worth far more than Gemini
Sounds like Apple Foundation Models aren't exactly foundational.
I am certain Apple will do just fine in the AI revolution, in large part because such a massive distribution and brand advantage is extremely hard to overcome.
But why on earth would they do that? It's both cheaper and safer to buy Google's model, with whom they already have a longstanding relationship. Examples include the search engine deal, and using Google Cloud infrastructure for iCloud and other services. Their new "private cloud compute" already runs on GCP too, perfect! Buying Gemini just makes sense, for now. Wait a few years until the technology becomes more mature/stable and then replace it with their own for a reasonable price.
Because Apple Silicon is so good for LLM inferencing, I hope they also do a deal for small on-device Gemma models.
The idiomatic "British" way of doing this ...
Alternatively, for an Imperial-style approach, ...
As a professional software engineer you really should ...
in response to programming/Linux/etc. questions! (Because I just have a short blurb about my educational background, career, and geography in there, which with every other model I've tried works great to ensure British spelling, UK information, metric units, and cutting the cruft because I know how to mkdir etc.)
It's given me a good laugh a few times, but just about getting old now.
It would take US antitrust approval, but under Trump, that's for sale.
Might sound crazy but remember they did exactly this for web search. And Maps as well for many years.
This way they go from having to build and maintain Siri (which has negative brand value at this point) and pay Google's huge inference bills to actually charging Google for the privilege.
Apple plainly doesn't believe in the uplift and impending AGI doom. Nor do they believe there's no value in AI services. They just think for NOW at least they can buy in better than they can own.
But based on Apple's long-term VLSI vision, and on their behaviour in times past with IPR in any space, they will ultimately take ownership.
How? People have been saying this since CoreML dropped nine years ago. Apple is no closer to revamping Siri or rebuking CUDA than they were back then.
Apple Private Relay runs on Cloudflare and Fastly, and I believe one other major provider. They certainly can and do run services for a long time with partners.
They certainly can. iCloud has run on Azure and AWS for well over a decade.
They have the time and the money and the customers, so I'm confident they will accomplish great things.
There are many times I want to type the same word that is already on the app screen but it autocorrects me to something completely different.
The current system suggests words I have never used, will never use and have never heard before instead of the obvious choice.
But I saw something else in that statement. Is there going to be some quantized version of Gemini tailored to run on-device on an M4? If so, that would catapult Apple into an entirely new category merging consumer hardware with frontier models.
On the one hand, they apparently want to be a service provider Microsoft-style. They are just signing a partnership with their biggest competitor and giving them access to their main competitive advantage, the most advanced AI available.
On the other hand, they want to be another Apple. They are locking down their phone, competing with the manufacturers of the best Android phones, and limiting how software can be distributed on their system - the very things that were their main differentiator.
It doesn't make sense. It's also a giant middle finger to the people who bought the Pixel for Gemini. Congrats, you were beta testers for iPhone users who won't have to share their data with Google for training Gemini. I have rarely seen a company be as disrespectful to its customers.
Any details on privacy and data sharing surfaced yet?
https://blog.google/company-news/inside-google/company-annou...
Amazon/AWS was trying hard to push its partnership with Apple once that was revealed, including vague references to doing AI things, but AWS is just way too far behind at this point, so it looks like they lost out here to Google/GCP.
I didn't realize that Apple could possibly be more stupid in their strategy with AI, but now they've given the game to their biggest competitor in every arena in which they compete.
It's truly amazing how badly they've flubbed it.
Oh, well. What could have been great.
Apple explicitly acknowledged that they were using OpenAI’s GPT models before this, and now they’re quite easily switching to Google’s Gemini
Surely research money is not the problem. Can't be lack of competence either, I think.
First, they touted features that no one actually built and then fired their AI figurehead “leader” who had no coherent execution plan—also, there appears to have been territorial squabbling going on, about who would build what.
How on earth did Apple Senior Management allow this to unravel? Too much focus on Services, yet ignoring their absolute failures with Siri and the bullshit that was Apple Intelligence, when AI spending is in the trillions?
Apple is competent at timing when to step into a market and I would guess they are waiting for AI to evolve beyond being considered untrustworthy slop.
Presumably cutting Google out of getting the data from this is part of why this story was first mentioned last year but only now sounds close to happening. I think it's the same story/project.
Maybe I'm weird but mobile assistants have never been useful for me. I tried Siri a couple of times and it didn't work. I haven't tried it since because even if it worked perfectly I'm not sure I'd have any use for it.
I see it more like the Vision Pro. Doesn't matter how good the product ends up being, I just don't think it's something most people are going to have a use for.
As far as I'm concerned no one has proved the utility of these mobile assistants yet.
But it's a whole lot easier to switch from Gemini to Claude or Gemini to a hypothetical good proprietary LLM if it's white label instead of "iOS with Gemini"
Depends on where you are. In my experience here in Sweden, Google Maps is still better; Apple Maps sent us for a loop in Stockholm (literally {{{(>_<)}}} )
Google wanted to shove ads into it. Apple refused and had to switch.
Their hand was forced by that refusal.
Apple announced last year that they are putting their own ads in Maps, so if that was the real problem, the corporate leadership has done a complete 180 on user experience.
Apple is a very VERY different company than they were back then.
Back then they didn’t have all sorts of services that they advertised to you constantly. They didn’t have search ads in the App Store. They weren’t trying to squeeze every penny out of every customer all the time no matter how annoying.
It’s running away from them fast.
It'll absolutely be interesting to see if "Google" or "Gemini" appear anywhere in the new Siri UI.
Yes, Apple is acknowledging that Google's Gemini will be powering Siri and that is a big deal, but are they going to be acknowledging it in the product or is this just an acknowledgment to investors?
Apple doesn't hide where many of their components come from, but that doesn't mean that those brands are credited in the product. There's no "fab by TSMC" or "camera sensors by Sony" or "display by Samsung" on an iPhone box.
It's possible that Apple will credit Gemini within the UI, but that isn't contained in the article or video. If Apple uses a Gemini-based model anonymously, it would be easy to switch away from it in the future - just as Apple had used both Samsung and TSMC fabs, or how Apple has used both Samsung and Japan Display. Heck, we know that Apple has bought cloud services from AWS and Google, but we don't have "iCloud by AWS and GCP."
Yes, this is a more public announcement than Apple's display and camera part suppliers, but those aren't really hidden. Apple's dealings with Qualcomm have been extremely public. Apple's use of TSMC is extremely public. To me, this is Apple saying "hey CNBC/investors, we've settled on using Gemini to get next-gen Siri happening so you all can feel safe that we aren't rudderless on next-gen Siri."
Apple's brand is so dominant that even if they say Siri is "powered by Google", most users will still perceive it as an Apple service. The only way that changes is if Apple consistently and prominently surfaces the Google name on Siri — which seems unlikely (but who knows when the stakes are so high).
They will do everything possible to avoid that, and so a re-brand is the only likely outcome.
If they do refer to it as "Gemini" then this is a huge win for Google and a huge loss for OpenAI, since it really seems the "ChatGPT" brand is the only real "moat" OpenAI has. Although recently there has been about a 20% shift in traffic from ChatGPT to Gemini, so the moat already seems to be running dry.
Google AI can make mistakes
So, yes, practically speaking, the Apple to Google payment offsets a tiny fraction of the Google to Apple payment, but real money will change hands for each and very likely separately.