> Testers have also reported accuracy issues, as well as a bug that causes Siri to cut users off when they’re speaking too quickly. And there are problems handling complex queries that require longer processing times.
> Another challenge: The new Siri sometimes falls back on its existing integration with OpenAI’s ChatGPT instead of using Apple’s own technology. That can happen even when Siri should be capable of handling the request.
Honestly seems crazy that Apple hasn't seen a class action suit. They were promising features and selling expensive hardware on those promises back in 2024 and they have yet to deliver anything nearly two years later. Huge black eye for a company that has built a reputation for doing demos and then immediately having the product ready to ship.
[1] https://www.bloomberg.com/news/articles/2026-02-11/apple-s-i...
https://clarksonlawfirm.com/lp/apple-intelligence-false-adve...
Then again it's very possible they are just flailing. I don't have strong beliefs either way.
Apple won't ever allow anyone else to own and control their core technologies (as we saw with Google Maps back in the day). We can safely assume their deal with Gemini is for Google to sell them a model-as-a-service component that can be swapped out at any moment, with either another model like Claude, or with Apple's own future models.
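If that is the plan, the shape of it is familiar: a thin protocol boundary with interchangeable backends. Here's a minimal sketch of what that could look like, with entirely made-up names (nothing below is an actual Apple or Google API):

```swift
// Hypothetical names throughout -- not an actual Apple API, just the pattern
// described above: the assistant talks to a protocol, and the concrete
// backend is an implementation detail that can be replaced.
protocol AssistantModel {
    func respond(to prompt: String) async throws -> String
}

struct GeminiBackend: AssistantModel {
    func respond(to prompt: String) async throws -> String {
        // would call Google's hosted model here
        "placeholder Gemini response"
    }
}

struct InHouseBackend: AssistantModel {
    func respond(to prompt: String) async throws -> String {
        // would call Apple's own future model here
        "placeholder on-device response"
    }
}

// Swapping providers is then a one-line change, not a re-architecture.
let model: AssistantModel = GeminiBackend()
```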
Of course the only people who make anything from class action lawsuits (and there were some) are the lawyers.
> “The right info, right when you need it.” That’s how Google describes Magic Cue, one of the most prominent new AI features on the Pixel 10 series. Using the power of artificial intelligence, Magic Cue is supposed to automatically suggest helpful info in phone calls, text messages, and other apps without you having to lift a finger.
> However, the keyword there is “supposed” to... even when going out of my way to prompt Magic Cue, it either doesn’t work or does so little that I’m amazed Google made as big a deal about the feature as it did.
https://www.androidauthority.com/google-pixel-10-magic-cue-o...
How is shipping a broken feature better for users than admitting that the feature needs more work?
> But it’s been a complex undertaking. The revamped Siri is built on an entirely new architecture dubbed Linwood. Its software will rely on the company’s large language model platform — known as Apple Foundation Models — which is now incorporating technology from Alphabet Inc.’s Google Gemini team.
What does surprise me is that Google Home is still so bad. They rolled out the new Gemini-based version, but if anything it's even worse than the old one. Same capabilities but more long-winded talking about them. It is still unable to answer basic questions like "what timer did you just cancel".
Apple got into the smartphone game at the right time with a lot of new ideas. But whatever the next big shift in technology is, they will be left behind. I don’t know if that is AI, but it’s clear that in AI they are already far behind other companies.
Instead they choose to optimize for shareholder value.
That said, I don't really use this functionality all that often, because it didn't really (effortlessly) solve a big need for me. Apple sitting out LLMs means they didn't build this competency along the way, even now that the technology is proven.
I think the same thing is true of VR - except Apple did invest heavily in it and bring a product to market. Maybe we won't see anything big for a while, and Silicon Valley becomes the next Detroit.
It'll be 15 years this October and I still can't use Siri with my language.
While driving past a restaurant, I wanted to know if they were open for lunch and if they had gluten-free items on their menu.
This seemed like exactly the sort of thing the "new" Siri should be able to check for me while driving, so I gave it a shot.
"I did some web searches for you but I can't read it out to you while driving."
Then what on earth is its purpose if not that!? THAT! That is what it's for! It's meant to be a voice assistant, not a laptop with a web browser!
I checked while stopped, and it literally just googled "restaurant gluten free menu" and... that's it. Nothing specific about my location. That's nuts.
Think about what data and access the phone has:
1. It knows I'm driving -- it is literally plugged into the car's Apple CarPlay port.
2. It knows where I am because it is doing the navigating.
3. It can look at the map and see the restaurant and access its metadata such as its online menu.
4. Any modern LLM can read the text of the web page and summarize it given a prompt like "does this have GF items?"
5. Text-to-speech has been a thing for a decade now.
How hard can this be? Siri seems to have 10x more developer effort sunk into refusing to do the things it can already do instead of... I don't know... just doing the thing.
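For what it's worth, here is roughly what that pipeline looks like wired up by hand in Swift. A sketch only: a naive keyword check stands in for the LLM step, the menu URL is hypothetical, and the system speech synthesizer handles the read-aloud part. None of this is Siri's actual plumbing.

```swift
import Foundation
import AVFoundation

// Stand-in for the LLM step: in reality you'd send the page text plus the
// question to a model; a naive keyword check keeps the sketch runnable.
func summarize(pageText: String, question: String) -> String {
    let lower = pageText.lowercased()
    return lower.contains("gluten-free") || lower.contains("gluten free")
        ? "Yes, the menu mentions gluten-free options."
        : "I couldn't find gluten-free items on the menu."
}

// Kept at file scope so speech isn't cut off when the function returns.
let speaker = AVSpeechSynthesizer()

// The whole flow: fetch the menu page the map entry already links to,
// ask the "model" about it, and read the answer aloud.
func answerWhileDriving(menuURL: URL, question: String) async throws {
    let (data, _) = try await URLSession.shared.data(from: menuURL)  // step 3: the restaurant's menu page
    let text = String(decoding: data, as: UTF8.self)
    let answer = summarize(pageText: text, question: question)       // step 4: "LLM" summary
    speaker.speak(AVSpeechUtterance(string: answer))                  // step 5: text-to-speech
}

// Example call with a hypothetical URL:
// try await answerWhileDriving(menuURL: URL(string: "https://example.com/menu")!,
//                              question: "Does this place have gluten-free items?")
```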