Why is Apple's hardware being in demand, for a use that undermines its non-Chinese competition, a sign of missing the boat rather than validation of waiting and seeing?
This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.
Are people's agents actually clicking buttons (visual computer use) or is this just a metaphor?
I'm not asking whether computer use exists, but whether it is literally the driver of people's workflows. I thought everyone was just running Ralph loops in CC.
For an article making such a bold technological/social claim about a trillion dollar company, this seems a strange thing to be hand wavey about.
And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.
Right now we are still in very, very, very early days.
The first-mover effect seems relevant only when government grants are involved: think radio licenses, medical patents, etc. Everywhere else, being a first mover doesn't correlate with success the way it should.
See social media, Bitcoin, the iOS App Store, Blu-ray, Xbox Live, and I'm sure more I can't think of right now.
There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.
One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.
Some examples:
- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps
- Android has no platform-wide equivalent to AirTags
- Hardware-backed key storage (Secure Enclave was about 5 years ahead of StrongBox; see the sketch after this list)
- System-wide screen recording
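To make the Secure Enclave item concrete, here is a minimal Swift sketch of hardware-backed key generation. The function name and tag are illustrative, but SecKeyCreateRandomKey with kSecAttrTokenIDSecureEnclave is the real Security framework API; the private key it creates never leaves the enclave.

```swift
import Foundation
import Security

// Minimal sketch: generate a P-256 key whose private half lives in the
// Secure Enclave. Error handling is abbreviated for illustration.
func makeEnclaveKey(tag: String) -> SecKey? {
    guard let access = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        .privateKeyUsage,
        nil
    ) else { return nil }

    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
        kSecAttrKeySizeInBits as String: 256,
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: Data(tag.utf8),
            kSecAttrAccessControl as String: access,
        ] as [String: Any],
    ]

    var error: Unmanaged<CFError>?
    return SecKeyCreateRandomKey(attributes as CFDictionary, &error)
}
```

Android's StrongBox equivalent (setIsStrongBoxBacked(true) on a KeyGenParameterSpec) arrived years later, which is the gap being pointed at.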
While this was true about ten years ago, we haven't seen this model of software development succeed at Apple in recent years. I'm not at all confident that the Apple that gave us macOS 26 is still capable of it.
The software has been where most of the complaints have been in recent years.
That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock out competitors, a la the App Store, "because everyone does it"?
The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.
(Yes, Android users are discriminated against in the dating market; tons of op-eds have been written about this, so just google it before you knee-jerk downvote the truth.)
For example: https://x.com/michael_chomsky/status/2017686846910959668.
Except this doesn't stand up to scrutiny when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
But this is way too worshipful of Apple.
These kinds of risks can only be meaningfully _consented to_ (never mind borne) by technical people who correctly understand them; yet if this shipped, there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...
Ten years from now, there will be no 'agent layer'. This is like claiming Microsoft failed to capitalize on bulletin boards, the clunky precursor to social media.
The current ‘agent’ ecosystem is just hacks on top of hacks.
Of course AI will keep improving and more automation is a given.
It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.
The author spoke of compounding moats, yet Apple's market share, highly performant custom silicon, and capital reserves flew right over his head. HN has better articles for discussing AI than this myopic hot take.
Saved you a click. This is the premise of the article.
So yeah, the market isn’t really signaling companies to make nice things.
And the very next line (because I want to emphasize it):
> That trust—built over decades—was their moat.
This just ignores the history of OS development at Apple. The entire trajectory is moving toward permissions and sandboxing, even when it annoys users to no end. Giving root access to an LLM (any LLM, not just a trusted one, according to the author) while it is susceptible to hallucinations, jailbreaks, etc. goes against everything Apple has worked for.
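For a concrete sense of that trajectory: on macOS, even a screen-clicking agent cannot synthesize a single input event until the user grants it Accessibility access through a TCC prompt. A minimal sketch (the coordinates are arbitrary; AXIsProcessTrustedWithOptions and CGEvent are the real APIs):

```swift
import Foundation
import ApplicationServices

// Minimal sketch: macOS gates synthetic input behind TCC. Without
// Accessibility access, posted events are simply ignored.
let prompt = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
guard AXIsProcessTrustedWithOptions(prompt) else {
    print("Blocked: grant Accessibility access in System Settings first.")
    exit(1)
}

// Synthesize a left click at an arbitrary screen position.
let point = CGPoint(x: 100, y: 100)
for type in [CGEventType.leftMouseDown, .leftMouseUp] {
    CGEvent(mouseEventSource: nil, mouseType: type,
            mouseCursorPosition: point, mouseButton: .left)?
        .post(tap: .cghidEventTap)
}
```

An LLM with root access would bypass this consent model entirely.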
And even then the reasoning is circular: "So you've built all this trust; now go ahead and destroy it on this thing that works and feels good to me, but could occasionally fuck up in a massive way."
Not defending Apple, but it's hard to overstate how far detached from reality this article is.
It's obviously broken, so no, Apple Intelligence should not have been this.
I'm sure Apple et al. will eventually have stuff like OpenClaw, but expecting a major company to put out something so unpolished, and with such major unknowns, is just asinine.
Steve Jobs
I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.
My bet is that Google gets to an actually useful AI assistant before Apple: we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren't overly concerned about user privacy or security.
> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to
And
> Here’s what people miss about moats: they compound
Swapping an OpenAI for an Anthropic or an open-weight model is the opposite of compounding. It is a race to the bottom.
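That interchangeability is easy to see in code. A hypothetical sketch (none of these types are a real SDK): once every provider hides behind a one-method interface, the model is a swappable commodity part, not a moat.

```swift
import Foundation

// Hypothetical sketch, not a real SDK: when every provider fits behind a
// one-method interface, swapping vendors is a one-line change.
protocol ChatModel {
    func complete(_ prompt: String) async throws -> String
}

// Stubs standing in for a hosted backend and an open-weight one.
struct HostedModel: ChatModel {
    let vendor: String
    func complete(_ prompt: String) async throws -> String {
        "[\(vendor)] response to: \(prompt)"
    }
}

struct LocalModel: ChatModel {
    func complete(_ prompt: String) async throws -> String {
        "[local open-weight model] response to: \(prompt)"
    }
}

// The agent neither knows nor cares which backend it got.
func runAgent(on model: any ChatModel) async throws {
    print(try await model.complete("Summarize my unread email."))
}
```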
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
From what I hear, OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.
> They could have charged $500 more per device and people would have paid it.
I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.
I don't pretend to know the future (nor do I believe anyone else who claims to), but I think the opposite has a good chance of happening too: the hype over "AI" dies down, the bubble bursts, and the current overvaluation (in my opinion, at least; I still think it's useful as a tool, just overhyped by many who don't understand it) gets corrected by the market. People will look back and see this as the moment Apple dodged a bullet (or, more realistically, won't think about it at all).
I know you can't directly compare different situations, but I wonder if comparisons can be made with the dot-com bubble. There was similar hype some 20-30 years ago, with claims of being just a year or two away from "watching TV over the internet" or "doing your shopping on the web" or "having real-time video calls online", which did eventually come true, but only much, much later, after a crash from inflated expectations and a slower, steadier period of growth.*
* Not that I think some claims about "AI" will ever come true, especially the more outlandish ones, such as full-length movies made from a prompt with the same quality as a Hollywood director's.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, even _worse_ prices for computer hardware than we have now, politics, a major international incident, the environmental impact becoming more apparent, companies starting to monetize their "AI" more aggressively, consumers realising the limits of "AI"; I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed has already happened, and nothing changed.
They don't say: here's a $1,000 iPhone, and there's a 60% chance you can successfully message or call a friend.
The other 40%? Well, AGI is right around the corner, and can the US government please give me a trillion-dollar loan and a bailout?