There is no point creating such a document if the elephant in the room is not addressed.
You might as well enumerate all the viruses ever made for Windows, point to them, and then ask why Microsoft's documents don't prove they've all been shut down.
Microsoft does not sell Windows as a sealed, uncompromisable appliance. It assumes a hostile environment, acknowledges malware exists, and provides users and third parties with inspection, detection, and remediation tools. Compromise is part of the model.
Apple’s model is the opposite. iOS is explicitly marketed as secure because it forbids inspection, sideloading, and user control. The promise is not “we reduce risk”, it’s “this class of risk is structurally eliminated”. That makes omissions meaningful.
So when a document titled Apple Platform Security avoids acknowledging Pegasus-class attacks at all, it isn’t comparable to Microsoft not listing every Windows virus. These are not hypothetical threats. They are documented, deployed, and explicitly designed to bypass the very mechanisms Apple presents as definitive.
If Apple believes this class of attack is no longer viable, that’s worth stating. If it remains viable, that also matters, because users have no independent way to assess compromise. A vague notification that Apple “suspects” something, with no tooling or verification path, is not equivalent to a transparent security model.
The issue is not that Apple failed to enumerate exploits. It's that the platform's credibility rests on an absolute security narrative while quietly excluding the one threat model that contradicts it. In other words, Apple's model is good old security by obscurity.
> Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature. When Lockdown Mode is enabled, your device won’t function like it typically does. To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.
> In this table, in the "iCloud Backup (including device and Messages backup)" row, under "Standard data protection",
> the "Encryption" column reads "In transit & on server". Yes, this means that Apple can read all of your messages
> out of your iCloud backups.
In addition to the things you mentioned, there's certainly a possibility of Apple attaching a virtual "shadow" device to someone's Apple ID with something like a hide_from_customer type flag, so it would be invisible to the customer. This shadow device would have its own keys to read messages sent to your iCloud account. To my knowledge, there's nothing in the security model to prevent this.
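To make the concern concrete, here's a toy sketch of per-device fan-out encryption (the names and directory shape are made up; this is not Apple's actual IDS protocol). If the key directory served to senders can silently include an extra entry, the holder of that key reads along like any legitimate device:

```python
# Toy model of per-device message fan-out (illustrative only).
# Assumes PyNaCl (pip install pynacl); not Apple's real protocol.
from nacl.public import PrivateKey, SealedBox

alice_phone = PrivateKey.generate()
shadow = PrivateKey.generate()  # the hypothetical hidden "device"

# The key directory as the sender sees it; nothing marks the extra entry.
directory = [alice_phone.public_key, shadow.public_key]

# The sender encrypts one copy of the message per listed device key.
ciphertexts = [SealedBox(pk).encrypt(b"see you at 6") for pk in directory]

# The shadow key decrypts its copy exactly like a legitimate device would.
assert SealedBox(shadow).decrypt(ciphertexts[1]) == b"see you at 6"
```

Nothing in the sender's view distinguishes the second entry, which is why transparency of the key directory matters more than the strength of the cipher.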
However, iCloud backups actually are listed as "End-to-end" if you turn on the new Advanced Data Protection feature.
> the main reason a message wouldn't be properly end-to-end encrypted in Google's Messages app is when communicating with an iPhone user, because Apple has dragged their feet on implementing RCS features in iMessage
(or with any other android user who isn't using a first-party device / isn't using this one app)
> [...] Android's equivalent cloud backup service has been properly end-to-end encrypted by default for many years. Meaning that you don't need to convince the whole world to turn on an optional feature before your backups can be fully protected.
You make it out to seem that it's impossible for Google to read your cloud backups, but the article you link to [0] earlier in your post says that "this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor" (emphasis added). So they have your encrypted cloud backup, and the only way to get the key material to decrypt it is from an HSM in their datacenter, whose hardware and access they fully control... sounds like it's not really any better than Apple, from what I'm reading here. Granted, that article is from 2018 and I certainly have not been keeping up on Android things.
[0] https://security.googleblog.com/2018/10/google-and-android-h...
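For what it's worth, the scheme that article describes is roughly passcode-protected key escrow with an HSM-enforced attempt limit. A minimal sketch under that reading (parameter choices and names are mine, not Google's):

```python
# Toy model of passcode-protected key escrow with an attempt-limited HSM.
# Illustrative only; assumes the `cryptography` package is installed.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_backup_key(passcode: bytes, backup_key: bytes):
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", passcode, salt, 310_000)
    nonce = os.urandom(12)
    return salt, nonce, AESGCM(kek).encrypt(nonce, backup_key, None)

class ToyHSM:
    """Stands in for the Titan chip: rate-limits unwrap attempts."""
    MAX_ATTEMPTS = 10

    def __init__(self):
        self.attempts = 0

    def unwrap(self, passcode, salt, nonce, wrapped):
        if self.attempts >= self.MAX_ATTEMPTS:
            raise PermissionError("attempt limit reached; key material gone")
        self.attempts += 1
        kek = hashlib.pbkdf2_hmac("sha256", passcode, salt, 310_000)
        # AESGCM raises InvalidTag if the passcode (hence kek) is wrong.
        return AESGCM(kek).decrypt(nonce, wrapped, None)

backup_key = os.urandom(32)
salt, nonce, wrapped = wrap_backup_key(b"1234", backup_key)
assert ToyHSM().unwrap(b"1234", salt, nonce, wrapped) == backup_key
```

The whole argument rests on the HSM honestly enforcing the attempt limit and never leaking the derived key, hardware that sits on Google's floor, which is exactly the trust question raised above.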
Giving users an option between both paths is usually best. Most users care a lot more that they can’t restore a usable backup of their messages than they do that their messages are unreadable by the company storing them.
I used to work at a company where our products were built around encryption. Users here on HN are not the norm. You can’t trust that most users will save recovery codes, encryption seed phrases, etc in a manner that will be both available and usable when they need them, and then they tend to care a lot less about the privacy properties that provides and a lot more that they no longer have their messages with {deceased spouse, best friend, business partner, etc}.
If you want to see security done well (or at least better), see the GrapheneOS project.
The developers also appear to believe that the apps have a right to inspect the trustworthiness of the user's device, by offering to support apps that would trust their keys [1], locking out users who maintain their freedom by building their own forks.
It's disheartening that a lot of security-minded people seem to be fixated on the "AOSP security model", without realizing or ignoring the fact that a lot of that security is aimed at protecting the apps from the users, not the other way around. App sandboxing is great, but I should still be able to see the app data, even if via an inconvenient method such as the adb shell.
1. https://grapheneos.org/articles/attestation-compatibility-gu...
But if you wish to build it from source, it could probably be a good option.
I don't currently have any root on the phone, but I reserve the right to add it or run the userdebug build at a later date
That is not a bad thing. The alternative is not having apps that do these checks available on the platform at all. It’s ridiculous to expect that every fork should have that capability (because the average developer is not going to accept the keys of someone’s one-off fork).
If there’s anyone to blame, it should be the app developers choosing to do that (benefits of attestation aside).
Attestation is also a security feature, which is one of the points of GOS. People are free to use any other distribution of Android if they take issue with it.
Obviously I could be wrong here, this is just the general sentiment that I get from reading GOS documentation and its developer’s comments.
I don't actually disagree with this. Auditor is a perfectly valid use of it. It's good to be able to cryptographically verify that your device is running what it's supposed to.
The problem is when it transcends ownership boundaries and becomes a mechanism to exert control over things someone doesn't own, like your bank or government controlling your phone. It is one of the biggest threats to ownership worldwide.
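Concretely, the benign use is roughly certificate-chain checking plus reading a hardware-signed statement about the device's boot state. A stripped-down sketch of the verifier side, assuming the `cryptography` package; a real verifier like Auditor also validates every signature up to a pinned hardware root and decodes the ASN.1 payload:

```python
# Minimal sketch of the verifier side of Android hardware key attestation.
# Illustrative only: signature validation against a pinned root is omitted.
from cryptography import x509
from cryptography.x509.oid import ObjectIdentifier

# OID of the Android key attestation extension in the leaf certificate.
KEY_ATTESTATION_OID = ObjectIdentifier("1.3.6.1.4.1.11129.2.1.17")

def inspect_attestation(der_chain: list[bytes]) -> bytes:
    chain = [x509.load_der_x509_certificate(der) for der in der_chain]
    leaf = chain[0]
    # Returns the DER-encoded KeyDescription structure; a real verifier
    # parses it for the verified-boot key and state before trusting it.
    ext = leaf.extensions.get_extension_for_oid(KEY_ATTESTATION_OID)
    return ext.value.value
```

Used peer-to-peer (you verifying your own device), this primitive is empowering; pointed the other way, it becomes the control mechanism described above.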
Note also that getting "trusted" comes at the cost of other security features, such as spoofing your location securely to apps:
Ah, the apps^Wgovernment (look at that page, most of it is government IDs) should be able to discriminate against me for daring to assert control over my own device. And GrapheneOS is saying:
Hey government! We pinky promise to oppress the user just the same, but even more securely and competently than Google/Samsung!
> what does it matter to you
It shows that the developers maybe don't fully have your best interests at heart?
I would really like to see a benchmark with and without security measures.
Apple makes available, on a highly controlled basis, iPhones that permit the user to disable “virtually all” of the security features. They’re available only to vetted security researchers who apply for one, often under some kind of sponsorship, and they’re designed to obviously announce what they are. For example, they are engraved on the sides with “Confidential and Proprietary. Property of Apple”.
They’re loaned, not sold or given, remain Apple’s property, and are provided on a 12-month (optionally renewable) basis. You have to apply and be selected by Apple to receive one, and you have to agree to some (understandable but) onerous requirements laid out in a legal agreement.
I expect that if you were to interrogate these iPhones they would report that the CPU fuse state isn’t “Production” like the models that are sold.
They refer to these iPhones as Security Research Devices, or SRDs.
Apple has since confirmed in a statement provided to Ars that the US federal government “prohibited” the company “from sharing any information,” but now that Wyden has outed the feds, Apple has updated its transparency reporting and will “detail these kinds of requests” in a separate section on push notifications in its next report.

Americans are not one person.
> So I don’t think anyone cares
Clearly they do.
> every CEO (definitely not just Tim Cook) is schmoozing with Trump.
Tim Cook was (supposedly) principled. I guess it's hard to pretend that you care about privacy or human rights while eating dinner next to bin Salman.
In a lot of ways Apple is aligned to data privacy the same way other "platforms" are: they gatekeep user data behind their own ad service. It's better than selling your data, maybe, but you're still being tracked and monitored.
Hyperbole doesn’t help your point. They definitely care about security; their profits depend on it.
https://james.darpinian.com/blog/apple-imessage-encryption/
My current understanding of the facts:
1. Google defaults to encrypted backups of messages, as well as e2e encryption of messages.
2. Apple defaults only to e2ee of messages, leaving a massive backdoor.
3. Closing that backdoor is possible for the consumer, by enabling ADP (Advanced Data Protection) on your device. However, this makes no difference, since 99.9% of the people you communicate with will not close the backdoor. Thus, the only way to live is to assume that all the messages you send via iMessage will always be accessible to Apple, no matter what you do (see the sketch below).
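A toy model of point 3 (illustrative only; not Apple's storage logic): a message is exposed to the provider if any participant backs it up without end-to-end protection, regardless of your own settings:

```python
# Weakest-link model of chat backups (illustrative only).
def provider_can_read(participants, adp_enabled):
    # A message is exposed if ANY participant backs it up without ADP.
    return any(not adp_enabled[p] for p in participants)

chat = ["you", "friend_a", "friend_b"]
settings = {"you": True, "friend_a": False, "friend_b": True}
print(provider_can_read(chat, settings))  # True: friend_a's backup leaks it
```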
It's not like overall I think Google is better for privacy than Apple, but this choice by Apple is really at odds with their supposed emphasis on privacy.
We know now that it was all marketing talk. Apple didn’t like Meta, so they spun up a bunch of obstacles. Apple has and would use your data for ads, models, and anything that keeps the shareholders happy. And we don’t know the half of the story where, as a US corp, they’re technically obliged to share data from the not-E2EE iCloud syncs of every iPhone.
As was demonstrated in LA, it's starting to have significant civil rights consequences.
Would Google or Meta go bankrupt if they stopped selling ads? Yes. Apple wouldn’t.
Unbreakable phones are coming. We’ll have to decide who controls the cockpit: the captain? Or the cabin?

This apparently includes retrieving all photos from iCloud in chunks of specified size, which seems an infinitely better option than attempting to download them through the iCloud web interface, which caps downloads to 1000 photos at a time with less than impressive download speeds.