rasengan•2h ago
This is a great list of academic attacks, but it proves less than you might think.
Yes, TEEs have been broken in dozens of ways: side channels, transient execution, voltage manipulation, interrupt timing, and so on. To be fair, you could compile an equally impressive list for many security primitives.
The question isn't "can TEEs be broken?" (clearly they can), but rather: what is your threat model, and what are your alternatives?
What TEEs actually defend against is passive compromise: they force an attacker to actively exploit the system rather than simply read memory. That distinction matters enormously in practice, legally and operationally, because active exploitation leaves evidence and demonstrates intent in a way that quietly reading memory does not.
The alternative to a TEE is "no hardware isolation at all," and that's strictly worse for every threat model where TEEs provide value.
Additionally, you still get attestation, which gives you cryptographic proof of what code is running.
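A minimal sketch of what that proof looks like, assuming a toy report format and an Ed25519 stand-in for the vendor key (real SGX/SEV attestation uses certificate chains and far richer report structures):

    # Toy remote-attestation verification (hypothetical report layout).
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Device side (normally done by vendor silicon inside the TEE):
    device_key = Ed25519PrivateKey.generate()        # stand-in for the fused hardware key
    enclave_code = b"the binary loaded into the enclave"
    report = hashlib.sha256(enclave_code).digest()   # toy report = code measurement only
    signature = device_key.sign(report)

    # Verifier side:
    vendor_pubkey = device_key.public_key()          # in reality, from the vendor's PKI
    expected = hashlib.sha256(enclave_code).digest() # hash of the code you audited

    def verify_attestation(report: bytes, signature: bytes) -> bool:
        try:
            vendor_pubkey.verify(signature, report)  # 1. signed by the hardware root?
        except InvalidSignature:
            return False
        return report == expected                    # 2. running the code we expect?

    print(verify_attestation(report, signature))     # True

If either check fails, the verifier refuses to provision secrets to the enclave.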
libroot•1h ago
Sure, threat models matter, but that's exactly the point. TEEs are marketed as if they solve the "malicious infrastructure" problem: cloud providers tell you they can't see your data, and vendors pitch TEEs as a hardware-rooted guarantee. If your threat model is "malicious sysadmin" or "host operator", then the fact that the root of trust is opaque, unauditable, and repeatedly compromised does matter.
Saying "what's your alternative" also misses the criticism. The issue isn't whether TEEs can reduce some threats compared to no isolation at all. Obviously they can in some scenarios. The issue is that their trust model is misrepresented: you're still trusting vendors and firmware you can't inspect, and history shows that trust is often misplaced. That's not "no alternative", that's "don't build your security story on black boxes with a track record of holes."
If the only way TEEs "work" is if you lower your expectations to "slightly better than nothing," then the marketing and security claims around them are deeply misleading. At that point, calling them "trusted" environments is just branding, not security.
> Additionally, you still get attestation which gives you cryptographic proof of what code is running.
Remote attestation ultimately relies on the same implicit trust it claims to replace. For example, this paper[1] from 2019 showed how AMD's PSP secure boot can be compromised, letting an attacker load patched firmware that grants arbitrary read/write access to PSP memory. That in turn allows the attacker to extract the Chip Endorsement Key (CEK), AMD's attestation root key. Once you have the CEK, you can forge attestation reports (for example, impersonate a legitimate SEV platform) or bypass attestation entirely. And the CEK had an infinite lifetime (this changed in 2023) with no rollback protection, so even after AMD issued a firmware update, attackers could revert to the old vulnerable firmware and re-extract the CEK.
[1]: https://arxiv.org/pdf/1908.11680
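To make concrete why CEK extraction is fatal, here is a toy model (Ed25519 stand-in; AMD's real chain uses ECDSA and intermediate signing keys, but the logic is the same): once the attacker holds the root key, they can sign any measurement they like.

    # Forging an attestation report with an extracted root key (toy model).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    extracted_cek = Ed25519PrivateKey.generate()  # attacker now holds this key
    trusted_pubkey = extracted_cek.public_key()   # verifiers trust this public half

    # Attacker runs malicious code but reports the measurement of the
    # *audited* binary instead:
    honest_measurement = hashlib.sha256(b"audited enclave binary").digest()
    forged_signature = extracted_cek.sign(honest_measurement)

    # The verifier's checks pass (valid signature under the trusted key,
    # expected measurement), yet no genuine TEE was involved:
    trusted_pubkey.verify(forged_signature, honest_measurement)  # no exception
    print("forged report accepted")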
Edit:
And very recently, the Battering RAM[2] (Sep 2025) and WireTap[3] (Oct 2025) attacks broke remote attestation for both Intel SGX and AMD SEV-SNP.
[2]: https://batteringram.eu/
[3]: https://wiretap.fail/