I would love to see more focus on device manufacturers protecting the user instead of trying to protect themselves.
A prime example where the TPM could be fantastic: embedded devices that are centrally coordinated, such as networking equipment. Imagine if all UniFi devices performed a measured boot and attested to their PCR values before the controller would provision them. This could give a very strong degree of security, even on untrusted networks and even if devices have been previously connected and provisioned by someone else. (Yes, there's a window when you first connect a device during which someone else can provision it first.)
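A minimal sketch of what the controller-side check could look like, with Ed25519 standing in for the device's TPM attestation key and the helper names invented for illustration (a real quote would come from TPM2_Quote as a TPMS_ATTEST structure, but the trust logic is the same):

    import hashlib
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Known-good measurements the controller expects (illustrative values).
    GOOD_PCRS = [hashlib.sha256(b"firmware-v1.2").digest(),
                 hashlib.sha256(b"kernel-v6.6").digest()]
    EXPECTED_DIGEST = hashlib.sha256(b"".join(GOOD_PCRS)).digest()

    # Device side: the attestation key would live inside the TPM; the
    # controller trusts its enrolled public half.
    device_ak = Ed25519PrivateKey.generate()
    enrolled_ak_pub = device_ak.public_key()

    def device_quote(nonce: bytes) -> tuple[bytes, bytes]:
        """Device signs (nonce || digest of its current PCR values)."""
        pcr_digest = hashlib.sha256(b"".join(GOOD_PCRS)).digest()  # read from the TPM
        return pcr_digest, device_ak.sign(nonce + pcr_digest)

    def controller_verify(nonce: bytes, pcr_digest: bytes, sig: bytes) -> bool:
        if pcr_digest != EXPECTED_DIGEST:
            return False  # device booted something unexpected
        try:
            enrolled_ak_pub.verify(sig, nonce + pcr_digest)
        except InvalidSignature:
            return False  # quote not signed by the enrolled attestation key
        return True

    nonce = os.urandom(32)  # fresh nonce per provisioning attempt
    digest, sig = device_quote(nonce)
    assert controller_verify(nonce, digest, sig)

The controller only provisions a device whose quote is fresh, correctly signed, and matches the known-good measurements.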
But instead, companies seem to obsess over protecting their IP, even though there is almost no commercial harm to them when someone inevitably recovers the decrypted firmware image.
https://arxiv.org/abs/2304.14717
https://www.usenix.org/system/files/conference/usenixsecurit...
It’s not hard to protect an FDE key in a way that one must compromise both the TPM and the OS to recover it [0]. What is very awkward is protecting it such that a random user in the system who recovers the sealed secret (via a side channel or simply booting into a different OS and reading it) cannot ask the TPM to decrypt it. Or protecting one user’s TPM-wrapped SSH key from another user.
I have some kludgey ideas for how to do this, and maybe I’ll write them up some day.
[0] Seal a random secret to the TPM and wrap the actual key, in software, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
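A minimal sketch of that scheme, with hypothetical tpm_seal/tpm_unseal callables standing in for a PCR-bound TPM2_Create/TPM2_Unseal, and the software wrap done with AES-GCM from pyca/cryptography:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def protect_fde_key(fde_key: bytes, tpm_seal) -> tuple[bytes, bytes, bytes]:
        wrapping_key = os.urandom(32)         # random secret, never stored raw
        sealed_blob = tpm_seal(wrapping_key)  # recoverable only through the TPM
        nonce = os.urandom(12)
        wrapped = AESGCM(wrapping_key).encrypt(nonce, fde_key, None)  # software wrap
        return sealed_blob, nonce, wrapped

    def recover_fde_key(sealed_blob: bytes, nonce: bytes, wrapped: bytes,
                        tpm_unseal) -> bytes:
        wrapping_key = tpm_unseal(sealed_blob)  # needs the TPM *and* good PCRs
        return AESGCM(wrapping_key).decrypt(nonce, wrapped, None)

An attacker who compromises the TPM learns only the wrapping key and still needs the wrapped blob from the OS; an attacker who dumps the blob from another OS can't unseal the wrapping key without the right PCR state.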
The system could extend one of the PCRs, or an NVPCR, with some unique user credential locked to the user directory. Then you can't recreate the PCR records in any immediate way.
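For intuition, PCR extend is just one-way hash chaining, so a value that has a per-user credential folded in can't be recomputed for a different history without replaying the whole chain from reset (toy sketch):

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM semantics: new PCR value = H(old value || measurement)
        return hashlib.sha256(pcr + measurement).digest()

    pcr = b"\x00" * 32  # PCR value at reset
    pcr = pcr_extend(pcr, hashlib.sha256(b"bootloader").digest())
    pcr = pcr_extend(pcr, hashlib.sha256(b"user-alice-credential").digest())
    # There is no pcr_unextend(): substituting bob's credential produces a
    # different chain, hence a different PCR value and a failed unseal.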
But you can't just recreate a key under one of the hierarchies anyway. You still need to possess the keyfile.
Sure, but can the system context-switch that PCR between two different users?
Right, no it can't.
But this was not really something the TPM was supposed to solve.
1. Have some PCRs that are not in the TPM at all but instead have their values sent from the driver along with any command that references them. (A sketch of this follows the list.)
2. Have some policy commands that are aimed at the driver, not the TPM. The TPM will always approve them, but they contain a payload that will be read and either accepted or rejected by the driver.
3. Have a way to create a virtual TPM that is hosted by the real TPM and a way to generate attestations that attest to both the real TPM part (using the real TPM's attestation key hierarchy and whatever policy was needed to instantiate the virtual TPM) and to the virtual TPM's part of the attestation. And then give less-trusted code access only to the virtual TPM.
#3 would be very useful for VMs and containers and such, too.
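A rough sketch of what #1 could look like (hypothetical; nothing like this exists in TPM 2.0 today), with the driver keeping one PCR bank per user and supplying the right bank's values with each command:

    import hashlib

    class DriverPCRBank:
        """Per-user PCRs held by the driver, not the TPM."""
        def __init__(self, count: int = 4):
            self.pcrs = [b"\x00" * 32] * count

        def extend(self, index: int, measurement: bytes) -> None:
            self.pcrs[index] = hashlib.sha256(self.pcrs[index] + measurement).digest()

    banks: dict[str, DriverPCRBank] = {}  # one bank per user

    def extend_for_user(user: str, index: int, measurement: bytes) -> None:
        banks.setdefault(user, DriverPCRBank()).extend(index, measurement)

    def values_for_command(user: str) -> list[bytes]:
        # Values the driver would attach to a TPM command issued for `user`;
        # context-switching between users is just selecting a different bank.
        return banks.setdefault(user, DriverPCRBank()).pcrs

    extend_for_user("alice", 0, b"alice-login-credential")
    extend_for_user("bob", 0, b"bob-login-credential")
    assert values_for_command("alice") != values_for_command("bob")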
https://github.com/OP-TEE/optee_ftpm
Or do you mean a dedicated TPM?
In the context of trusted boot, not much. If your specific application doesn't require advanced TPM 2.0 features, like separate NVRAM and different locality levels, then it's not worth using a dedicated chip.
However, if you want something like PIN brute-force protection with a cooldown on a separate chip, a dTPM will do that. This is more or less exactly why Apple, Google, and other major players put their most sensitive operations on a separate chip: to prevent security bypasses when an attacker has gained code execution (or some kind of reset) on the application processor.
TPM and x86 trusted boot / root of trust are completely separate things, linked _only_ by the provision of measurements from the (_presumed_!) good firmware to the TPM.
x86 trusted boot relies on the same SoC manufacturer type stuff as in ARM land, starting with a fused public key hash; on AMD it's driven by the PSP (which is ARM!) and on Intel it's a mix of TXE and the ME.
This is a common mistake and very important to point out, because using a TPM alone on x86 doesn't prove anything; unless you _also_ have a root of trust, an attacker could just be feeding the "right" hashes to the TPM and you'd never know the difference.
You more or less can't do that on x86; you have to rely on existing proprietary firmware facilities to implement measured boot using the TPM (the only method available), and on top of that measured boot you can then implement trusted boot.
I won't say they're non-existent, but probably the only mention of not using UEFI on Intel chips was in a presentation from Intel itself on Linux optimization for automotive, where they booted Linux in two seconds from a cold boot.
Anyway, I think we're both on the same page that TPM and a hardware root of trust are not the same thing. In some configurations the TPM can (weakly) attest that the hardware root of trust is present, but it doesn't actually provide a hardware root of trust itself, and that part looks architecturally very similar on x86 to how it looks anywhere else (a mask ROM verifies a second bootloader against an RTL or manufacturing-fused chipmaker public key hash, the second bootloader measures subsequent material against an OEM-fused key hash, and so it goes).
If you don't need the TPM checkbox, most vendors have simple signing fuses that are a lot easier than going fTPM.
So yes, incorporating a separate secure element/TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.
TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and have access to their secrets.
That's only theory though, as the box could actually be "dirty" inside; for instance, it could leak secrets obtained from the TPM to mass storage via a swap partition (though I don't think those are common in embedded systems).
Essentially, TPM is a standardized API for implementing a few primitives over the state of PCRs. Fundamentally, TPM is just the ability to say "encrypt and store this blob in a way that it can only be recovered if all of these values were sent in the right order," or "sign this challenge with an attestation that can only be provided if these values match." You can use a TEE to implement a TPM and on most modern x86 systems (fTPM) this is how it is done anyway.
You don't really need an fTPM either, in some sense; one could use TEE primitives to write a trusted application that performs similar tasks. However, the TPM provides the API through which most early-boot systems (UEFI) record their measurements, so it's the easiest way to do system attestation on commodity hardware.
What a TPM provides is a chip with some root key material (seeds) that can be extended with external data (PCRs) in a way that is a black box; that black-box state can then be used to perform cryptographic operations. So essentially, it is useful only for sealing data to the PCR state or attesting that the state matches.
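That black box can be modeled in a few lines (a toy, not a real TPM): a secret seed that never leaves the chip, extend-only PCRs, and a sealing key derived from both, here using HMAC and AES-GCM from pyca/cryptography:

    import hashlib
    import hmac
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class ToyTPM:
        def __init__(self):
            self._seed = os.urandom(32)      # root key material; never leaves the chip
            self.pcrs = [b"\x00" * 32] * 8

        def extend(self, i: int, measurement: bytes) -> None:
            self.pcrs[i] = hashlib.sha256(self.pcrs[i] + measurement).digest()

        def _state_key(self, indices) -> bytes:
            # Key depends on the seed AND the current PCR values.
            state = b"".join(self.pcrs[i] for i in indices)
            return hmac.new(self._seed, state, hashlib.sha256).digest()

        def seal(self, data: bytes, indices) -> bytes:
            nonce = os.urandom(12)
            return nonce + AESGCM(self._state_key(indices)).encrypt(nonce, data, None)

        def unseal(self, blob: bytes, indices) -> bytes:
            # Raises unless the selected PCRs hold their seal-time values.
            return AESGCM(self._state_key(indices)).decrypt(blob[:12], blob[12:], None)

Extending any selected PCR with a different measurement changes the derived key, so unseal fails: that is the entire "sealing to PCR state" primitive.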
This becomes an issue once you realize what's sending the PCR values: firmware, which needs its own root of trust.
This takes you to Intel Boot Guard and AMD PSB/PSP, which implement traditional secure boot root of trust starting from a public key hash fused into the platform SoC. Without these systems, there's not really much point using a TPM, because an attacker could simply send the "correct" hashes for each PCR and reproduce the internal black-box TPM state for a "good" system.
Nothing prevents all the parties (the one you are attesting to and the central authority you use for indirection) from saving everything and cross-referencing it at any point in the future.
The same problem, and often a worse one, is present in DRM systems.
In the case of Widevine DRM you are actually leaking a static HWID to every license server, no collusion required. This is because there is no indirection involved: you give the license server the public key of the private key fused into the secure enclave for this purpose. The only safeguard is that every license server needs a certificate from Google to function (the secure enclave will refuse to form a request against an invalid cert).
There are a lot of license servers.
As a side note, this is how they impose a cost on pirates. They employ forensic watermarks in the content streamed to subscribers: at the CDN level they can do this cheaply using A/B watermarking, at the cost of storing double the size of every file. When that content shows up in p2p piracy, they trace it to the account and the device's DRM public key, revoke its ability to view content (at the license-server level), and ban the account.
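To illustrate the A/B scheme described above (a toy, not any vendor's actual implementation): each segment exists in variants A and B, each subscriber is assigned a unique A/B bit pattern, and the pattern read out of a leaked copy identifies the account:

    import hashlib

    NUM_SEGMENTS = 32  # 2^32 distinguishable subscribers in this toy

    def pattern_for(account_id: str) -> list[int]:
        # Toy derivation; a real system would assign and store random
        # per-session patterns instead.
        digest = hashlib.sha256(account_id.encode()).digest()
        return [(digest[i // 8] >> (i % 8)) & 1 for i in range(NUM_SEGMENTS)]

    def trace(leaked_pattern: list[int], accounts: list[str]) -> str | None:
        # Which subscriber's A/B choices match the pirated copy?
        for acct in accounts:
            if pattern_for(acct) == leaked_pattern:
                return acct
        return None

    accounts = ["alice@example.com", "bob@example.com"]
    leak = pattern_for("bob@example.com")  # bits recovered from the pirated file
    assert trace(leak, accounts) == "bob@example.com"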
ARM may have the market now… but RISC-V is the fastest growing and may be poised to eat ARM's lunch.
TPMs work great when you have a mountain of supporting libraries to abstract them from you. Unfortunately, that's often not the case in the embedded world.
This means you generally need an authenticated boot chain (via PCR measurements) and then have your Java app "seal" the key material to that.
It's not a problem with the TPM per se; it's no different from using an external smartcard or HSM. The HSM still needs to ensure it's talking to the right app and not an impersonator (and if you use keypair authentication for that, then your app must store the keypair somewhere, so you've just moved the authentication problem elsewhere).
I am using the TPM for this on x86 machines that I want to boot headless. If I need to replace the disk, I can just do a regular wipe and feel pretty comfortable.
I'd use a Yubikey or other security token with the Pi, but the device needs to boot without user intervention, and the decryption code I'm aware of forces a user-presence check whether or not the Yubikey requires it.