I get the benefits of restoring a full backup, but in this instance it would seem to lose practical security benefits for theoretical purity.
Cookies are generally persisted to disk in one of your browser's many caches.
User asks Company (with human staff) to log in and do the same thing. Perhaps the company is an accounting firm, a legal firm, or a “manage my company for me” kind of firm. No problem.
User asks Company which makes self-hosted business management tools to log in to their online banking. Oh shit!!! This is a violation of the ToS! The Company that makes this tool is violating the bank’s rights! The user doesn’t understand how they’re letting themselves get hacked!! Block block block! (Also, some banks realise that they can charge a fee for such access!)
Everyone on HN can see that the last case — the most useful one, given how great automation is these days — should be permitted.
I wish the governing layers of society could also see how useful such automation is.
These Device-Bound Session Credentials could result in the death of many good automation solutions.
The last hope is TPM emulation, but I’m sure that TPM attestation will become a part of this spec, and attestation prevents useful emulation. In this future, Microsoft and others will be able to charge the banks a great deal of money to help “protect their customers” via TPM attestation licensing fees, involving rotation, distribution, and verification of keys.
I’m guessing the protocol will somehow prevent one TPM being used for too many different user accounts with one entity (bank), preventing cloud-TPM-as-a-service being a solution to this. If you have 5,000 users that want to let your app connect to their Bobby's Bank online banking, then you’ll need 5,000 different TPMs. Also, Microsoft (or whoever) could detect and blacklist “shared” TPMs to kill TPMaaS entirely.
Robotic Process Automation on the user’s desktop, perhaps in a hidden Puppeteer browser, could still work. But that’s obviously a great deal harder to implement than just “install this Chrome extension and press this button to give me your cookies.”
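For what it's worth, a minimal sketch of that fallback might look something like this; the URL and selectors are made up, and a real banking site would also throw 2FA prompts and bot detection at you:

```ts
import puppeteer from "puppeteer";

// Hypothetical sketch: automate the user's banking session locally instead
// of asking them to export cookies. URL and selectors are placeholders.
async function fetchStatement(username: string, password: string): Promise<string> {
  const browser = await puppeteer.launch({ headless: true }); // hidden browser
  const page = await browser.newPage();
  await page.goto("https://bank.example/login");
  await page.type("#username", username);
  await page.type("#password", password);
  await Promise.all([page.waitForNavigation(), page.click("#login-button")]);
  const html = await page.content(); // scrape whatever the app needs
  await browser.close();
  return html;
}
```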
Goodbye web freedom, and my software product :(
Anything that can be done via phone will be done via AI talking to AI.
I was all ready to disagree with you but apparently you're correct. Color me surprised.
> DBSC will also not prevent an attack if the attacker is replacing or injecting into the user agent at the time of session registration as the attacker can bind the session either to keys that are not TPM bound, or to a TPM that the attacker controls permanently.
This is a very pleasant surprise. I've grown accustomed to modern auth protocols (and other tech stacks as well) having DRM functionality baked into them where they can attest the vendor of the device or the software stack being used to perform the auth. It's become bad enough that at this point I just reflexively assume that any new web technology is hostile to user autonomy.
As long as banks are held accountable or generally blamed for people handing over their savings to foreign scammers, any kind of external access will be considered a threat. Every single time people get scammed by fake apps or fake websites or fake calls, a large section of society goes "the bank should've prevented this!!!".
Here, one particular bank is popular because of their pro-crypto stance, their high interest rates, and their app-only approach. That makes them an extremely easy target for phishing and scamming, and everyone blames the bank for the old men pressing the "yes I want to log in with a QR code" button when a stranger calls them. Of course, banks could stop scams like that, so the calls to maybe delay transferring tens of thousands for human review aren't exactly baseless, but this is how you get the situation where businesses struggle to integrate with banking apps.
There are initiatives such as PSD2, but those are not exactly friendly to the "move fast and break things" companies that you'll find on HN (because moving fast and breaking things is not a good idea when you're talking about managing people's life savings).
The TPM is used here because it's the most secure way to store a keypair like this. But, as the spec says:
> DBSC will not prevent temporary access to the browser session while the attacker is resident on the user’s device. The private key should be stored as safely as modern operating systems allow, preventing exfiltration of the session private key, but the signing capability will likely still be available for any program running as the user on the user’s device.
In other words, if a more secure alternative than TPMs comes into play, browsers should migrate. If no TPM is available, something like a credential service would also suffice.
As for TPM emulation: it already exists. Of course, TPMs also contain a unique, signed certificate from the TPM manufacturer that can be validated, so it's possible for TPM-based protocols to deny emulated TPMs. The Passkey API supports mechanisms like that, which makes Passkeys a nice way to validate that someone is a human during signup, though the API docs tell you not to do that.
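For reference, this is roughly how a relying party asks for attestation during passkey registration; the challenge, RP ID, and user details below are placeholders that would normally come from the server:

```ts
// Browser-side sketch: request an attestation statement during registration.
// Challenge and user id are generated locally here only for illustration;
// a real relying party would supply them from its backend.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { id: "example.com", name: "Example" },
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    attestation: "direct", // ask the authenticator to attest to its make/model
  },
});
// In principle the server can inspect the attestation and reject emulated
// or purely software authenticators -- though the spec discourages doing so.
```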
I don't think this will be a worthwhile security benefit for most sites, and it comes with trade-offs, but we already accept trade-offs for higher security around sensitive things like banking and email, where most users need a lot of protection.
There are no guard rails built in to make sure this isn't used by everyone and their dog just because it makes site automation a bit more difficult. Also, kiss goodbye to browsing the internet without a government/bigcorp™ approved TPM.
You might need to build a custom version of Chrome that supports bypassing the user interaction requirements.
Honestly this fairly simple scheme looks a lot like what I wish webauthn could have been.
>DBSC is not designed to give hosts any sort of guarantee about the specific device a session is registered to, or the state of this device.
Never mind then. That also makes it more or less useless as a security measure, but at least it's not outright harmful like the famous WEI proposal.
Define "security". This is incredibly useful for mitigating bearer token exfiltration which is the stated purpose. It's also the same way ssh keypairs work and those are clearly much more secure than passwords.
It's only "insecure" from the perspective of a service host who wants to exert control over end users.
Even webauthn leaves attestation as an optional thing. Even in the case that the service operator requires it, so long as they don't engage in vendor whitelisting you can create a snakeoil authority on the fly.
The main advantage this has over webauthn is that it is so much simpler.
That would have the benefit that every web service automatically gets added security.
One implementation might be (sketched in code below):
* Have a secure enclave/trustzone worker store the cookie jar. The OS and browser would never see cookies.
* When the browser wants to make an HTTPS request containing a cookie, the browser sends "GET / HTTP/1.0 Cookie: <placeholder>" to the secure enclave.
* The secure enclave replaces the placeholder with the cookie, encrypts the HTTPS traffic, and sends it back to the OS to be sent over the network.

Consider scenarios like:

* Someone inspecting the page with developer tools
* Logs that accidentally (or intentionally) contain the cookie
* A corporate (or government) firewall that intercepts plaintext traffic
* Someone with temporary physical access to the machine that can use the TPM or secure enclave to decrypt the cookie jar.
* A mistake in the cookie configuration and/or DNS leads to the cookie getting sent to the wrong server.
This would protect against those scenarios.
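To make that concrete, here is a very rough sketch of the substitution step; the enclave interface, helper functions, and message format are all invented for illustration:

```ts
// Hypothetical enclave-side worker: the browser never sees real cookie values.
// The encryptWithEnclaveTls() and sendToNetwork() primitives are assumed.
const cookieJar = new Map<string, string>(); // lives only inside the enclave

function handleOutgoingRequest(host: string, rawRequest: string): void {
  const cookie = cookieJar.get(host) ?? "";
  // Replace the browser's placeholder with the real cookie...
  const realRequest = rawRequest.replace("Cookie: <placeholder>", `Cookie: ${cookie}`);
  // ...then encrypt it inside the enclave's own TLS session before it ever
  // leaves, so the OS and browser only handle ciphertext.
  const ciphertext = encryptWithEnclaveTls(host, realRequest);
  sendToNetwork(ciphertext); // hands ciphertext back to the OS for transmission
}

declare function encryptWithEnclaveTls(host: string, plaintext: string): Uint8Array;
declare function sendToNetwork(data: Uint8Array): void;
```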
1) TLS
2) make your cookie __Secure- or __Host- prefixed, which then requires the Secure attribute (example below).
If DNS is wrong, it should then point to a server without the proper TLS cert and your cookie wouldn't get sent.
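For example, setting such a cookie from Node looks like this (illustrative only):

```ts
import { createServer } from "node:http";

// Illustrative: a __Host- prefixed cookie must be set with Secure and Path=/
// and no Domain attribute, so browsers will only send it back over TLS to
// the exact host that set it. In practice this process would sit behind a
// TLS terminator, since browsers reject __Host- cookies on insecure origins.
createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    "__Host-session=abc123; Secure; HttpOnly; Path=/; SameSite=Lax"
  );
  res.end("ok");
}).listen(8080);
```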
For example, the server could verify the cookie and replace it with some marker like 'verified cookie of user ID=123', and then the whole application software doesn't have access to the actual cookie contents.
This replacement could be at any level - maybe in the web server, maybe in a trusted frontend loadbalancer (who holds the tls keys), etc.
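A hypothetical sketch of that, as middleware running in a trusted frontend; the header name and the lookupSession helper are invented:

```ts
import type { IncomingMessage } from "node:http";

// Assumed helper: validates the raw cookie against the session store.
declare function lookupSession(cookieValue: string): { userId: number } | null;

// Verify the session cookie at the trusted edge and forward only an internal
// identity marker, so application code never handles the raw cookie value.
function headersForBackend(req: IncomingMessage): Record<string, string> {
  const match = /(?:^|;\s*)session=([^;]+)/.exec(req.headers.cookie ?? "");
  const session = match ? lookupSession(match[1]) : null;
  return {
    "x-verified-user": session ? `ID=${session.userId}` : "", // e.g. "ID=123"
    cookie: "", // strip the original cookie before proxying
  };
}
```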
Additionally, the TPM will now need to have a store of trusted root CAs. Will the TPM manufacturer update the root store? Users won't be able to install a custom root CA. That's going to be a problem, because custom root CAs are needed for a variety of different purposes.
When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.
According to BigTech that's a feature, not a bug.
Ironically the design you propose, juggling headers over to a secure enclave and having the secure enclave form the TLS tunnel, is significantly more complex than just using an asymmetric keypair in a portable manner. That's been standard practice for SSH for I don't even know how long now - at least 2 decades.
Oh also there's a glaring issue with your proposed implementation. The attacker simply initiates the request using their own certificate, intercepts the "secure" encrypted result, and decrypts that. You could attempt mitigations by (for example) having the secure enclave resolve DNS but at that point you're basically implementing an entire shadow networking stack on the secure enclave and the exercise is starting to look fairly ridiculous.
Binding a session cookie to a device is pretty simple though. You just send a nonce header + the cookie signed with the nonce using a private key. What the chrome team is getting wrong here is that there is no need for these silly short lived cookies that need to be refreshed periodically.
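Sketched with Node's crypto module below; in reality the private key would live in the TPM and do the signing there, and the header names are made up for illustration:

```ts
import { createSign, generateKeyPairSync } from "node:crypto";

// Sketch only: a real implementation would keep the private key inside the
// TPM and ask it to sign; here we generate a software key just to show the
// shape of the exchange.
const { privateKey } = generateKeyPairSync("ec", { namedCurve: "P-256" });

function buildAuthHeaders(sessionCookie: string, serverNonce: string) {
  const signer = createSign("SHA256");
  signer.update(`${serverNonce}.${sessionCookie}`); // bind the cookie to this nonce
  return {
    Cookie: `session=${sessionCookie}`,
    "X-Session-Nonce": serverNonce,                        // hypothetical header
    "X-Session-Proof": signer.sign(privateKey, "base64"),  // hypothetical header
  };
}
```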
First of all, this approach has the nice property that we now need new TPMs capable of doing that, and even if people could update them, we’d need to wait for everybody to update their TPMs. So let’s wait another 10 to 15 years before we’re really sure.
Second, the attack vector google’s approach is trying to protect against is assuming someone stole your cookies. Might as well assume that someone has gained root on your machine. Can you protect against that? Google’s approach does regardless of how “owned” your machine is, yours doesn’t.
It’s not like you’re gonna hand off the TLS stream to the TPM to write a bit into it, then hand it back to the OS to continue. The TPM can’t write to a Linux TCP socket. Whatever value the TPM returns can be captured and replayed indefinitely, or for the max length of the session.
So you’re back where you started and you need to have a “keep alive” mechanism with the server about these sessions.
Google’s approach is simpler: a private key you refresh your ownership of every X minutes. Even if I’m root on your machine, whatever I steal from it has a short expiration time. It cuts out the unnecessary step of having the TPM hold the cookie too. Plus it doesn’t introduce any limitations on the cookie size.
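The browser would do this internally, but the flow looks roughly like the sketch below; the endpoint path, header names, and five-minute interval are assumptions for illustration, not the actual spec:

```ts
// Assumed TPM-backed primitive: signs a server challenge with the
// device-bound private key, which never leaves the hardware.
declare function signWithDeviceKey(challenge: string): Promise<string>;

async function refreshSession(): Promise<void> {
  // 1. Ask the server for a fresh challenge (endpoint name is made up).
  const resp = await fetch("/securesession/refresh", { method: "POST" });
  const challenge = resp.headers.get("x-session-challenge") ?? "";
  // 2. Prove possession of the device-bound key; the server's response
  //    sets a new short-lived session cookie.
  await fetch("/securesession/refresh", {
    method: "POST",
    headers: { "x-session-response": await signWithDeviceKey(challenge) },
  });
}

// Even an attacker with root only steals cookies that expire in minutes,
// unless they can also keep exercising the device-bound key.
setInterval(refreshSession, 5 * 60 * 1000);
```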
https://github.com/w3c/webauthn/issues/199#issuecomment-2669...
Importantly, the presence of attestation in webauthn could potentially compromise privacy or user choice in certain cases. DBSC has zero support for that.
You could certainly use a webauthn credential to establish a DBSC session though.
If someone gets short lived access to a control panel for something, there are normally ways to twiddle settings to, for example, create more user accounts, or slacken permissions.
If someone gets short lived access to a datastore, they can download all the data.
etc.
Many sites already have some protections against that by for example requiring you to enter your password and/or 2fa code to disable 2fa, change privacy settings, update an email address, etc.
In the case of bearer tokens, there are many cases where attackers have managed to steal them without achieving full device compromise. Since it's literally sending the key in plaintext (horribly insecure), all it takes is tricking the client software into sending the header to the wrong place a single time.
This seems false? Given the description in the article, the short lived cookie could be used from another device during its lifetime. Having this short lived cookie and having the browser proactively refresh it seems like a bad design to me. The proof of possession should be a handshake at the start of each connection. With HTTP3 you shouldn't need a lot of connections.
> The proof of possession should happen at the start of each connection. With HTTP3 you shouldn't need a lot of connections.
That could possibly be workable in some situations, but it would add a lot of complexity to application layer load balancers, or reverse proxies, since they would somehow need to communicate that proof of possession to the backend for every request. And it makes http/3 or http/2 a requirement.
That said, the DBSC scheme has the rather large advantage that it can be bolted on to the current bearer token scheme with minimal changes and should largely mitigate the current issues.
Step 2: TPM required, and your cookies are no longer yours.
I actually like the idea as long as you hold the keys. Unfortunately, the chasm to cross is so small that I can't see this ending in a way beneficial for users.
Edit: reading a bit more closely, it sounds like the request is more of a notification and all the real work actually happens in the user's browser, so you could presumably ignore it and hope the generated bandwidth to your server is pretty low.
I wish other browsers implemented this kind of self-protection, but I suppose that is difficult to do for third-party browsers. This seems like a great improvement as well, but it seems quite overengineered to work around security limitations of desktop operating systems.
I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.
e.g. local network access, access to the Documents and Desktop folders, screen recording, microphone access, accessibility access (for keylogging), and full disk access all require you to grant permission
This is a big problem I have with desktop security - people just give up when faced with something so trivial as user privileged malware. I consider it a huge flaw in desktop security that user privilege malware can get away with so many things.
macOS is really the only desktop OS that doesn't just give up when faced with same user privileged malware (in good and bad ways). So there it's likely a good mitigation - macOS also doesn't permit same user privileged processes to silently key log, screen record, network trace and various other things that are possible on Windows and common Linux configurations.
Another approach is to police everything behind rules (the way selinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't take that.
Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.
The attitude of trusting by default, and chrooting/jailing in case of doubt, probably still has decades to live.
Asymmetric crypto is more complex and resource intensive but is useful when you have concerns about the remote endpoint impersonating you. However that's presumably not a concern when the authentication is unique to the ( server, client ) pair as it appears to be in this case. This doesn't appear to be an identity scheme hence my question.
(This is not criticism BTW. I am always happy to see the horribly insecure bearer token model being replaced by pretty much anything else.)
https://news.ycombinator.com/item?id=36910146
nicce•10h ago
> Servers cannot correlate different sessions on the same device unless explicitly allowed by the user.
I read it as: the browser can always correlate the public/private key to the website (it knows if there is an authenticated tab/window somewhere).
Why are they making this possible, if you could store the information in a random UUID and just connect it to the cookie? What is the use case where you want to connect a new session instead of using the old one?
fc417fc802•3h ago
> What is the use case where you want to connect a new session instead of using the old one?
Multiple accounts? Clearing cookies and visiting again the next day? Probably other stuff as well. The important point is that DBSC doesn't itself increase the ability of website operators to track you beyond what they can already do.