It's a great idea, but the trust chains are so complex that they are hard to reason about.
In "simple" public-key encryption, reasonably technically literate people can reason about it ("not your keys, not your X"), but with private compute there are many layers, each of which works in a fairly complex way, and AFAIK you always end up having to trust a root source of trust that certifies the trusted device.
It's good in the sense that it is trust minimization, but it's hard to explain, and the cynicism (see HN comments along the lines of "you can't trust it because of big tech/government interference", etc.) makes me sadly pessimistic about uptake.
I wish it weren't so, though. The cynicism in particular I find disappointing.
On the one hand, you have systems where anyone at any company in the value chain can inspect your data ad hoc, with no auditing or notification.
On the other hand, you have systems that prevent casual security/privacy violations but could still be subverted by a state actor or the company that holds the root of trust.
Neither is perfect. But it’s cynical and nihilistic to profess to see no difference.
Risk reduction should be celebrated. Those who see no value in it come across as zealots.
Simple question: what if CSAM is sent to the AI? Would it stop, report it to the authorities, or allow processing? Same for other illegal content.
grugagag•11h ago
What is this, and what is it supposed to mean? I have a hard time trusting these companies with any privacy, and while this wording may be technically correct, they'll likely extract all meaning from your communication, and probably even run some AI-enabled surveillance service.
ipsum2•11h ago
> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.
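For what that quoted architecture would mean in practice, here is a hypothetical client-side flow in Python (with PyNaCl): the client refuses to send a prompt unless the attested code measurement matches an expected, audited value, and encrypts the prompt to a key held inside the enclave. The function names, report format, and expected-measurement value are all assumptions for illustration; Meta has not published this API.

    # Hypothetical client gate for "AI in a secure enclave": send the prompt
    # only if the attestation checks out, encrypted to an enclave-held key.
    from nacl.public import PrivateKey, PublicKey, Box

    EXPECTED_MEASUREMENT = b"sha256-of-audited-enclave-build"  # assumed published value

    def send_prompt(prompt: bytes, report: dict, client_key: PrivateKey) -> bytes:
        # In reality this step verifies the full certificate chain back to the
        # vendor root (see the earlier sketch); here we only compare the hash.
        if report["measurement"] != EXPECTED_MEASUREMENT:
            raise RuntimeError("enclave is not running the audited build")
        # Encrypt so that only the enclave's private key can decrypt.
        return Box(client_key, PublicKey(report["enclave_pubkey"])).encrypt(prompt)

    # Simulated enclave side, just to make the sketch runnable end to end.
    enclave_key, client_key = PrivateKey.generate(), PrivateKey.generate()
    report = {"measurement": EXPECTED_MEASUREMENT,
              "enclave_pubkey": bytes(enclave_key.public_key)}
    ciphertext = send_prompt(b"summarize my unread threads", report, client_key)
    print(Box(enclave_key, client_key.public_key).decrypt(ciphertext))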
brookst•8h ago
...but how can FB prove it isn't all a smokescreen, and that requests aren't printed out and faxed to evil people? They can't, of course, and some people like to demand proof of a negative as a way of implying wrongdoing, in a "just asking questions" manner.
ATechGuy•7h ago
1. https://tinfoil.sh
2. https://www.privatemode.ai
justanotheratom•11h ago
WhatsApp was not originally end-to-end encrypted; then in 2021 it was - a step in the right direction. Similarly, AI interaction in WhatsApp today is not private, which is something they are trying to improve with this effort - another step in the right direction.
mhio•9h ago
BTW, WhatsApp implemented the Signal protocol around 2016.
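For context on why end-to-end encryption is comparatively easy to reason about (the "not your keys" point above), here is a minimal Python sketch of the core property: both ends derive a shared key via Diffie-Hellman, so the relaying server only ever sees ciphertext. The real Signal protocol layers X3DH and the Double Ratchet on top of this; the sketch shows only the underlying idea, not WhatsApp's actual code.

    # Core end-to-end property: Alice and Bob derive the same key; the server
    # relaying messages never sees it. Uses the `cryptography` package.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_key(own_private, peer_public):
        shared = own_private.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"e2e-demo").derive(shared)

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    key = derive_key(alice, bob.public_key())
    assert key == derive_key(bob, alice.public_key())  # both ends agree

    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"hi bob", None)
    # The server relays (nonce, ciphertext) and can read neither:
    print(AESGCM(key).decrypt(nonce, ciphertext, None))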
justanotheratom•8h ago
If you find something deceitful in the business practices, that should certainly be called out and even prosecuted. But I don't see why an effort to improve privacy has to get skeptical treatment just because "big business bad", blah blah.
asadm•11h ago
If a company is trying to move its business to be more privacy-focused, at least we can be non-dismissive.