>broken SGX metadata protections
Citation needed. Also, SGX is just there to try to verify what the server is doing, including that the server isn't collecting metadata. The real evidence is in the responses to warrants https://signal.org/bigbrother/ where the only things they've been able to hand over are two timestamps: when the user created their account and when they were last seen. If that's not good enough for you, you're better off using Tor p2p messengers that don't have servers collecting your metadata at all, such as Cwtch or Quiet.
>weak supply chain integrity
You can download the app as an APK from their website if you don't trust the Google Play Store.
>a mandate everyone supply their phone numbers
That's how you combat spam. It sucks, but there are very few options outside the corner of Zooko's triangle where your username looks like "4sci35xrhp2d45gbm3qpta7ogfedonuw2mucmc36jxemucd7fmgzj3ad".
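For illustration, a minimal sketch (not Cwtch's or Tor's actual derivation) of why names in that corner look like that: the address is just an encoding of a hash of a public key, which buys "secure" and "decentralized" at the cost of "human-meaningful":

    import base64, hashlib, os

    # Stand-in for an Ed25519 public key; a real system would use the
    # user's actual identity key, not random bytes.
    public_key = os.urandom(32)

    # The "username" is an encoding of a hash of the key: anyone can
    # verify it, nobody assigns it, and nobody can remember it.
    digest = hashlib.sha256(public_key).digest()
    address = base64.b32encode(digest).decode().lower().rstrip("=")
    print(address)  # prints a string in the same style as the one above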
>and agree to Apple or Google terms of service to use it?
Yeah that's what happens when you create a phone app for the masses.
I wish Apple & Google provided a way to verify that an app was actually compiled from some specific git SHA. Right now applications can claim they're open source and claim that you can read the source code yourself, but there's no way to check that the authors haven't added any extra nasties into the code before building and submitting the APK / iOS application bundle.
It would be pretty easy to do. Just have a build process at Apple / Google which you can point at a git repo, and let them build the application. Or, even easier, just expose the application's signature in the app store. Then open source app developers could compile their APK / iOS app using GitHub Actions, and third parties could check that the SHA matches the app binaries in the store.
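A third party can already do the hash comparison by hand. A minimal sketch, assuming (hypothetically) that the developer's CI publishes the SHA-256 of the APK it built from a known commit:

    import hashlib, sys

    def sha256_of(path):
        # Hash the file in chunks so large APKs don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Usage: python check.py Signal.apk <hash published by the CI build>
    actual = sha256_of(sys.argv[1])
    expected = sys.argv[2].lower()
    print("match" if actual == expected else "MISMATCH")

The catch is that Google Play re-signs and re-packages uploads (Play App Signing), so a byte-identical hash generally only holds for the developer's own distribution channel, or with a reproducible-build process that strips signatures before comparing.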
Is that a typo, or are you really implying half the human population uses Signal?
Edit: I misread, you are counting almost every messaging app user.
Same goes for WhatsApp, but the marketing is different there.
Also, while we would expect heavy promotion of a honeypot app by some agency, heavy promotion is just as plausible for a protocol/app that actually is secure.
You can of course never be sure, but the fact that it's heavily promoted and used by whistleblowers, large corporations, and multiple national officials at the same time is probably the best trustworthiness signal we can ever get for something like this.
(If all of these can trust it somewhat, it would have to be a ridiculously deep conspiracy for it not to have leaked to at least some national security agency and been forbidden for use.)
To be fair, that is largely because WhatsApp partnered with Open Whisper Systems to bring the Signal protocol into WhatsApp. So effectively, you're saying "Signal-the-app is hardly more private than another app that shares Signal-the-protocol".
In practical terms, the only way for Signal to be significantly more private than WhatsApp is if WhatsApp were deliberately breaking privacy through some alternative channel (e.g. exfiltrating messages through a separate connection to Meta).
Kind of, because WhatsApp adopted Signal's E2EE... and not even that long ago!
"Confer - Truly private AI. Your space to think."
"Your Data Remains Yours, Never trained on. Never sold. Never shared. Nobody can access it but you."
"Continue With Google"
Make of that what you will.
It won't be long until Google and Gemini can read this information, and Google already knows you are using Confer.
I wouldn't trust it regardless, even if email login is available.
The fact that Confer allows Google login shows that Confer doesn't care about users' privacy.
Usually, in a context where a cypherpunk deploys E2EE, it means only the intended parties have access to the plaintexts. And when it's you having a chat with a server, it's like cloud backups: the data must be encrypted by the time it leaves your device and decrypted only once it has reached your device again. For remote computing, that would require the LLM to handle ciphertexts only, which basically means fully homomorphic encryption (FHE). If it's that, then sure, shut up and take my money, but AFAIK the science of FHE isn't nearly there yet.
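To give a feel for what "handles ciphertexts only" means, here is a toy sketch using Paillier encryption, which is only additively homomorphic, i.e. nowhere near the arbitrary computation an LLM would need; the primes are deliberately tiny demo values:

    import math, random

    def L(x, n):
        return (x - 1) // n

    def keygen(p, q):
        # Textbook Paillier with g = n + 1.
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(L(pow(n + 1, lam, n * n), n), -1, n)
        return n, (lam, mu)

    def encrypt(n, m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(n, priv, c):
        lam, mu = priv
        return (L(pow(c, lam, n * n), n) * mu) % n

    n, priv = keygen(61, 53)             # toy primes, never use in practice
    c1, c2 = encrypt(n, 42), encrypt(n, 58)
    c_sum = (c1 * c2) % (n * n)          # the server computes on ciphertexts
    assert decrypt(n, priv, c_sum) == 42 + 58   # only the client sees 100

The server multiplies two ciphertexts and never learns 42, 58, or 100. FHE needs that property for arbitrary circuits, not just addition, which is what makes it so expensive today.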
So the only alternative I can see here is SGX, where the client verifies what the server is doing with the data. That probably works against surveillance capitalism, hostile takeovers, etc., but it is also a US NOBUS backdoor. Intel is a PRISM partner, and who knows whether national security requests allow compelling SGX keys; the USG did go after Lavabit's RSA keys, after all.
So I'd really want to see this either explained or conveyed in the product's threat model documentation, and to see that threat model offered on the front page of the project. Security is about knowing the limits of the security design so that the user can make an informed decision.
ChatGPT already knows more about me than Google did before LLMs, but would I switch to inferior models to preserve privacy? Hard tradeoff.