The Guardian doesn’t call it that explicitly, but that’s exactly what they have built here, and I think the cover of a news app is brilliant in a lot of ways.
The only thing I’d add is that if you are planning to leak something using this app, I still wouldn’t feel comfortable doing it on any device that could be investigated.
For example, a work-provided phone. While having the Guardian app is itself in no way incriminating, if you play out the scenario of an internal leak investigation at an organisation that has just ended up on the front pages of the Guardian, I think you could end up with a very short list by simply asking:
1. WHO had access to that information to begin with?
2. WHO has that app on their phone (or whose App Store history shows it as previously downloaded), or WHO won’t make their phone available for inspection?
So if you’re in a scenario where you’re leaking something known only to a small group, and/or your device can be inspected by someone relevant… I’d continue to strongly recommend making contact via a device that isn’t tied to you, such as one belonging to your partner or someone you trust.
Remember, the ACTUAL goal here is to defeat the investigation, and the best thing you can possibly do is to not stand out from the crowd of suspects.
There’s a very short link, however, between this app and the information you provided turning up in the Guardian specifically, which might not actually give you the cover you think you have (beyond the technical parts they took care of, which look brilliant, for the record). The next best thing by far that you could do for that larger goal is to use a device that isn’t linked to you and can’t be inspected to begin with.
I just wanted to point that out because it wasn’t called out in the threat model and I could realistically see people getting caught that way.
I would certainly recommend that readers not use a work phone, not only for the reasons you’ve stated but also because a lot of work devices run mobile device management (MDM) software, which is functionally spyware. To your point, dealing with a very small anonymity set is tricky regardless of the technology used.
We do go to great lengths to make using the app to blow the whistle plausibly deniable. Data is segmented into "public" and "secret" repositories, where any secret data is stored within a fixed-size, encrypted vault protected by a KDF technique developed by one of the team in Cambridge (https://eprint.iacr.org/2023/1792.pdf).
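To make the deniability property concrete, here's a toy sketch of the idea (not the actual CoverDrop scheme, which uses the bespoke KDF from the linked paper and real authenticated encryption): the vault is always the same fixed size, padded with random bytes, so a sealed empty vault is indistinguishable in size from one full of secrets. The HMAC-counter "cipher" below is purely illustrative — never use it for real data.

```python
import hashlib
import hmac
import secrets

VAULT_SIZE = 64 * 1024  # every vault is this size, whether or not it holds secrets

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Illustrative stdlib KDF; the real app uses its own scheme (see the paper)
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          maxmem=2**25, dklen=32)

def keystream(key: bytes, length: int) -> bytes:
    # Toy HMAC-SHA256 counter-mode keystream, for illustration only
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal_vault(password: bytes, secret_data: bytes) -> bytes:
    assert len(secret_data) + 4 <= VAULT_SIZE
    salt = secrets.token_bytes(16)
    key = derive_key(password, salt)
    # Length-prefix the payload, then pad with random bytes to the fixed size,
    # so ciphertext length reveals nothing about how much data is inside
    plaintext = len(secret_data).to_bytes(4, "big") + secret_data
    plaintext += secrets.token_bytes(VAULT_SIZE - len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, keystream(key, VAULT_SIZE)))
    return salt + ciphertext

def open_vault(password: bytes, blob: bytes) -> bytes:
    salt, ciphertext = blob[:16], blob[16:]
    key = derive_key(password, salt)
    plaintext = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
    n = int.from_bytes(plaintext[:4], "big")
    return plaintext[4:4 + n]
```

The point is that `seal_vault(pw, b"")` and `seal_vault(pw, b"meet at midnight")` produce blobs of identical length, so the on-disk footprint is the same for every user of the app.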
But of course, all this could be for nothing if you've just got corporate spyware on your device.
This is certainly something we've talked about internally, but I've double-checked the in-app FAQs and I think we could be clearer about recommending that users not use the app on a work device, especially one with MDM. We'll get that updated as soon as possible. Thanks!
-- edit
I should add that we do some basic detection of devices that have been rooted or are in debug mode, and issue a warning to the user before they continue. I'd be interested in what we could do to detect MDM software, but I fear it might become a cat-and-mouse game, so it's preferable that folks not use their work devices at all.
Edit: you might want to consider putting that warning about work devices in the app itself, right before someone pushes forward with making potentially life-changing decisions, rather than relying on them reading an FAQ. I see you already have an onboarding flow in place; it would be really simple to make that the first screen of it.
I'll see if we can get something together before the next app release. Thanks again!
If the message is encrypted for the reporter and they're the only one who can read it, what does the organization do to manage this? Are passwords for private keys saved with the org, or are the keys saved across multiple accounts? What do you do when someone forgets their password?
Cool app; it's just that key management with human users must involve lots of trade-offs.
We’ve got some basic filtering for full-on DoS-type attacks already.
The difficulty here is that a user can produce a reasonable amount of spam from a spread of IP addresses, which would be disruptive to our journalist users but below the threshold to be considered a DoS attack.
It’s tricky because we can’t have anything that could link a given message to a given user as that would break anonymity.
We’ve got some ideas involving anonymous credentials from app attestations for the longer term. E.g. if you’re expected to submit 1 message an hour from your queue, you can request 24 single-use tokens from the API by performing an attestation that you’re running a genuine app. You then spend these as you send messages. We don’t yet have a full spec that makes this fully anonymous, but that’s the general idea.
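The token-spending half of that idea can be sketched very simply (the hard part, omitted here, is making issuance unlinkable — a real design would use blind signatures or anonymous credentials so the issuer can't correlate a spent token with the batch it came from; the `TokenIssuer` name and API below are my invention, not CoverDrop's):

```python
import secrets

class TokenIssuer:
    """Hypothetical server-side sketch: issue a daily batch of single-use
    message tokens after a (stubbed) app attestation, then accept each
    token exactly once. Only the set of unspent token values is stored,
    so a spend is not tied to any particular user record."""

    def __init__(self, batch_size: int = 24):
        self.batch_size = batch_size
        self.unspent: set[bytes] = set()

    def issue_batch(self, attestation_ok: bool) -> list[bytes]:
        # Stand-in for a genuine device/app attestation check
        if not attestation_ok:
            raise PermissionError("device attestation failed")
        batch = [secrets.token_bytes(16) for _ in range(self.batch_size)]
        self.unspent.update(batch)
        return batch

    def spend(self, token: bytes) -> bool:
        # Each token is valid exactly once; replays are rejected,
        # which caps a client at batch_size messages per batch
        if token in self.unspent:
            self.unspent.remove(token)
            return True
        return False
```

This caps spam at 24 messages per attested device per day without the server ever holding a (message, user) link — modulo the issuance-unlinkability caveat above.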
There’s also some possible spam detection we can do in the journalist GUI which we’re interested in exploring. Right now the spam control is quite basic (muting), but the message rate is low due to the threshold mixer anyway, so it's not so bad.
On key management:
Each journalist has an encrypted vault which requires a key derived from a password. If this password is lost and the journalist has no backup, then it’s game over: we need to regenerate their identity in the key hierarchy as if they were a new user, and messages they’ve not seen are lost; there is no way to pick up those sources again.
We have some plans to use MLS as an inter-journalist protocol, which should enable having multiple actual humans per journalist/desk listed in the app. That would depend on the journalists agreeing to have their vault shared, of course. Once multiple humans are backing a single vault, the risk of password loss becomes smaller: if one journalist loses their password, another journalist can share their messages back to them.
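A minimal way to see why multiple humans per vault reduces password-loss risk: keep one shared vault key, and store a separate copy of it wrapped under each member's password-derived key. If one member forgets their password, any other member can still recover the vault key and re-wrap it for them. This is just an illustration of the recovery property (the XOR "key wrap" is a toy, and the actual plan described above uses MLS, not per-member wrapping):

```python
import hashlib
import secrets

def wrap(vault_key: bytes, password: bytes, salt: bytes) -> bytes:
    # Derive a per-member key-encryption key from their password
    kek = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                         maxmem=2**25, dklen=32)
    # Toy XOR key-wrap, for illustration only
    return bytes(a ^ b for a, b in zip(vault_key, kek))

def unwrap(wrapped: bytes, password: bytes, salt: bytes) -> bytes:
    return wrap(wrapped, password, salt)  # XOR is its own inverse

# One shared vault key; one wrapped copy per desk member
vault_key = secrets.token_bytes(32)
members = {}
for name, pw in [("alice", b"pw-a"), ("bob", b"pw-b")]:
    salt = secrets.token_bytes(16)
    members[name] = (salt, wrap(vault_key, pw, salt))

# Alice forgets her password; Bob recovers the vault key from his copy
# and re-wraps it under Alice's new password
salt_b, wrapped_b = members["bob"]
recovered = unwrap(wrapped_b, b"pw-b", salt_b)
new_salt = secrets.token_bytes(16)
members["alice"] = (new_salt, wrap(recovered, b"new-pw-a", new_salt))
```

With a single member, losing the one password really is game over; with two or more, the vault survives as long as at least one member keeps theirs.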
Supported outlets: https://securedrop.org/directory/
- SecureDrop uses Tor. That's observable at the network level or via access to a user's device. In many contexts, being a Tor user is sufficient to out the leaker. Having a news app installed is less suspicious.
- It provides an easy way for a naive source to avoid exposing themselves on the initial contact, because their network traffic looks like every other user's, and the app storage is deniable (the vault takes up the same space even if not in use).
CoverDrop doesn't actually provide a way to send large files like SecureDrop does. The paper suggests that the journalists would talk the source through how to safely use SecureDrop over CoverDrop messaging.
So if you have enough opsec awareness and tech savvy to use SecureDrop safely, it may be simpler to go straight there.
In terms of how it's different: we attain anonymity without requiring a user to install Tor Browser, which we think is significant. Building this feature into our news app lowers the barrier to entry for non-technical sources quite significantly, and we think it helps them achieve good OPSEC basically by default.
CoverDrop (aka Secure Messaging) has a few limitations right now that we'll be working to overcome in the next few months. Primarily, we don't support document upload, because our protocol only sends a few KB per day. Right now a journalist has the option to pivot the user onto another platform, e.g. Signal. This is already better, since the journalist can assess the quality of, and risks posed to, the source before giving out their Signal number.
The current plan to improve this within the CoverDrop system is to allow a journalist to assess the risk posed to a source and, if they deem it acceptable, send them an invite link to upload documents, which the client will encrypt with the journalist's keys before sending. This affects anonymity, of course, so we'll be investigating ways to do this while doing our best to keep the source anonymous. There are a few techniques we could use here, for example making the document drop look like an encrypted email attachment being sent to a Gmail account. I like this[1] paper as an example of a censorship-resistant approach we could take.
Another limitation is that the anonymity of our system is largely predicated on the large install base of our app. In the UK/US/AU we have a pretty large install base so the anonymity properties provided by the protocol are nice, but if another smaller news agency were to pick up our tech as it stands right now then they wouldn't have this property. That said, in practice just having our plausibly deniable storage approach is a pretty big improvement over other whistleblowing approaches (PGP, Tor based, etc), even if you're the only person in the set of possible sources using the app.
[1] https://petsymposium.org/popets/2022/popets-2022-0068.pdf
ajb•8mo ago
It's worth noting that in the UK, the Official Secrets Act 1920 actually protected anonymous contacts with newspapers. It's a shame this was dropped in later legislation.