As they said, they were on it...
From what I can piece together while the site is down, it seems like they've uncovered 14 exploitable vulnerabilities in GnuPG, most of which remain unpatched. Some of them the maintainer has apparently refused to patch. Maybe there are good reasons for this refusal; maybe someone else can chime in on that?
Is this another case of XKCD-2347? Or is there something else going on? Pretty much every Linux distro depends on PGP being pretty secure. Surely IBM & co have a couple of spare developers or spare cash to contribute?
I haven't seen those outside of old mailing list archives. Everyone uses detached signatures nowadays, e.g. PGP/MIME for emails.
Edit: even better. It was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that, unlike gpg, Sequoia requires you to specify --cleartext explicitly, so there is no confusion in that case.
A major part of the problem is that GPG’s issues aren’t cash or developer time. It’s fundamentally a bad design for cryptographic usage. It’s so busy trying to be a generic Swiss Army knife for every possible user or use case that it’s basically made of developer and user footguns.
The way you secure this is by moving to alternative, purpose-built tools. Signal/WhatsApp for messaging, age for file encryption, minisign for signatures, etc.
If there were a PGP vulnerability that actually made it possible to push unauthorized updates to RHEL or Fedora systems, then probably IBM would care, but if they concluded that PGP's security problems were a serious threat then I suspect they'd be more likely to start a migration away from PGP than to start investing in making PGP secure; the former seems more tractable and would have maintainability benefits besides.
That's mostly incorrect on both counts. One is that lots of mirrors are still HTTP-only or default to HTTP: https://launchpad.net/ubuntu/+archivemirrors
The other is that if you get access to one of the mirrors and replace a package, it's the signature that stops you. HTTPS is only relevant for MITM attacks.
> they'd be more likely to start a migration away from PGP
The discussions started ages ago:
Debian https://wiki.debian.org/Teams/Apt/Spec/AptSign
Fedora https://lists.fedoraproject.org/archives/list/packaging@list...
If you only need a specific version and you already know which one it is, then a cryptographic hash is a better way to verify the package, although that only covers that one version of that one package. So an encrypted transport (HTTPS or any other) alone will not help; it only helps in combination with other measures you also need to take to improve security.
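For example, something like this (a sketch; the file name is illustrative and the expected digest has to come from a channel you already trust):
$ sha256sum foo_1.2.3_amd64.deb
# or, with the known-good digest stored in a file obtained out of band:
$ sha256sum -c foo_1.2.3_amd64.deb.sha256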
This kind of knife-juggling is not unique to Linux, though. AMD and many other hardware vendors ship executables over unencrypted connections for Windows, all just hoping that no similar vulnerability or confusion can ever be found.
Debian, and indeed most projects, do not control the download servers you use. This is why security is end-to-end where packages are signed at creation and verified at installation, the actual files can then pass through several untrusted servers and proxies. This was sound design in the 90s and is sound design today.
In all other cases they only used sequoia as a tool to build data for demonstrating gpg vulnerabilities.
Fortunately, it turned out that there wasn't anything particularly wrong with the current standards so we can just do that for now and avoid the standards war entirely. Then we will have interoperability across the various implementations. If some weakness comes up that actually requires a standards change then I suspect that consensus will be much easier to find.
Archive link: https://web.archive.org/web/20251227174414/https://www.gnupg...
(PGP/GPG are of course hamstrung by their own decision to be a Swiss Army knife/only loosely coupled to the secure operation itself. So the even more responsible thing to do is to discard them for purposes that they can’t offer security properties for, which is the vast majority of things they get used for.)
(I think you already know this, but want to relitigate something that’s not meaningfully controversial in Python.)
(I think you already know this as well)
This is exactly analogous to the Web PKI, where you trust CAs to identify individual websites, but the websites themselves control their keypairs. The CA's presence intermediates the trust but does not somehow imply that the CA itself does the signing for TLS traffic.
Again, I must emphasize that this is identical in construction to the Web PKI; that was intentional. There are good criticisms of PKIs on grounds of centrality, etc., but “the end entity doesn’t control the private key” is facially untrue and sounds more like conspiracy than anything else.
On my web server, where the certificate is signed by Let's Encrypt, I do have a file that contains a private key. On PyPI there is no such thing. I don't think the parallel is correct.
With attestations on PyPI, the issuance window is 15 minutes instead of 90 days. So the private key is kept in memory and discarded as soon as the signing operation is complete, since the next signing flow will create a new one.
At no point does the private key leave your machine. The only salient differences between the two are file versus memory and the validity window, but in both cases PyPI’s implementation of attestations prefers the more ideal thing with respect to reducing the likelihood of local private key disclosure.
But also, that’s an implementation detail. There’s no reason why PyPI couldn’t accept attestations from local machines (using email identities) using this scheme; it’s just more engineering and design work to determine what that would actually communicate.
(This would of course require Codeberg to become an IdP + demonstrate the ability to maintain a reasonable amount of uptime and hold their own signing keys. But I think that's the kind of responsibility they're aiming for.)
But AFAICT, Certbot has rotated private keys automatically on reissuance since at least 2016[1]. There’s no reason not to in a fully automated scheme. I would expect all of the other major issuing clients to do the same.
[1]: https://community.letsencrypt.org/t/do-new-private-keys-get-...
Most people have never heard of it and never used it.
(So I agree that it’s de facto dead, but that’s not the same thing as formal deprecation. The latter is what you do explicitly to responsibly move people away from something that’s not suitable for use anymore.)
But Werner at this point has a history of irresponsible decisions like this, so it's sadly par for the course by now.
Another particularly egregious example: https://dev.gnupg.org/T4493
I’m still working through how to use this but I have it basically set up and it’s great!
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
In any case I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough. The fact the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but the hassle of setting up a new Yubikey was not quite worth it.
I’m sure 1Password is not much better than having a passphrase-protected key on disk. But it’s a lot more convenient.
Did you try to SSH in verbose mode to ascertain any errors? Why did you assume the hardware "broke" without any objective indication of an actual failure condition?
> I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough
How is trusting a closed-source, for-profit, subscription-based application with your SSH credential "secure enough"?
Choosing convenience over security is certainly not unreasonable, but claiming both are achieved without any compromise borders on ludicrous.
If you use cryptographic command line tools to verify data sent to you, be mindful of what you are doing and make sure to understand the attacks presented here. One of the slides is titled "should we even use command line tools", and yes, we should, because the alternative is worse, but we must be diligent in treating all untrusted data as adversarial.
Handling untrusted input is core to that.
The problem with PGP is that it's a Swiss Army Knife. It does too many things. The scissors on a Swiss Army Knife are useful in a pinch if you don't have real scissors, but tailors use real scissors.
Whatever it is you're trying to do with encryption, you should use the real tool designed for that task. Different tasks want altogether different cryptosystems with different tradeoffs. There's no one perfect multitasking tool.
When you look at the problem that way, surprisingly few real-world problems ask for "encrypt a file". People need backup, but backup demands backup cryptosystems, which do much more than just encrypt individual files. People need messaging, but messaging is wildly more complicated than file encryption. And of course people want package signatures, ironically PGP's most mainstream usage, ironic because it relies on only a tiny fraction of PGP's functionality and still somehow doesn't work.
All that is before you get to the absolutely deranged 1990s design of PGP, which is a complex state machine that switches between different modes of operation based on attacker-controlled records (which are mostly invisible to users). Nothing modern looks like PGP, because PGP's underlying design predates modern cryptography. It survives only because nerds have a parasocial relationship with it.
And there are lots of tools for file encryption anyways. I have a bash function using openssh, sometimes I use croc (also uses PAKE), etc.
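(croc, for instance, is about as simple as it gets; the file name here is just an example:)
$ croc send tax-return.pdf    # prints a one-time code phrase
$ croc <code-phrase>          # the receiver runs this with that phrase; the transfer key comes from the PAKE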
I need an alternative to "gpg --encrypt --armor --recipient <foo>". :)
That's literally age.
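A rough sketch of the same flow with age (key and file names are illustrative):
$ age-keygen -o key.txt                               # recipient generates a keypair; the public key is printed
$ age -a -r <recipient-public-key> -o doc.txt.age doc.txt    # sender encrypts to that recipient, ASCII-armored
$ age -d -i key.txt doc.txt.age                       # recipient decrypts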
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
This statement is generic and misleading. Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine AND desired.
> Would "fetch a short-lived age public key" serve your use case?
Sadly no.
> This statement is generic and misleading.
It may be generic, but it's not misleading.
> Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine.
What exactly do you mean by "long-lived"?
The "lifetime" of a key being years (for a long-lived backup) is less important than how many encryptions are performed with said key.
The thing you don't want is to encrypt 2^50 messages under the same key. Even if it's cryptographically safe to do that, any post-compromise key rotation will be a fucking nightmare.
The primary reason to use short-lived public keys is to limit the blast radius. Consider these two companies:
Alice Corp. uses the same public key for 30+ years.
Bob Ltd. uses a new public key for each quarter over the same time period.
Both parties might retain the secret key indefinitely, so that if Bob Ltd. needs to retrieve a backup from 22 years ago, they still can.
Now consider what happens if both of them lose their currently-in-use secret key due to a Heartbleed-style attack. Alice has 30 years of disaster recovery to contend with, while Bob only has up to 90 days.
Additionally, file encryption, backups, and archives typically use ephemeral symmetric keys at the bottom of the protocol. Even when a password-based key derivation function is used (and passwords are, for whatever reason, reused), the password hashing function usually has a random salt, thereby guaranteeing uniqueness.
The idea that "backups" magically mean "long-lived" keys are on the table, without nuance, is extremely misleading.
> > Would "fetch a short-lived age public key" serve your use case?
> Sadly no.
*shrug* Then, ultimately, there is no way to securely satisfy your use case.
The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days. Rotation only helps if only the current operational key is exposed, which is an optimistic threat model you did not specify.
Additionally, your symmetric key point actually supports what I said. If data is encrypted with ephemeral symmetric keys and the asymmetric key only wraps those, the long-lived asymmetric key's exposure does not enable bulk decryption without obtaining each wrapped key individually.
> "There is no way to securely satisfy your use case"
No need to be so dismissive. Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model. Real-world systems validate this: SSH host keys, KMS master keys, and yes, even PGP, all use long-lived asymmetric keys for confidentiality in non-ephemeral contexts.
And to add to this, incidentally, age (the tool you mentioned) was designed with long-lived recipient keys as the expected use case. There is no built-in key rotation or expiry mechanism because the authors considered it unnecessary for file encryption. If long-lived keys for confidentiality were inherently problematic, age would be a flawed design (so you might want to take it up with them, too).
In any case, yeah, your point about high-fan-out keys with large blast radius is correct. That is different from "long-lived keys are bad for confidentiality" (see above regarding age).
... If you're going to use a passphrase anyway why not just use a symmetric cipher?
In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
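On the first question: with a purpose-built tool, the passphrase-only route is one line each way. A sketch with age (file names illustrative):
$ age -p -o backup.tar.age backup.tar    # prompts for a passphrase; the key is derived with scrypt
$ age -d -o backup.tar backup.tar.age    # prompts again to decrypt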
> In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
Different threat models. Disk encryption (LUKS, VeraCrypt, plain dm-crypt) protects against physical theft. Once mounted, everything is plaintext to any process with access. File-level encryption protects files at rest and in transit: backups to untrusted storage, sharing with specific recipients, storing on systems you do not fully control. You cannot send someone a LUKS volume to decrypt one file, and backups of a mounted encrypted volume are plaintext unless you add another layer.
Veracrypt, and I'm sure others, allow you to do exactly this. You can create a disk image that lives in a file (like a .iso or .img) and mount/unmount it, share it, etc.
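On Linux you can get the same thing with a loopback LUKS container; roughly (run as root, names and sizes arbitrary):
$ truncate -s 1G vault.img
$ cryptsetup luksFormat vault.img     # prompts for a passphrase
$ cryptsetup open vault.img vault     # appears as /dev/mapper/vault
$ mkfs.ext4 /dev/mapper/vault
$ mount /dev/mapper/vault /mnt
...and umount + cryptsetup close when you're done.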
But even if that was somehow unreasonable or undesired, you can use Filippo's age for that. PGP has no use case that isn't done better by some other tool, with the possible exception of "cosplay as a leet haxor"
No. Having 30 years of secret keys at all is not the same as having 30 years of secret keys in memory.
And we have massive issues because of the ongoing cries to "shut it all off", followed by no improvement and no alternative: we have to talk with people at other organizations (and every organization runs their own mailserver), and the only really common way of communicating is email.
And when everyone has a GPG key, you get... what? A keyring.
You could say, we do not need gpg, because we control the mailserver, but what if a mailserver is compromised and the mails are still in mailboxes?
The public keys are not that public, only known to the parties involved; still, it's an issue and we have a keyring.
Look closely at the UX I'm proposing in https://github.com/fedi-e2ee/pkd-client-php?tab=readme-ov-fi...
Tell me why this won't work for your company.
Yes there aren't a lot of good options for that. If you're using something like a Microsoft software stack with active directory or similar identity/account management then there's usually some PKI support in there to anchor to.
Across organisations, there are really very few good solutions. GPG specifically is much too insecure when you need to receive messages from untrusted senders. There's basically S/MIME, which has comparable security issues, and then we have AD federation or Matrix.org with a server per org.
> You could say, we do not need gpg, because we control the mailserver, but what if a mailserver is compromised and the mails are still in mailboxes?
How are you handling the keys? This is only true if users protect their own keypairs with strong passwords / a yubikey applet, etc.
Keyrings are awful. I want to supply people’s public keys each time. I have never, in my entire time using cryptography, wanted my tool to guess or infer what key to verify with. (Heck, JOSE has a long history of bugs because it infers the key type, which is also a mistake.)
I have an actual commercial use case that receives messages (which are, awkwardly, files sent over various FTP-like protocols, sigh), decrypts and verifies them, and further processes them. This is fully automated and runs as a service. For horrible legacy reasons, the files are in PGP format. I know the public key with which they are signed (provisioned out of band) and I have the private key for decryption (again, provisioned out of band).
This would be approximately two lines of code using any sane crypto library [0], but there really isn’t an amazing GnuPG alternative that’s compatible enough.
But GnuPG has keyrings, and it really wants to use them and to find them in some home directory. And it wants to identify keys by 32-bit truncated hashes. And it wants to use Web of Trust. And it wants to support a zillion awful formats from the nineties using wildly insecure C code. All of this is actively counterproductive. Even ignoring potential implementation bugs, I have far more code to deal with key rings than actual gpg invocation for useful crypto.
[0] I should really not have to even think about the interaction between decryption and verification. Authenticated decryption should be one operation, or possibly two. But if it’s two, it’s one operation to decapsulate a session key and a second operation to perform authenticated decryption using that key.
That's 64 bits these days.
>I should really not have to even think about the interaction between decryption and verification.
Messaging involves two verifications. One to ensure that you are sending the message to whom you think you are sending it. The other to ensure that you know whom you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
> That's 64 bits these days.
The fact that it’s short enough that I even need to think about whether it’s a problem is, frankly, pathetic.
> Messaging involves two verifications. One to ensure that you are sending the message to whom you think you are sending it. The other to ensure that you know whom you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
I can’t quite tell what you mean.
One can build protocols that do encrypt-then-sign, encrypt-and-sign, sign-then-encrypt, or something clever that combines encryption and signing. Encrypt-then-sign has a nice security proof, the other two combinations are often somewhat catastrophically wrong, and using a high quality combination can have good performance and nice security proofs.
But all of the above should be the job of the designer of a protocol, not the user of the software. If my peer sends me a message, I should provision keys, and then I should pass those keys to my crypto library along with a message I received (and perhaps whatever session state is needed to detect replays), and my library should either (a) tell me that the message is invalid and not give me a guess as to its contents or (b) tell me it’s valid and give me the contents. I should not need to separately handle decryption and verification, and I should not even be able to do them separately even if I want to.
Please resist the temptation to personally attack others.
I think you mean that 64 bits of hash output could be trivially collided using, say, Pollard's rho method. But it turns out that simple collisions are not an issue for such hashes used as identities. The fact that PGP successfully used 32 bits (16 bits of effort for a collision) for so long is actually a great example of the principle.
>...encrypt-then-sign, encrypt-and-sign, sign-then-encrypt...
You mean encrypt-then-MAC here I think.
>...I should not even be able to do them separately even if I want to.
Alas that is not possible. The problem is intrinsic to end to end encrypted messaging. Protocols like PGP combine them into a single key fingerprint so that the user does not have to deal with them separately. You still have to verify the fingerprint for people you are sending to and the fingerprint for the people who send you messages.
But to the point, how long should something like a key fingerprint be?
At least 128 bits for most threat models. 192+ is preferable for mine.
https://soatok.blog/2024/07/01/blowing-out-the-candles-on-th...
My threat model assumes you want an attacker advantage of less than 2^-64 after 2^64 keys exist to be fingerprinted in the first place, and your threat model includes collisions.
If I remember correctly, cloud providers assess multi-user security by assuming 2^40 users which each will have 2^50 keys throughout their service lifetime.
If you round down your assumption to 2^34 users with at most 100 public keys on average (for a total of 2^41 user-keys), you can get away with 2^-41 after 2^41 at about 123 bits, which for simplicity you can round up to the nearest byte and arrive at 128 bits.
The other thing you want to keep in mind is, how large are the keys in scope? If you have 4096-bit RSA keys and your fingerprints are only 64 bits, then by the pigeonhole principle we expect there to be 2^4032 distinct public keys with a given fingerprint. The average distance between fingerprints will be random (but you can approximate it to be an order of magnitude near 2^32).
In all honesty, fingerprints are probably a poor mechanism.
Why so high? Computers are fast and massively parallel these days. If a cryptosystem fully relies on fingerprints, a second preimage of someone’s fingerprint where the attacker knows the private key for the second preimage (or it’s a cleverly corrupt key pair) catastrophically breaks security for the victim. Let’s make this astronomically unlikely even in the multiple potential victim case.
And it’s not like 256 bit hashes are expensive.
(I’m not holding my breath on fully quantum attacks using Grover’s algorithm, at high throughput, against billions of users, so we can probably wait a while before 256 bits feels uncomfortably short.)
A key fingerprint is a usability feature. It has no other purpose. Otherwise we would just use the public key. Key fingerprints have to be kept as short as possible. So the question is, how short can that be? I would argue that 256 bit key fingerprints are not really usable.
Signal messenger is using 100 bits for their key fingerprint. They combine two to make a 60 digit decimal number. Increasing that to 256 x 2 bits would mean that they would end up with 154 decimal digits. That would be completely unusable.
OK, to be clear, I am specifically contending that a key fingerprint does not include collisions. My proof is empirical, that no one has come up with an attack on 64 bit PGP key fingerprints.
Collisions mean that an attacker can generate two or more messaging identities with the same fingerprint. How would that help them in some way?
I have to admit that I don't actually understand it. First the attacker gets some kernel devs to sign key1 of the two keys with colliding key IDs. Why? How does that help the attacker? Then I am guessing that the attacker signs some software with key1. Are the signatures important here? Then the attacker signs the malicious software with key2? Key2 isn't signed by any developers so if that was important the attack fails. If it wasn't important then why mention it?
Could you please provide a more detailed description of the attack? It seems to me that the sort of attack you are describing would require some trusted third party to trick. Like a TLS certifying authority for example.
No. I mean that 64 bits can probably be inexpensively attacked to produce first or second preimages.
It would be nice if a decentralized crypto system had memorable key identifiers and remained secure, but I think that is likely to be a pipe dream. So a tool like gpg shouldn’t even try. Use at least 128 bits and give three choices: identify keys by an actual secure hash or identify them by a name the user assigns or pass them directly. Frankly I’m not sure why identifiers are even useful — see my original complaint about keyrings.
>> ...I should not even be able to do them separately even if I want to.
>Alas that is not possible. The problem is intrinsic to end to end encrypted messaging. Protocols like PGP combine them into a single key fingerprint so that the user does not have to deal with them separately.
Huh? It’s possible. It’s not even hard. It could work like this:
$ better_gpg decrypt_and_auth --sender_pubkey [KEY] --recipient_privkey [KEY]
Ciphertext input is supplied on stdin. Plaintext output appears on stdout but only if the message validates correctly.
Keep in mind that you would have to generate a valid keypair, or something that could be made into a valid keypair, for each iteration. That fact is why PGP got along with 32-bit key IDs for so long. PGP would still be using 32-bit key IDs if someone hadn't figured out how to mess with RSA exponents to greatly speed up the process. Ironically, the method with the slowest keypair generation became the limiting factor.
It isn't like this is a new problem. People have been designing and using key fingerprint schemes for over a quarter of a century now.
>$ better_gpg decrypt_and_auth --sender_pubkey [KEY] --recipient_privkey [KEY]
How do you know that the recipient key actually belongs to the recipient? How does the recipient know that the sender key actually belongs to you (so it will validate correctly)?
GPG's keyring handling has also been a source of exploits. It's much safer to directly specify recipient rather than rely on things like short key IDs which can be bruteforced.
Automatic discovery simply isn't secure if you don't have an associated trust anchor. You need something similar to keybase or another form of PKI to do that. GPG's key servers are dangerous.
You technically can sign with age, but otherwise there's minisign and SSH's signing functionality (ssh-keygen -Y).
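Both are one-liners to use. A sketch (key, identity, and file names are illustrative):
$ minisign -G                                   # generate a keypair
$ minisign -Sm release.tar.gz                   # sign; writes release.tar.gz.minisig
$ minisign -Vm release.tar.gz -p minisign.pub   # verify
$ ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file release.tar.gz
$ ssh-keygen -Y verify -f allowed_signers -I dev@example.com -n file -s release.tar.gz.sig < release.tar.gz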
As a followup, is there anything in existence that supports "large-scale public key management (with unknown recipients)"? Or "automatic discovery, trust management"? Even X.509 PKI at its most delusional doesn't claim to be able to do that.
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
The available open-source options come nowhere close to the messaging security that Signal/Whatsapp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has criminalized them operating with a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
Eh?
There are alternatives, try Ricochet (Refresh) or Cwtch.
Which jurisdiction are you on about? [1] Pick your poison.
For example, the UK has a law forcing suspects to cooperate. This law has been used to convict suspects who weren't cooperating.
NL does not, but police can use force to have a suspect unlock a device using finger or face.
The answer is not to live with it, but become politically active to try to support your principles. No software can save you from an authoritarian government - you can let that fantasy die.
And that's even if you are innocent on the underlying charge or search.
Encryption in this political climate is pick your poison.
- Either you go to jail for years, but you know your gov and other actors have no access to your data.
- or you store on remote/proprietary apps, stay free, but your gov or other actors may or may not have access to it.
There are many reasons, but one of them is that for the overwhelming majority of humans on the planet, their apps aren't being compiled from source on their device. So since you have to account for the fact that the app in the App Store may not be what's in some git repo, you may as well just start with the compiled/distributed app.
Signal and WhatsApp do not belong in the same sentence together. One's open source software developed and distributed by a nonprofit foundation with a lengthy history of preserving and advancing accessible, trustworthy, verifiable encrypted calling and messaging going back to TextSecure and RedPhone, the other's a piece of proprietary software developed and distributed by a for-profit corporation whose entire business model is bulk harvesting of user data, with a lengthy history of misleading and manipulating their own users and distributing user data (including message contents) to shady data brokers and intelligence agencies.
To imply these two offer even a semblance of equivalent privacy expectations is misguided, to put it generously.
The preceding comment is saying that source security is insufficient, not that transparency is irrelevant.
Both you and the preceding commenter are correct that just running binaries signed and distributed by Alphabet (Google) and/or Apple presents room for additional risks beyond those observable in the source code, but the solution to this problem isn't to say "and therefore source availability doesn't matter at all for anyone"; it's to choose to build from source, or to obtain and install APKs built and signed by the developers, such as via Accrescent or Obtainium (which pulls directly from GitHub, GitLab, etc. releases).
There's a known-good path. Most people do not take the known-good path. That choice does not invalidate or eliminate the desirable properties of the known-good path (verifiability, trustworthiness).
I genuinely do not understand the argument you and the other user are making. It reads to me like an argument that goes "Yes, there's a known, accurate, and publicly documented recipe to produce a cure for cancer, but it requires prerequisite knowledge to understand that most people lack, and it's burdensome to follow the recipe, so most people just buy their vials from the untrustworthy CancerCureCorporation, who has the ability to give customers a modified formula that keeps them sick rather than giving them the actual cure, and almost nobody makes the cure themselves without going through this untrustworthy but ultimately optional intermediary, so the public documentation of the cure doesn't matter at all, and there's no discernable difference between having the cure recipe and not having the cure recipe."
Thankfully, I didn’t say that.
The population of users who have a verifiable path from an open source repo to an app on their device is a rounding error in the set of humans using messaging apps.
You raised concerns about lack of source availability, and I’ve been consistent in my replies that source availability is not how somebody who wants secure messaging is going to know they’re getting it. They’re going to get it because they’re using a popular platform with robust primitives, whose compiled/distributed apps receive constant scrutiny from security researchers.
Signal and WhatsApp are that. Concerns about Meta’s other work are just noise, in part because analysis of the WhatsApp distributed binaries doesn’t rely on promises from Meta.
[†] Source is always good to have, but it's insufficient.
That said, you seem to be refuting even that idea. While your reputation precedes you, and while I haven't been in the field quite as long as you, I do have a few dozen CVEs, I've written surreptitious side channel backdoors and broken production cryptographic schemes in closed-source software doing binary analysis as part of a red team alongside former NCC folks. I don't know a single one of them who would say that lacking access to source code increases your ability to establish a chain of trust.
Can you please explain how lacking access to source code, being ONLY able to perform dynamic analysis, rather than dynamic analysis AND source code analysis, can ever possibly lead to an increase in the maximum possible confidence in the behavior of a given binary? That sounds like a completely absurd claim to me.
There's a lot of background material I'd have to bring in to attempt to bring you up to speed here, but my favorite simple citation here is just: Google [binary lifter].
The source code is just a set of hints about what the binary does. You don't need the hints to discern what a binary is doing.
At a high level, what I'm fundamentally contending is that WhatsApp is less trustworthy and secure than Signal. I can have a higher degree of confidence in the behavior and trustworthiness of the Signal APK I build from source myself than I can from WhatsApp, which I can't even build a binary of myself. I'd simply be given a copy of it from Google Play or Apple's App Store.
Signal's source code exhibits known trustworthy behavior, i.e. not logging both long-term and ephemeral cryptographic keys and shipping them off to someone else's servers. Sure, Google Play and Apple can modify this source code, add a backdoor, and the binary distributed by Google Play and Apple can have behavior that doesn't match the behavior of the published source code. You can detect this fairly easily, because you have a point of reference to compare to. You know what the compiled bytecode from the source code you've reviewed looks like, because you can build it yourself, no trust required[1], it's not difficult to see when that differs in another build.
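As a crude sketch of that comparison (ignoring the signing metadata, which always differs, and assuming you have the store APK and your own build side by side):
$ unzip -q -d official official.apk
$ unzip -q -d mine my-build.apk
$ diff -r --exclude=META-INF official mine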
With WhatsApp, you don't even have a point of reference of known good behavior, i.e. not logging both long-term and ephemeral cryptographic keys and shipping them off to someone else's server, in the first place. You can monitor all the disk writes, you can monitor all the network activity. Just because YOU don't observe cryptographic keys being logged, either in-memory, or on disk, or being sent off to some other server, doesn't mean there isn't code present to perform those exact functions under conditions you've never met and never would - it's entirely technically feasible for Google and Apple to be fingerprinting a laundry list of identifiers of known security researchers and be shipping them binaries with behavior that differs from the behavior of ordinary users, or even for them to ship targeted backdoored binaries to specific users at the demand of various intelligence agencies.
The upper limit for the trustworthiness of a Signal APK you build from source yourself is on a completely different planet from the trustworthiness of a WhatsApp APK you only have the option of receiving from Google.
And again, none of this even begins to factor in Meta's extensive track record on deliberately misleading users on privacy and security through deceptive marketing and subverting users' privacy extensively. Onavo wasn't just capturing all traffic, it was literally doing MITM attacks against other companies' analytics servers with forged TLS certificates. Meta was criminally investigated for this and during discovery, it came out that executives understood what was going on, understood how wrong it was, and deliberately continued with the practice anyway. Actual technical analysis of the binaries and source code aside, it's plainly ridiculous to suggest that software made by that same corporation is as trustworthy as Signal. One of these apps is a messenger made by a company with a history of explicitly misleading users with deceptive privacy claims and employing non-trivial technical attacks against their own users to violate their own users' privacy, the other is made by a nonprofit with a track record of being arguably one of the single largest contributors to robust, accessible, audited, verifiable secure cryptography in the history of the field. I contend that suggesting these two applications are equally secure is irrational, impossible to demonstrate or verify, and indefensible.
[1] Except in your compiler, linker, etc... Ken Thompson's 'Reflections on Trusting Trust' still applies here. The argument isn't that source code availability automatically means 100% trustworthy, it means the upper boundary for trustworthiness is higher than without source availability.
I've been largely ignoring your sideline commentary about not trusting Meta and their other work outside of WhatsApp. Mostly because the whole thrust of my argument is that an app's security is confirmed by analyzing what the code does, not by listening to claims from the author.
Beyond that, I've been commenting in good faith about the core thrust of our disagreement, which is whether or not a lack of available source code disqualifies WhatsApp as a viable secure messaging option alongside Signal.
As part of that, I had to respond midway through because you put a statement in quotation marks that was not actually something I'd said.
Can you please explain how lacking access to source code, being ONLY able to perform dynamic analysis, rather than dynamic analysis AND source code analysis, can ever possibly lead to an increase in the maximum possible confidence in the behavior of a given binary?
This doesn't make sense, because not having source code doesn't limit you to dynamic analysis. I assumed, 2 comments back, you were just misunderstanding SOTA reversing; you got mad at me about that. But the thing you "never stated it wasn't" is right there in the comment history. Acknowledge that and help me understand where the gap was, or this isn't worth all the words you're spending on it.
I want secure messaging, not encrypted SMS. I want my messages to sync properly between arbitrary number of devices. I want my messaging history to not be lost when I lose a device. I want not losing my messaging history to not be a paid feature. I want to not depend on a shady crypto company to send a message.
I send long messages via Signal, typed on a desktop computer, all the time. (In fact, I almost exclusively use Signal through my desktop app.)
You don't have to use it like "encrypted SMS"! You're free.
> I want my messages to sync properly between arbitrary number of devices. I want my messaging history to not be lost when I lose a device.
OK. https://signal.org/blog/a-synchronized-start-for-linked-devi...
> I want not losing my messaging history to not be a paid feature.
I genuinely don't understand what you mean here. From https://signal.org/blog/introducing-secure-backups/
"If you do decide to opt in to secure backups, you’ll be able to securely back up all of your text messages and the last 45 days’ worth of media for free."
If you have a metric fuckton of messages, that does cost money, sure, but as they say:
"If you want to back up your media history beyond 45 days, as well as your message history, we also offer a paid subscription plan for US$1.99 per month."
"This is the first time we’ve offered a paid feature. The reason we’re doing this is simple: media requires a lot of storage, and storing and transferring large amounts of data is expensive. As a nonprofit that refuses to collect or sell your data, Signal needs to cover those costs differently than other tech organizations that offer similar products but support themselves by selling ads and monetizing data."
If you want Signal to host the encrypted storage, that costs money. If you don't want to pay Signal money, they provide 45 days of backup for free.
If you want to self-host your own backups (at your own cost), that's easy to do.
You can literally set up SyncThing to stream your on-device backups to your NAS, cloud storage, or whatever.
> I want to not depend on a shady crypto company to send a message.
Shady crypto company?
Are you referring to MobileCoin? That feature isn't in the pipeline for sending messages.
I checked! https://soatok.blog/2025/02/18/reviewing-the-cryptography-us...
Using it as something more than encrypted SMS requires persistent message history between devices.
> metric fuckton of messages
“More than 45 days” is a metric fuckton? Seriously?
> If you want Signal to host the encrypted storage, that costs money. If you don't want to pay Signal money, they provide 45 days of backup for free.
I don’t want Signal to store my messages. I want Signal to not lock in my messages on their servers, so I can sync them between my devices and back them up into my own backups.
> If you want to self-host your own backups (at your own cost), that's easy to do.
Except there’s no way to move it between platforms. I have more than one device.
> Are you referring to MobileCoin? That feature isn't in the pipeline for sending messages.
I don’t want shady crypto company to hold my data hostage, and there’s no way to store it on my hardware and then move it between platforms. That’s my problem with signal.
> A Synchronized Start for Linked Devices
It only properly transfers 45 days. You can’t have more than one phone. Phones are special “primary devices” and AFAIK you can’t restore your messages if you lose your phone even if you have logged-in Signal Desktop.
Signal is not holding you hostage.
I’ve already lost message history consistency because one of my devices was offline for too long. The messages are there on my other device, but Signal refuses to let me copy my data from one of my devices to another. Signal is, quite literally, worse at syncing message history than IRC — at least with IRC I can set up a bouncer and have a consistent view of history on all of my devices, but there’re no Signal bouncers.
The point is that whatever secure messenger you use, it must plausibly be secure. Email cannot plausibly be made secure. Whatever other benefits you might get from using it --- federation, open source, UX improvements, universality --- come at the cost of grave security flaws.
Most people who use encrypted email are doing so in part because it does not matter if any of their messages are decrypted. They simply aren't interesting or valuable. But in endorsing a secure messenger of any sort, you're influencing the decisions of people whose messages are extremely sensitive, even life-or-death sensitive. For those people, federation or cross-platform support can't trump security, and as practitioners we are obligated to be clear about that.
It’s important to me, as a user, that a communication tool doesn’t lose my data, and Signal already did. Actual practitioners keep recommending Signal, and sure, I believe that in a weird scenario where my encryption keys are somehow compromised without also compromising my local message history, Signal’s double ratchet will do wonders; but it doesn’t actually work as a serious communication tool.
It’s also kinda curious that while the “email cannot be made secure” mantra is constantly repeated online, basically every organization that needs secure communication uses email. Openwall are certainly practitioners, and they use PGP-over-email: are they committing malpractice?
Say more. Plenty of people use Signal as a serious communication tool.
> Openwall are certainly practitioners, and they use PGP-over-email: are they committing malpractice?
They, and other communities that use GPG-encrypted emails are LARPing, and it’s only fine because their emails don’t actually matter enough for anybody to care about compromising them.
It’s not malpractice to LARP: plenty of people love getting out their physical or digital toys and playing pretend. But if you’re telling other people that your foam shield can protect them from real threats, you are lying.
I did say more already. Maybe you believe in serious communication tools that can’t synchronize searchable history between devices, but I don’t.
> They, and other communities that use GPG-encrypted emails are LARPing, and it’s only fine because their emails don’t actually matter enough for anybody to care about compromising them.
Are we talking about the same Openwall? Are you aware what Openwall’s oss-security mailing list is? Please, do elaborate how nobody cares about getting access to an unlimited stream of zerodays for basically every Unix-like system.
https://oss-security.openwall.org/wiki/mailing-lists/distros
> Only use these lists to report security issues that are not yet public
> To report a non-public medium or high severity 2) security issue to one of these lists, send e-mail to distros [at] vs [dot] openwall [dot] org or linux [dash] distros [at] vs [dot] openwall [dot] org (choose one of these lists depending on who you want to inform), preferably PGP-encrypted to the key below.
The few jobs that actually care about this stuff, like journalists, do use signal.
Openwall doesn't get security via pgp, it gets a spam filter.
Something like this happens basically every time I try to use Matrix though. Messages are not decrypting, or not being delivered, or devices can’t be authenticated for some cryptic reason. The reason I even tried to use Element Desktop is because my nheko is seemingly now incapable of sending direct messages (the recipient just gets an infinite “waiting for message”).
https://photos.goldstein.lol/share/OIgowBN4Wmi4zlm8DmDP0s8jH...
I wrote this to answer this exact question last year.
as a recent dabbler in introductory pop-sci cryptography content, I've been wondering: how are the expert roles in the field segmented?
e.g. in Filippo's blogpost about Age he clarified that he's not a cryptographer but rather a cryptography engineer. Is that also your role, what are the concrete divisions of labor, and what other related but separate positions exist in the overall landscape?
where is the cutoff point of "don't roll your own crypto" in the different levels of expertise?
I do not have a Ph.D in Cryptography (not even an honorary one), so I do not call myself a Cryptographer. (Though I sometimes use "Cryptografur" in informal contexts for the sake of the pun.)
Ideally, this person will also design the system that uses the crypto, because no matter how skilled the people on a standards committee might be, their product will always be, at best, a baroque nightmare with a near-infinite attack surface, and at worst an unusable pile of crap. IPsec vs. Wireguard is a prime example, but there are many others.
"don't" roll your own cover everything from "don't design your own primitive" to "don't make your own encryption algorithm/mode" to "don't make your own encryption protocol", to "don't reimplement an existing version of any of the above and just use an encryption library"
(and it's mostly "don't deploy your own", if you want to experiment that's fine)
(That's nothing to do with how hard cryptography is, just with how little demand there is for serious cryptography engineering, especially compared with the population of people who have done serious academic study of it.)
Which, from where I stand, means that PGP is the only viable solution because I don't have a choice. I can't replace PGP with Sigstore when publishing to Maven. It's nice to tell me I'm dumb because I use PGP, but really it's not my choice.
> Use SSH Signatures, not PGP signatures.
Here I guess it's just me being dumb on my own. Using SSH signatures with my Yubikeys (FIDO2) is very inconvenient. Using PGP signatures with my Yubikeys literally just works.
> Encrypted Email: Don’t encrypt email.
I like this one, I keep seeing it. Sounds like Apple's developer support: if I need to do something and ask for help, the answer is often: "Don't do it. We suggest you only use the stuff that just works and be happy about it".
Sometimes I have to use emails, and cryptographers say "in that case just send everything in plaintext because eventually some of your emails will be sent in plaintext anyway". Isn't it like saying "no need to use Signal, eventually the phone of one of your contacts will be compromised anyway"?
You don't have a choice today. You could have a choice tomorrow if enough people demanded it.
Don't let PGP's convenience (in this context) pacify you from making a better world possible.
To me, there will be no reason to use PGP the day I find practical alternatives for the remaining use-cases I have. And I feel like signing git commits is not a weird use-case...
This is true both for GPG and S/MIME
Email encryption self-compromises itself in a way Signal doesn't
I really would like to replace PGP with the "better" tool, but:
* Using my Yubikey for signing (e.g. for git) has a better UX with PGP instead of SSH
* I have to use PGP to sign packages I send to Maven
Maybe I am a nerd emotionally attached to PGP, but after a year signing with SSH, I went back to PGP and it was so much better...
This might be true of comparing GPG to SSH-via-PIV, but there's a better way with far superior UX: derive an SSH key from a FIDO2 slot on the YubiKey.
With GPG it just works.
(see "uv" option here https://fidoalliance.org/specs/fido-v2.0-ps-20190130/fido-cl... - the -sk key types in SSH are just a clever way of abusing the FIDO protocol to create a signing primitive)
This is so insane to me. The whole point of using cryptography is to keep private information private. It's hard to think of ways PGP could fail more as a security/privacy tool.
Keyservers are simply a convenient way to get a public key (identity). Most people don't have to use them.
https://www.latacora.com/blog/2020/02/19/stop-using-encrypte...
The problem mostly concerns the oldest parts of PGP (the protocol), which gpg (the implementation) doesn't want or cannot get rid of.
Seems like a legitimate difference of opinion. The researcher wants a message with an invalid format to return an integrity-failure message. Presumably the GnuPG project thinks that would be better handled by some sort of bad-format error.
The exploit here is a variation on the age old idea of tricking a PGP user into decrypting an encrypted message and then sending the result to the attacker. The novelty here is the idea of making the encrypted message look like a PGP key (identity) and then asking the victim to decrypt the fake key, sign it and then upload it to a keyserver.
Modifying a PGP message file will break the normal PGP authentication[1] (that was not acknowledged in the attack description). So here is the exploit:
* The victim receives an unauthenticated/anonymous (unsigned or with a broken signature) message from the attacker. The message looks like a public key.
* Somehow (perhaps in another anonymous message) the attacker claims they are someone the victim knows and asks them to decrypt, sign and upload the signed public key to a keyserver.
* They see nothing wrong with any of this and actually do what the attacker wants, ignoring the error message about the bad message format.
So this attack is also quite unlikely. Possibly that affected the decision of the GnuPG project to not change behaviour in this case, particularly when such a change could possibly introduce other vulnerabilities.
[1] https://articles.59.ca/doku.php?id=pgpfan:pgpauth
Added: Wait. How would the victim import the bogus PGP key into GPG so they could sign it? There would normally be a preexisting key for that user so the bogus key would for sure fail to import. It would probably fail anyway. It will be interesting to see what the GnuPG project said about this in their response.
(1) Rewrites the ciphertext of a PGP message
(2) Introducing an entire new PGP packet
(3) That flips GPG into DEFLATE compression handling
(4) And then reroutes the handling of the subsequent real message
(5) Into something parsed as a plaintext comment
This happens without a security message, but rather just (apparently) a zlib error.
In the scenario presented at CCC, they used the keyserver example to demonstrate plaintext exfiltration. I kind of don't care. It's what's happening under the hood that's batshit; the "difference of opinion" is that the GnuPG maintainers (and, I guess, you) think this is an acceptable end state for an encryption tool.
Keys (even quantum safe) are small enough that having one per application is not a problem at all. If an application needs multi-context, they can handle it themselves. If they do it badly, the damage is contained to that application. If someone really wants to make an application that just signs keys for other applications to say "this is John Smith's key for git" and "this is John Smith's key for email" then they could do that. Such an application would not need to concern itself with permissions for other applications calling into it. The user could just copy and paste public keys, or fingerprints when they want to attest to their identity in a specific application.
The keyring circus (which is how GPG most commonly intrudes into my life) is crazy too. All these applications insist on connecting to some kind of GPG keyring instead of just writing the secrets to the filesystem in their own local storage. The disk is fully encrypted, and applications should be isolated from one another. Nothing is really being accomplished by requiring the complexity of yet another program to "extra encrypt" things before writing them to disk.
I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
For most apps on non-mobile devices, there isn't filesystem isolation between apps. Disk/device-level encryption solves for a totally different threat model; Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
> I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
Basically everything in PGP/GPG predates the existence of "corporate security circles".
If there isn't there should be. At least my Flatpaks are isolated from each other.
> Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
The Linux equivalents are suspicious and stuck in the past to say the least. Depending on them is extra tedious on top of the tediousness of any PGP keyrings, god forbid a combination of the two.
> Basically everything in PGP/GPG predates the existence of "corporate security circles".
Then we know where this stuff came from.
I can’t figure out what you mean by this.
I see this sentiment a lot, but you later hint at the problem. Any "replacement" needs to solve for secure key distribution. Signing isn't hard, you can use a lot of different things other than gpg to sign something with a key securely. If that part of gpg is broken, it's a bug, it can/should be fixed.
The real challenge is distributing the key so someone else can verify the signature, and almost every way to do that is fundamentally flawed, introduces a risk of operational errors or is annoying (web of trust, trust on first use, central authority, in-person, etc). I'm not convinced the right answer here is "invent a new one and the ecosystem around it".
(We’re also long past the point where key distribution has been a significant component of the PGP ecosystem. The PGP web of trust and original key servers have been dead and buried for years.)
What do you mean? Web of Trust? Keyservers? A combination of both? Under what use case?
As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
Or at least, more realistically, to a few nerds. I think I signed 3-4 people's keys.
The process had - as they say - a low WAF.
GPG is terrible at that.
0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it carries Alice's signature.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.
After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.
I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.
The good way used to be using the path finder, some random website maintained by some random guy that disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hopes somebody you know signed one of those, and so on.
And GPG is terrible at dealing with that, it hates having tens of thousands of keys in your keyring from such experiments.
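For anyone who hasn't done it, the manual walk looks roughly like this (key IDs are placeholders, and it assumes a keyserver that still answers):

    gpg --verify foo.tar.gz.sig foo.tar.gz    # reports a signature from an unknown key, say 0xAAAABBBB
    gpg --recv-keys 0xAAAABBBB                # fetch the signer's key
    gpg --list-sigs 0xAAAABBBB                # see which keys have signed it
    gpg --recv-keys 0xCCCCDDDD 0xEEEEFFFF     # fetch each of those, then repeat,
                                              # hoping to eventually reach a key you already trust

Every round trip pulls more keys into the keyring, which is exactly how you end up with the tens of thousands of keys mentioned above.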
GPG never grew into the modern era. It was made for people who mostly know each other directly. Finding a way to verify the keys of random free software developers isn't something it ever did well.
I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.
I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would know Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.
If you have a website, put your keys on a dedicated page and direct people there.
If you are in an org, there can be whatever kind of centralised repo.
Add the hashes to your email signature and/or profile bios.
There might be a nice uniform solution using DNS and derived keys, like certificate chains? I am not sure, but I think it might not be necessary.
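For what it's worth, there is already a half-standardised version of the "keys on your own domain" idea: Web Key Directory, which modern GnuPG can query directly (the address below is a placeholder):

    gpg --auto-key-locate clear,wkd --locate-keys someone@example.org

It only proves control of the domain and mailbox, not identity, but that is already more than a bare keyserver lookup gives you.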
This is something S/MIME does, and it does it reasonably well. You can start from mailbox validation, and that already beats everything PGP has to offer in terms of ownership validation. If you do identity validation, or it's a national PKI issuing the certificate (like in some countries), it's a very strong guarantee of ownership. Coughing baby (PGP) vs hydrogen bomb level of difference.
It sounds to me much more like an excuse to keep using PGP when it doesn't even remotely offer what you want from a replacement.
This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.
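In practice that devolution looks like this (URL is a placeholder; the point is that the trust anchor is really the TLS connection to the project's own site):

    curl -fsSL https://example.org/release-key.asc | gpg --import
    gpg --verify release-1.2.3.tar.gz.sig release-1.2.3.tar.gz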
This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"
Most of the entries have to do with ways to compromise the unencrypted text presented to the user, so that the displayed message doesn't match the signed message. This allows for multiple different kinds of exploit.
But in the git commit case the main thing we care about, for commits authored by anyone whose signature we trust, is that the actual commit matches the signature, and git itself enforces that.
Of course, it's possible that a malicious user could construct a commit that expands to something misleading (with or without GPG). But that comes back to the point of signatures in the first place - if your repo allows random anonymous people to push signed commits, then you might have an issue.
The attack on detached signatures (attack #1) happens because GnuPG needs to run a complicated state machine that can put processing into multiple different modes, among them three different styles of message signature. In GPG, that whole state machine apparently collapses down to a binary check of "did we see any data so that we'd need to verify a signature?", and you can selectively flip that predicate back and forth by shoving different packets into the message stream, even if you've already sent data that needs to be verified.
The malleability bug (attack #4) is particularly slick. Again, it's an incoherent state machine issue. GPG can "fail" to process a packet because it's cryptographically invalid. But it can also fail because the message framing itself is corrupted. Those latter non-cryptographic failures are handled by aborting the processing of the message, putting GPG into an unexpected state where it's handling an error and "forgetting" to check the message authenticator. You can CBC-bitflip known headers to force GPG into processing DEFLATE compression, and mangle the message so that GPG's error handling prints the plaintext in its output.
The formfeed bug (#3) is downright weird. GnuPG has special handling for `\f`; if it occurs at the end of a line, you can inject arbitrary unsigned data, because of GnuPG's handling of line truncation. Why is this even a feature?
Some of these attacks look situational, but that's deceptive, because PGP is (especially in older, jankier systems) used as an encryption backend for applications --- Mallory getting Alice to sign or encrypt something on Mallory's behalf is an extremely realistic threat model (it's the same threat model as most cryptographic attacks on secure cookies: the app automatically signs stuff for users).
There is no reason for a message encryption system to have this kind of complexity. It's a deep architectural flaw in PGP. You want extremely simple, orthogonal features in the format, ideally treating everything as clearly length-delimited opaque binary blobs. Instead you get a Weird Machine, and talks like this one.
Amazing work.
Because you're presenting it as a defense of PGP, in a thread about a presentation that demonstrates breaks in PGP using exactly the kind of complexity the article you're responding to predicts would cause it to break.
I think this supports my contention that we spend much too much time quibbling about cryptographic trivialities when it comes to end to end encrypted messaging. We should spend more time on the usability of such systems.
It's a pretty great ecosystem; most hardware smartcards are surrounded by a lot of black magic and secret handshakes, and stuff like PKCS#11 and OpenSC/OpenCT is much, much harder to configure.
I use it for many things, but not for email: encrypted backups, password manager, SSH keys. For some there are other hardware options like FIDO2, but not for all use cases, and not the same one for each use case. So I expect to be using gpg for a long time to come.
Isn't this what ffmpeg did recently? They seemed to get a ton of community support in their decision not to fix a vulnerability
GnuPG, for all its flaws, has a copyleft license (GPLv3), making it difficult to "embrace, extend, extinguish". If you replace it with a project that becomes more successful but has a less protective (for users) license, "we the people" might lose control of it.
Not everything in software is about features.
And a single developer deciding for the entirety of the Debian project just also happened to be a Canonical employee by pure chance?
A company adopts some software with a free but not copyleft license. Adopts means they declare "this is good, we will use it".
Developers help develop the software (free of charge) and the company says thank you very much for the free labour.
Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.
Some machines run that software because an individual developer put it there, other machines run that software because a company put it there, some times by exerting some sort of power for it to end up there (for example, economic incentives to vendors, like android).
At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made"
But now that software is already running in a lot of machines.
Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).
Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."
This is what happened e.g. with chrome. There's chromium, anyone can build it. But that's not chrome. And chrome is what everybody uses because google has lock-in power. Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
And all of this was initially built with free labour, which Google took, from people who thought they were contributing to some commons in a sense.
Copyleft licenses protect against this. Part of the license says: "if you use this license and you make changes to the software, you have to share the changes as well; you can't keep them for yourself."
Because Google has their attention. You can use chromium, but most people don't and pick the first thing they see. Also, Chrome is a much better name, err, not better but easier to say.
> Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
You and I have a different definition of "forced". But, are you speculating this might happen, or do you have an example of it happening?
> And all of this was initially build by free labour, which google took, by people who thought they were contributing to some commons in a sense.
Do you have an example of a site that works better in chrome, than it does in chromium? I'll even take an example of a site that works worse in the version of chromium before manifest v2 was disabled, compared to whatever version of chrome you choose?
> Copyleft licenses protect against this. Part of the license says: if you use these licenses, and you make changes to the software, you have to share the changes as well, you can't keep them for yourself".
Is chromium not still foss? Other than branding, what APIs or features are missing from the FOSS version? You mentioned manifest v3, but I'm using firefox because of it, so I don't find that argument too compelling. I don't think FOSS is worse, I think google is making a bad bet.
Right. So at that point all those contributing developers are free to fork, and maintain the fork. You have just as much control as you always did.
And of course being MIT or GPL doesn't make a difference, the company is permitted to change the license either way. [1]
So here's the thing, folk are free to use the company product or not. Folk are free to fork or not.
In practice of course the company version tends to win because products need revenue to survive. And OSS has little to zero revenue. (The big revenue comes from, you know, companies who typically sell commercial software.)
Even with the outcome you hypothesize (and clearly that is a common outcome) OSS is still ahead because they have the code up to the fork. And yes, they may have contributed to earn this fork.
But projects are free to change license. That's just built into how licenses work. Assuming that something will be GPL or MIT or whatever [2] forever is on you, not them.
[1] I'm assuming a CLA is in play, because without that your explanation won't work.
[2] yes, I think GPL sends a signal of intention more than MIT, but it's just a social signal, it doesn't mean it can't change. Conversely making it GPL makes it harder for other developers to adopt in the first place since most are working in non-GPL environments.
Yep. And we've seen this happen. Eg, MariaDB forked off from MySQL. Illumos forked from Solaris. Etc. It's not a nice thing to have to do, but it's hardly a doomsday situation.
> chrome is what everybody uses because google has lock-in power.
Incorrect. At least on Windows, Chrome is not the default browser, it is the browser that most users explicitly choose to install, despite Microsoft's many suggestions to the contrary.
This is what most pro-antitrust arguments miss. Even when consumers have to go out of their way to pick Google, they still do. To me, this indicates that Google is what people actually want, but that's an inconvenient fact which doesn't fit the prevailing political narrative.
> so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
What is a Chrome API that web developers could possibly implement but that "isn't public?" What would that even mean in this context?
> google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads".
And that could have happened just as well if Chrome was 100% open source and GPL.
Even if you accept the claim that Manifest V3's primary purpose was not increasing user security at face value (and that's a tenuous claim at best), it was perfectly possible for all third-party browsers (notably including Edge, which has 0 dependency on Google's money) to fork Chromium in a way that kept old extensions working. However, open source does not mean that features will magically appear in your software. If Google is the primary maintainer and Google wishes to remove some feature, maintaining that feature in your fork requires upkeep, upkeep that most Chromium forkers were apparently unwilling to provide. This has nothing to do with whether Chrome is open source or not.
Maybe have it run the CLI in compatibility mode when called as `gpg`, but present a completely new interface when called under its own name.
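BusyBox-style dispatch on the invoked name would cover that; a sketch, with the tool name and the --gpg-compat flag entirely made up:

    #!/bin/sh
    # installed as "newpgp", with a "gpg" symlink pointing at this wrapper
    case "$(basename "$0")" in
      gpg) exec /usr/libexec/newpgp --gpg-compat "$@" ;;   # legacy-compatible CLI
      *)   exec /usr/libexec/newpgp "$@" ;;                # new, stricter interface
    esac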
Yeah; I actually used to do that to (use the "default license"), but eventually came to the same realisation and have been moving all my projects to full copyleft.
No... I don't think that's how software works. Do you have an example of that happening? Has any foss project lost control of the "best" version of some software?
> Not everything in software is about features.
I mean, I would happily make the argument that the ability to use code however I want, without needing to give you (the people) permission to use my work without following my rules, is a feature. But then, stopping someone from using something in a way you don't like is just another feature of GPL software too, is it not?
For me, I am of two minds. On one hand, the fact that billion-dollar empires are built on top of what is essentially unpaid volunteer work does rankle and makes me much more appreciative of copyleft.
On the other hand, most of my hobbyist programming work has continued to be released under some form of permissive license, and this is more of a reality of the fact that I work in ecosystems where use of the GPL isn't merely inconvenient, but legally impossible, and the pragmatism of permissive licenses win out.
I do wish that weak copyleft like the Mozilla Public License had caught on as a sort of middle ground, but it seems like those licenses are rare enough that their use would invite as much scrutiny as the GPL, even if it was technically allowed. Perhaps the FSF could have advocated more strongly for weak copyleft in areas where the GPL was legally barred, but I suppose they were too busy not closing the network hole in the GPLv3 to bother.
What sort of ecosystems are these?
I also like Rust, but the above would be true before I started using Rust (I agree it’s not a programming language thing)
If the new components were GPL-licensed there would be less opposition, but we just get called names and our opinions discarded. After all, such companies have more effective marketing departments.
But at some point, for things like, a very small-but-useful library or utility, I had a change of heart. I felt that it's better for the project to use non-copyleft licenses.
I do this as a rule now for projects where the scope is small and the complexity of a total rewrite is not very large for several engineers at a large company.
For small stuff, the consideration is, I want people to use it, period.
When devs look at open source stuff and see MIT / Apache, they know they can use it no questions asked. When they see GPL etc. then they will be able to use it in some cases and not others depending on what they are working on. I don't want to have that friction if it's not that important.
For a lot of stuff I publish, it's really just some small thing that I tried to craft thoughtfully and now I want to give it away and hope that someone else benefits. Sometimes it gets a few million downloads and I get feedback, and I just like that experience. Often whatever the feedback is it helps me make the thing better which benefits my original use case, or I just learn things from the experience.
Often I'm not trying to build a community of developers around that project -- it's too small for that.
I still like the GPL and I have nothing against it. If I started working on something that I anticipated becoming really large somehow, I might try to make it GPL. And I feel great about contributing to large GPL projects.
I just feel like even though I'm friendly to the GPL, it's definitely no longer my default, because I tend to try to publish very small useful units. And somehow I've convinced myself that it's better for the community and for the projects themselves if those kind of things are MIT / Apache / WTFPL or similar.
I hope that makes sense.
I realized that I can be seen as one of those who treat the GPL as weird or not normal, because I don't really use it anymore. But I'm not trying to be an enemy of the GPL or enable embrace-extend-extinguish tactics. It's just that it's a very nuanced thing for me nowadays, I guess. Your comment caused me to reflect on this.
> As others have pointed out, GnuPG is a C codebase with a long history (going on 28 years). On top of that, it's a codebase that is mostly uncovered by tests, and has no automated CI. If GnuPG were my project, I would also be anxious about each change I make. I believe that because of this the LibrePGP draft errs on the side of making minimal changes, with the unspoken goal of limiting risks of breakage in a brittle codebase with practically no tests. (Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG. But that's surely not a failing of RFC 9580.)
* https://articles.59.ca/doku.php?id=pgpfan:schism
Nothing has improved and everything has gotten worse since I wrote that. Both factions are sleepwalking into an interoperability disaster. Supporting one faction or the other just means you are part of the problem. The users have to resist being made pawns in this pointless war.
>Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG.
Traditionally the OpenPGP process has been based on minimalism and rejected everything without a strong justification. RFC-9580 is basically everything that was rejected by the LibrePGP faction (GnuPG) in the last attempt to come up with a new standard. It contains a lot of poorly justified stuff and some straight up pointless stuff. So just supporting RFC-9580 is not the answer here. It would require significant cleaning up. But again, just supporting LibrePGP is not the answer either. The process has failed yet again and we need to recognize that.
None of this matters now. Everyone is cheerfully walking into an interoperability disaster that will cause much harm. There isn't any real chance GnuPG will lose this war, it is pretty much infrastructure at this point. But the war will cause a lot of harm to the PGP ecosystem, possibly even to the point that it becomes unusable in practice. This is an actual crisis.
Either faction can stop this. But at this point both factions are completely unreasonable and are worthy of criticism.
That sentence is too long, it should read:
>RFC-9580 is basically everything
The RFC has every idea that anyone involved in its creation ever thought of tossed into it. It's over a hundred pages long and just keeps going and going and going. Want AEAD? We have three of them, two of which have essentially zero support in crypto libraries. PRFs? We've got four, and then five different ways to apply them to secret-key encryption. PKC algorithms? There's a dozen of them, some parameterized so there's up to half a dozen subtypes, including mystery ones like EdDSALegacy which seems to be identical to Ed25519 but gets treated differently. Compression algorithms? There are three you can choose from. You get the idea.
And then there's the Lovecraftian horror of the signature packets with their infinite subtypes and binding signatures and certification signatures and cross-signatures and revocation signatures and timestamp signatures and confirmation signatures and signature signatures and signature signature signatures and signature signature signature aarrgghhh I'm going insane!
The only way you can possibly work with this mess without losing your mind is to look at what { GPG, Sequoia } will accept and then implement a minimal intersection of the two, so you can talk to the two major implementations without ending up in a madhouse.
That said, these issues are not a big deal.
The first one concerns someone manually reading a signature with cat (which is completely untrusted at that stage, since nothing has been verified), then using the actual tool meant to parse it, and ignoring that tool’s output. cat is a different tool from minisign.
If you manually cat a file, it can contain arbitrary characters, not just in the specific location this report focuses on, but anywhere in the file.
The second issue is about trusting an untrusted signer who could include control characters in a comment.
In that case, a malicious signer could just make the signed file itself malicious as well, so you shouldn’t trust them in the first place.
Still, it’s worth fixing. In the Zig implementation of minisign, these characters are escaped when printed. In the C implementation, invalid strings are now rejected at load time.
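In the meantime, the usual terminal hygiene helps when eyeballing untrusted files by hand; this is generic advice, nothing minisign-specific:

    cat -v suspicious.minisig                        # show control characters instead of letting the terminal interpret them
    tr -cd '\11\12\15\40-\176' < suspicious.minisig  # or keep only tab/LF/CR and printable ASCII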
Something that will encrypt using AES-256 with a passphrase, but also using asymmetric crypto. Oh, and I want my secret keys printable if needed. And I want to store them securely on YubiKeys once generated (https://github.com/drduh/YubiKey-Guide). I want to be able to encrypt my backups to multiple recipients. And I want the same keys (stored on Yubikeys, remember?) to be usable for SSH authentication, too.
And by the way, if your fancy tool is written using the latest language du jour with a runtime that changes every couple of years or so, or requires huge piles of dependencies that break if you even as much as sneeze (python, anyone?), it won't do.
BTW, in case someone says "age", I actually followed that advice and set it up just to be there on my systems (managed by ansible). Apart from the fact that it really slowed down my deployments, the thing broke within a year. And I didn't even use it. I just wanted to see how reliable it will be in the most minimal of ways: by having it auto-installed on my systems.
If your fancy tool has less than 5 years of proven maintenance record, it won't do. Encryption is for the long term. I want to be able to read my stuff in 15-30 years.
So before you go all criticizing GnuPG, please understand that there are reasons why people still use it, and are actually OK with the flaws described.
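For reference, most of that list maps onto commands that have existed for decades; key IDs, addresses and filenames below are placeholders:

    gpg --symmetric --cipher-algo AES256 backup.tar                    # passphrase-based AES-256
    gpg --encrypt -r alice@example.org -r bob@example.org backup.tar   # encrypt once to multiple recipients
    gpg --armor --export-secret-keys 0xAAAABBBB > paper-copy.asc       # printable secret key for offline storage
    echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf                 # let gpg-agent act as the SSH agent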
The cryptographic requirements of different problems --- backup, package signing, god-help-us secure messaging --- are in tension with each other. No one design adequately covers all the use cases. Trying to cram them all into one tool is a sign that something other than security is the goal. If that's the case, you're live action roleplaying, not protecting people.
I'd be interested in whether you could find a cryptographer who disagrees with that. I've asked around!
[†] I am aware that SAK nerds love the saw.
I'm somewhat amused that every time this kind of discussion comes up, the answer is "you are holding it wrong". I have a feeling the world of knowledgeable crypto folks is somewhat detached from user reality.
If a single tool isn't possible, give me three tools. But if those three tools each require separate sets of keys with their own key management systems, I'm not sure if the user's problem is being addressed.
> I want to be able to encrypt my backups to multiple recipients.
Presumably this means you want to encrypt the backup once and have multiple decryption keys or something?
The rest of your original comment are constraints around how you want it to work.
I'm very curious about this. Tell me more.
Poster says:
> slowed down my deployments
I take that to mean the _deployment_ step, not the deployed system.
Yes, I know this can surely be explained and isn't a "fair" comparison. But then again my time is limited and I need predictable, reliable tools.
Is this a comparable complaint worth mentioning, and if it is, are you sure you actually need cryptography? It slowed things down a bit, so you don't really want to move on from a GnuPG that is demonstrably too complex not to have bugs?
Stop asking for it, for your own good, please. If you don't understand the entire spec you can't use it safely.
You want special purpose tools. Signal for communication, Age for safer file encryption, etc.
What exact problems did you have with age? You're not explaining how it broke anything. Are you compiling it yourself? Age has YubiKey support and can do all you described.
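For the multi-recipient part specifically, age takes repeated -r flags; the recipients below are placeholders:

    age -r age1exampleRecipientA... -r age1exampleRecipientB... -o backup.tar.gz.age backup.tar.gz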
> if your fancy tool has less than 5 years of proven maintenance record, it won't do. Encryption is for the long term. I want to be able to read my stuff in 15-30 years.
This applies to algorithms, it does not apply to cryptographic software in the same way. The state of art changes fast, and while algorithms tend to stand for a long time these days there are significant changes in protocol designs and attack methods.
Downgrade protection, malleability protection, sidechannel protection, disambiguation, context binding, etc...
You want software to be implemented by experts using known best practices with good algorithms and audited by other experts.
If you read the various how-tos out there, it is not that hard to use; it's just that people do not want to read anything longer than two lines. That is the main issue.
My only complaint is that Thunderbird now uses its own homegrown encryption, thus locking you into their email client. Almost all email clients seem to have their own way of doing encryption, confusing matters even more. I now use mutt because it can be easily linked to GnuPG and it does not lock me into a specific client.
The video linked above contains multiple examples of people using GnuPG's CLI in ways that it was seemingly intended to be used. Blaming users for holding it wrong seems facile.
The entire point of every single valid criticism of PGP is that you cannot make a single functionally equivalent alternative to PGP. You must use individual tools that are good at specific things, because the "Swiss Army knife" approach to cryptographic tool design has yielded empirically poor outcomes.
If you have an example of how age broke for you, I think its maintainers would be very interested in hearing that -- I've been using it directly and indirectly for 5+ years and haven't had any compatibility or runtime issues with it, including when sharing encrypted files across different implementations of age.
(I think you can make a tie-in argument here, though: PGP's packet design and the state machine that falls out of it is a knock-on effect of how many things PGP tries to do. PGP would maybe not have such a ridiculously complicated packet design if it didn't try and do so many things.)
Not to say you can't then make an umbrella interface for interacting with them all as a suite, but perhaps the issue has become that gpg has not appropriately followed the Unix philosophy to begin with.
Not that I've got the solution for you. Just calling out the nature of your demands somewhat being at odds with a key design principle that made Unix and Unix-likes great to begin with.
Here you can find a reply from GnuPG: https://www.openwall.com/lists/oss-security/2025/12/29/9
And btw, it was mentioned in the talk that GnuPG does not sign commits. That’s just wrong. Everything, including the release tarballs, is signed.
But trust in Werner Koch is gone. Wontfix??
People who are serious about security use newer, better tools that replace GPG. But keep in mind, there’s no “one ring to rule them all”.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
> Don't.
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#en...
I’m not sure I completely agree here. For private use, this seems fine. However, this isn’t how email encryption is typically implemented in an enterprise environment. It’s usually handled at the mail gateway rather than on a per-user basis. Enterprises also ensure that the receiving side supports email encryption as well.
edit: formatting
It is, GPG takes care of that.
> If the idea here is, a private group of friends can just agree never to put anything in their subjects, or to accidentally send unencrypted replies
That’s not what I’m talking about. It’s an enterprise - you cannot send non-encrypted emails from your work mail account, the gateway takes care of it. It has many rules, including such based on the sender and recipient.
Surely, someone can print the mail and carry it out of the company’s premises, but at this point it’s intentional and the cat’s already out of the bag.
Either it’s just theatre, encrypting emails internally and then stripping the encryption when they’re delivered, or you still need every recipient to be managing their own keys anyway to be able to decrypt/validate what they’re reading.
> you still need every recipient to be managing their own keys anyways to be able to decrypt/validate what they’re reading.
Nope, that is handled at the gateway on the receiving side.
edit: Again, the major point here is to ensure no plain text email gets relayed. TLS does not guarantee that plain text email doesn't get relayed by a wrongly configured relay on its route.
Why do high-profile projects, such as Linux and QEMU, still use GPG for signing pull requests / tags?
https://docs.kernel.org/process/maintainer-pgp-guide.html
https://www.qemu.org/docs/master/devel/submitting-a-pull-req...
Why does Fedora / RPM still rely on GPG keys for verifying packages?
This is a staggering ecosystem failure. If GPG has been a known-lost cause for decades, then why haven't alternatives ^W replacements been produced for decades?
GPG is what GP is referring to as a lost cause. Now, it can be debated whether PGP-in-general is a lost cause too, but that's not what GP is claiming.
It is, though, what both the fine article and tptacek in these comments are claiming!
> then why haven't alternatives ^W replacements been produced for decades?
Actually we do have alternatives for it.
For example, Git supports S/MIME, and it could absolutely be used to sign commits and tags. Even just using self-signed certificates wouldn't be far off from what PGP offers. However, if people used the digital IDs many countries offer, mission-critical code could have signatures with verifiable strong identities.
Though there are other approaches as well, both for signing and for encrypting. It's more that people haven't really considered migrating.
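Concretely, switching Git over to S/MIME is a couple of config lines, assuming gpgsm (or another CMS tool) holds your certificate:

    git config --global gpg.format x509
    git config --global gpg.x509.program gpgsm
    git commit -S -m "signed with an S/MIME certificate"
    git tag -s v1.0 -m "signed tag"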
Also no, the gpg.fail site makes no such claims. Now, tptacek has said that, but he didn't write the comment you were replying to.