The Pretty Good Privacy (PGP) protocol was created to provide a means to communicate securely via e-mail. Unfortunately, it was a bandage applied to an insecure protocol, and e-mail has only grown more complex since PGP was released. The ad hoc nature of PGP combined with the increasing complexity of e-mail itself has led to rather unfortunate implementation failures that have left PGP users vulnerable. A newly released attack enables attackers to spoof PGP signatures:
Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone’s public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can’t be detected as malicious without doing forensic analysis that’s beyond the ability of many users.
The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that’s required to spoof a signature is to have a public key or key ID.
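To make the mechanism concrete, here is a minimal sketch of the parsing flaw behind SigSpoof. GnuPG emits machine-readable status lines (prefixed with "[GNUPG:]") that frontends parse to decide whether a signature verified, and in verbose mode it also echoes the attacker-controlled filename stored inside the encrypted message. When a frontend reads both kinds of output from the same stream, injected text can be indistinguishable from a genuine status line. The function and sample output below are hypothetical illustrations, not code from any actual plugin:

```python
def naive_verify(gpg_output: str) -> bool:
    """Flawed check: trusts any line in the mixed output stream that
    looks like a GnuPG GOODSIG status line."""
    return any(line.startswith("[GNUPG:] GOODSIG")
               for line in gpg_output.splitlines())

# Hypothetical mixed log/status output: the embedded "filename" contains
# a newline followed by a fake status line, so the GOODSIG below was
# smuggled in by the attacker, not emitted by GnuPG's verifier.
spoofed_output = (
    "gpg: original file name='fake.txt\n"
    "[GNUPG:] GOODSIG DEADBEEFDEADBEEF Attacker <attacker@example.com>'\n"
)

print(naive_verify(spoofed_output))  # True -- the spoofed signature passes
```

The underlying fix is to keep the two channels separate: frontends should read status lines only from a dedicated status file descriptor and never mix them with verbose log output, so attacker-controlled strings can no longer masquerade as verification results.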
The good news is that many PGP plugins have been updated to patch this vulnerability. The bad news is that this is the second major vulnerability found in PGP in the span of about a month. It’s likely that other major vulnerabilities will be discovered in the near future since the protocol appears to be receiving a lot of attention.
PGP is suffering the same fate as most attempts to bolt security onto insecure protocols. This is why I urge people to use secure communication technology that was designed from the start to be secure and has been audited. While there are no guarantees in life, protocols designed from the ground up with security in mind tend to fare better than those that had security bolted on after the fact. Of course, a design can still be garbage, which is where an audit comes in. The reason you want to rely on a secure communication tool only after it has been audited is that an audit by an independent third party can verify that the tool is well designed and provides effective security. An audit isn't a magic bullet, unfortunately those don't exist, but it allows you to be reasonably sure that the tool you're using isn't complete garbage.