One VPN Provider to Rule Them All

When somebody first develops an interest in privacy, the first piece of advice they usually come across is to use a virtual private network (VPN). Because their interest in privacy is newly developed, they usually have little knowledge beyond the fact that they “need a VPN.” So they do a Google (again, their interest in privacy is new) search for VPN and find a number of review sites and providers. Being a smart consumer, they read the review sites and choose a provider that consistently receives good reviews. What the poor bastard doesn’t know is that many of those review sites and providers are owned by the same company (a company, I will add, that is shady as fuck):

Kape Technologies, a former malware distributor that operates in Israel, has now acquired four different VPN services and a collection of VPN “review” websites that rank Kape’s VPN holdings at the top of their recommendations. This report examines the controversial history of Kape Technologies and its rapid expansion into the VPN industry.

If you’re not familiar with Kape Technologies, the linked report provides a good overview. If you want a TL;DR, Kape Technologies has a history of distributing malware and now owns ExpressVPN, CyberGhost, Private Internet Access, and Zenmate. Because of Kape Technologies’ history, I would advise against using one of its VPN providers. It’s not impossible for a company to turn over a new leaf, but with other options available (at least until Kape buys them all), why take chances?

If you’re a person with a newfound interest in privacy who is looking for recommendations, I unfortunately don’t have any good review sites to point you to. The handful of review sites that I used to trust have either disappeared or been bought by VPN providers (which by itself doesn’t necessarily make a review site untrustworthy, but I’m always wary of such conflicts of interest).

As far as VPN providers go, I use Mullvad and I like it. It supports WireGuard (my preferred VPN protocol), doesn’t ask for any personally identifiable information when signing up for an account, accepts anonymous forms of payment (including straight cash mailed in an envelope), and seems determined to remain independent (at least for now).

Trade-offs

I frequently recommend Signal as a secure messaging platform because it strikes a good balance between security and usability. Unfortunately, as is always the case with security, the balance between security and usability involves trade-offs. One of the trade-offs made by Signal has recently become the subject of some controversy:

When Signal Desktop is installed, it will create an encrypted SQLite database called db.sqlite, which is used to store the user’s messages. The encryption key for this database is automatically generated by the program when it is installed without any interaction by the user.

As the encryption key will be required each time Signal Desktop opens the database, it will store it in plain text to a local file called %AppData%\Signal\config.json on PCs and on a Mac at ~/Library/Application Support/Signal/config.json.

When you open the config.json file, the decryption key is readily available to anyone who wants it.
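To drive home how readily available it is, here’s a rough Python sketch that grabs the key the same way any other program running under your account (malware included) could. The paths come from the quoted report; the assumption that the key sits under a “key” field in config.json is mine.

```python
import json
import os
import platform
from pathlib import Path

# Locate Signal Desktop's config.json using the paths from the quoted report
# (the Linux path is an assumption based on where the client keeps its data).
system = platform.system()
if system == "Darwin":
    config_path = Path.home() / "Library/Application Support/Signal/config.json"
elif system == "Windows":
    config_path = Path(os.environ["APPDATA"]) / "Signal" / "config.json"
else:
    config_path = Path.home() / ".config/Signal/config.json"

with open(config_path) as f:
    config = json.load(f)

# No special privileges are needed; ordinary user-level file access is enough.
print("SQLCipher key for db.sqlite:", config.get("key"))
```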

How could the developers of Signal make such an amateurish mistake? I believe the answer lies in the alternative:

Encrypting a database is a good way to secure a user’s personal messages, but it breaks down when the key is readily accessible to anyone. According to Suchy, this problem could easily be fixed by requiring users to enter a password that would be used to generate an encryption key that is never stored locally.

Mitigating this issue would require the user to do more work, and a user who is required to do more work is likely to abandon Signal. Since Signal provides very good transport security (the messages are secure during the trip from one user to another), abandoning it could result in the user opting for an easier-to-use tool that provides weaker transport security or none at all, which would make them less secure overall.
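For what it’s worth, the password-derived key that Suchy describes isn’t exotic. Here’s a minimal sketch using the scrypt KDF from Python’s standard library; the parameters and salt handling are illustrative, not a claim about how Signal would have to implement it. The trade-off is exactly the one described above: the user now has to type a password every time the database is opened.

```python
import hashlib
import os

def derive_database_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte database key from a user-supplied password.

    The key only ever exists in memory. The salt is stored on disk, but the
    password is not, so nothing on disk is enough to decrypt the database.
    """
    return hashlib.scrypt(
        password.encode("utf-8"),
        salt=salt,
        n=2**14,  # CPU/memory cost parameters; illustrative values only
        r=8,
        p=1,
        dklen=32,
    )

salt = os.urandom(16)  # generated once and stored alongside the database
key = derive_database_key("correct horse battery staple", salt)
```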

iOS devices and many modern Android devices have an advantage here: they often include dedicated hardware that encryption keys can be written to but not read from. Once a key is written to that hardware, data can be sent to it to be encrypted or decrypted with the key without the key ever leaving the chip. Many desktops and laptops have similar functionality thanks to Trusted Platform Modules (TPMs), but those tend to require setup by the user, whereas the smartphone option tends to be seamless.

There is another mitigation option here: use full-disk encryption to encrypt everything on your drive. While full-disk encryption won’t prevent resident malware from accessing Signal’s database, it will prevent the database from being copied off the computer by a thief or by law enforcers (assuming they seize the computer while it is powered off rather than while the operating system is running and the drive’s decryption key is resident in memory).

A Seemingly Good Idea with a Steep Price

When you use a free e-mail provider, you are the product, which means the provider most likely snoops through the contents of your e-mail to deliver targeted ads. Because of this, I encourage people to move away from free providers. Paid e-mail providers are less inclined to snoop through your e-mails, but the best option is to host your own e-mail server. Unfortunately, hosting e-mail is a pain in the ass, so very few people are interested in doing it. A new product, Helm, promises the best of both worlds: self-hosted e-mail without the complexity of administering an e-mail server. From a technical standpoint, it looks like a solid product:

The service takes a best-of-both-worlds approach that bridges the gap between on-premises servers and cloud-based offerings. The server looks stylish and is small enough to be tucked into a drawer or sit unnoticed on a desk. It connects to a network over Ethernet or Wi-Fi and runs all the software required to serve email and calendar entries to authorized devices. An expansion slot allows an additional five terabytes of storage.

The server also provides a robust number of offerings designed to make the service extremely hard to hack, including:

  • A system-on-a-chip from NXP that stores keys for full-disk encryption and other crypto functions to ensure keys are never loaded into memory, where they might be leaked. The disk encryption is designed to prevent the contents from being read without the key, even if someone gets physical possession of the device.
  • Support for secure boot and keys that are hardwired during manufacture so the device can only run or install authorized firmware and firmware updates. The devices are manufactured in the US or Mexico to ease concerns about supply-chain weaknesses.
  • Firmware that only communicates over an encrypted VPN tunnel. This measure prevents employees of the user’s ISP, or anyone monitoring the home or office connection, from knowing who the user is communicating with. The firmware also automatically generates TLS certificates from the free Let’s Encrypt service.
  • Before being backed up in the cloud, messages are encrypted using a key that’s stored on the personal server and is available only to the end user. That means if the cloud server is ever hacked or the provider is legally compelled to turn over the backed up data, it can’t be decrypted without the key.
  • Two-factor authentication that’s based on what Helm calls “proximity based security.” The tokens that generate one-time passwords can only be installed on a smartphone that has come into close physical proximity with the Helm device during pairing by someone who knows the device password. Pairing new phones, adding email accounts, or making other changes not only requires a device password but also an OTP from an already-paired phone.
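To make the fourth bullet concrete, here’s a sketch of the general encrypt-before-backup pattern. This is not Helm’s code (I have no idea what their implementation looks like); it just uses Python’s third-party cryptography package to show why a backed-up blob is useless without the locally held key.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key lives only on the personal server; it is never sent to the cloud.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

message = b"Subject: dinner plans\n\nSee you at eight."
backup_blob = cipher.encrypt(message)  # this ciphertext is what gets uploaded

# Anyone who hacks (or subpoenas) the backup provider gets only backup_blob;
# without local_key it cannot be decrypted.
assert Fernet(local_key).decrypt(backup_blob) == message
```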

Technical specifications and implementation often don’t match, so I’ll be interested to see how well this product works in the wild. However, I’m guessing that this product isn’t going to fly off the shelves because the price is steep:

The startup is betting that people will be willing to pay $500 to purchase the box and use it for one year to host some of their most precious assets in their own home. The service will cost $100 per year after that. Included in the fee is the registration and automatic renewal of a unique domain selected by the customer and a corresponding TLS certificate from Let’s Encrypt.

$500 is a lot of money for a consumer-grade embedded computer, and a $100 per year subscription fee isn’t chump change no matter how you shake it. You can buy a ProtonMail subscription for significantly less and enjoy what most consumers would consider pretty reasonable security. But if you want a self-hosted e-mail option without the hassle that usually accompanies setting up and maintaining your own e-mail server (and have a few Benjamins to spare), this may be a product to look into.

Properly Warning Users About Business Model Changes

I have an update to my previous article about how the developers of GPGTools botched their changeover from offering a free software suite to a paid one. It appears that they listened to those of us who criticized them for not properly notifying users that the latest update would change the business model, because this is the new update notification:

That’s how you properly inform your users about business model changes.

How Not to Handle Business Model Changes

GPGTools is a software suite that makes using OpenPGP on macOS easier. I’ve recommended this tool for quite some time to the three people who are interested in encrypting the contents of their e-mail. While the tool has been freely available, the development team has been warning users for over a year that the suite would eventually move to a paid model. I completely understand their motivation. A man has to eat, after all. However, there are proper ways to change business models and improper ways. The GPGTools team chose the improper way.

Here is the latest update notification for GPGTools:

It looks innocuous enough, but if you install it, you’ll discover that your Mail.app plugin has become a one-month trial. The initial screen of the update note doesn’t indicate that this update is the one that moves GPGTools from free to paid; you have to scroll down to learn that tidbit of information. Since most users probably don’t scroll through the entire update note, they will likely be rather surprised when their free app starts telling them that they have to pay.

Another issue with GPGTools’s transition is that there is no English version of the terms of distribution. Since GPGTools is based in Germany, this might not seem odd, but everything else on the site is translated into English. If you’re going to toss a license agreement at somebody, you should provide it in every language that your application supports.

The final major problem with the transition, which has fortunately since been fixed (you can read about it by digging through the announcement thread on Twitter), was that there was no information about the license being sold. When you went to buy a license, the site originally didn’t tell you whether the license was per computer, per user, or something else. Now the site states that the purchase covers one person and activation on up to three computers (a limit that I find more restrictive than I’d prefer).

I’m not one to criticize somebody for making an effort to profit from their endeavors, but GPGTools’s transition from a free suite to a paid suite should serve as a valuable lesson in how not to perform such a transition.

If you’re ever in a situation where you want to begin charging users for something that you have been providing for free, here are a few rules.

First, don’t foist the change on users out of the blue. Announce your intentions early. Moreover, give your users a firm date as soon as possible. GPGTools’s development team kept saying that the change would come eventually but never provided a hard date.

Second, if you’re going to change the business model through an update, make sure that the update informs users in a very obvious manner. That information should be the first thing in the update note, and it wouldn’t hurt to put it in big bold letters so it jumps out at the user. An even better solution would be to release one more free version that tells the user that the next version will be the one that transitions to a paid model, then have that next update clearly state that it is moving the software to a paid model.

Third, make sure you tell the user what they’re purchasing. The link to buy the software should inform the user if the license is per user, per computer, a monthly subscription, or something else.

Fourth, make any license agreements available in every language that the software supports. If the application is translated into English, then the user should expect an English version of any license agreements to be available.

If anybody is wondering whether I’m going to buy a license for GPGTools, the answer is maybe. I haven’t been enamored with the GPGTools development team. Its biggest problem has been a lack of timeliness. Mail.app doesn’t officially support plugins, so the GPGTools plugin requires a fair bit of hackery and often breaks between major macOS releases. GPGTools has often been months behind major macOS releases, which means there have often been months when the tool simply didn’t work if you were running the current version of macOS. I’m willing to overlook such an issue in a free tool (you get what you pay for) but not in a paid tool. So the GPGTools development team will have to demonstrate an ability to have working versions of its software available when new versions of macOS are released before I’ll purchase a license. I also find the three-computer limitation too restrictive. I’d rather see it bumped up to at least five computers or, better yet, unlimited computers (simply make it a per-user license).

If the GPGTools development team does resolve these issues, I’ll likely buy a license. It’s only $23.90 for the current major version (the implication being that a new license will be required for the next major release), which is reasonable. And while I don’t use encrypted e-mail very often (not for lack of want but for lack of people who also use it), I do like to throw money at teams that make quality products, and GPGTools, the issues noted in the previous paragraph aside, has been a quality product.

Avoid E-Mail for Security Communications

The Pretty Good Privacy (PGP) protocol was created to provide a means to communicate securely via e-mail. Unfortunately, it was a bandage applied to a protocol that has only grown significantly more complex since PGP was released. The ad hoc nature of PGP combined with the increasing complexity of e-mail itself has led to rather unfortunate implementation failures that have left PGP users vulnerable. A newly released attack enables attackers to spoof PGP signatures:

Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone’s public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can’t be detected as malicious without doing forensic analysis that’s beyond the ability of many users.

[…]

The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that’s required to spoof a signature is to have a public key or key ID.
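If you ever want to check a signature yourself rather than trusting whatever banner your mail plugin paints on a message, GnuPG will tell you directly. Here’s a minimal sketch that assumes gpg is installed (and patched), the sender’s public key has been imported, and the signed message has been saved as signed_message.asc (a placeholder name):

```python
import subprocess

# Ask GnuPG to verify the message and emit machine-readable status lines on
# stdout instead of relying on a mail plugin's interpretation.
result = subprocess.run(
    ["gpg", "--status-fd", "1", "--verify", "signed_message.asc"],
    capture_output=True,
    text=True,
)

# A genuine signature produces a "[GNUPG:] GOODSIG ..." status line.
good = any(line.startswith("[GNUPG:] GOODSIG") for line in result.stdout.splitlines())
print("signature verified" if good else "signature NOT verified")
```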

The good news is that many PGP plugins have been updated to patch this vulnerability. The bad news is that this is the second major vulnerability found in PGP in the span of about a month. It’s likely that other major vulnerabilities will be discovered in the near future since the protocol appears to be receiving a lot of attention.

PGP is suffering the same fate as most attempts to bolt security onto insecure protocols. This is why I urge people to use secure communication technology that was designed from the start to be secure and has been audited. While there are no guarantees in life, protocols designed from the ground up with security in mind tend to fare better than protocols that had security bolted on after the fact. Of course, designs can be garbage, which is where an audit comes in. The reason you want to rely on a secure communication tool only after it has been audited is that an independent third party can verify that the tool is well designed and provides effective security. An audit isn’t a magic bullet (unfortunately, those don’t exist), but it allows you to be reasonably sure that the tool you’re using isn’t complete garbage.

EFAIL

A vulnerability was announced yesterday that affects both OpenPGP and S/MIME encrypted e-mails. While it was initially being passed off as an apocalyptic discovery, I don’t think its scope is quite as bad as many are claiming. First, like all good modern vulnerabilities, it has a name, EFAIL, and a dedicated website:

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

The weakness isn’t in the OpenPGP or S/MIME encryption algorithms themselves but in how mail clients interact with encrypted e-mails. If your e-mail client is configured to automatically decrypt encrypted e-mails and allows HTML content to be displayed, the encrypted portion of your e-mail could be exfiltrated by a malicious attacker.

I generally recommend against using e-mail for secure communications in any capacity. OpenPGP and S/MIME are bandages applied to an insecure protocol. Because they are bolted-on features added after the fact, they are unable to encrypt a lot of the data in your e-mail (the only thing they can encrypt is the body; headers such as the subject line remain in the clear). However, if you are going to use encrypted e-mail, I recommend against allowing your client to automatically decrypt messages. At minimum, require that a password be entered to decrypt your private key (this wouldn’t defend against the attack if your client is configured to display HTML e-mail content, but it would prevent malicious e-mails from automatically exfiltrating encrypted content). Better yet, set up your system so that you copy the encrypted contents of an e-mail into a separate decryption program, such as the GnuPG command line tools, to view the secure contents. Finally, I would recommend disabling the display of HTML e-mails in your client if you are at all concerned about security.
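As an example of the copy-it-out-and-decrypt-it-separately workflow, here’s a rough sketch that hands a saved, ASCII-armored message to the GnuPG command line tools from Python (the file name is just a placeholder). Nothing is rendered as HTML and no remote content is ever fetched, which is the point.

```python
import subprocess

# Decrypt an OpenPGP message that was copied out of the mail client and saved
# as an ASCII-armored file. In a typical desktop setup, gpg asks for the
# private key's passphrase itself via a pinentry dialog.
result = subprocess.run(
    ["gpg", "--decrypt", "message.asc"],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print(result.stdout)  # the plaintext stays in your terminal
else:
    print("decryption failed:", result.stderr)
```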

If you perform the above practices, you can mitigate this attack… on your system. The real problem is, as always, other people’s systems. While you may perform the above practices, you can’t guarantee that everybody with whom you communicate will as well. If an attacker can exploit one party, they will generally get the e-mails sent by all parties. This is why I’d recommend using a communication tool that was designed to be secure from the beginning, such as Signal, over e-mail with OpenPGP or S/MIME. While tools like Signal aren’t bulletproof, they are designed to be secure by default, which makes them less susceptible to vulnerabilities created by an improper configuration.

Set a Strong Password on Your Phone

My girlfriend and I had to take our cat to the emergency vet last night so I didn’t have an opportunity to prepare much material for today. However, I will leave you with a security tip. You should set a strong password on your phone:

How long is your iPhone PIN? If you still use one that’s only made by six numbers (or worse, four!), you may want to change that.

Cops all over the United States are racing to buy a new and relatively cheap technology called GrayKey to unlock iPhones. GrayShift, the company that develops it, promises to crack any iPhone, regardless of the passcode that’s on it. GrayKey is able to unlock some iPhones in two hours, or three days for phones with six digit passcodes, according to an anonymous source who provided security firm Malwarebytes with pictures of the cracking device and some information about how it works.

The article goes on to explain that you should use a password with lowercase and uppercase letters, numbers, and symbols. Frankly, I think such advice is antiquated and prefer the advice given in this XKCD comic. A longer passphrase built from randomly chosen words is both easier to remember and contains more entropy. Instead of having something like “Sup3r53cretP@5sw0rd” you could have “garish-bethel-perry-best-finale.” The second is easier to remember and is actually longer. Moreover, you can increase your security by tacking on additional words. If you want a randomly generated passphrase, you can use a Diceware program such as this one (which I used to generate the latter of the two passwords).
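If you’d rather generate such a passphrase locally than trust a website, a few lines of Python will do it. This sketch assumes you’ve saved the EFF long wordlist (7,776 words) as eff_large_wordlist.txt; five words from that list give you roughly 5 × log2(7776) ≈ 64.6 bits of entropy, and each additional word adds about 12.9 more.

```python
import math
import secrets

# Assumes the EFF long wordlist (7,776 entries, "11111<TAB>abacus" per line)
# has been saved locally as eff_large_wordlist.txt.
with open("eff_large_wordlist.txt") as f:
    words = [line.split()[-1] for line in f if line.strip()]

num_words = 5
passphrase = "-".join(secrets.choice(words) for _ in range(num_words))

entropy_bits = num_words * math.log2(len(words))  # about 64.6 bits for 5 words
print(passphrase)
print(f"~{entropy_bits:.1f} bits of entropy")
```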

The Beginning of the End for Unsecured Websites

Chrome looks to be the first browser that is going to call a spade a spade. Starting in July 2018, Chrome will mark all websites that aren’t using HTTPS as “not secure”:

For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

I think Let’s Encrypt was the catalyst that made this decision possible. Before Let’s Encrypt was released, acquiring and managing TLS certificates could be a painful experience. What made matters worse was that the entire process had to be redone whenever the acquired certificates expired. Let’s Encrypt turned that oftentimes annoying and expensive process into an easy command, which made it feasible for even amateur website administrators to implement HTTPS.

The Internet is slowly moving to a more secure model. HTTPS not only prevents third parties from seeing your web traffic but, maybe even more importantly, it also prevents third parties from altering your web traffic.

Open Whisper Systems Released Standalone Desktop Client

Signal is my favorite messaging application. It offers very good confidentiality and is easy to use. I also appreciate the fact that a desktop client was released, which meant I didn’t have to pull out my phone every time I wanted to reply to somebody. What I didn’t like was that the Signal desktop client was a Chrome app: if you used a browser besides Chrome, you had to install Chrome just to use it. Fortunately, Google announced that it was deprecating Chrome apps, which forced Open Whisper Systems to release a standalone desktop client.

Now you can run the Signal desktop client without having to install Chrome.