Security for Me, Not for Thee

Google has announced several security changes. However, it’s evident that those changes are for its security, not the security of its users:

According to Google’s Jonathan Skelker, the first of these protections that Google has rolled out today comes into effect even before users start typing their username and password.

In the coming future, Skelker says that Google won’t allow users to sign into accounts if they disabled JavaScript in their browser.

The reason is that Google uses JavaScript to run risk assessment checks on the users accessing the login page, and if JavaScript is disabled, this allows crooks to pass through those checks undetected.

Conveniently, JavaScript is also used to run a great deal of Google’s tracking software.

Disabling JavaScript is a great way to improve your browser’s security. Most browser-based malware and a lot of surveillance capabilities rely on JavaScript. With that said, disabling JavaScript entirely also makes much of the web unusable because web developers love to use JavaScript for everything, even loading text. But many sites will provide at least a hobbled experience if you choose to disable JavaScript.

Mind you, I understand why Google would want to improve its security and why it would require JavaScript if it believed that doing so would improve its overall security. But it’s important to note what is meant by improving security here and what potential consequences it has for users.

Deafening the Bug

I know a lot of people who put a piece of tape over their computer’s webcam. While this is a sane countermeasure, I’m honestly less worried about my webcam than about the microphone built into my laptop. Most laptops, unfortunately, lack a hardware disconnect for the microphone, and placing a piece of tape over the microphone input often isn’t enough to prevent it from picking up sound in whatever room it’s in. Fortunately, Apple has been stepping up its security game and now offers a solution to the microphone problem:

Little was known about the chip until today. According to its newest published security guide, the chip comes with a hardware microphone disconnect feature that physically cuts the device’s microphone from the rest of the hardware whenever the lid is closed.

“This disconnect is implemented in hardware alone, and therefore prevents any software, even with root or kernel privileges in macOS, and even the software on the T2 chip, from engaging the microphone when the lid is closed,” said the support guide.

The camera isn’t disconnected, however, because its “field of view is completely obstructed with the lid closed.”

While I have misgivings with Apple’s recent design and business decisions, I still give the company credit for pushing hardware security forward.

Implementing a hardware cutoff for the microphone doesn’t require something like Apple’s T2 chip. Any vendor could put a hardware disconnect switch on its computers that would accomplish the same thing. Almost none of them do though, even if they include hardware cutoffs for other peripherals (my ThinkPad, for example, has a built-in cover for the webcam, which is quite nice). I hope Apple’s example encourages more vendors to implement some kind of microphone cutoff switch because being able to listen in on conversations generally yields more incriminating evidence than merely being able to look at whatever is in front of a laptop.

Good News from the Arms Race

Security is a constant arms race. When people celebrate good security news, I caution them from getting too excited because bad news is almost certainly soon to follow. Likewise, when people are demoralized by bad security news, I tell them not to lose hope because good news is almost certainly soon to follow.

Earlier this year news about a new smartphone cracking device called GrayKey broke. The device was advertised as being able to bypass the full-disk encryption utilized by iOS. But now it appears that iOS 12 renders GrayKey mostly useless again:

Now, though, Apple has put up what may be an insurmountable wall. Multiple sources familiar with the GrayKey tech tell Forbes the device can no longer break the passcodes of any iPhone running iOS 12 or above. On those devices, GrayKey can only do what’s called a “partial extraction,” sources from the forensic community said. That means police using the tool can only draw out unencrypted files and some metadata, such as file sizes and folder structures.

Within a few months I expect the manufacturer of the GrayKey device to announce an update that gets around iOS’s new protections, and within a few months of that announcement I expect Apple to announce an update to iOS that renders GrayKey mostly useless again. But for the time being it appears that law enforcers’ options for acquiring data from a properly secured iOS device are limited.

Trade-offs

I frequently recommend Signal as a secure messaging platform because it strikes a good balance between security and usability. Unfortunately, as is always the case with security, the balance between security and usability involves trade-offs. One of the trade-offs made by Signal has recently become the subject of some controversy:

When Signal Desktop is installed, it will create an encrypted SQLite database called db.sqlite, which is used to store the user’s messages. The encryption key for this database is automatically generated by the program when it is installed without any interaction by the user.

As the encryption key will be required each time Signal Desktop opens the database, it will store it in plain text to a local file called %AppData%\Signal\config.json on PCs and on a Mac at ~/Library/Application Support/Signal/config.json.

When you open the config.json file, the decryption key is readily available to anyone who wants it.
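To illustrate just how available it is, here is a minimal sketch of what any process running as the logged-in user could do. The language (Python) is my choice, not anything Signal ships, and I’m assuming the macOS path quoted above and that the key sits under a JSON field named “key”:

    import json
    import os.path

    # The macOS path quoted above; on Windows it would be the
    # %AppData%\Signal\config.json equivalent instead.
    config_path = os.path.expanduser("~/Library/Application Support/Signal/config.json")

    with open(config_path) as config_file:
        config = json.load(config_file)

    # Assuming the database key is stored under a field named "key", it is an
    # ordinary string readable by any process running as this user.
    print(config.get("key"))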

How could the developers of Signal make such an amateurish mistake? I believe the answer lies in the alternative:

Encrypting a database is a good way to secure a user’s personal messages, but it breaks down when the key is readily accessible to anyone. According to Suchy, this problem could easily be fixed by requiring users to enter a password that would be used to generate an encryption key that is never stored locally.

In order to mitigate this issue, the user would be required to do more work. If the user is required to do more work, they’ll likely abandon Signal. Since Signal provides very good transport security (the messages are secure during the trip from one user to another), abandoning it could result in the user opting for an easier-to-use tool that provides weaker transport security or none at all, which would make them less secure overall.
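For what it’s worth, the fix Suchy describes isn’t much code. Here is a rough sketch of deriving the database key from a passphrase with PBKDF2; the choice of function and parameters is mine for illustration, not anything Signal has committed to:

    import hashlib
    import os

    def derive_database_key(passphrase: str, salt: bytes) -> bytes:
        # Stretch the passphrase into a 256-bit key. Only the salt and the
        # iteration count need to be stored on disk; the key itself is
        # recomputed every time the user enters the passphrase.
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

    salt = os.urandom(16)  # generated once and saved alongside the database
    key = derive_database_key("correct horse battery staple", salt)

The cost is exactly the trade-off described above: the user has to type that passphrase every time Signal Desktop starts, and a forgotten passphrase means the message history is gone.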

iOS and many modern Android devices have an advantage in that they often have dedicated hardware that encryption keys can be written to but not read from. Once a key is written to the hardware, data can be sent to it to be either encrypted or decrypted with that key. Many desktops and laptops have similar functionality thanks to Trusted Platform Modules (TPMs), but those tend to require user setup first, whereas the smartphone option tends to be seamless to the user.

There is another mitigation option here, which is to utilize full-disk encryption to encrypt all of the contents of your hard drive. While full-disk encryption won’t prevent resident malware from accessing Signal’s database, it will prevent the database from being copied off of the computer by a thief or by law enforcers (assuming the computer was seized while powered off rather than while the operating system was running and the drive’s decryption key was resident in memory).

The End of TLS 1.0 and 1.1

Every major browser developer has announced that they will drop support for Transport Layer Security (TLS) 1.0 and 1.1 by early 2020:

Apple, Google, Microsoft, and Mozilla have announced a unified plan to deprecate the use of TLS 1.0 and 1.1 early in 2020.

TLS (Transport Layer Security) is used to secure connections on the Web. TLS is essential to the Web, providing the ability to form connections that are confidential, authenticated, and tamper-proof. This has made it a big focus of security research, and over the years, a number of bugs that had significant security implications have been found in the protocol. Revisions have been published to address these flaws.

Waiting until 2020 gives website administrators plenty of time to upgrade their sites, which is why I’ll be rolling my eyes when the cutoff date arrives and a bunch of administrators whine about the major browsers “breaking” their websites.
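To put “plenty of time” in perspective, refusing the old protocol versions is typically a one-line configuration change. As a rough illustration, here is what it looks like for a service built on Python’s standard ssl module (real sites would set the equivalent option in nginx, Apache, or their load balancer; the certificate paths are placeholders):

    import ssl

    # Build a server-side TLS context that refuses anything older than TLS 1.2.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.load_cert_chain("/etc/ssl/example.crt", "/etc/ssl/example.key")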

Every time browser developers announce years ahead of time that support will be dropped for some archaic standard, there always seems to be a slew of websites, including many major websites, that continue relying on the dropped standard after the cutoff date.

A Lot of Websites Don’t Fix Security Issues

Last year Google announced that it would be removing the Symantec root certificates from Chrome’s list of trusted certificates (this is because Symantec improperly issued a large number of certificates). This notification was meant to give web administrators time to acquire new certificates to replace their Symantec-signed ones. The time of removal is fast approaching, and many web administrators still haven’t updated their certificates:

Chrome 70 is expected to be released on or around October 16, when the browser will start blocking sites that run older Symantec certificates issued before June 2016, including legacy branded Thawte, VeriSign, Equifax, GeoTrust and RapidSSL certificates.

Yet despite more than a year to prepare, many popular sites are not ready.

Security researcher Scott Helme found 1,139 sites in the top one million sites ranked by Alexa, including Citrus, SSRN, the Federal Bank of India, Pantone, the Tel-Aviv city government, Squatty Potty and Penn State Federal to name just a few.

The headline of this article is, “With Chrome 70, hundreds of popular websites are about to break.” A more accurate headline would have been, “Administrators of hundreds of websites failed to fix major security issue.” Chrome isn’t the culprit in this story. Google is doing the right thing by removing the root certificate of an authority that failed to take proper precautions when issuing certificates. The administrators of these sites, on the other hand, have failed to do their job of providing a secure connection for their users.

All Data Is for Sale

What happens when a website that sells your personal information asks you to input your phone number to enable two-factor authentication? Your phone number is sold to advertisers:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.” I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

There really is no reason for a website to require a phone number to enable two-factor authentication. Short Message Service (SMS) is not a secure protocol, so utilizing it for two-factor authentication, which many websites sadly do, is not a good idea. Moreover, as this study has demonstrated, handing over your phone number just gives the service provider another piece of information about you to sell.

Instead of SMS-based two-factor authentication, websites should at a minimum offer two-factor authentication that utilizes apps implementing the Time-based One-time Password Algorithm (TOTP) or the HMAC-based One-time Password Algorithm (HOTP), such as Authy and Google Authenticator. Better yet, websites should offer two-factor authentication that utilizes hardware tokens like YubiKeys.
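Neither algorithm is exotic. As a rough illustration, here is TOTP (RFC 6238) in a few lines of Python; the secret is the standard base32-encoded test value, not anything tied to a real account:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
        # The moving factor is the number of `period`-second intervals since the Unix epoch.
        counter = int(time.time()) // period
        key = base64.b32decode(secret_base32, casefold=True)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): take four bytes starting at the offset
        # given by the low nibble of the final byte, then keep `digits` digits.
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # prints the same code an authenticator app would show

HOTP is the same construction with a persistent counter in place of the time-based one.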

Properly Warning Users About Business Model Changes

I have an update to my previous article about how the developers of GPGTools botched their changeover from offering a free software suite to a paid one. It appears that they listened to those of us who criticized them for not properly notifying their users that the latest update would change the business model, because this is the new update notification:

That’s how you properly inform your users about business model changes.

Cloudflare Makes Tor Use More Bearable

One of the biggest annoyances of using the Tor Browser is that so many sites that rely on Cloudflare services throw up CAPTCHA challenges before allowing you to view content. Yesterday Cloudflare announced a change to its service that should make life more bearable for Tor users:

Cloudflare launched today a new service named the “Cloudflare Onion Service” that can distinguish between bots and legitimate Tor traffic. The main advantage of this new service is that Tor users will see far less, or even no CAPTCHAs when accessing a Cloudflare-protected website via the Tor Browser.

The new Cloudflare Onion Service needed the Tor team to make “a small tweak in the Tor binary,” hence it will only work with recent versions of the Tor Browser – the Tor Browser 8.0 and the new Tor Browser for Android, both launched earlier this month.

Hallelujah!

The Power of Public Shaming

Every major security breach is followed by calls for politicians to enact more stringent regulations. When I see people demanding additional government regulations, I like to point out that there is a list of alternative solutions that can yield far better results (especially since regulations, being a product of government, are extremely rigid and slow to change, which makes them ill-suited to fast-moving markets). One of those solutions is public shaming. It turns out that public shaming is often a viable solution to security issues:

See the theme? Crazy statements made by representatives of the companies involved. The last one from Betfair is a great example and the entire thread is worth a read. What it boiled down to was the account arguing with a journalist (pro tip: avoid being a dick to those in a position to write publicly about you!) that no, you didn’t just need a username and birth date to reset the account password. Eventually, it got to the point where Betfair advised that providing this information to someone else would be a breach of their terms. Now, keeping in mind that the username is your email address and that many among us like cake and presents and other birthday celebratory patterns, it’s reasonable to say that this was a ludicrous statement. Further, I propose that this is a perfect case where shaming is not only due, but necessary. So I wrote a blog post…

Shortly after that blog post, three things happened and the first was that it got press. The Register wrote about it. Venture Beat wrote about it. Many other discussions were held in the public forum with all concluding the same thing: this process sucked. Secondly, it got fixed. No longer was a mere email address and birthday sufficient to reset the account, you actually had to demonstrate that you controlled the email address! And finally, something else happened that convinced me of the value of shaming in this fashion:

A couple of months later, I delivered the opening keynote at OWASP’s AppSec conference in Amsterdam. After the talk, a bunch of people came up to say g’day and many other nice things. And then, after the crowd died down, a bloke came up and handed me his card – “Betfair Security”. Ah shit. But the hesitation quickly passed as he proceeded to thank me for the coverage. You see, they knew this process sucked – any reasonable person with half an idea about security did – but the internal security team alone telling management this was not cool wasn’t enough to drive change.

As I mentioned above, regulations tend to be rigid and slow to change. Public shaming, on the other hand, is often almost instantaneous. It seldom takes long for a company tweet that makes an outrageous security claim to be bombarded with criticism. Within minutes there are retweets by people mocking the statement, replies from people explaining why the claim is outrageous, and journalists writing about how outrageous the claim is. That public outrage, unlike C-SPAN, quickly reaches the public at large. Once the public becomes aware of the company’s claim and why it’s bad, the company has to begin worrying about losing customers and, by extension, profits.