Don’t Trust Snoops

Software that allows family members to spy on one another is big business. But how far can you trust a company that specializes in enabling abusers to keep a constant eye on their victims? Not surprisingly, such companies can’t be trusted very much:

mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.

Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.

Oops.

I can’t say that I’m terribly surprised by this. Companies that make software aimed at allowing family members to spy on one another already have, at least in my opinion, a pretty flexible moral framework. I wouldn’t be surprised if all of the data collected by mSpy was stored in plaintext in order to make it easily accessible to other buyers.

You Are Responsible for Your Own Security

One of the advertised advantages of Apple’s iOS platform is that all software loaded onto iOS devices has to be verified by Apple. This so-called walled garden is meant to keep the bad guys out. However, anybody who studies military history quickly learns that sitting behind a wall is usually a death sentence. Eventually the enemy breaches the wall. Enemies have breached Apple’s walls before and they continue to do so:

In a blog post entitled “Location Monetization in iOS Apps,” the Guardian team detailed 24 applications from the Apple iOS App Store that pushed data to 12 different “location-data monetization firms”—companies that collect precise location data from application users for profit. The 24 identified applications were found in a random sampling of the App Store’s top free applications, so there are likely many more apps for iOS surreptitiously selling user location data. Additionally, the Guardian team confirmed that one data-mining service was connected with apps from over 100 local broadcasters owned by companies such as Sinclair, Tribune Broadcasting, Fox, and Nexstar Media.

iOS has a good permission system, and users can prevent apps from accessing their location, but far too many people are willing to grant location access to any application that asks. If a walled garden were perfectly secure, users wouldn’t have to worry about granting unnecessary permissions because the wall guards wouldn’t allow anything malicious inside. Unfortunately, the wall guards aren’t perfect and malicious stuff does get through, which brings me to my second point.

What happens when a malicious app manages to breach Apple’s walled garden? Ideally it should be immediately removed but the universe isn’t ideal:

Adware Doctor is a top app in Apple’s Mac App Store, sitting at number five in the list of top paid apps and leading the list of top utilities apps, as of writing. It says it’s meant to prevent “malware and malicious files from infecting your Mac” and claims to be one of the best apps to do so, but unbeknownst to its users, it’s also stealing their browser history and downloading it to servers in China.

In fairness to Apple, the company did eventually remove Adware Doctor from its app store. Eventually is the keyword though. How many other malicious apps have breached Apple’s walled garden? How long do they manage to hide inside the garden before they’re discovered, and how quickly do the guards remove them once they are? Apparently Apple’s guards can be a bit slow to react.

Even in a walled garden you are responsible for your own security. You need to know how to defend yourself in case a bad guy manages to get inside of the defensive walls.

Another Day, Another Exploit Discovered in Intel Processors

The last couple of years have not been kind to processor manufacturers. Ever since the Meltdown and Spectre attacks were discovered, the speculative execution feature that is present on most modern processors has opened the door to a world of new exploits. However, Intel has been hit especially hard. The latest attack, given the fancy name Foreshadow, exploits the speculative execution feature on Intel processors to bypass security features meant to keep sensitive data out of the hands of unauthorized processes:

Foreshadow is a speculative execution attack on Intel processors which allows an attacker to steal sensitive information stored inside personal computers or third party clouds. Foreshadow has two versions, the original attack designed to extract data from SGX enclaves and a Next-Generation version which affects Virtual Machines (VMs), hypervisors (VMM), operating system (OS) kernel memory, and System Management Mode (SMM) memory.

It should be noted that, as the site says, this exploit is not known to work against ARM or AMD processors. However, it would be wise to keep an eye on this site. The researchers are still investigating other processors, and it may turn out that this attack works against processors not made by Intel as well.

As annoying as these hardware attacks are, I’m glad that the security industry is focusing more heavily on hardware. Software exploits can be devastating but if you can’t trust the hardware that the software is running on, no amount of effort to secure the software matters.

The Body Camera Didn’t Record the Summary Execution Because It Was Hacked

The aftermath of DEF CON, when the high profile exploits discussed at the event hit the headlines, is always fun. Most of the headlines have focused on the complete lack of security that exists on electronic voting machines. I haven’t touched on that because it’s an exercise in beating a dead horse at this point. A story that I found far more interesting due to its likely consequences is the news about the exploits found in popular law enforcer body cameras:

At Def Con this weekend, Josh Mitchell, a cybersecurity consultant with Nuix, showed how various models of body cameras can be hacked, tracked and manipulated. Mitchell looked at devices produced by five companies — Vievu, Patrol Eyes, Fire Cam, Digital Ally and CeeSc — and found that they all had major security flaws, Wired reports. In four of the models, the flaws could allow an attacker to download footage, edit it and upload it again without evidence of any of those changes having occurred.

I assume that these exploits are a feature, not a bug.

Law enforcers already have a problem with “malfunctioning” body cameras. There are numerous instances where multiple law enforcers involved in a shooting with highly questionable circumstances all claimed that their body cameras malfunctioned simultaneously. What has been missing up until this point is a justification for those malfunctions. I won’t be surprised if we start seeing law enforcers claim that their body cameras were hacked in the aftermath of these kinds of shootings. Moreover, the ability of unauthorized individuals to download, edit, and upload footage is another great feature because footage that reflects poorly on law enforcers can be edited and if the edit is discovered, officials can claim that it must have been edited by evil hackers.

Nothing But the Best

What’s the worst that could happen if the programmer for your pacemaker accepts software updates that aren’t digitally signed or delivered via a secure connection? It could accept a malicious software update that, when pushed to your pacemaker, could literally kill you. With stakes so high you might expect the manufacturer of such a device to have a vested interest in fixing it. After all, people keeling over dead because you didn’t implement basic security features on your product isn’t going to make for good headlines. But it turns out that that isn’t the case:

At the Black Hat security conference in Las Vegas, researchers Billy Rios and Jonathan Butts said they first alerted medical device maker Medtronic to the hacking vulnerabilities in January 2017. So far, they said, the proof-of-concept attacks they developed still work. The duo on Thursday demonstrated one hack that compromised a CareLink 2090 programmer, a device doctors use to control pacemakers after they’re implanted in patients.

Because updates for the programmer aren’t delivered over an encrypted HTTPS connection and firmware isn’t digitally signed, the researchers were able to force it to run malicious firmware that would be hard for most doctors to detect. From there, the researchers said, the compromised machine could cause implanted pacemakers to make life-threatening changes in therapies, such as increasing the number of shocks delivered to patients.
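To put in perspective how small the missing piece is, here’s a minimal, purely illustrative sketch of signed-firmware verification in Python (using the third-party cryptography package; the keys and firmware blob are hypothetical stand-ins, not Medtronic’s actual update mechanism). The point is only that refusing unsigned or modified images is a handful of lines, not a research project:

```python
# Hypothetical sketch: what signed-firmware verification could look like.
# Uses the third-party "cryptography" package; the key handling and firmware
# blob are illustrative, not Medtronic's actual update mechanism.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def firmware_is_authentic(image: bytes, signature: bytes,
                          vendor_key: Ed25519PublicKey) -> bool:
    """Return True only if the vendor's signature over the image verifies."""
    try:
        vendor_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False


# Demo: the vendor signs the firmware image at build time...
vendor_private_key = Ed25519PrivateKey.generate()
image = b"\x7fFIRMWARE v2.1"  # stand-in for the real firmware blob
signature = vendor_private_key.sign(image)

# ...and the programmer refuses anything that fails verification.
vendor_public_key = vendor_private_key.public_key()
assert firmware_is_authentic(image, signature, vendor_public_key)
assert not firmware_is_authentic(image + b"malicious patch", signature, vendor_public_key)
```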

Killing people through computer hacks has been a mainstay of Hollywood for a long time. When Hollywood first used that plot point, it was unlikely. Today software is integrated into so many critical systems that that plot point is feasible. Security needs to be taken far more seriously, especially by manufacturers who develop such critical products.

Another Bang Up Job

Legacy cellular protocols contained numerous gaping security holes, which is why attention was paid to security when Long-Term Evolution (LTE) was being designed. Unfortunately, one can pay attention to something and still ignore it or fuck it up:

The attacks work because of weaknesses built into the LTE standard itself. The most crucial weakness is a form of encryption that doesn’t protect the integrity of the data. The lack of data authentication makes it possible for an attacker to surreptitiously manipulate the IP addresses within an encrypted packet. Dubbed aLTEr, the researchers’ attack causes mobile devices to use a malicious domain name system server that, in turn, redirects the user to a malicious server masquerading as Hotmail. The other two weaknesses involve the way LTE maps users across a cellular network and leaks sensitive information about the data passing between base stations and end users.

Encrypting data is only one part of the puzzle. Once data is encrypted, the integrity of that data must be protected as well. Encrypted data looks like gibberish until it is decrypted, so a recipient has no way to tell by inspection whether an attacker has manipulated it in transit. The only way to know that the encrypted data you’ve received hasn’t been tampered with is if some kind of cryptographic integrity verification has been implemented and used.
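To make that concrete, here’s a minimal sketch using AES-GCM, an authenticated cipher, via the third-party Python cryptography package (illustrative only; this isn’t the LTE or OpenVPN code itself). Flip a single bit of the ciphertext and decryption fails outright instead of silently handing back manipulated data, which is exactly the check the LTE user plane lacks:

```python
# Minimal illustration: authenticated encryption (AES-GCM) rejects any
# ciphertext that has been tampered with in transit.
# Uses the third-party "cryptography" package.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
aead = AESGCM(key)

# Encrypt a packet; GCM appends an authentication tag to the ciphertext.
ciphertext = aead.encrypt(nonce, b"GET dns.example.com", None)

# An attacker flips one bit in transit (e.g. to redirect DNS traffic).
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01

try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("Tampering detected; packet rejected instead of silently accepted.")
```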

How can you protect yourself from this kind of attack? Using a Virtual Private Network (VPN) tunnel is probably your best bet. The OpenVPN protocol is used by numerous VPN providers that provide clients for both iOS and Android (as well as other major operating systems such as Windows, Linux, and macOS). OpenVPN, unlike LTE, verifies the integrity of encrypted data and rejects any data that appears to have been tampered with. While using a VPN tunnel may not prevent a malicious attacker from redirecting your LTE traffic, it will ensure that the attacker can’t see or usefully manipulate your data: traffic that has been tampered with will fail your client’s integrity checks and be discarded, so your client will cease receiving or transmitting data rather than accept it.

Another Processor Vulnerability

Hardware has received far less scrutiny in the past than software when it comes to security. That has changed in recent times and, not surprisingly, the previous lack of scrutiny has resulted in a lot of major vulnerabilities being discovered. The latest vulnerability relates to a feature found in Intel processors referred to as Hyperthreading:

Last week, developers on OpenBSD—the open source operating system that prioritizes security—disabled hyperthreading on Intel processors. Project leader Theo de Raadt said that a research paper due to be presented at Black Hat in August prompted the change, but he would not elaborate further.

The situation has since become a little clearer. The Register reported on Friday that researchers at Vrije Universiteit Amsterdam in the Netherlands have found a new side-channel vulnerability on hyperthreaded processors that’s been dubbed TLBleed. The vulnerability means that processes that share a physical core—but which are using different logical cores—can inadvertently leak information to each other.

In a proof of concept, researchers ran a program calculating cryptographic signatures using the Curve 25519 EdDSA algorithm implemented in libgcrypt on one logical core and their attack program on the other logical core. The attack program could determine the 256-bit encryption key used to calculate the signature with a combination of two milliseconds of observation, followed by 17 seconds of machine-learning-driven guessing and a final fraction of a second of brute-force guessing.

Like the last slew of processor vulnerabilities, the software workaround for this vulnerability involves a performance hit. Unfortunately, the long-term fix to these vulnerabilities involves redesigning hardware, which could destroy an assumption on which modern software development relies: hardware will continue to become faster.

This assumption has been at risk for a while because chip designers are running into transistor size limitations, which could finally do away with Moore’s Law. But designing secure hardware may also require surrendering a bit on the performance front. It’s possible that the next generation of processors won’t have the same raw performance as the current generation. What would this mean? Probably not much for most users. However, it could impact software developers to some extent. Many software development practices are based on the assumption that the next generation of hardware will be faster and it is therefore unnecessary to focus on writing performant code. If the next generation of processors has the same performance as the current generation or, even worse, less, an investment in performant code could pay dividends.

Obviously this is pure speculation on my part, but it’s an interesting scenario to consider.

Avoid E-Mail for Security Communications

The Pretty Good Privacy (PGP) protocol was created to provide a means to securely communicate via e-mail. Unfortunately, it was a bandage applied to a protocol that has only increased in complexity since PGP was released. The ad-hoc nature of PGP combined with the increasing complexity of e-mail itself has led to rather unfortunate implementation failures that have left PGP users vulnerable. A newly released attack enables attackers to spoof PGP signatures:

Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone’s public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can’t be detected as malicious without doing forensic analysis that’s beyond the ability of many users.

[…]

The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that’s required to spoof a signature is to have a public key or key ID.

The good news is that many PGP plugins have been updated to patch this vulnerability. The bad news is that this is the second major vulnerability found in PGP in the span of about a month. It’s likely that other major vulnerabilities will be discovered in the near future since the protocol appears to be receiving a lot of attention.

PGP is suffering from the same fate as most attempts to bolt security onto insecure protocols. This is why I urge people to utilize secure communication technology that was designed from the start to be secure and has been audited. While there are no guarantees in life, protocols that were designed from the ground up with security in mind tend to fare better than protocols that had security bolted on after the fact. Of course designs can be garbage, which is where an audit comes in. The reason you want to rely on a secure communication tool only after it has been audited is that an audit by an independent third party can verify that the tool is well designed and provides effective security. An audit isn’t a magic bullet, unfortunately those don’t exist, but it allows you to be reasonably sure that the tool you’re using isn’t complete garbage.

When Your Smart Lock Isn’t Smart

My biggest gripe with so-called smart products is that they tend to not be very smart. For example, the idea of a padlock that can be unlocked with your phone isn’t a bad idea in and of itself. It would certainly be convenient since most people carry a smartphone these days. However, if it’s designed by people who paid no attention to security, the lock quickly becomes convenient for unauthorized parties as well:

Yes. The only thing we need to unlock the lock is to know the BLE MAC address. The BLE MAC address that is broadcast by the lock.

I was so astounded by how bad the security was that I ordered another and emailed Tapplock to check the lock and app were genuine.

I scripted the attack up to scan for Tapplocks and unlock them. You can just walk up to any Tapplock and unlock it in under 2s. It requires no skill or knowledge to do this.

I wish that this was one of those findings that is so rare that it’s newsworthy. Unfortunately, a total lack of interest in security seems to be a defining characteristic for developers of “smart” products. While this lack of awareness isn’t unexpected for a company developing, say, a smart thermostat (after all, I wouldn’t expect somebody who is knowledgeable about thermostats to necessarily be an expert in security as well), it’s an entirely different matter when the product being developed is itself a security product.

The problem with this attack is how trivial it is to perform. The author of the article notes that they’re porting the script they developed to unlock these “smart” locks to Android. Once the attack is available for smartphones, anybody can potentially unlock any of these locks with a literal tap of a button. This makes them even easier to bypass than those cheap Master Lock padlocks that are notorious for being insecure.
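To illustrate just how low the bar is, here’s a small sketch using the third-party bleak library that merely enumerates nearby Bluetooth Low Energy advertisements, the same broadcast data the attack starts from (it deliberately stops short of the unlock step):

```python
# Sketch: enumerate nearby Bluetooth Low Energy devices with the third-party
# "bleak" library. A Tapplock's only "secret" (its MAC address) is broadcast
# to anyone within radio range; this stops well short of the unlock itself.
import asyncio

from bleak import BleakScanner


async def main() -> None:
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        # Each advertisement exposes the address the lock's "security" rests on.
        print(f"{device.address}  {device.name or '(no name)'}")


if __name__ == "__main__":
    asyncio.run(main())
```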

You Must Guard Your Own Privacy

People often make the mistake of believing that they can control the privacy of content they post online. It’s easy to see why they fall into this trap. Facebook and YouTube both offer privacy controls. Facebook and Twitter also provide private messaging. However, online privacy settings are only as good as the provider makes them:

Facebook disclosed a new privacy blunder on Thursday in a statement that said the site accidentally made the posts of 14 million users public even when they designated the posts to be shared with only a limited number of contacts.

The mixup was the result of a bug that automatically suggested posts be set to public, meaning the posts could be viewed by anyone, including people not logged on to Facebook. As a result, from May 18 to May 27, as many as 14 million users who intended posts to be available only to select individuals were, in fact, accessible to anyone on the Internet.

Oops.

Slip ups like this are more common than most people probably realize. Writing software is hard. Writing complex software used by billions of people is really hard. Then after the software is written, it must be administered. Administering complex software used by billions of people is also extremely difficult. Programmers and administrators are bound to make mistakes. When they do, the “confidential” content you posted online can quickly become publicly accessible.

Privacy is like anything else: if you want the job done well, you need to do it yourself. The reason services like Facebook can accidentally make your “private” content public is because they have complete access to your content. If you want to have some semblance of control over your privacy, your content must only be accessible to you. If you want that content to be available to others, you must post it in such a way that only you and they can access it.

This is the problem that public key cryptography attempts to solve. With public key cryptography each person has a private and a public key. Anything encrypted with the public key can only be decrypted with the private key. Needless to say, as the names imply, you can post your public key to the Internet but must guard the security of your private key. When you want to make material available to somebody else, you encrypt it with their public key so they can decrypt it with their private key. Likewise, when they want to make content available to you, they encrypt it with your public key so you can decrypt it with your private key. This setup gives you the best ability to enforce privacy controls because, assuming no party’s private key has been compromised, only specifically authorized parties have access to content. Granted, there are still a lot of ways for this setup to fall apart, but a simple bad configuration isn’t going to suddenly make millions of people’s content publicly accessible.
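As a minimal sketch of that workflow (using RSA-OAEP from the third-party Python cryptography package purely for illustration; real tools layer key exchange, signatures, and symmetric ciphers on top of this), encrypting to someone’s public key means only the holder of the matching private key can recover the plaintext, no matter what the service hosting the ciphertext does:

```python
# Minimal public-key encryption sketch with the third-party "cryptography"
# package: anything encrypted with the recipient's public key can only be
# decrypted with their private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair, publishes the public half,
# and guards the private half.
recipient_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public_key = recipient_private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The sender encrypts with the recipient's public key...
ciphertext = recipient_public_key.encrypt(b"for your eyes only", oaep)

# ...and only the matching private key can read it. The hosting service never
# sees the plaintext, so no misconfiguration on its end can expose it.
assert recipient_private_key.decrypt(ciphertext, oaep) == b"for your eyes only"
```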