A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Security’ tag

Another Day, Another Exploit Discovered in Intel Processors


The last couple of years have not been kind to processor manufacturers. Ever since the Meltdown and Spectre attacks were discovered, the speculative execution feature present in most modern processors has opened the door to a world of new exploits. Intel, however, has been hit especially hard. The latest attack, given the fancy name Foreshadow, exploits speculative execution on Intel processors to bypass security features meant to keep sensitive data out of the hands of unauthorized processes:

Foreshadow is a speculative execution attack on Intel processors which allows an attacker to steal sensitive information stored inside personal computers or third party clouds. Foreshadow has two versions, the original attack designed to extract data from SGX enclaves and a Next-Generation version which affects Virtual Machines (VMs), hypervisors (VMM), operating system (OS) kernel memory, and System Management Mode (SMM) memory.

It should be noted that, as the site says, this exploit is not known to work against ARM or AMD processors. However, it would be wise to keep an eye on this site. The researchers are still examining other processors, and it may turn out that this attack works on processors not made by Intel as well.

As annoying as these hardware attacks are, I’m glad that the security industry is focusing more heavily on hardware. Software exploits can be devastating but if you can’t trust the hardware that the software is running on, no amount of effort to secure the software matters.

Written by Christopher Burg

August 15th, 2018 at 10:30 am

Posted in Technology


The Body Camera Didn’t Record the Summary Execution Because It Was Hacked


The aftermath of DEF CON, when the high profile exploits discussed at the event hit the headlines, is always fun. Most of the headlines have focused on the complete lack of security on electronic voting machines. I haven’t touched on that because it’s an exercise in beating a dead horse at this point. A story that I found far more interesting, due to its likely consequences, is the news about the exploits found in popular law enforcer body cameras:

At Def Con this weekend, Josh Mitchell, a cybersecurity consultant with Nuix, showed how various models of body cameras can be hacked, tracked and manipulated. Mitchell looked at devices produced by five companies — Vievu, Patrol Eyes, Fire Cam, Digital Ally and CeeSc — and found that they all had major security flaws, Wired reports. In four of the models, the flaws could allow an attacker to download footage, edit it and upload it again without evidence of any of those changes having occurred.

I assume that these exploits are a feature, not a bug.

Law enforcers already have a problem with “malfunctioning” body cameras. There are numerous instances where multiple law enforcers involved in a shooting with highly questionable circumstances all claimed that their body cameras malfunctioned simultaneously. What has been missing up until this point is a justification for those malfunctions. I won’t be surprised if we start seeing law enforcers claim that their body cameras were hacked in the aftermath of these kinds of shootings. Moreover, the ability of unauthorized individuals to download, edit, and upload footage is another great feature: footage that reflects poorly on law enforcers can be edited, and if the edit is discovered, officials can claim that it must have been the work of evil hackers.

Written by Christopher Burg

August 14th, 2018 at 11:00 am

Nothing But the Best


What’s the worst that could happen if the programmer for your pacemaker accepts software updates that aren’t digitally signed or delivered via a secure connection? It could accept a malicious software update that, when pushed to your pacemaker, could literally kill you. With stakes so high, you might expect the manufacturer of such a device to have a vested interest in fixing these flaws. After all, people keeling over dead because you didn’t implement basic security features on your product isn’t going to make for good headlines. But it turns out that that isn’t the case:

At the Black Hat security conference in Las Vegas, researchers Billy Rios and Jonathan Butts said they first alerted medical device maker Medtronic to the hacking vulnerabilities in January 2017. So far, they said, the proof-of-concept attacks they developed still work. The duo on Thursday demonstrated one hack that compromised a CareLink 2090 programmer, a device doctors use to control pacemakers after they’re implanted in patients.

Because updates for the programmer aren’t delivered over an encrypted HTTPS connection and firmware isn’t digitally signed, the researchers were able to force it to run malicious firmware that would be hard for most doctors to detect. From there, the researchers said, the compromised machine could cause implanted pacemakers to make life-threatening changes in therapies, such as increasing the number of shocks delivered to patients.

Killing people through computer hacks has been a mainstay of Hollywood for a long time. When Hollywood first used that plot point, it was far-fetched. Today software is integrated into so many critical systems that the plot point is feasible. Security needs to be taken far more seriously, especially by manufacturers who develop such critical products.
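The missing safeguard the researchers describe is standard code signing: the vendor signs a hash of the firmware with a private key, and the programmer refuses any image whose signature doesn’t verify against the vendor’s public key. Here is a minimal sketch of that idea in Python, using toy RSA numbers that are hypothetical and far too small for real use:

```python
import hashlib

# Toy RSA parameters -- hypothetical and far too small for real use.
p, q = 61, 53
n = p * q                          # public modulus (part of the public key)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, known only to the vendor

def sign(firmware: bytes) -> int:
    # Hash the image, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(digest, d, n)

def verify(firmware: bytes, signature: int) -> bool:
    # The device recomputes the digest and checks it against the signature
    # using only the public key -- nothing secret needs to be on the device.
    digest = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(signature, e, n) == digest

official = b"official firmware image"
sig = sign(official)
print(verify(official, sig))                     # True
print(verify(b"malicious firmware image", sig))  # rejected: digest won't match
```

A real implementation would use a vetted scheme such as Ed25519 or RSA with proper padding; the point is only that verification requires no secrets on the device, just the vendor’s public key.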

Written by Christopher Burg

August 10th, 2018 at 10:00 am

Another Bang Up Job


Legacy cellular protocols contained numerous gaping security holes, which is why attention was paid to security when Long-Term Evolution (LTE) was being designed. Unfortunately, one can pay attention to something and still ignore it or fuck it up:

The attacks work because of weaknesses built into the LTE standard itself. The most crucial weakness is a form of encryption that doesn’t protect the integrity of the data. The lack of data authentication makes it possible for an attacker to surreptitiously manipulate the IP addresses within an encrypted packet. Dubbed aLTEr, the researchers’ attack causes mobile devices to use a malicious domain name system server that, in turn, redirects the user to a malicious server masquerading as Hotmail. The other two weaknesses involve the way LTE maps users across a cellular network and leaks sensitive information about the data passing between base stations and end users.

Encrypting data is only one part of the puzzle. Once data is encrypted, its integrity must be protected as well. Encrypted data looks like gibberish until it is decrypted, so tampered ciphertext is indistinguishable from legitimate ciphertext. The only way to know that the encrypted data you’ve received hasn’t been tampered with is if some kind of cryptographic integrity verification has been implemented and used.

How can you protect yourself from this kind of attack? Using a Virtual Private Network (VPN) tunnel is probably your best bet. The OpenVPN protocol is used by numerous VPN providers that provide clients for both iOS and Android (as well as other major operating systems such as Windows, Linux, and macOS). OpenVPN, unlike LTE, verifies the integrity of encrypted data and rejects any data that appears to have been tampered with. While using a VPN tunnel may not prevent a malicious attacker from redirecting your LTE traffic, it will ensure that the attacker can’t see or alter your data: anything a malicious endpoint sends will fail your client’s integrity checks, so your client will cease receiving or transmitting data.

Written by Christopher Burg

July 3rd, 2018 at 11:00 am

Another Processor Vulnerability


Hardware has historically received far less security scrutiny than software. That has changed in recent times and, not surprisingly, the previous lack of scrutiny has resulted in the discovery of a lot of major vulnerabilities. The latest vulnerability relates to a feature found in Intel processors referred to as Hyper-Threading:

Last week, developers on OpenBSD—the open source operating system that prioritizes security—disabled hyperthreading on Intel processors. Project leader Theo de Raadt said that a research paper due to be presented at Black Hat in August prompted the change, but he would not elaborate further.

The situation has since become a little clearer. The Register reported on Friday that researchers at Vrije Universiteit Amsterdam in the Netherlands have found a new side-channel vulnerability on hyperthreaded processors that’s been dubbed TLBleed. The vulnerability means that processes that share a physical core—but which are using different logical cores—can inadvertently leak information to each other.

In a proof of concept, researchers ran a program calculating cryptographic signatures using the Curve 25519 EdDSA algorithm implemented in libgcrypt on one logical core and their attack program on the other logical core. The attack program could determine the 256-bit encryption key used to calculate the signature with a combination of two milliseconds of observation, followed by 17 seconds of machine-learning-driven guessing and a final fraction of a second of brute-force guessing.

Like the last slew of processor vulnerabilities, the software workaround for this vulnerability involves a performance hit. Unfortunately, the long-term fix to these vulnerabilities involves redesigning hardware, which could destroy an assumption on which modern software development relies: hardware will continue to become faster.

This assumption has been at risk for a while because chip designers are running into transistor size limitations, which could finally do away with Moore’s Law. But designing secure hardware may also require surrendering a bit on the performance front. It’s possible that the next generation of processors won’t have the same raw performance as the current generation of processors. What would this mean? Probably not much for most users. However, it could impact software developers to some extent. Many software development practices are based on the assumption that the next generation of hardware will be faster and it is therefore unnecessary to focus on writing performant code. If the next generation of processors have the same performance as the current generation or, even worse, less performance, an investment in performant code could pay dividends.

Obviously this is pure speculation on my behalf but it’s an interesting scenario to consider.

Written by Christopher Burg

June 27th, 2018 at 11:00 am

Avoid E-Mail for Security Communications


The Pretty Good Privacy (PGP) protocol was created to provide a means to securely communicate via e-mail. Unfortunately, it was a bandage applied to a protocol that has only grown in complexity since PGP was released. The ad-hoc nature of PGP combined with the increasing complexity of e-mail itself has led to rather unfortunate implementation failures that have left PGP users vulnerable. A newly released attack enables attackers to spoof PGP signatures:

Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone’s public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can’t be detected as malicious without doing forensic analysis that’s beyond the ability of many users.

[…]

The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that’s required to spoof a signature is to have a public key or key ID.

The good news is that many PGP plugins have been updated to patch this vulnerability. The bad news is that this is the second major vulnerability found in PGP in the span of about a month. It’s likely that other major vulnerabilities will be discovered in the near future since the protocol appears to be receiving a lot of attention.

PGP is suffering from the same fate as most attempts to bolt security onto insecure protocols. This is why I urge people to utilize secure communication technology that was designed from the start to be secure and has been audited. While there are no guarantees in life, protocols designed from the ground up with security in mind tend to fare better than protocols bolted on after the fact. Of course designs can be garbage, which is where an audit comes in. The reason you want to rely on a secure communication tool only after it has been audited is that an audit by an independent third party can verify that the tool is well designed and provides effective security. An audit isn’t a magic bullet (unfortunately, those don’t exist), but it allows you to be reasonably sure that the tool you’re using isn’t complete garbage.

Written by Christopher Burg

June 15th, 2018 at 10:00 am

When Your Smart Lock Isn’t Smart


My biggest gripe with so-called smart products is that they tend to not be very smart. For example, a padlock that can be unlocked with your phone isn’t a bad idea in and of itself. It would certainly be convenient since most people carry a smartphone these days. However, if it’s designed by people who paid no attention to security, the lock quickly becomes convenient for unauthorized parties as well:

Yes. The only thing we need to unlock the lock is to know the BLE MAC address. The BLE MAC address that is broadcast by the lock.

I was so astounded by how bad the security was that I ordered another and emailed Tapplock to check the lock and app were genuine.

I scripted the attack up to scan for Tapplocks and unlock them. You can just walk up to any Tapplock and unlock it in under 2s. It requires no skill or knowledge to do this.

I wish that this was one of those findings that is so rare that it’s newsworthy. Unfortunately, a total lack of interest in security seems to be a defining characteristic for developers of “smart” products. While this lack of awareness isn’t unexpected for a company developing, say, a smart thermostat (after all, I wouldn’t expect somebody who is knowledgeable about thermostats to necessarily be an expert in security as well), it’s an entirely different matter when the product being developed is itself a security product.

The problem with this attack is how trivial it is to perform. The author of the article notes that they’re porting the script they developed to unlock these “smart” locks to Android. Once the attack is available for smartphones, anybody can potentially unlock any of these locks with a literal tap of a button. This makes them even easier to bypass than those cheap Master Lock padlocks that are notorious for being insecure.

Written by Christopher Burg

June 14th, 2018 at 11:00 am

You Must Guard Your Own Privacy


People often make the mistake of believing that they can control the privacy of content they post online. It’s easy to see why they fall into this trap. Facebook and YouTube both offer privacy controls. Facebook and Twitter also provide private messaging. However, online privacy settings are only as good as the provider makes them:

Facebook disclosed a new privacy blunder on Thursday in a statement that said the site accidentally made the posts of 14 million users public even when they designated the posts to be shared with only a limited number of contacts.

The mixup was the result of a bug that automatically suggested posts be set to public, meaning the posts could be viewed by anyone, including people not logged on to Facebook. As a result, from May 18 to May 27, as many as 14 million users who intended posts to be available only to select individuals were, in fact, accessible to anyone on the Internet.

Oops.

Slip-ups like this are more common than most people probably realize. Writing software is hard. Writing complex software used by billions of people is really hard. Then after the software is written, it must be administered. Administering complex software used by billions of people is also extremely difficult. Programmers and administrators are bound to make mistakes. When they do, the “confidential” content you posted online can quickly become publicly accessible.

Privacy is like anything else: if you want the job done well, you need to do it yourself. The reason services like Facebook can accidentally make your “private” content public is that they have complete access to your content. If you want to have some semblance of control over your privacy, your content must only be accessible to you. If you want that content to be available to others, you must post it in such a way that only you and they can access it.

This is the problem that public key cryptography attempts to solve. With public key cryptography, each person has a private key and a public key. Anything encrypted with the public key can only be decrypted with the private key. Needless to say, as the names imply, you can post your public key to the Internet but must guard the security of your private key. When you want to make material available to somebody else, you encrypt it with their public key so they can decrypt it with their private key. Likewise, when they want to make content available to you, they encrypt it with your public key so you can decrypt it with your private key. This setup gives you the best ability to enforce privacy controls because, assuming no party’s private key has been compromised, only specifically authorized parties have access to content. Granted, there are still a lot of ways for this setup to fall apart, but a simple bad configuration isn’t going to suddenly make millions of people’s content publicly accessible.
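The flow described above can be sketched with toy RSA numbers (hypothetical and hopelessly small; real systems use vetted libraries and proper key sizes and padding):

```python
# Toy RSA key pair -- hypothetical, insecurely small numbers for illustration.
p, q = 61, 53
n = p * q                          # modulus, part of both keys
e = 17                             # public exponent: (n, e) may be published
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: (n, d) must stay secret

def encrypt_for(public_key, message: int) -> int:
    n, e = public_key
    return pow(message, e, n)      # anyone can encrypt to the key's owner

def decrypt(private_key, ciphertext: int) -> int:
    n, d = private_key
    return pow(ciphertext, d, n)   # only the private-key holder can reverse it

message = 42                       # must be smaller than the modulus
ciphertext = encrypt_for((n, e), message)
print(ciphertext)                  # 2557 -- unreadable without d
print(decrypt((n, d), ciphertext)) # 42
```

This also shows why a provider-side bug can’t leak what the provider never had: if a service only ever stored the ciphertext, a botched privacy setting would expose gibberish, not your posts.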

Written by Christopher Burg

June 8th, 2018 at 10:30 am

EFAIL


A vulnerability was announced yesterday that affects both OpenPGP and S/MIME encrypted e-mails. While this was initially being passed off as an apocalyptic discovery, I don’t think that its scope is quite as bad as many are claiming. First, like all good modern vulnerabilities, it has a name, EFAIL, and a dedicated website:

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

The weakness isn’t in the OpenPGP or S/MIME encryption algorithms themselves but in how mail clients interact with encrypted e-mails. If your e-mail client is configured to automatically decrypt encrypted e-mails and allows HTML content to be displayed, the encrypted portion of your e-mail could be exfiltrated by a malicious attacker.

I generally recommend against using e-mail for secure communications in any capacity. OpenPGP and S/MIME are bandages applied to an insecure protocol. Because they are features bolted on after the fact, they are unable to encrypt a lot of the data in your e-mail (the only thing they can encrypt is the body). However, if you are going to use them, I recommend against allowing your client to automatically decrypt encrypted e-mails. Instead, at least require that you enter a password to decrypt your private key (this wouldn’t defend against this attack if your client is configured to display HTML e-mail content, but it would prevent malicious e-mails from automatically exfiltrating encrypted content). Better yet, set up your system in such a manner that you actually copy the encrypted contents of an e-mail into a separate decryption program, such as the OpenPGP command line tools, to view the secure contents. Finally, I recommend disabling the display of HTML e-mails in your client if you are at all concerned about security.

If you perform the above practices, you can mitigate this attack… on your system. The real problem is, as always, other people’s systems. While you may perform the above practices, you can’t guarantee that everybody with whom you communicate will as well. If an attacker can exploit one party, they will generally get the e-mails sent by all parties. This is why I’d recommend using a communication tool that was designed to be secure from the beginning, such as Signal, over e-mail with OpenPGP or S/MIME. While tools like Signal aren’t bulletproof, they are designed to be secure by default, which makes them less susceptible to vulnerabilities created by an improper configuration.

Written by Christopher Burg

May 15th, 2018 at 11:00 am

Set a Strong Password on Your Phone


My girlfriend and I had to take our cat to the emergency vet last night so I didn’t have an opportunity to prepare much material for today. However, I will leave you with a security tip. You should set a strong password on your phone:

How long is your iPhone PIN? If you still use one that’s only made by six numbers (or worse, four!), you may want to change that.

Cops all over the United States are racing to buy a new and relatively cheap technology called GrayKey to unlock iPhones. GrayShift, the company that develops it, promises to crack any iPhone, regardless of the passcode that’s on it. GrayKey is able to unlock some iPhones in two hours, or three days for phones with six digit passcodes, according to an anonymous source who provided security firm Malwarebytes with pictures of the cracking device and some information about how it works.

The article goes on to explain that you should use a password with lowercase and uppercase letters, numbers, and symbols. Frankly, I think such advice is antiquated and prefer the advice given in this XKCD comic. You can create more bits of entropy with a longer password that is easier to remember. Instead of having something like “Sup3r53cretP@5sw0rd” you could have “garish-bethel-perry-best-finale.” The second is easier to remember and is actually longer. Moreover, you can increase your security by tacking on additional words. If you want a randomly generated password, you can use a Diceware program such as this one (which I used to generate the latter of the two passwords).
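The entropy math behind that advice is straightforward: each word drawn uniformly from the standard 7,776-word Diceware list adds log2(7776) ≈ 12.9 bits. A minimal sketch (the five-word list below is a stand-in for a real Diceware list):

```python
import math
import secrets

# Stand-in word list; a real Diceware list has 7,776 words (6^5, one per
# five-dice roll), which is what the entropy figures below assume.
WORDLIST = ["garish", "bethel", "perry", "best", "finale"]

def diceware_passphrase(wordlist, word_count=5):
    # secrets.choice draws from a CSPRNG, unlike random.choice.
    return "-".join(secrets.choice(wordlist) for _ in range(word_count))

print(diceware_passphrase(WORDLIST))

# Entropy comparison against a common phone passcode:
passphrase_bits = 5 * math.log2(7776)  # five Diceware words: ~64.6 bits
pin6_bits = math.log2(10 ** 6)         # six-digit PIN: ~19.9 bits
print(round(passphrase_bits, 1), round(pin6_bits, 1))  # 64.6 19.9
```

At GrayKey-style guessing rates that can exhaust a six-digit PIN in days, the jump from ~20 bits to ~65 bits is the difference between days and geological time.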

Written by Christopher Burg

April 19th, 2018 at 10:00 am