Paranoia I Appreciate

My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop usually meant giving up sleep mode, Wi-Fi, the extra function buttons most manufacturers added to their keyboards, and a slew of power management features, which made the already pathetic battery life even worse. Since OS X was (and still is) Unix based and didn’t involve the headaches of getting Linux to run on a laptop, the PowerBook fit my needs perfectly.

Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple, on the other hand, has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that the company doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:

Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.

“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”

Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact some companies, such as Cisco, have taken measures to mitigate such threats.

Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.

This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in-house. My e-mail, calendar and contact syncing, and even this website are all hosted on servers running in my dwelling. Although part of the reason I did this was for the experience, another reason was to guard against certain forms of surveillance. National Security Letters (NSLs), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers sit in my dwelling, any NSL targeting them would necessarily require me to inform myself of receiving it.

How The State Makes Us Less Secure Part MLVI

Statists often claim that the State is necessary for the common defense. If this were the case I would expect it to do what it can to make everybody safer. Instead it does the opposite. In its pursuit of power the State continues to take actions that make everybody under its rule less safe.

The latest chapter in this ongoing saga revolves around the iPhone of Syed Farook. After trying to get a court to force Apple to write custom firmware for Farook’s iPhone that would allow the Federal Bureau of Investigation (FBI) to brute force the passcode, the agency postponed the hearing, claiming to have found another method to get the data it wants. That method appears to be an exploit of some sort, but the Justice Department has classified the matter so we may never know:

A new method to crack open locked iPhones is so promising that US government officials have classified it, the Guardian has learned.

The Justice Department made headlines on Monday when it postponed a federal court hearing in California. It had been due to confront Apple over an order that would have forced it to write software that would make it easier for investigators to guess the passcode for an iPhone used by San Bernardino gunman Syed Farook.

The government now says it may have figured out a way to get into the phone without Apple’s help. But it wants that discovery to remain secret, in an effort to prevent criminals, security researchers and even Apple itself from reengineering smartphones so that the tactic would no longer work.

By classifying this method the Justice Department is putting, at minimum, every iPhone 5C user running the same firmware as Farook’s phone at risk. But the exploit likely reaches further and may even put every user of every iOS device at risk.

Since Farook’s iPhone is in the State’s possession there is no risk of its firmware being upgraded. That being the case, there’s no reason for the Justice Department not to disclose the vulnerability it’s exploiting. Even if the exploit is disclosed the agency will still be able to use it to gain access to the data on Farook’s phone (assuming the exploit works as implied). But disclosing it would allow Apple to patch it so it couldn’t be used against the millions of innocent people using iOS devices.

There is a conflict of interest inherent in statism. The State is supposed to provide for the common defense of those within its territory. At the same time it’s charged with investigating crimes and dispensing justice. In order to fulfill the latter goal it must be able to gain access to whatever information it deems pertinent to an investigation. Ensuring that access is available conflicts with providing for a common defense since an effective defense against foreign aggressors, especially as it relates to protecting data, is also an effective defense against the State.

iOS 9.3 With iMessage Fix Is Out

In the ongoing security arms race, researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:

Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.

It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.

To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.

Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.

“And we kept doing that,” Green said, “until we had the key.”

A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.

With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.

There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right, so it often turns into an arms race. Third, designing secure software is hard even if you’re a large company with a lot of talented employees.
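
The quoted attack works because the phone effectively confirmed every correct guess, which turns a 16^64 search into at most 64 × 16 guesses. Here is a toy sketch of that kind of per-character oracle; it’s purely illustrative (the oracle below is just a string comparison standing in for the target phone, not Apple’s actual protocol):

    import secrets

    # Stand-in for the 64-hex-digit key that only the target phone knows.
    SECRET_KEY = secrets.token_hex(32)

    def oracle_accepts(guess: str) -> bool:
        # Models the target phone accepting a modified message whenever the
        # guessed prefix of the key is correct. This per-character feedback
        # is what made the real attack feasible.
        return SECRET_KEY.startswith(guess)

    def recover_key(length: int = 64) -> str:
        recovered = ""
        for _ in range(length):
            for digit in "0123456789abcdef":
                if oracle_accepts(recovered + digit):
                    recovered += digit
                    break
        return recovered

    assert recover_key() == SECRET_KEY  # at most 64 * 16 = 1,024 guesses

Any scheme that leaks whether each individual character of a key is correct is broken no matter how long the key is, which is exactly the kind of flaw independent review tends to catch.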

Christopher Soghoian also made a good point in the article:

Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.

The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and user, remain secret.

Open source isn’t a magic bullet but it does allow independent third party verification of your code. This advantage often goes unrealized as even very popular open source projects like OpenSSL have contained numerous notable security vulnerabilities for years without anybody being the wiser. But it’s unlikely something like iMessage would have been ignored so thoroughly.

The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. And since iOS, and by extension iMessage, is so prominent in the public eye, it’s likely a lot of security researchers would have looked through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly entail. So open sourcing iMessage would likely have gained Apple a lot of third party verification.

In fact this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with Android and iOS but it’s also open source so it’s available for third party verification.

Let Me Emphasize That Ad-Blockers Are Security Tools

Once again ad networks have been utilized to serve up malware:

According to a just-published post from Malwarebytes, a flurry of malvertising appeared over the weekend, almost out of the blue. It hit some of the biggest publishers in the business, including msn.com, nytimes.com, bbc.com, aol.com, my.xfinity.com, nfl.com, realtor.com, theweathernetwork.com, thehill.com, and newsweek.com. Affected networks included those owned by Google, AppNexis, AOL, and Rubicon. The attacks are flowing from two suspicious domains, including trackmytraffic[.]biz and talk915[.]pw.

The ads are also spreading on sites including answers.com, zerohedge.com, and infolinks.com, according to SpiderLabs. Legitimate mainstream sites receive the malware from domain names that are associated with compromised ad networks. The most widely seen domain name in the current campaign is brentsmedia[.]com. Whois records show it was owned by an online marketer until January 1, when the address expired. It was snapped up by its current owner on March 6, a day before the malicious ad onslaught started.

In this case the attacks appear to have originated from ad network domains that had been allowed to expire. Once expired, the domains were snapped up by malware distributors, which allowed them to serve malware to visitors of sites that still accepted ads from those domains.

Ad networks have become an appealing target for malware distributors. By compromising a single ad network a malware distributor can target users across many websites, which offers a much better return on investment than compromising a single large website such as the New York Times or the BBC. Compromising an ad network is also often easier than compromising a large website, since operators of large websites usually have skilled administrators on hand who keep things fairly locked down. The fact that advertising companies come and go with notable frequency also makes life difficult for site administrators. In this case the purchased domains likely belonged to legitimate ad networks at one time and simply vanished without anybody noticing. Since nobody noticed, they weren’t removed from any of the ad distribution networks and could therefore still serve ads to legitimate sites.

This event, if nothing else, should serve as a reminder that ad blockers are security tools.

One Step Forward, Two Steps Back

Were I asked, I would summarize the Internet of Things as taking one step forward and two steps back. While integrating computers into everyday objects offers some potential, the way manufacturers are going about it is all wrong.

Consider the standard light switch. A light switch usually has two states: one closes the circuit and turns the lights on, the other opens the circuit and turns the lights off. It’s simple enough but has some notable limitations. First, it cannot be controlled remotely. A remotely controlled light switch would be useful, especially if you’re away from home and want to make it appear as though somebody is there to discourage burglars. It would also be nice to be able to verify that you turned all your lights off when you left, if only to reduce the electric bill. Of course, remotely operated switches also introduce the potential for remotely accessible vulnerabilities.

What happens when you take the worst aspects of connected light switches, namely vulnerabilities, and don’t even offer the positives? This:

Garrett, who’s also a member of the Free Software Foundation board of directors, was in London last week attending a conference, and found that his hotel room has Android tablets instead of light switches.

“One was embedded in the wall, but the two next to the bed had convenient looking ethernet cables plugged into the wall,” he noted. So, he got ahold of a couple of ethernet adapters, set up a transparent bridge, and put his laptop between the tablet and the wall.

He discovered that the traffic to and from the tablet is going through the Modbus serial communications protocol over TCP.

“Modbus is a pretty trivial protocol, and notably has no authentication whatsoever,” he noted. “Tcpdump showed that traffic was being sent to 172.16.207.14, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close.”

He then noticed that the last three digits of the IP address he was communicating with were those of his room, and successfully tested his theory:

“It’s basically as bad as it could be – once I’d figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well.”

As far as I can tell the only reason the hotel swapped out mechanical light switches for Android tablets was to look impressive. What it ended up with is a setup that may look impressive to the layman but is every troll’s dream come true.
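
For a sense of how little effort that dream takes, here is a rough sketch of a Modbus/TCP “write single coil” request built by hand. It follows the protocol described in the quoted post, which has no authentication anywhere; the IP address comes from the quote, while the port, unit ID, and coil number are hypothetical:

    import socket
    import struct

    def write_coil(host: str, coil: int, on: bool, unit: int = 1, port: int = 502) -> None:
        # Modbus PDU: function code 0x05 (write single coil), the coil address,
        # and 0xFF00 for on or 0x0000 for off.
        pdu = struct.pack(">BHH", 0x05, coil, 0xFF00 if on else 0x0000)
        # MBAP header: transaction ID, protocol ID (always 0), byte count of
        # everything that follows, and the unit ID. No credentials of any kind.
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(mbap + pdu)
            sock.recv(1024)  # on success the reply simply echoes the request

    # write_coil("172.16.207.14", coil=3, on=False)  # hypothetical coil number

Querying whether another room’s lights are on is the same exercise with function code 0x01 (read coils) instead of 0x05.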

I can’t wait to read a story about a 14 year-old turning off the lights to every room in a hotel.

Illustrating Cryptographic Backdoors With Mechanical Backdoors

A lot of people don’t understand the concept of cryptographic backdoors. This isn’t surprising because cryptography and security are very complex fields of study. But it does lead to a great deal of misunderstanding, especially amongst those who tend to trust what government agents say.

I’ve been asked by quite a few people why Apple doesn’t comply with the demands of the Federal Bureau of Investigation (FBI). They’ve fallen for the FBI’s claims that the compromised firmware would only be used on that single iPhone and that Apple would be allowed to maintain total control over it at all times. However, as Jonathan Zdziarski explained, the burden of forensic methodology would require the firmware to change hands several times:

Once the tool is ready, it must be tested and validated by a third party. In this case, it would be NIST/NIJ (which is where my own tools were validated). NIST has a mobile forensics testing and validation process by which Apple would need to provide a copy of the tool (which would have to work on all of their test devices) for NIST to verify.

[…]

If evidence from a device ever leads to a case in a court room, the defense attorney will (and should) request a copy of the tool to have independent third party verification performed, at which point the software will need to be made to work on another set of test devices. Apple will need to work with defense experts to instruct them on how to use the tool to provide predictable and consistent results.

If Apple creates what the FBI is demanding, the firmware would almost certainly end up in the hands of NIST, the defense attorney, and another third party hired by the defense attorney to verify the firmware. As Benjamin Franklin said, “Three can keep a secret, if two of them are dead.” With the firmware changing hands so many times it would almost certainly end up leaked to the public.

After pointing this out a common followup question is, “So what? How much damage could this firmware cause?” To illustrate this I will use an example from the physical world.

The Transportation Security Administration (TSA) worked with several lock manufacturers to create TSA recognized locks. These are special locks that TSA agents can bypass using master keys. To many this doesn’t sound bad. After all, the TSA tightly guards these master keys, right? Although I’m not familiar with the TSA’s internal policies for managing these master keys, I do know that the key patterns were leaked to the Internet and that 3D printer models were created shortly thereafter. And those models produce keys that work.

The keys were leaked, likely unintentionally, by a TSA agent posting a photograph of them online. With that single leak every TSA recognized lock was rendered entirely useless. Now anybody can obtain the keys to open any TSA recognized lock.

It only takes one person to leak a master key, either intentionally or unintentionally, to render every lock that key unlocks entirely useless. Leaking a compromised version of iOS could happen in many ways. The defendant’s attorney, who may not be well versed in proper security practices, could accidentally transfer the firmware to a third party in an unsecured manner. If that transfer is being monitored the person monitoring it would have a copy of the firmware. An employee of NIST could accidentally insert a USB drive with the firmware on it into an infected computer and unknowingly provide it to a malicious actor. Somebody working for the defendant’s third party verifier could intentionally leak a copy of the firmware. There are so many ways the firmware could make its way to the Internet that the question isn’t really a matter of if, but when.

Once the firmware is leaked to the Internet it would be available to anybody. While Apple could design the firmware to check the identity of the phone so that it only works on the one the FBI wants unlocked, it may be possible to spoof those identifiers to make any iPhone 5C look like that phone. It’s also possible that a method to disable a fully updated iPhone 5C’s signature verification will be found. If that happens, a modified version of the compromised firmware that skips the identity check (and would therefore carry an invalid signature) could be installed.
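
To make the “check the identity of the phone” idea concrete, here is a deliberately naive sketch of such a gate. Everything in it is invented for illustration (Apple’s actual mechanisms aren’t public in this detail); the point is that the gate only holds if an attacker can neither spoof the identifier it reads nor run a modified firmware with the check stripped out, which is why signature verification matters:

    # Hypothetical device-binding check of the kind described above.
    AUTHORIZED_ECID = 0x000012345678ABCD  # stand-in for the target phone's unique ID

    def firmware_may_run(reported_ecid: int) -> bool:
        # Only as strong as the attacker's inability to control what gets
        # reported here, or to patch this check out of the firmware entirely.
        return reported_ecid == AUTHORIZED_ECID

    assert firmware_may_run(0x000012345678ABCD)      # the FBI's target phone
    assert not firmware_may_run(0x0000DEADBEEF0000)  # any other iPhone 5C

    spoofed_ecid = AUTHORIZED_ECID  # a different phone reporting the target's ID
    assert firmware_may_run(spoofed_ecid)  # the check cannot tell the difference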

The bottom line is that the mere existence of a compromised firmware, a master key if you will, puts every iPhone 5C at risk just as the existence of TSA master keys put everything secured with a TSA recognized lock at risk.

Another Day, Another Attack Against Cryptography Made Possible By Government Meddling

This week another vulnerability was discovered in the OpenSSL library. The vulnerability, given the idiotic marketing name Decrypting RSA with Obsolete and Weakened eNcryption (DROWN), allows an attacker to decrypt a server’s TLS sessions if the server has SSLv2 enabled (or shares its private key with one that does). Like FREAK and Logjam before it, DROWN was made possible by government meddling in cryptography:

For the third time in less than a year, security researchers have found a method to attack encrypted Web communications, a direct result of weaknesses that were mandated two decades ago by the U.S. government.

These new attacks show the dangers of deliberately weakening security protocols by introducing backdoors or other access mechanisms like those that law enforcement agencies and the intelligence community are calling for today.

[…]

Dubbed DROWN, this attack can be used to decrypt TLS connections between a user and a server if that server supports the old SSL version 2 protocol or shares its private key with another server that does. The attack is possible because of a fundamental weakness in the SSLv2 protocol that also relates to export-grade cryptography.

The U.S. government deliberately weakened three kinds of cryptographic primitives in the 1990s — RSA encryption, Diffie-Hellman key exchange, and symmetric ciphers — and all three have put the security of the Internet at risk decades later, the researchers who developed DROWN said on a website that explains the attack.

We’d all be safer if the government didn’t meddle in mathematical affairs.

This exploit also shows the dangers of supporting legacy protocols. While there may be users whose software is so old that it doesn’t support TLS or even SSLv3, supporting them creates a hazard for every other user. At some point you have to tell the user of ancient software to either upgrade to something modern or stop using the service. From a business standpoint, potentially losing one customer by dropping legacy support is far better than losing a lot of customers because a major security compromise destroyed their trust in your company.
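
For a concrete picture of what dropping legacy protocol support looks like, here is a minimal sketch of a server-side TLS configuration in Python (3.7 or newer); the certificate and key paths are hypothetical. Modern OpenSSL builds have removed SSLv2 entirely, and fully mitigating DROWN also requires that no other server sharing the same key still speaks SSLv2:

    import ssl

    # Server-side context that refuses SSLv2/SSLv3 and old TLS versions.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # clients older than TLS 1.2 are turned away
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths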

Amazon Disabled Device Encryption In Fire OS 5

While Apple and, to a lesser extent, Google are working to improve the security of their devices, Amazon has decided on a different strategy:

While Apple continues to resist a court order requiring it to help the FBI access a terrorist’s phone, another major tech company just took a strange and unexpected step away from encryption.

Amazon has removed device encryption from the operating system that powers its Kindle e-reader, Fire Phone, Fire Tablet, and Fire TV devices.

The change, which took effect in Fire OS 5, affects millions of users.

Traditionally firmware updates deliver (or at least attempt to deliver) security enhancements. I’m not sure why Amazon chose to move away from that tradition, but it should concern users of Fire OS devices. By delivering a firmware update that removes a major security feature, Amazon has violated the trust of its users.

Unless Amazon fixes this I would recommend avoiding Fire OS based devices. Fortunately other phone and tablet manufacturers exist and are willing to provide you with devices that offer good security features.

FBI Asks Apple, “What If We Do What We’re Planning To Do?”

On Tuesday there was a congressional hearing regarding encryption. I didn’t watch it because I had better shit to do. But I’ve been reading through some of the highlights, and the hearing was like most hearings: a handful of competent individuals were brought in to testify in front of a group of clueless idiots who are somehow allowed to pass policies. What was especially funny to me was a comment made by the director of the Federal Bureau of Investigation (FBI), James Comey (which should really be spelled James Commie):

When Florida Congressman Ted Deutch asked Comey if the potential repercussions of such a back door falling into the wrong hands were of valid concern, Comey responded by posing a hypothetical situation in which Apple’s own engineers were kidnapped.

“Slippery slope arguments are always attractive, but I suppose you could say, ‘Well, Apple’s engineers have this in their head, what if they’re kidnapped and forced to write software?'” Comey said before the committee. “That’s where the judge has to sort this out, between good lawyers on both sides making all reasonable arguments.”

Comey likely made the comment to highlight how Apple is capable of creating a back door to break the iPhone’s encryption, a fact the company has admitted.

Comey should have said, “Well, Apple’s engineers have this in their head, what will happen when my agency kidnaps them and forces them to write the backdoor?” Because that’s exactly what his agency is trying to accomplish in the San Bernardino case. The FBI wants the court to order Apple to write a custom version of iOS that would bypass several security features and brute force the encryption key. If the court does issue such an order and Apple doesn’t obey some federal goons will kidnap members of Apple (likely Tim Cook). Of course, the FBI couches its criminal activities in euphemisms such as “arrest” to make them appear legitimate.

But what would happen? As it turns out, not much. Kidnapping one of Apple’s engineers wouldn’t give access to the company’s software signing key. Without that key any software the engineer was forced to write wouldn’t load onto an iOS device.
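
To make that last point concrete, here is a rough sketch of the kind of signature gate involved, using Ed25519 from the third-party cryptography package purely for illustration; Apple’s actual boot chain, formats, and algorithms differ, and the firmware and key names here are invented:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def device_will_boot(firmware: bytes, signature: bytes, vendor_pubkey: Ed25519PublicKey) -> bool:
        # The device only boots images whose signature verifies against a
        # public key it already trusts (in practice, one baked into hardware).
        try:
            vendor_pubkey.verify(signature, firmware)
            return True
        except InvalidSignature:
            return False

    # The vendor holds the private key; a coerced engineer does not.
    vendor_key = Ed25519PrivateKey.generate()
    official = b"official firmware image"
    assert device_will_boot(official, vendor_key.sign(official), vendor_key.public_key())

    coerced = b"backdoored firmware written under duress"
    forged_sig = b"\x00" * 64  # without the private key no valid signature can be produced
    assert not device_will_boot(coerced, forged_sig, vendor_key.public_key())

Kidnapping the engineer gets you the second case, not the first: code that exists but that no device will accept.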

Argh, Pirates Be A Hackin’ The High Seas

The biggest threat to computer security may be the average person’s lack of creativity. Imagine asking a random person on the street what the possible ramifications of poor computer security at a shipping company could be. I would wager that you’d get a lot of blank stares and variations of, “Uh, nothing.” But if you ask a creative person, say a pirate, the same question you will likely hear some pretty interesting ideas:

Tech-savvy pirates once breached the servers of a global shipping company to locate the exact vessel and cargo containers they wanted to plunder, according to a new report from Verizon’s cybersecurity team.

“They’d board the vessel, locate by bar code specific sought-after crates containing valuables, steal the contents of that crate — and that crate only — and then depart the vessel without further incident,” says the report, Verizon’s Data Breach Digest.

Just because you can’t think of a reason security is important doesn’t mean somebody else can’t. This is especially important to keep in mind if you’re one of those “I’ve got nothing to hide” people. You might not be able to think of any reason, but somebody who means you ill almost certainly can.

When you’re assessing your own security, whether at a personal or organizational level, it’s wise to bring in some outsiders, perhaps people with experience breaching networks for malicious purposes, and pay them a little something to provide ideas you haven’t considered. You will likely be surprised at how many things you simply failed to think of.