Today’s Browser Vulnerability is Brought to You By the State and the Letters F, R, E, A, and K

People often mock libertarians by claiming they blame everything on the state. But the recently revealed Factoring Attack on RSA-EXPORT Keys (FREAK) that leaves Android and Apple users vulnerable was actually the fault of the state. How so? Because of its futile attempts in the 1990s to control the export of strong encryption technology:

The weak 512-bit keys are a vestige of the 1990s, when the Clinton administration required weak keys to be used in any software or hardware that was exported out of the US. To satisfy the requirement, many manufacturers designed products that offered commercial-grade keys when used in the US and export-grade keys when used elsewhere. Many engineers abandoned the regimen once the export restrictions were dropped, but somehow the ciphers have managed to live on a select but significant number of end-user devices and servers. A list of vulnerable websites is here. Matthew Green, an encryption expert at Johns Hopkins University, told Ars the vulnerable devices included virtually all Android devices, as well as iPhones and Macs.
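The danger of those 512-bit export keys is that the RSA modulus can be factored, which hands an attacker the private key outright. Here’s a toy sketch of that recovery in Python, using a deliberately tiny made-up modulus (trial division obviously doesn’t scale to real 512-bit keys, which researchers factored with serious number-field-sieve compute, but the principle is identical):

```python
# Toy illustration of why small RSA moduli are dangerous. The numbers
# here are invented and tiny so trial division finishes instantly.

def factor(n):
    """Recover p and q from a small odd RSA modulus by trial division."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

# Hypothetical tiny key: n = p * q with p = 2003, q = 2011.
n = 2003 * 2011
e = 65537
p, q = factor(n)

# With p and q in hand, the private exponent d falls out immediately.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

m = 42
c = pow(m, e, n)          # "encrypt" with the public key
assert pow(c, d, n) == m  # attacker decrypts with the recovered key
```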

This is yet another example of how state regulations make us all vulnerable. In its lust to control everything, the state often puts regulations in place that prevent its subjects from utilizing the best available defensive technologies. From restrictions on encryption technology to body armor, the state’s vested interest in spying on you and killing you far outweighs whatever concerns it may have about your safety.

We’re in the midst of a second crypto war but the state isn’t using its failed regulatory red tape this time. Instead it is trying to convince companies to implement back doors, actively exploiting encryption technology without disclosing the vulnerabilities to developers, and surveilling whatever data connections it can get its taps into. Even though the strategy has changed, the end goal remains the same: leave the people vulnerable to malicious actors so the state can ensure its capability to spy on us and kill us remains intact.

Google Backs Away from Encrypting Android 5.0 Devices By Default

When Snowden leaked the National Security Agency’s (NSA) dirty laundry a lot of companies’ faces were red. The leaks showed that they were either complicit in the NSA’s surveillance apparatus or helpless to stop the agency from exploiting their systems. In an attempt to rebuild customer confidence many technology companies scrambled to improve the security of their devices. Apple, being the manufacturer of very popular handsets, announced several major security improvements in iOS 8, including disabling its ability to bypass a user’s set passcode. Much to the approval of Android users, Google announced that Android 5.0, also known as Lollipop, would ship with device encryption enabled by default.

But some bad news appeared yesterday. Google has backed down from enabling encryption by default in Lollipop:

Last year, Google made headlines when it revealed that its next version of Android would require full-disk encryption on all new phones. Older versions of Android had supported optional disk encryption, but Android 5.0 Lollipop would make it a standard feature.

But we’re starting to see new Lollipop phones from Google’s partners, and they aren’t encrypted by default, contradicting Google’s previous statements. At some point between the original announcement in September of 2014 and the publication of the Android 5.0 hardware requirements in January of 2015, Google apparently decided to relax the requirement, pushing it off to some future version of Android. Here’s the timeline of events.

This, in my seldom humble opinion, is a very bad idea. The justification appears to be performance related: many Android devices without hardware cryptographic acceleration take a huge performance hit when device encryption is enabled.

If a user wants to disable device encryption that’s their choice but I firmly believe that this option should be enabled by default even if performance noticeably suffers on some devices. We’ve seen too many stories where abusive spouses, police officers, and federal agents have retrieved data from unencrypted devices without the consent of the owner or, in the case of law enforcement, a warrant. With the amount of personal data people store on their mobile devices it’s far too risky to leave that data unprotected from prying eyes. Especially when we live in a surveillance state.

Google Stops Supporting Old Unsupported Code

I give software companies a lot of shit for failing to keep their customers secure but I also acknowledge that the task is really difficult. This is especially true when your customers are running old versions of your software and either refuse to or cannot upgrade. Microsoft continued supporting Windows XP for a decade, which is probably a century in software terms. When it cut off support many people still running Windows XP complained that they were being put at unnecessary risk. But software companies can’t support every version of every product they’ve released. Google recently announced that it was no longer going to support the legacy Android WebView and now people are complaining that they’re being put at unnecessary risk because they’re running an old version of Android:

Owning a smartphone running Android 4.3 Jelly Bean or an earlier version of the Android operating system? Then you are at great risk, and maybe this will never end.

Yes, you heard right. If you are also one of millions of users still running Android 4.3 Jelly Bean or earlier versions of the operating system, you will not get any security updates for WebView as Google has decided to end support for older versions of Android WebView – a default web browser on Android devices.

WebView is the core component used to render web pages on an Android device, but it was replaced on Android 4.4 KitKat with a more recent Chromium-based version of WebView that is also used in the Chrome web browser.

Admittedly only supporting the latest version of Android is pretty shoddy but who is really to blame? Google has released a new version of Android, 4.4, and is supporting it, so why aren’t customers upgrading? Because device manufacturers and carriers are standing in the way.

The smartest thing Apple did with the iPhone is cut the carriers out of the update cycle. When Apple wants to release an update it just releases an update. Furthermore it has been doing an OK, albeit not great, job of supporting older devices.

Most devices require the device manufacturer to release an update and each carrier to sign off on it before it gets pushed to customers. Android device manufacturers have also been dropping support for older devices at breakneck speed. Oftentimes you’re fortunate to have your device supported with updates by the manufacturer for the entirety of your two-year contract. And even if the manufacturer does a good job of supporting your device, the carrier may, through inaction, prevent the update from being released to its customers.

I don’t think Google should bear most of the blame here. The real culprits are the companies that have prevented their customers from upgrading to the latest version of Android. Unless mobile handsets move to a model similar to desktops and laptops, where customers are free to install whatever operating system version they desire, we’re going to continue seeing instances where software developers drop support for legacy products and leave massive numbers of users without needed support.

North Korea’s Web Browser

North Korea has its own operating system called Red Star OS. Not surprisingly it’s a distribution of Linux. What makes it interesting is that it’s the official operating system of one of the most closed nations on Earth. Recently it leaked onto the Internet and people have been playing with it. So far the most interesting article I’ve found involves the operating system’s web browser:

If you want to send a request to a web address across the country, you need to have a hostname or an IP address. Hostnames convert to IP addresses through something called DNS. So if I want to contact www.whitehatsec.com DNS will tell me to go to 63.128.163.3. But there are certain addresses, like those that start in “10.”, “192.168.” and a few others that are reserved and meant only for internal networks – not designed to be routable on the Internet. This is sometimes a security mechanism to allow local machines to talk to one another when you don’t want them to traverse the Internet to do so.

Here’s where things start to go off the rails: what this means is that all of the DPRK’s national network is non-routable IP space. You heard me; they’re treating their entire country like some small to medium business might treat their corporate office. The entire country of North Korea is sitting on one class A network (16,777,216 addresses). I was always under the impression they were just pretending that they owned large blocks of public IP space from a networking perspective, blocking everything and selectively turning on outbound traffic via access control lists. Apparently not!
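If you want to poke at this yourself, Python’s standard ipaddress module knows the reserved RFC 1918 ranges quoted above:

```python
import ipaddress

# The reserved ranges mentioned above ("10.", "192.168.", etc.) are
# flagged as private by Python's standard ipaddress module.
for addr in ["10.0.0.1", "192.168.0.1", "172.16.5.5", "63.128.163.3"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)

# A whole class A network of private space, as the article describes:
net = ipaddress.ip_network("10.0.0.0/8")
print(net.num_addresses)  # 16777216
```

Only the last address in the list, the public one the quote uses as an example, comes back as routable.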

Yup, the entire country is apparently treated as one giant intranet. The zany doesn’t stop there though. Check out the article because North Korea certainly made some intriguing design decisions.

Anarcho-Robots Care Not For Your Laws

I was out late helping plan a local CryptoParty so this will be all the content you will get today. But I’m giving you some gold. Science fiction often explores the idea of artificial entities breaking laws. Usually these entities take the form of artificial intelligences that are capable of thinking and acting on their own. Under such circumstances it’s easy to see how human law can be applied to artificial intelligences. But what happens when the artificial law breaker isn’t intelligent? That’s exactly what this story is making us address:

The Random Darknet Shopper, an automated online shopping bot with a budget of $100 a week in Bitcoin, is programmed to do a very specific task: go to one particular marketplace on the Deep Web and make one random purchase a week with the provided allowance. The purchases have all been compiled for an art show in Zurich, Switzerland titled The Darknet: From Memes to Onionland, which runs through January 11.

The concept would be all gravy if not for one thing: the programmers came home one day to find a shipment of 10 ecstasy pills, followed by an apparently very legit falsified Hungarian passport – developments which have left some observers of the bot’s blog a little uneasy.

If this bot was shipping to the U.S., asks Forbes contributor and University of Washington law professor Ryan Calo, who would be legally responsible for purchasing the goodies? The coders? Or the bot itself?
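The bot’s core loop, as described in the quote, is almost embarrassingly simple. Here’s a minimal sketch in Python, with invented listings and prices standing in for the real marketplace:

```python
import random

# Minimal sketch of the Random Darknet Shopper's weekly routine.
# The listings below are invented; the real bot scraped one specific
# Deep Web marketplace for its inventory.
WEEKLY_BUDGET_USD = 100

listings = [
    {"item": "novelty sticker pack", "price": 12},
    {"item": "used paperback", "price": 8},
    {"item": "mystery box", "price": 95},
]

def weekly_purchase(listings, budget):
    """Pick one random listing the weekly budget can cover."""
    affordable = [l for l in listings if l["price"] <= budget]
    return random.choice(affordable) if affordable else None

order = weekly_purchase(listings, WEEKLY_BUDGET_USD)
print("This week the bot buys:", order["item"])
```

Nothing in that loop knows or cares whether the randomly chosen item happens to be contraband, which is exactly the legal puzzle.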

This case is another example of the legal system being unable to keep up with the advancement of technology. The article goes on to explain that the laws apply to people knowingly purchasing illicit merchandise. Because of the bot’s random nature the author could not know that they would receive illegal merchandise. But the bot also didn’t know what it was doing since its actions were random and it is incapable of thinking (as far as we know, those AIs can be pretty sly).

In all probability politicians will scramble to debate this issue, write a law, and pass it. By the time they’re done the next technological advancement will be created that acts outside of the boundaries imagined by the politicians who passed the law that was supposed to deal with the last situation. Eventually we will have to address more severe crimes such as assault or murder. At some point when machines are intelligent enough to create new machines we’ll have to deal with the idea of whether or not an artificial author is responsible for the actions of its creation’s crime. Property crimes will also be interesting once the offenses are committed by machines instead of humans.

The legal system is incredibly slow moving while technological advancements happen at a rapid pace. There will likely come a day when intelligent machines become responsible for most technological advancements. What will happen then? Will we have to put the legal system into the hands of machines as well? Will people accept that? It’s an interesting thought exercise.

Touch ID

When I was young I was an early adopter. I had to have every new gadget as soon as it was released. Because of that I was also a beta tester. Now that I’m older and don’t have the time to dick around with buggy products I wait until early adopters have played with a device for a while before purchasing it. The beta testers for the iPhone 6 have done a fantastic job as far as I can see so I finally upgraded to one.

I’m not too thrilled about the increased size but it’s not so big as to be difficult to use (unlike the iPhone 6 Plus, which combines all of the worst features of a phone and tablet into one big mistake). Other than the size it’s basically like previous iPhones but with added processing power and storage. Since I was upgrading from an iPhone 5 I also gained access to Touch ID, Apple’s finger print authentication system.

Let me preface what I’m about to say with an acknowledgement of how poor fingerprints are as a security token. When you use your fingerprint for authentication you are literally leaving your authentication token on everything you touch. That means a threat can not only get your authentication token but can do so at their leisure. Once a threat has your fingerprint there’s nothing you can do to change it.

With that disclaimer out of the way I must admit that I really like Touch ID. Fingerprints may not be the best authentication method in existence but all of us make security tradeoffs of some sort every day (since the only truly secure computer is one that cannot be used). Security and convenience are usually at odds, which is probably the biggest reason so many people are apathetic about computer security. But I think Touch ID does a good job of striking a balance between the two.

Until Apple implemented Touch ID the only two options you had for securing your iPhone were a four-digit PIN or a more complex password. A phone is a device you pull out and check numerous times throughout the day and usually those checks are a desire to find some small bit of information quickly. That makes complex passwords, especially on a touchscreen keyboard, a pain in the ass. Most people, if they have any form of security on their phone at all, opt for a four-digit PIN. Four-digit PINs keep out only the most apathetic attackers. If you want to be secure against a threat that is willing to put some work into cracking your device you need something more secure.
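To put numbers on that: a four-digit PIN gives only 10,000 possible values, a keyspace any computer can walk through instantly if nothing throttles the guesses:

```python
from itertools import product

# A four-digit PIN has only 10**4 possible values. Absent rate
# limiting or hardware-enforced delays, enumerating the entire
# keyspace is trivial.
pins = ["".join(digits) for digits in product("0123456789", repeat=4)]
print(len(pins))  # 10000

# Even a naive check-every-guess loop finishes in a blink.
secret = "7294"
guesses = next(i for i, pin in enumerate(pins, start=1) if pin == secret)
print(guesses, "guesses to find the PIN")
```

(Real devices add escalating delays or wipe limits, which is why the PIN survives at all, but the raw keyspace is still tiny.)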

Touch ID works as a secondary method of authentication. You still need to have a four digit PIN or a password on the device. That, in my opinion, is the trick to Touch ID being useful. If you reboot your phone you will need to authenticate with your four digit PIN or password. Until that first authentication after boot up Touch ID is not available. Another way to make Touch ID unavailable is not to log into your phone for 48 hours.
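Those availability rules are easy to express. Here’s a toy model of the policy in Python; it mirrors the behavior as I’ve described it, not Apple’s actual implementation:

```python
import time

# Toy model of Touch ID availability: fingerprint unlock is only
# offered after one successful passcode entry since boot, and not if
# the phone has gone unused for 48 hours.
FORTY_EIGHT_HOURS = 48 * 60 * 60

class DevicePolicy:
    def __init__(self):
        self.passcode_entered_since_boot = False
        self.last_unlock = None

    def unlock_with_passcode(self):
        self.passcode_entered_since_boot = True
        self.last_unlock = time.time()

    def touch_id_available(self):
        if not self.passcode_entered_since_boot:
            return False  # fresh boot: passcode required first
        if time.time() - self.last_unlock > FORTY_EIGHT_HOURS:
            return False  # idle too long: fall back to passcode
        return True

phone = DevicePolicy()
print(phone.touch_id_available())  # False right after boot
phone.unlock_with_passcode()
print(phone.touch_id_available())  # True once the passcode has been used
```

Note how the "fresh boot" branch is what makes powering the phone off a useful defensive move.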

The Fifth Amendment does not protect you from surrendering your fingerprint to the police. That means law enforcers can compel you to give your fingerprint so they can unlock your phone. Whether passwords are protected by the Fifth Amendment is a topic still being fought in the courts. If you’re arrested, a password is going to be a better method of securing your device from the state than your fingerprint. Because of how Touch ID works you can thwart law enforcement’s ability to unlock your phone with your fingerprint by simply powering the phone off.

Only you can decide if Touch ID is an appropriate security mechanism for you. I’m really enjoying it because now I can have a complex password on my phone without having to type it in every time I pull it out of my pocket. But I also admit that fingerprints are poor authentication mechanisms. Tradeoffs are a pain in the ass but they’re the only things that make our electronic devices usable.

Nothing Says Secure Communications Like a Backdoor

Since Snowden released the National Security Agency’s (NSA) dirty laundry security conscious people have been scrambling to find more secure means of communication. Most of the companies called out in the leaked documents have been desperately trying to regain the confidence of their customers. Google and Apple have enabled full device encryption on their mobile operating systems by default, many websites have either added HTTPS communications or have gone to exclusive HTTPS communications, and many apps have been released claiming to enable communications free from the prying eyes of Big Brother. Verizon decided to jump on the bandwagon but failed miserably:

Verizon Voice Cypher, the product introduced on Thursday with the encryption company Cellcrypt, offers business and government customers end-to-end encryption for voice calls on iOS, Android, or BlackBerry devices equipped with a special app. The encryption software provides secure communications for people speaking on devices with the app, regardless of their wireless carrier, and it can also connect to an organization’s secure phone system.

Cellcrypt and Verizon both say that law enforcement agencies will be able to access communications that take place over Voice Cypher, so long as they’re able to prove that there’s a legitimate law enforcement reason for doing so.

Security is an all or nothing thing. If you implement a method for law enforcement to access communications you also allow everybody else to access communications. Backdoors are purposely built weaknesses in the security capabilities of a software package. While developers will often claim that only authorized entities can gain access using a backdoor, in reality anybody with knowledge of how the backdoor works can use it.
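A toy example makes the point. The “escrowed” key below is just a copy of the key, and the math treats every holder identically; XOR stands in for a real cipher purely for brevity:

```python
# Toy illustration: a "law enforcement access key" is just a key.
# Decryption doesn't care who holds it, so anyone who obtains the
# escrowed copy gets the same access as the legitimate user.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR stream; applying it twice recovers the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

user_key = b"users-session-key"
escrow_key = user_key  # the backdoor: a copy handed to a third party

message = b"meet me at noon"
ciphertext = xor_cipher(message, user_key)

# The "authorized" party and a thief with the leaked escrow key
# perform the exact same operation and get the exact same result.
print(xor_cipher(ciphertext, escrow_key) == message)  # True
```

There is no mathematical property that distinguishes a warrant-holding agent from a criminal who stole the escrow key; both are just key holders.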

Matters are made worse by the fact that law enforcement access is the problem everybody is trying to fix. The NSA was surveilling the American people in secret. A lot of people have also been questioning the amount of surveillance being performed by local law enforcement agencies. Since there is a complete absence of oversight and transparency nobody knows how pervasive the problem is, which means we must assume the worst case and act as if local departments are spying on everything they can. Tools like the one just released by Verizon don’t improve the situation at all.

Maintaining Backwards Compatibility Isn’t Doing Users Any Favors

I’m kind of at a loss as to why so many major websites have failed to disable legacy protocols and ciphers. The only two reasons I can think of are that those companies employ lazy administrators or, more likely, that they’re trying to maintain backwards compatibility.

The Internet has become more integrated into our lives since the 1990s. Since then a lot of software has ceased being maintained by developers. Windows XP, for example, is no longer supported by Microsoft. Old versions of Internet Explorer have also fallen by the wayside. But many websites still maintain backwards compatibility with old versions of Windows and Internet Explorer because, sadly, they’re still used in a lot of places. While maintaining backwards compatibility seems like a service to the customer, it’s really a disservice.

The idea seems simple enough: customers don’t want to invest money in new systems so backwards compatibility should be maintained to cater to them. However, this puts them, and customers with modern systems, at risk. Consider Windows XP with Internet Explorer 6, the classic example. They’re both ancient. Internet Explorer 6 is not only old and unmaintained but has a history of being Swiss cheese when it comes to security. If your customers or employees are using Internet Explorer 6 then they’re at risk of having their systems compromised and their data stolen. Obviously that is a more extreme example but I believe it makes my point.

Currently TLS is at version 1.2 but many older browsers, including Internet Explorer 7 through 10, only support TLS 1.0, and that is most likely the next protocol to fall. Once that happens (it’s a matter of when, not if) anybody using Internet Explorer 7 through 10 will be vulnerable. Any website that keeps TLS 1.0 available after that point will be putting the data of those users at risk.
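On the flip side, refusing legacy protocols is usually a one-line configuration change. As one example, in modern Python (3.7 or later) a client can decline to negotiate anything older than TLS 1.2:

```python
import ssl

# Refuse to negotiate anything older than TLS 1.2. A handshake
# against a server that only speaks TLS 1.0 will then simply fail
# instead of silently downgrading.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

Web servers expose the equivalent knob in their TLS configuration; the hard part is deciding to turn it, not turning it.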

It’s 2014. Backwards compatibility needs to be discarded when it involves decreasing security. While this does inconvenience customers to some extent, it isn’t nearly as inconvenient as identity theft or an account hijacking. As a general rule I operate under the principle that I won’t support something if the manufacturer doesn’t support it. And if the manufacturer supports it poorly, I will stop bothering to support it as soon as that support requires decreasing security.

And as an aside we can dump unsecured HTTP connections now. Seriously. There is no purpose for them.

Everybody is In On the Surveillance Game

This has been a bad week for my laptop. Last week my battery gave up the ghost. On Sunday the hard drive died. Finally on Monday the spare hard drive I swapped into the laptop committed seppuku. Since the hard drive I dropped in on Sunday night was my last spare drive I had to make a trip to the local computer parts emporium to acquire another one. While searching through the hard drives I came across something rather funny:

[Image: a Western Digital “Surveillance” series hard drive on the store shelf]

That must be Western Digital’s National Security Agency (NSA) edition hard drive.

Also, as a side note, when it comes time to choose a name for your laptop don’t choose Loki. Just throwing that out there.

You May Not Be Free But Encryption Works

The feds have been throwing a hissy fit since Apple and Google both announced that device encryption will be enabled by default on all of their mobile devices. Members of the Department of Justice have even gone so far as to imply that Apple (and, likely, Google) are marketing their devices to criminals and will ultimately be responsible for the death of a child (when all else fails just think of the children). But many people still wonder if these public tantrums are just for show. Do the feds have magical super-quantum-hyperdrive-computers that can crack any form of encryption ever?

Further evidence indicates they do not. Court documents have been found showing how desperate the feds are getting in their attempts to break device encryption:

OAKLAND, CA—Newly discovered court documents from two federal criminal cases in New York and California that remain otherwise sealed suggest that the Department of Justice (DOJ) is pursuing an unusual legal strategy to compel cellphone makers to assist investigations.

In both cases, the seized phones—one of which is an iPhone 5S—are encrypted and cannot be cracked by federal authorities. Prosecutors have now invoked the All Writs Act, an 18th-century federal law that simply allows courts to issue a writ, or order, which compels a person or company to do something.

A magical piece of paper that can compel you to do work for the state? Obviously we live in the freest country on Earth! While this story is further evidence that we’re little more than serfs in the eyes of the state it also shows that encryption works.

I know a lot of conspiracy theorists believe that the feds have magical computers that can break any form of encryption by utilizing subspace frequencies or some sort of bullshit like that. If that is true then the state must either be trying to keep it hush hush by not utilizing it (which would make it useless) or it costs a small fortune to operate (which makes it almost useless) because coercing people with the court system is terribly inefficient. So I think these court documents are a good indication that device encryption works pretty well and that’s reassuring.

Obviously rubber-hose cryptanalysis, of which issuing legal threats is certainly a form, is very effective, so the question becomes whether or not Apple is technically capable of bypassing the iPhone 5S’s encryption. Hopefully it is not.