Today’s Browser Vulnerability is Brought to You By the State and the Letters F, R, E, A, and K

People often mock libertarians by claiming they blame everything on the state. But the recently revealed Factoring Attack on RSA-EXPORT Keys (FREAK) that leaves Android and Apple users vulnerable was actually the fault of the state. How so? Because of its futile attempts in the 1990s to control the export of strong encryption technology:

The weak 512-bit keys are a vestige of the 1990s, when the Clinton administration required weak keys to be used in any software or hardware that was exported out of the US. To satisfy the requirement, many manufacturers designed products that offered commercial-grade keys when used in the US and export-grade keys when used elsewhere. Many engineers abandoned the regimen once the export restrictions were dropped, but somehow the ciphers have managed to live on a select but significant number of end-user devices and servers. A list of vulnerable websites is here. Matthew Green, an encryption expert at Johns Hopkins University, told Ars the vulnerable devices included virtually all Android devices, as well as iPhones and Macs.

This is yet another example of how state regulations make us all vulnerable. In its lust to control everything, the state often puts regulations in place that prevent its subjects from utilizing the best available defensive technologies. From restrictions on encryption technology to body armor, the state’s vested interest in spying on you and killing you far outweighs whatever concerns it may have about your safety.

We’re in the midst of a second crypto war, but the state isn’t using its failed regulatory red tape this time. Instead it is trying to convince companies to implement back doors, actively exploiting encryption technology without disclosing the vulnerabilities to developers, and surveilling whatever data connections it can get its taps into. Even though the strategy has changed, the end goal remains the same: leave the people vulnerable to malicious actors so the state can ensure its capability to spy on us and kill us remains intact.
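To see why key size is the whole ballgame here, consider a toy sketch. A real 512-bit export-grade modulus takes serious machinery (the number field sieve) to factor — though by 2015 that reportedly cost only around a hundred dollars of rented compute time — while a tiny modulus falls to plain trial division. The principle is the same either way: once the modulus is factored, the private key is recoverable. The numbers below are made up for illustration.

```python
def factor(n):
    """Factor n = p * q by odd trial division.

    Only viable because the toy modulus below is tiny; a 512-bit
    export-grade modulus requires the number field sieve, but that
    is well within reach of modest cloud budgets today.
    """
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no small odd factor found")

# A toy "export-grade" modulus built from two small primes
p, q = factor(2003 * 2503)
print(p, q)  # -> 2003 2503
```

The same asymmetry that made export ciphers "strong enough for foreigners" in the 1990s is what makes them trivially breakable twenty years later.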

Signal for iOS Now Supports Secure Text Messaging

One of the things I try to do is find tools that enable secure communications without requiring a degree in computer science to learn. OK, few of the tools I’ve seen require a computer science degree, but most people are notoriously lazy, so any barrier to entry is too much. I’ve been using and recommending Wickr for a few months now because of its relative ease of use. It’s a good tool but there are two major flaws in my opinion. First, it’s not open source. Second, it requires a separate user name and password, which is a surprisingly high barrier to entry for some (I’m talking about people with little security knowledge).

For a while Android users have enjoyed RedPhone for secure phone calls and TextSecure for secure text messages. Some time ago an app called Signal was released that gave iOS users the ability to call RedPhone users, but there was no app compatible with TextSecure. Since some of the people I talk to use Android and others use iOS, I really needed a solution that was cross-platform. Fortunately the developers of Signal, RedPhone, and TextSecure just released an update to Signal that enables secure text messaging.

It’s a very slick application. First of all it, along with every other project developed by Open Whisper Systems, is open source. While being open source isn’t a magic bullet, it certainly does make verifying the code easier (and by easier I mean possible). The other thing I like is that it uses your phone number to register with Open Whisper Systems’ servers. That means people can see whether you have the app installed by looking up your number, which is automatically pulled from your contacts list, in the app. If it’s installed on your end the app will let them send you text messages or call you. There are no user names or passwords to fiddle with, so the barrier to entry is about as low as you can go.

Signal isn’t a magic bullet (no secure communication tools are). For example, since it’s tied to your phone number it doesn’t preserve your anonymity. Wickr, by allowing you to use a separate user name, does a better job in that department although it’s still not as good as it could be since it doesn’t attempt to anonymize traffic through something like Tor. Messages also aren’t set to self-destruct in a set amount of time like Wickr’s messages do. But it certainly fulfills some of my requirements when talking with people who aren’t technically knowledgeable or are just plain lazy.

HealthCare.gov Sending Personal Information to Tracking Sites

The war over the Affordable Care Act (ACA) is still being waged. Democrats are pointing out that the number of people with health insurance coverage is higher than ever, which isn’t surprising since you’re now required by law to purchase it. Republicans are upset because the ACA is still called ObamaCare when they wanted everybody to call it RomneyCare. Libertarians, rightly so, are asking how a government can force you to buy a product. But there’s a problem with the ACA that has received relatively little coverage. From a privacy standpoint HealthCare.gov is a total fucking nightmare:

EFF researchers have independently confirmed that healthcare.gov is sending personal health information to at least 14 third party domains, even if the user has enabled Do Not Track. The information is sent via the referrer header which contains the URL of the page requesting a third party resource. The referrer header is an essential part of the HTTP protocol, it is sent for every request that is made on the web. The referrer header lets the requested resource know what URL the request came from, this would for example let a website know who else was linking to their pages. In this case however the referrer URL contains personal health information.

In some cases the information is also sent embedded in the request string itself, like so:

https://4037109.fls.doubleclick.net/activityi;src=4037109;
type=20142003;cat=201420;ord=7917385912018;~oref=https://www.
healthcare.gov/see-plans/85601/results/?county=04019&age=40&smoker=1&parent=&pregnant=1&mec=&zip=85601&state=AZ&income=35000&step=4?

That’s a referrer link from HealthCare.gov to DoubleClick.net that tells the advertiser that the user is 40 years old, that the user (assuming a value of 1 indicates true) smokes, that the user is not a parent, that the user is pregnant, the user’s zip code, the user’s state, and the user’s income.
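Pulling that referrer apart shows exactly what leaks. Here’s a quick sketch using Python’s standard library, with the query string from the URL quoted above:

```python
from urllib.parse import urlsplit, parse_qs

# The referrer URL leaked to DoubleClick, from the example above
referrer = ("https://www.healthcare.gov/see-plans/85601/results/"
            "?county=04019&age=40&smoker=1&parent=&pregnant=1"
            "&mec=&zip=85601&state=AZ&income=35000&step=4")

# keep_blank_values=True preserves empty fields like parent=
params = parse_qs(urlsplit(referrer).query, keep_blank_values=True)

print(params["age"])       # ['40']
print(params["smoker"])    # ['1']
print(params["pregnant"])  # ['1']
print(params["income"])    # ['35000']
```

Any third party receiving the header can extract the same fields with a few lines of code, which is precisely the problem.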

You might be curious why a website paid for with taxes is sending health information about its users to an online advertiser. Usually websites only send user data to advertisers if they’re selling it. I wouldn’t be surprised if HealthCare.gov is double dipping by taking tax dollars and selling data to online advertisers. It wouldn’t be a bad money making strategy. First you force everybody to buy your product and then you sell their data.

DoubleClick.net isn’t the only site that HealthCare.gov is sending user health information to. Akamai.net, Chartbeat.net, Clicktale.net, and many more are receiving this data.

Interestingly enough, both the Democrats and the Republicans seem entirely unconcerned about this. The only thing they care about is the political dick measuring contest that has been going on between them since forever. But this violation of privacy has real world ramifications, especially since the advertisers receiving this data already have a great deal of data on many Internet users.

Obama Wants to Enable Abusers to Better Surveil Their Victims

Last week David Cameron, the prime minister of the United Kingdom, publicly stated that he wanted all encryption to be broken so he and his cronies could better spy on the populace. Shortly afterward Obama came out in support of Cameron’s desire:

President Barack Obama said Friday that police and spies should not be locked out of encrypted smartphones and messaging apps, taking his first public stance in a simmering battle over private communications in the digital age.

Apple, Google and Facebook have introduced encrypted products in the past half year that the companies say they could not unscramble, even if faced with a search warrant. That’s prompted vocal complaints from spy chiefs, the Federal Bureau of Investigation and, this week, British Prime Minister David Cameron.

Obama’s comments came after two days of meetings with Cameron, and with the prime minister at his side.

“If we find evidence of a terrorist plot… and despite having a phone number, despite having a social media address or email address, we can’t penetrate that, that’s a problem,” Obama said. He said he believes Silicon Valley companies also want to solve the problem. “They’re patriots.”

Every time politicians tell us that we need to surrender security, they sell it with fear. They tell us that they must be able to read all of our communications, otherwise terrorists will kill us, pedophiles will kidnap and rape children, abusers will continue to abuse their victims, and murderers will be able to kill with impunity. I think it’s about time to bring this conversation full circle. Every one of those arguments can be flipped around.

Without a means of communicating anonymously and privately, individuals become much easier for terrorists to target. Imagine an individual inside of a terrorist cell who wants to communicate the cell’s plans to counter-terrorists. Unless he is able to do this anonymously and privately he will likely be killed. The problem with breaking cryptographic tools so the government can bypass them is that anybody who knows about the weakness can also bypass them.

Then we have the children. Every attack against our privacy is “for the children”. But cryptographic tools can also protect children from predators. Imagine a school setting where an instructor is planning to abduct one of the pupils. He’s obviously not going to do it on school grounds because the likelihood of being caught there is high. However, if his target coordinates plans with schoolmates via electronic communications and those communications are not secure, the predator can read them and wait for the children to go somewhere more isolated.

Abusers love to surveil their victims. Keeping tabs on where their victims go, what they spend, who they’re talking with, and what they’re talking about allows abusers to wield a great deal of psychological power. This ability to surveil also makes it less likely that their victims will seek help. When the chances of getting caught seeking help are high and the consequences are physical abuse, a victim is more likely to do what maintains the status quo.

Murderers, like terrorists, would benefit greatly from broken cryptography. Like terrorists, murderers need to identify and track their targets. If somebody is trying to murder a specific individual they may know where that individual works and lives. Businesses and neighborhoods often have too many witnesses around, so a smart murderer is going to surveil his target and use the information he uncovers to strike at a more opportune time.

It’s time we start calling the politicians on their bullshit fear mongering. Whenever they bring up terrorists, pedophiles, abusers, or murderers we need to point out that those threats are also good arguments for strong cryptography.

Google Stops Supporting Old Unsupported Code

I give software companies a lot of shit for failing to keep their customers secure but I also acknowledge that the task is really difficult. This is especially true when your customers are running old versions of your software and either refuse to or cannot upgrade. Microsoft continued supporting Windows XP for a decade, which is probably a century in software terms. When it cut off support many people still running Windows XP complained that they were being put at unnecessary risk. But software companies can’t support every version of every software product they’ve released. Google recently announced that it was no longer going to support Android WebView and now people are complaining that they’re being put at unnecessary risk because they’re running an old version of Android:

Owning a smartphone running Android 4.3 Jelly Bean or an earlier versions of Android operating system ?? Then you are at a great risk, and may be this will never end.

Yes, you heard right. If you are also one of millions of users still running Android 4.3 Jelly Bean or earlier versions of the operating system, you will not get any security updates for WebView as Google has decided to end support for older versions of Android WebView – a default web browser on Android devices.

WebView is the core component used to render web pages on an Android device, but it was replaced on Android 4.4 KitKat with a more recent Chromium-based version of WebView that is also used in the Chrome web browser.

Admittedly, only supporting the latest version of Android is pretty shoddy, but who is really to blame? Google has released a new version of Android, 4.4, and is supporting it, so why aren’t customers upgrading? Because device manufacturers and carriers are standing in the way.

The smartest thing Apple did with the iPhone was cut the carriers out of the update cycle. When Apple wants to release an update it just releases an update. Furthermore, it has been doing an OK, albeit not great, job of supporting older devices.

Most Android devices require the device manufacturer to release an update and each carrier to sign off on it before it gets pushed to customers. Android device manufacturers have also been dropping support for older devices at breakneck speed. Oftentimes you’re fortunate to have your device supported with updates by the manufacturer for the entirety of your two-year contract. And even if the manufacturer does a good job of supporting your device, the carrier, through inaction, may prevent the update from being released to its customers.

I don’t think Google should bear most of the blame here. The real culprits are the companies that have prevented their customers from upgrading to the latest version of Android. Unless mobile handsets move to a model similar to desktops and laptops, where customers are free to install whatever operating system version they desire, we’re going to continue seeing instances where software developers drop support for legacy products and leave massive numbers of users without needed support.

Fingerprints Still Suck as Authenticators

I do find Touch ID to be convenient, but fingerprints are still terrible authenticators. This is, in part, because you leave them everywhere. Another problem is that once an attacker has obtained your fingerprint there’s no way for you to change it. As technology improves, obtaining a target’s fingerprint becomes easier. The Chaos Computer Club demonstrated that this week when one of its members explained how he was able to replicate a politician’s fingerprint from a photograph:

Jan Krissler says he replicated the fingerprint of defence minister Ursula von der Leyen using pictures taken with a “standard photo camera”.

Mr Krissler had no physical print from Ms von der Leyen.

[…]

He told the audience he had obtained a close-up of a photo of Ms von der Leyen’s thumb and had also used other pictures taken at different angles during a press event that the minister had spoken at in October.

Biometric technology often wins favor due to its cool factor. Seeing a device unlock from a fingerprint reader or a retinal scanner is very neat to witness. But cool factor does not equal security. If fingerprints can be replicated from standard photography today, it won’t be long until retinal patterns can be replicated as well.

Touch ID

When I was young I was an early adopter. I had to have every new gadget as soon as it was released. Because of that I was also a beta tester. Now that I’m older and don’t have the time to dick around with buggy products I wait until early adopters have played with a device for a while before purchasing it. The beta testers for the iPhone 6 have done a fantastic job as far as I can see, so I finally upgraded to one.

I’m not too thrilled about the increased size but it’s not so big as to be difficult to use (unlike the iPhone 6 Plus, which combines all of the worst features of a phone and tablet into one big mistake). Other than the size it’s basically like previous iPhones but with added processing power and storage. Since I was upgrading from an iPhone 5 I also gained access to Touch ID, Apple’s fingerprint authentication system.

Let me preface what I’m about to say with an acknowledgement of how poor fingerprints are as a security token. When you use your fingerprint for authentication you are literally leaving your authentication token on everything you touch. That means a threat can not only get your authentication token but can do so at their leisure. Once a threat has your fingerprint there’s nothing you can do to change it.

With that disclaimer out of the way I must admit that I really like Touch ID. Fingerprints may not be the best authentication method in existence, but all of us make security tradeoffs of some sort every day (since the only truly secure computer is one that cannot be used). Security and convenience are usually at odds. This is probably the biggest reason so many people are apathetic about computer security. But I think Touch ID does a good job of finding the balance between security and convenience.

Until Apple implemented Touch ID, the only two options you had for securing your iPhone were a four digit PIN or a more complex password. A phone is a device you pull out and check numerous times throughout the day and usually those checks are a desire to find some small bit of information quickly. That makes complex passwords, especially on a touchscreen keyboard, a pain in the ass. Most people, if they have any form of security on their phone at all, opt for a four digit PIN. Four digit PINs keep out only the most apathetic attackers. If you want to be secure against a threat that is willing to put some work into cracking your device you need something more secure.
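The gap between a four digit PIN and even a modest password is easy to quantify with a little back-of-the-envelope math (the ten-character password below is just an illustrative assumption):

```python
import math

pin_space = 10 ** 4        # four digits: 10,000 possibilities
password_space = 62 ** 10  # ten characters of upper, lower, and digits

print(pin_space)                         # 10000
print(round(math.log2(pin_space), 1))    # 13.3 bits of entropy at best
print(round(math.log2(password_space)))  # ~60 bits
```

Roughly 13 bits versus roughly 60: an attacker who can try guesses offline exhausts the entire PIN space almost instantly, which is why the PIN alone only deters the apathetic.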

Touch ID works as a secondary method of authentication. You still need to have a four digit PIN or a password on the device. That, in my opinion, is the trick to Touch ID being useful. If you reboot your phone you will need to authenticate with your four digit PIN or password. Until that first authentication after boot up Touch ID is not available. Another way to make Touch ID unavailable is not to log into your phone for 48 hours.
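Those rules amount to a small state machine: fingerprint unlock is only available after one passcode entry since boot, and only within 48 hours of the last unlock. Here’s a toy Python model of the behavior as described above — a simplification for illustration, not Apple’s actual implementation:

```python
class LockPolicy:
    """Toy model of the Touch ID availability rules described above."""

    TOUCH_ID_WINDOW = 48 * 3600  # 48 hours, in seconds

    def __init__(self):
        self.passcode_entered_since_boot = False
        self.last_unlock = None

    def unlock_with_passcode(self, now):
        # A passcode unlock both arms Touch ID and resets the 48h clock
        self.passcode_entered_since_boot = True
        self.last_unlock = now

    def touch_id_available(self, now):
        return (self.passcode_entered_since_boot
                and self.last_unlock is not None
                and now - self.last_unlock <= self.TOUCH_ID_WINDOW)

policy = LockPolicy()
print(policy.touch_id_available(now=0))          # False: fresh boot
policy.unlock_with_passcode(now=0)
print(policy.touch_id_available(now=3600))       # True: within the window
print(policy.touch_id_available(now=49 * 3600))  # False: window expired
```

The practical upshot is in the next paragraph: because a reboot clears the armed state, powering the phone off forces the stronger credential.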

The Fifth Amendment does not protect you from surrendering your fingerprint to the police. That means law enforcers can compel you to give your fingerprint so they can unlock your phone. Whether passwords are protected by the Fifth Amendment is a topic still being fought in the courts. If you’re arrested, a password is going to be a better method of securing your device from the state than your fingerprint. Because of how Touch ID works, you can thwart law enforcement’s ability to unlock your phone with your fingerprint by simply powering the phone off.

Only you can decide if Touch ID is an appropriate security mechanism for you. I’m really enjoying it because now I can have a complex password on my phone without having to type it in every time I pull it out of my pocket. But I also admit that fingerprints are poor authentication mechanisms. Tradeoffs are a pain in the ass but they’re the only things that make our electronic devices usable.

Encryption Works Except When It Doesn’t

People are still debating whether Edward Snowden is a traitor deserving a cage next to Chelsea Manning or a hero deserving praise (hint: unless you believe the latter you’re wrong). But a benefit nobody can deny is the overall improvement to computer security his actions have led to. In addition to more people using cryptographic tools, we are also getting a better idea of which tools work and which don’t:

The NSA also has “major” problems with Truecrypt, a program for encrypting files on computers. Truecrypt’s developers stopped their work on the program last May, prompting speculation about pressures from government agencies. A protocol called Off-the-Record (OTR) for encrypting instant messaging in an end-to-end encryption process also seems to cause the NSA major problems. Both are programs whose source code can be viewed, modified, shared and used by anyone. Experts agree it is far more difficult for intelligence agencies to manipulate open source software programs than many of the closed systems developed by companies like Apple and Microsoft. Since anyone can view free and open source software, it becomes difficult to insert secret back doors without it being noticed. Transcripts of intercepted chats using OTR encryption handed over to the intelligence agency by a partner in Prism — an NSA program that accesses data from at least nine American internet companies such as Google, Facebook and Apple — show that the NSA’s efforts appear to have been thwarted in these cases: “No decrypt available for this OTR message.” This shows that OTR at least sometimes makes communications impossible to read for the NSA.

Things become “catastrophic” for the NSA at level five – when, for example, a subject uses a combination of Tor, another anonymization service, the instant messaging system CSpace and a system for Internet telephony (voice over IP) called ZRTP. This type of combination results in a “near-total loss/lack of insight to target communications, presence,” the NSA document states.

[…]

Also, the “Z” in ZRTP stands for one of its developers, Phil Zimmermann, the same man who created Pretty Good Privacy, which is still the most common encryption program for emails and documents in use today. PGP is more than 20 years old, but apparently it remains too robust for the NSA spies to crack. “No decrypt available for this PGP encrypted message,” a further document viewed by SPIEGEL states of emails the NSA obtained from Yahoo.

So TrueCrypt, OTR, PGP, and ZRTP are all solid protocols to utilize if you want to make the National Security Agency’s (NSA) job of spying on you more difficult. It’s actually fascinating to see that PGP has held up so long. The fact that TrueCrypt is giving the NSA trouble makes the warning about its insecurity issued by its developers more questionable. And people can finally stop claiming that Tor isn’t secure just because it started off as a government project. But all is not well in the world of security. There are some things the NSA has little trouble bypassing:

Even more vulnerable than VPN systems are the supposedly secure connections ordinary Internet users must rely on all the time for Web applications like financial services, e-commerce or accessing webmail accounts. A lay user can recognize these allegedly secure connections by looking at the address bar in his or her Web browser: With these connections, the first letters of the address there are not just http — for Hypertext Transfer Protocol — but https. The “s” stands for “secure”. The problem is that there isn’t really anything secure about them.

[…]

One example is virtual private networks (VPN), which are often used by companies and institutions operating from multiple offices and locations. A VPN theoretically creates a secure tunnel between two points on the Internet. All data is channeled through that tunnel, protected by cryptography. When it comes to the level of privacy offered here, virtual is the right word, too. This is because the NSA operates a large-scale VPN exploitation project to crack large numbers of connections, allowing it to intercept the data exchanged inside the VPN — including, for example, the Greek government’s use of VPNs. The team responsible for the exploitation of those Greek VPN communications consisted of 12 people, according to an NSA document SPIEGEL has seen.

How the NSA is able to bypass VPN and HTTPS is still in question. I’m guessing the NSA’s ability to break HTTPS depends on how it’s implemented. Many sites, including ones such as Paypal, fail to implement HTTPS in a secure manner. This may be an attempt to maintain backward compatibility with older systems or it may be incompetence. Either way they certainly make the NSA’s job easier. VPN, likewise, may be implementation dependent. Most VPN software is fairly complex, which makes configuring it in a secure manner difficult. Like HTTPS, it’s easy to put up a VPN server that’s not secure.
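Much of the gap between a weak and a strong HTTPS deployment comes down to configuration. As a hedged sketch, assuming an nginx server (directives from nginx’s ssl module; the exact protocol and cipher choices go stale quickly, so treat these as illustrative rather than a recommendation), ruling out legacy protocols and export-grade ciphers looks something like this:

```nginx
# Hypothetical nginx TLS hardening fragment -- adapt to your deployment
ssl_protocols TLSv1.1 TLSv1.2;         # drop SSLv3 and TLSv1.0
ssl_ciphers HIGH:!aNULL:!MD5:!EXPORT;  # no null, weak, or export ciphers
ssl_prefer_server_ciphers on;          # server, not client, picks the cipher
```

A server that instead leaves old protocols and weak cipher suites enabled for backward compatibility is exactly the kind of "HTTPS" the quoted documents describe as easy pickings.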

The ultimate result of this information is that the tools we rely on will become more secure as people address the weaknesses being exploited by the NSA. Tools that cannot be improved will be replaced. Regardless of your personal feelings about Edward Snowden’s actions, you must admit that they are making the Internet more secure.

Abusers Installing Spyware on Their Victims’ Computers

Last month I briefly mentioned the importance of full disk encryption. Namely, it prevents the contents of the hard drive from being altered unless one knows the decryption key. I had to deal with a friend’s significant other installing spyware on her system in order to keep tabs on who she was talking to and what she was doing. Her significant other didn’t know her login credentials, but since her hard drive wasn’t encrypted he was able to install the spyware with a boot disk. This threat model isn’t out of the ordinary. In fact, it is becoming worryingly common:

Helplines and women’s refuge charities have reported a dramatic rise in the use of spyware apps to eavesdrop on the victims of domestic violence via their mobiles and other electronic devices, enabling abusers clandestinely to read texts, record calls and view or listen in on victims in real time without their knowledge.

The Independent has established that one device offering the ability to spy on phones is being sold by a major British high-street retailer via its website. The proliferation of software packages, many of which are openly marketed as tools for covertly tracking a “cheating wife or girlfriend” and cost less than £50, has prompted concern that police and the criminal justice system in Britain are failing to understand the extent of the problem and tackle offenders.

A survey by Women’s Aid, the domestic violence charity, found that 41 per cent of domestic violence victims it helped had been tracked or harassed using electronic devices. A second study this year by the Digital Trust, which helps victims of online stalking, found that more than 50 per cent of abusive partners used spyware or some other form of electronic surveillance to stalk their victims.

As a general rule security is assumed to be broken when an adversary has physical access. But that isn’t always the case. It really depends on how technically capable a threat is. Oftentimes in cases of domestic abuse the abuser is not technically savvy and relies on easy to procure and use tools to perform monitoring.

Full disk encryption, while not a magic bullet, is pretty effective at keeping less technically capable threats from altering a drive’s contents without the owner’s knowledge. When encrypting the contents of a hard drive is not possible, either due to technical limitations or the threat of physical violence, the Tails Linux live distribution is a good tool. Tails is developed to maintain user anonymity and to leave as few traces as possible that it was used. All Internet traffic on Tails is pumped through Tor, which prevents a threat monitoring your network from seeing what you’re looking at or who you’re talking to (but does not disguise the fact that you’re using Tor). That can enable a victim to communicate securely with an individual or group that can help. Since Tails boots from a USB stick or CD it can be easily removed and concealed.

As monitoring tools become easier to use, cheaper, and more readily available, the need to learn computer security will become even greater. After all, the National Security Agency (NSA) isn’t the only threat your computing environment may face. Domestic abusers, corrupt (or “legitimate”) law enforcers, landlords, bosses, and any number of other people may wish to spy on you for various reasons.

The Scope of the North Korea Internet Outage

I’m sure many of you are aware of the Internet outage in North Korea. An entire country’s Internet service disrupted? On paper this may sound impressive; it may even sound like retaliation by another nation state for a hack North Korea had nothing to do with. But the outage isn’t nearly as impressive as it sounds:

Chris Nicholson, a spokesman for Akamai, an Internet content delivery company, said it was difficult to pinpoint the origin of the failure, given that the company typically sees only a trickle of Internet connectivity from North Korea. The country has only 1,024 official Internet protocol addresses, though the actual number may be a little higher. That is fewer than many city blocks in New York have. The United States, by comparison, has billions of addresses.

1,024 official Internet protocol addresses for an entire nation? Damn. Obviously there aren’t a lot of connected people in that country (shocker, I know). According to Bloomberg the attack is directed at North Korea’s domain name service servers, which is cheap enough that pretty much anybody could do it:

Such attacks flood Internet servers with traffic to knock infrastructure offline. In North Korea’s case, the attack appears to be aimed at the country’s domain-name service system, preventing websites from being able to resolve Internet addresses, Holden said.

It’s unlikely the attack is being carried out by the U.S., as any hacker could probably spend $200 to do it, Holden said.

This is more likely an attack carried out by a bored teenager with a small botnet than by a nation state. Then again, with Sony’s recent behavior it wouldn’t surprise me a whole lot if it were doing this.
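For scale, the 1,024-address figure quoted above is exactly one /22 network, which Python’s standard ipaddress module will confirm (175.45.176.0/22 was North Korea’s publicly documented allocation at the time, though treat the specific prefix as background detail rather than something from the quoted articles):

```python
import ipaddress

# North Korea's entire publicly routed IPv4 space at the time: one /22
block = ipaddress.ip_network("175.45.176.0/22")
print(block.num_addresses)  # 1024
print(block.prefixlen)      # 22
```

Knocking a single /22 and a handful of DNS servers offline is a very different feat from disrupting a country with billions of addresses, which is why the $200-botnet theory is so plausible.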