You Can’t Rely On Others For Your Defense

I shift around a lot of electrons talking about self-defense. When it comes to self-defense, the thing to always keep in mind is that you can only rely on yourself. Sure, somebody may come to your aid, but you can’t rely on the assumption that somebody will because very often nobody will:

What happened to Kevin Joseph Sutherland was horrific beyond imagining. On July 4, in front of about 10 witnesses on the Washington, D.C., Metro, an assailant punched him, stomped on him, kicked him in the head, and stabbed him at least 30 times. No one attempted to stop Sutherland’s killer.

What happened to me in November was vastly different, and I do not intend to equate the two events. Like Sutherland, I was attacked on a Saturday afternoon on the D.C. Metro. And as in Sutherland’s case, despite my screams and pleas, almost none of my fellow passengers on the crowded train car did anything to help.

This is why I keep myself in relatively good shape, carry a firearm, and train in martial arts (in that order of precedence) and urge you to do so as well. It’s harder to kill somebody who is in even decent shape than somebody who isn’t in shape at all, and physical fitness improves your ability to run away, which should always be your first instinct when you feel a situation is about to go bad. A firearm gives you the best odds against an aggressor and takes physical disparity out of the equation. Martial arts give you an option for dealing with an aggressor even in situations where you’re unarmed.

Both stories mentioned in the linked article involved a person being attacked while multiple witnesses did nothing. One could blame the witnesses for not involving themselves, and a writer for the Federalist did exactly that, but it’s also unreasonable to expect somebody to risk their life to aid a complete stranger. That doesn’t make somebody a “beta male,” as the Federalist writer claims, it simply means they’re individuals who performed a risk-benefit calculation and concluded involving themselves was riskier than the potential benefit. That’s a very logical conclusion. Involving yourself in a physical confrontation is always risky. You don’t know if the situation is a gang of violent individuals beating a random innocent person to death or an inter-gang war playing itself out. It’s also impossible to know if the attackers are carrying armaments in addition to whatever is currently in their hands or if they have more friends nearby. Generally speaking, the safe option for a person witnessing a physical confrontation is to do everything in their power to not involve themselves. That doesn’t necessarily mean it’s the moral choice but it is a logical choice.

But that logical choice also means you have to be prepared to fend for yourself.

Gun Control And Cryptography Control: Same Idea With The Same Outcome

Crypto War II is heating up. David Cameron has vowed to make effective cryptography illegal in Britain, the Federal Bureau of Investigations (FBI) has been urging Congress to pass a ban on effective cryptography, and Australia has been ahead of the curve by prohibiting not just the use of strong cryptography but also learning about it. I’ve spent a good deal of time fighting against attempts to restrict or prohibit gun ownership. From my experience there I can say that attempts to restrict or prohibit effective cryptography are the exact same thing with the same outcome.

First, let’s consider what restricting or prohibiting gun ownership does. Gun restriction laws prohibit non-state individuals from having legal access to certain types of firearms and restrict what they can do with the firearms they own. The National Firearms Act (NFA), for example, places heavy restrictions on purchasing machine guns, suppressors, and several other categories of firearms. Adding to the NFA’s restrictions on machine guns, the Hughes Amendment to the Firearm Owners Protection Act outright prohibited non-state entities from legally owning machine guns manufactured after 1986. In addition to these restrictions, the Gun Control Act of 1968 also created a list of individuals prohibited from owning any type of firearm. The list includes anybody who has been labeled a felon, which means simply failing to abide by the entire tax code could make it illegal for you to own a firearm. Most states have laws restricting individuals from lawfully carrying a firearm without state permission. In other words most states restrict individuals’ options for self-defense. Those laws, like all laws, only apply to individuals acting within the law. Criminals, by definition, do not have to abide by these restrictions and prohibitions so the ultimate outcome is that non-state individuals can be outgunned by violent criminals (both the state and non-state variety).

Now let’s consider what restricting or prohibiting effective cryptography does. Restrictions against effective cryptography create a legal requirement that all cryptographic systems be weakened in such a way that they can be easily bypassed by the state. In reality cryptographic systems cannot be weakened in a way that allows only one entity to bypass them without also allowing other entities to bypass them. We learned this lesson during the Clipper chip fiasco. When you purposely introduce weaknesses into cryptographic systems those weaknesses can be targeted by anybody, including run-of-the-mill criminals and foreign states. In the case of key escrow, the proposed system where all encrypted data can be decrypted by a key held by the state, the focus would likely be on either recreating or stealing a copy of the state’s key. Once that happened, and it would only be a matter of time until it did, the encrypted data would be readable by anybody with a copy of the key. Imagine the day, and it would happen, when that master key was widely distributed across the Internet. Suddenly everything that was lawfully encrypted would be easily decrypted by anybody. Your personal information, including credit card and Social Security numbers, would be accessible to every identity thief in the world. Any communications that could imply you were participating in an unlawful activity, even if you weren’t, would suddenly be accessible not only to law enforcement agents but also to individuals interested in blackmailing you. All future communications with online stores would be vulnerable, which means your credit card and shipping information could be snapped up by anybody surveilling the network you’re using. Any information you entered into state and federal online tax systems would be viewable to anybody with a copy of the master key. Effectively everything you communicated would be transmitted in plaintext and viewable to anybody.
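To see concretely why a single escrow key is so dangerous, here’s a minimal sketch using Python’s cryptography library (the library and the names are my choice for illustration, not anything from an actual proposal). Real escrow schemes wrap per-session keys under the master key rather than encrypting data with it directly, but the failure mode is the same: whoever holds a copy of the master key can read everything.

```python
# Minimal sketch of the key escrow failure mode: one master key reads everything.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()   # the single key the state would hold
escrow = Fernet(escrow_key)

# Everyone's "lawfully encrypted" data ends up protected by the same key.
messages = [
    b"credit card: 4111 1111 1111 1111",
    b"happy birthday, grandma",
    b"protest planning notes",
]
ciphertexts = [escrow.encrypt(m) for m in messages]

# Anyone who steals or leaks escrow_key (a criminal, a foreign state, an
# insider) can now read all of it, not just what they were "authorized" to read.
stolen_copy = Fernet(escrow_key)
for c in ciphertexts:
    print(stolen_copy.decrypt(c))
```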

Cryptography, like a firearm, is a means of self-defense. Where firearms are used to defend your physical self, cryptography is used to defend your data. If your phone or laptop is stolen, encryption can defend all of the information stored on it from the thief. When you make a purchase online, encryption defends your credit card number and shipping address from identity thieves. Your Social Security number is also defended against identity thieves by encryption when you fill out your taxes online. There are a lot of bad individuals who want to steal personal information about you and the only thing you have to defend against them is effective cryptography. Any restriction against effective cryptography necessarily inhibits the ability of individuals to defend themselves.

The fight against restricting cryptography is the same fight against restricting firearm ownership. Both fights are against attempts by the state to restrict the ability of individuals to protect themselves from harm.

Another Reason To Run An Ad Blocker

Ad blockers are marvelous web browser plugins. In addition to saving users from ceaseless pop-ups, audio that plays automatically, and other annoyances, ad blockers also protect users from malware. A recent study [PDF] published by Simon Fraser University shows another reason to run an ad blocker: they can significantly reduce the data usage of your network:

A Canadian university claims to have saved between 25 and 40 percent of its network bandwidth by deploying Adblock Plus across its internal network.

The study tested the ability of the Adblock Plus browser extension in reducing IP traffic when installed in a large enterprise network environment, and found that huge amounts of bandwidth was saved by blocking web-based advertisements and video trailers.

This is especially important when you’re dealing with a service that requires you to pay by usage, such as most modern cellular data plans, or when you’re building a network that will see heavy usage from numerous individuals, such as a university network.

Ad blockers are not well received by website operators who rely on advertisements for revenue. That’s understandable because ad blockers directly cut into their profits. But it’s also unwise to rely on a revenue source that requires users to put themselves at risk of malware infection and pay more for bandwidth. If ad blockers are a threat to your revenue model then you should start looking into other avenues for making a profit.

I mention Feedbin periodically when the issue of website revenue comes up because it’s a great example of how a website can make money without relying on advertisements. Feedbin charges customers $3.00 per month or $30.00 per year to use its service. People are more than willing to pay for a quality online service, as demonstrated by Feedbin, Netflix, and Spotify. The advantage of a subscription model is that it’s a predictable cost, unlike the unpredictable bandwidth costs incurred by being served advertisements, and it greatly reduces the risk of malware infection.

Finding alternative revenue sources is going to become increasingly important as more people utilize ad blockers for security, reducing network congestion, and lowering bandwidth costs. Instead of expecting customers to take on more risk and cost, website operators need to begin researching ways to stay afloat without ads. As with any market, online services are constantly evolving and those who want to continue participating need to evolve as well.

The Founding Fathers Did Use Encryption

One of the arguments that has been made for prohibiting strong encryption is that the Founding Fathers couldn’t have envisioned a world where law enforcers were unable to read communications. Why the Founding Fathers needed to be clairvoyant to justify something today is beyond me, but the Electronic Frontier Foundation (EFF) had a great rebuttal to the argument. If you head over to the Library of Congress’s website you can read about how James Madison encrypted his messages to prevent law enforcers from reading them:

As a Virginia delegate to the Continental Congress, while secretary of state, and in his personal correspondence with Thomas Jefferson, James Madison feared constantly that unauthorized people would seek to read his private and public correspondence. To deter such intrusions, he resorted to a variety of codes and ciphers.

Most of the early ciphers that Madison used were keyword polyalphabetic code systems involving a complex interaction of a keyword with alphabets and numbers in a preestablished pattern. The codes were designed by James Lovell, a Massachusetts delegate to the Continental Congress and an expert on ciphers. On July 5, 1782, Edmund Randolph wrote to James Madison: “I wish, that on future occasions of speaking of individuals we may use the cypher, which we were taught by Mr. Lovell. Let the keyword be the name of the negro boy, who used to wait on our common friend.” Madison noted at the bottom of Randolph’s letter, “Probably CUPID.” He added, “I have been in some pain from the danger incident to the cypher we now use. The enemy I am told have in some instances published their intercepted cyphers.”

What’s interesting here is that Madison not only encrypted his messages when he was in the Continental Congress but also after he became secretary of state and in his personal correspondence. He wasn’t just hiding his communications from British law enforcers but continued to hide them even after they had been replaced by United States law enforcers. That only makes sense: if you only encrypt important messages, the simple fact that you used encryption indicates to spies that the message is important and that resources should be put into decrypting it.

Arguing that the Founding Fathers couldn’t have predicted pervasive encryption is idiotic because they themselves used it. There’s also no evidence that they provided either British or United States law enforcers with any keys to allow them to rapidly decrypt the communications if needed.
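For the curious, the keyword polyalphabetic systems Lovell taught work along the same general lines as a Vigenère cipher, where each letter of the keyword selects a shifted alphabet. Lovell’s actual cipher combined the keyword with alphabets and numbers in a more complex pattern, so treat this Python sketch as a simplified illustration using the keyword from Randolph’s letter:

```python
# Simplified keyword polyalphabetic (Vigenere-style) cipher for illustration.
# Lovell's actual system interacted the keyword with alphabets and numbers in a
# more complex, pre-established pattern; this only captures the basic idea.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encipher(plaintext, keyword):
    # Historical ciphers typically dropped spaces and punctuation.
    letters = [c for c in plaintext.upper() if c in ALPHABET]
    return "".join(
        ALPHABET[(ALPHABET.index(c) + ALPHABET.index(keyword[i % len(keyword)])) % 26]
        for i, c in enumerate(letters)
    )

def decipher(ciphertext, keyword):
    return "".join(
        ALPHABET[(ALPHABET.index(c) - ALPHABET.index(keyword[i % len(keyword)])) % 26]
        for i, c in enumerate(ciphertext)
    )

secret = encipher("The enemy has published intercepted cyphers", "CUPID")
print(secret)
print(decipher(secret, "CUPID"))
```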

Hacking Team Demonstrates It Doesn’t Know What Words Mean

Hacking Team has finally released a response to the attack it incurred. Much like the company’s internal network security, the response it posted should have people concerned. In addition to failing to follow basic security practices, such as not storing login credentials in plaintext files, the company also doesn’t have a strong grasp of the English language:

Before the attack, HackingTeam could control who had access to the technology which was sold exclusively to governments and government agencies.

If Hacking Team could control who had access to the technology before the attack, the attack wouldn’t have been successful. The fact that the attack was successful proves that Hacking Team didn’t have control over its technology. Apparently whoever is doing public relations for the company doesn’t know what control means.

The next two sentences, especially when combined with the sentence above, are downright laughable to me:

Now, because of the work of criminals, that ability to control who uses the technology has been lost. Terrorists, extortionists and others can deploy this technology at will if they have the technical ability to do so.

Instead of governments and government agencies having exclusive use of Hacking Team’s technology, now terrorists, extortionists, and others have access to it? What exactly is the difference between a government and an extortionist? None. Governments by their very nature are extortionists. They do tend to use nice sounding euphemisms like taxes, license fees, and citations but in reality governments are in the business of forcefully taking wealth from the populace.

Looking a bit deeper, we must ask how some of the governments and agencies Hacking Team sold to (such as Sudan, Ethiopia, and the Drug Enforcement Agency) differ in any notable way from other terrorist organizations. Other than the fact that Hacking Team has accepted money from them, there is no notable difference. Simply calling something by a different name doesn’t change what it is. Admittedly this is a problem many people have with the English language.

Outside of the failure to utilize the English language the Hacking Team response contains this gem:

HackingTeam is evaluating if it is possibile to mitigate the danger.

How could a company that discovers previously unknown vulnerabilities help mitigate danger to people? For actual security companies the answer is to work with developers to fix the vulnerabilities before they can be actively exploited. Hacking Team, on the other hand, sat on those vulnerabilities so it could sell tools for the sole purpose of exploiting them. Its entire business model relied on people being in danger. Had it actually cared about helping mitigate danger it wouldn’t have sold the tools it did, especially to the customers it did.

This Hacking Team breach just gets better by the day. Between the company’s scummy practices, its source code getting open sourced, and its complete failure at handling public relations, this breach is the gift that keeps on giving.

Company That Provides Spyware To Oppressive Regimes Gets Hacked; LULZ Follow

Yesterday might as well have been Christmas for the information security industry. Hacking Team, a company known for selling surveillance malware to oppressive regimes, was hacked and 400GB of its data was released to the Internet. A hacker going by the name PhineasFisher, who made a reputation for themselves when they hacked the spyware provider Gamma International, has supposedly claimed responsibility. If that’s true then we all owe them a beer.

Remember what I said about Hacking Team having a reputation for selling software to oppressive regimes? Documents in the leaked data reveal some of the company’s customers. From that information it appears that the company will deal with anybody willing to throw cash at it:

One document pulled from the breached files, for instance, appears to be a list of Hacking Team customers along with the length of their contracts. These customers include Azerbaijan, Bahrain, Egypt, Ethiopia, Kazakhstan, Morocco, Nigeria, Oman, Saudi Arabia, Sudan, and several United States agencies including the DEA, FBI and Department of Defense. Other documents show that Hacking Team issued an invoice to Ethiopia’s Information Network Security Agency (the spy agency of a country known to surveil and censor its journalists and political dissidents) for licensing its Remote Control System, a spyware tool. For Sudan, a country that’s the subject of a UN embargo, the documents show a $480,000 invoice to its National Intelligence and Security Services for the same software.

Nigeria, Saudi Arabia, Sudan, and the Drug Enforcement Agency (DEA)? Talk about some nasty buyers. If I owned a company that had entities like these as customers I would shut my doors and label myself the biggest failure in business. But Hacking Team apparently has no moral issues with selling to such scum and is even willing to bypass a United Nations embargo for $480,000! The bottom line is if you have the cash Hacking Team will sell to you.

Another interesting revelation that has come from this breach is just how terrible Hacking Team’s own internal security was. When you think of shady surveillance software providers you probably imagine some of the tightest network security in the business, right? As it turns out, not so much:

The data released Sunday night and through to today not only contains a large number of emails, none of which have proven too embarrassing so far, but also a number of the firms’ internal passwords, which appear to be worryingly insecure for a company that deals in exposing others’ security. These include credentials belonging to Christian Pozzi, security engineer at Hacking Team, stored in a file called login.txt. His chosen logins include easily-crackable variations on the word “password” and the name of an X-Men character all in lower-case and with no numbers or symbols.

A file directly linked to Pozzi also included images believed to show RCS grabbing screenshots.

Apparently even a security engineer at a malware provider isn’t aware of password managers. Had he been, he wouldn’t have needed to use insecure passwords stored in plaintext files. This just goes to show that being smart enough to write exploits doesn’t mean you’re skilled enough to defend against even the most basic of them.
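If any of that sounds familiar, get a password manager; it will generate and remember strong, unique passwords for you. Even without one, generating a decent random password takes a few lines of Python (a minimal sketch using the standard library’s secrets module):

```python
# Generate a strong random password with Python's standard library; a password
# manager does this (and remembers the result) for you.
import secrets
import string

def random_password(length=24):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # nothing like "password" or a lowercase X-Men name
```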

Now that I’ve had a little fun at Hacking Team’s expense let’s get down to the nitty gritty. What does this hack mean? Since the company’s exploitation software was just open sourced (not by its choice), a lot more good can come of this than simply revealing the immoral actions of a scummy company. The software security holes Hacking Team’s malware relied on can now be discovered and fixed. Malware producers, like government surveillance agencies, cause a lot of damage simply by keeping the exploits they discover secret. Instead of being helpful members of the security community by assisting companies in fixing their security flaws, they write software that exploits those flaws and sell it to anybody willing to pay. Ironically, breaking into these companies’ networks and releasing their source code to the world makes everybody safer.

I’ll post more interesting information as it is revealed. But if you want real-time updates of what is being discovered I urge you to follow #HackingTeam on Twitter. There you’ll find such entertaining tidbits as the supposed Transport Layer Security (TLS) private key for support.hackingteam.com and the really shitty passwords belonging to Hacking Team’s owner.

CryptoPartyMN Website is Up Again

You probably noticed that posting has been sparse this week. That’s because I’ve been focusing my efforts on setting up the new website for CryptoPartyMN. For those of you who haven’t heard of CryptoPartyMN, it’s a group of us in the Twin Cities region that organizes periodic meetups with the intention of teaching people how to utilize strong crypto to protect online anonymity and secure communications. We hosted a CryptoParty at The Hack Factory on May 9th and another at B-Sides MSP, and we’re planning more in the future.

Admittedly the website is pretty bland right now. Unfortunately the theme we were using was on the other server that I don’t have access to. It’ll be improved in time. Likewise, now that the site is up and will stay up, we’ll make sure to post meetup notifications on it regularly (we usually meet every other Tuesday). Add it to your RSS feed if you want to know when the next CryptoParty event is.

VPN Isn’t A Magic Bullet

I really like virtual private networks (VPN) and a lot of people utilize them for various reasons, including protecting anonymity, thwarting region locks on services, and bypassing filters put in place by Internet service providers (ISP). However, it’s important to note that there are no magic bullets and VPNs are no exception.

We’re in the midst of a transition from IPv4 to IPv6. A lot of software still either doesn’t support IPv6 or isn’t properly configured to handle it yet. In fact my ISP, Comcast, still doesn’t give business customers IPv6 addresses so I can’t set up my services to properly work with the newfangled Internet addressing scheme (and Comcast happens to be the only option in my area; good thing for Comcast the government exists to protect monopolies). That means my VPN server, like many others, may very well leak personal information through IPv6:

The study of fourteen popular VPN providers found that eleven of them leaked information about the user because of a vulnerability known as ‘IPv6 leakage’. The leaked information ranged from the websites a user is accessing to the actual content of user communications, for example comments being posted on forums. Interactions with websites running HTTPS encryption, which includes financial transactions, were not leaked.

The leakage occurs because network operators are increasingly deploying a new version of the protocol used to run the Internet called IPv6. IPv6 replaces the previous IPv4, but many VPNs only protect user’s IPv4 traffic. The researchers tested their ideas by choosing fourteen of the most famous VPN providers and connecting various devices to a WiFi access point which was designed to mimic the attacks hackers might use.

This is why I recommend doing things that absolutely need to remain private through a dedicated anonymity tool such as the Tor Browser. VPNs aren’t great for preserving anonymity anyway since the server administrator knows the IP addresses of connected clients, whereas Tor exit nodes only know the IP addresses of the relays directly connected to them. The Tor developers also focus on anonymity first, which means they’re far more likely to find and fix leaks that could reveal personally identifiable information. However, VPNs still work well for establishing connections to remote networks in a secure manner and will still do a good job of bypassing filters and region locks.
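If you want a rough idea of whether your own VPN setup leaks IPv6, compare the public addresses the outside world sees with the VPN connected and disconnected. Here’s a minimal sketch; the ipify endpoints are simply a service I’m assuming for illustration, so substitute whatever “what is my IP” service you trust:

```python
# Rough IPv6 leak check: compare the addresses the outside world sees with the
# VPN connected and disconnected. The ipify endpoints below are an assumption;
# substitute any "what is my IP" service you trust. Requires the third-party
# "requests" package.
import requests

def public_ip(url):
    try:
        return requests.get(url, timeout=5).text.strip()
    except requests.RequestException:
        return None

ipv4 = public_ip("https://api.ipify.org")    # assumed IPv4-only endpoint
ipv6 = public_ip("https://api6.ipify.org")   # assumed IPv6-capable endpoint

print("Public IPv4:", ipv4 or "none detected")
print("Public IPv6:", ipv6 or "none detected")

# If the IPv6 address shown while the VPN is up belongs to your ISP rather than
# your VPN provider, your IPv6 traffic is going around the tunnel.
```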

It’s also worth noting that as we continue to transition to IPv6 we’re going to keep running into issues like this. Change is never completely smooth, especially when some ISPs, such as Comcast, still don’t provide customers the tools needed to utilize IPv6.

OpenVPN

After getting my business Internet account the first thing I did was set up a virtual private network (VPN) server. VPN servers have a million and one uses but the most important feature they offer me is a secure tunnel when I’m connected to networks that aren’t mine. I settled on L2TP/IPSec since that was the more secure of the two options offered by OS X Server (as you can tell, the running theme with my network has been migrating away from OS X Server).

L2TP/IPSec served its purpose, giving me a secure tunnel to my home network, but it had several notable downsides. The biggest was how iOS handled it. iOS disconnects from an L2TP/IPSec VPN server when the device is turned off and doesn’t automatically reconnect when it is turned on again. That meant I had to go into the settings and manually turn it on whenever I wanted to use it (which is often). I know, first world problems.

Last week I began setting up a replacement VPN server, this one using OpenVPN. This ended up being a phenomenal leap forward. OpenVPN uses OpenSSL for encryption and authentication. That gives you a lot of options. For my purposes I restricted my OpenVPN server to only use TLSv1.2 (the latest), forward secrecy, and known strong encryption and authentication algorithms. Instead of using a pre-shared key, which is an option, I’m using certificates. Using certificates offers several advantages but the most important one to me is that iOS will automatically reconnect to a VPN server if authentication is performed with certificates. OpenVPN has a great, albeit ugly as sin, client for iOS that can import OpenVPN profiles. Best of all the app doesn’t need to be running for the VPN connection to remain connected (so you don’t have to worry about the tunnel closing after 10 minutes since that’s the longest amount of time an app can run in the background on iOS). Now when I turn my phone on it automatically connects to my VPN server.
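For reference, the server-side directives I’m describing look roughly like the following. Treat it as a sketch rather than a drop-in configuration: the exact directives and preferred cipher suites depend on your OpenVPN and OpenSSL versions, and the file paths are placeholders.

```
# Partial OpenVPN server configuration sketch: certificate authentication,
# TLS 1.2 only, and forward-secret cipher suites. Paths and cipher choices
# are placeholders/examples, not a recommendation for your setup.
port 1194
proto udp
dev tun

# Certificate-based authentication instead of a pre-shared key.
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh.pem

# Require TLS 1.2 and a DHE control-channel suite, which provides forward secrecy.
tls-version-min 1.2
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384

# Strong data channel encryption and authentication algorithms.
cipher AES-256-CBC
auth SHA512
```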

Since OpenVPN utilizes TLS it’s supposedly difficult to distinguish from HTTPS traffic, which means it’s less likely a network filter will block you from connecting to your VPN server. I don’t have access to a network that hostile so I can’t speak to the effectiveness of this, but it’s something to keep in mind if you regularly find yourself connecting devices to a heavily filtered network.

If you’re interested in setting up a VPN server I highly recommend OpenVPN. It’s fairly simple to set up and clients are available for most operating systems.

Why Everybody Should Use Encryption

Using encryption requires individuals to put forth the effort to learn it. Because people tend to be lazy, they usually spend more time coming up with excuses for not learning encryption than they would spend learning how to use it. Ultimately the excuse they end up settling on is that they have nothing to hide. This is bullshit, of course. If they truly didn’t have anything to hide they would put Internet-accessible cameras and microphones in every room of their house and allow anybody to check in on what they’re doing at any time. But they don’t.

Besides the fact that we all have something to hide there is another reason why the “nothing to hide” excuse doesn’t work. To quote Bruce Schneier:

Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.

This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.

By not using encryption you are putting lives in danger, specifically the lives of people who need encryption to stay alive. So long as a majority of people utilize unencrypted forms of communication, the presence of encryption is a signal that tells a snoop the captured data is important. If all data, from e-mails wishing grandma a happy birthday to plans for protesting the latest act of police brutality, is encrypted then spies can’t use its presence to determine what is and isn’t important. At that point their costs skyrocket because the only way for them to learn what is and isn’t important is to decrypt everything, which isn’t feasible for any organization.

So stop making excuses and learn how to encrypt your data. There are plenty of people out there, including myself, willing to help you. If you don’t then you’re contributing to a problem that puts real lives in danger.