The Seedier Side of the Internet isn’t as Seedy as You Think

Due to the popularity of Silk Road, the mainstream media has been busily reporting on the “dark” web. If you take the news stories about the “dark” web literally, it is a place where child pornography is readily available, hitmen can be hired for a handful of Bitcoin, and terrorists commonly hold secret meetings to discuss their plan to blow up the next elementary school. Reality, as is often the case with mainstream media portrayals, is quite different:

Read nearly any article about the dark web, and you’ll get the sense that its name connotes not just its secrecy but also the low-down dirty content of its shadowy realms. You’ll be told that it is home to several nefarious things: stolen data, terrorist sites, and child porn. Now while those things may be among what’s available on the dark web, all also are available on the normal web, and are easily accessible to anyone, right now, without the need for any fancy encryption software.

[…]

Despite reports, there are only shreds of evidence that the Islamic State is using the dark web. One apparent fund-raising site highlighted by the Washington Post had managed to garner exactly 0 bitcoins at the time of writing, and this was also the case with another I discovered recently. It’s worth pointing out that both of those sites simply claimed to be funneling the cash to the terrorist group, and could easily have been fakes. The one Islamic extremist dark web site to actually generate any revenue mustered only $1,200 earlier this year. Even it doesn’t explicitly mention the Islamic State.

And yes, child porn is accessible on the normal web. In fact, it is rampant when compared with what’s available from hidden sites. Last year, the Internet Watch Foundation, a charity that collates child sexual abuse websites and works with law enforcement and hosting providers to have the content removed, found 31,266 URLs that contained child porn images. Of those URLs, only 51 of them, or 0.2 percent, were hosted on the dark web.

In other words, the big scary “dark” web is basically a smaller regular Internet. What you find on hidden sites, which is the correct term for the “dark” web, is also far more widely available on the regular Internet. Why, then, do sites go through the hassle of requiring visitors to use something like the Tor Browser? Because maintaining anonymity, both for themselves and for their visitors, is valuable.

In the case of Silk Road, for example, it was much easier to build user trust through a hidden site since there was a barrier between the service and the identity of its users. Not only did that barrier protect users from potentially being revealed to law enforcement agents by the site’s administrators, but it also prevented buyers and sellers from identifying each other. Silk Road was an example of anonymity making things safer for everybody involved.

If you’re of the opinion that buying and selling drugs should result in men with guns kicking down doors at oh dark thirty, and that what I said above is therefore not a valid justification for hidden sites, don’t worry: I have another. Journalists often find themselves in positions where sources demand anonymity before revealing important information. That is why services such as OnionShare were created:

That’s exactly the sort of ordeal Micah Lee, the staff technologist and resident crypto expert at Greenwald’s investigative news site The Intercept, hopes to render obsolete. On Tuesday he released Onionshare—simple, free software designed to let anyone send files securely and anonymously. After reading about Greenwald’s file transfer problem in Greenwald’s new book, Lee created the program as a way of sharing big data dumps via a direct channel encrypted and protected by the anonymity software Tor, making it far more difficult for eavesdroppers to determine who is sending what to whom.

Whistle blowers are an example of individuals who are less likely to talk to journalists, and therefore blow the whistle, unless their identity can be protected. This is especially true when the whistle blower is revealing unlawful government activities. With access to legal coercive powers, the state can compel a journalist to reveal the source of information damning to it. If the journalist doesn’t know the identity of the whistle blower, as would be the case if the data was sent via a hidden service, they cannot reveal it to the state no matter what court orders it issues or torture it performs. That protection makes a whistle blower far more likely to come forward.

The “dark” web is little more than a layer of anonymity bolted onto the existing Internet. Anything available on the former is available in far larger quantities on the latter. What the “dark” web offers is protection for the people who often need it. Like any tool it can be used for both good and bad, but that doesn’t justify attempting to wipe it out. And because much of the world is ruled by even more insane states than the ones that dominate the so-called first world, I would argue the good of protecting people far outweighs the bad, which was happening, and still is happening, on the regular Internet.

History of Crypto War I

In their zeal to preserve the power to spy on citizens, members of the United States government have begun pushing to prohibit civilians from using strong cryptography. While proponents of this prohibition try to scare you with words such as terrorists, drug cartels, and pedophiles, let’s take a moment to remember the last time this war was waged:

Encryption is a method by which two parties can communicate securely. Although it has been used for centuries by the military and intelligence communities to send sensitive messages, the debate over the public’s right to use encryption began after the discovery of “public key cryptography” in 1976. In a seminal paper on the subject, two researchers named Whitfield Diffie and Martin Hellman demonstrated how ordinary individuals and businesses could securely communicate data over modern communications networks, challenging the government’s longstanding domestic monopoly on the use of electronic ciphers and its ability to prevent encryption from spreading around the world. By the late 1970s, individuals within the U.S. government were already discussing how to solve the “problem” of the growing individual and commercial use of strong encryption. War was coming.

The act that truly launched the Crypto Wars was the White House’s introduction of the “Clipper Chip” in 1993. The Clipper Chip was a state-of-the-art microchip developed by government engineers which could be inserted into consumer hardware telephones, providing the public with strong cryptographic tools without sacrificing the ability of law enforcement and intelligence agencies to access unencrypted versions of those communications. The technology relied on a system of “key escrow,” in which a copy of each chip’s unique encryption key would be stored by the government. Although White House officials mobilized both political and technical allies in support of the proposal, it faced immediate backlash from technical experts, privacy advocates, and industry leaders, who were concerned about the security and economic impact of the technology in addition to obvious civil liberties concerns. As the battle wore on throughout 1993 and into 1994, leaders from across the political spectrum joined the fray, supported by a broad coalition that opposed the Clipper Chip. When computer scientist Matt Blaze discovered a flaw in the system in May 1994, it proved to be the final death blow: the Clipper Chip was dead.

The battlefield today reflects the battlefield of Crypto War I. Members of the government are again arguing that all civilian cryptography should be weakened by mandating key escrow, which would allow the government to gain access to any device at any time. As with the last war, in which the government’s proposed Clipper Chip was proven to be completely insecure, this war must be looked at through the lens of government security practices or, more specifically, its lack of them. It was only last week that we learned some of the government’s networks are not secure, which led to the leaking of every federal employee’s personal information. How long do you think it would be before a hack of a government network led to the leaking of every escrow key? I’d imagine less than a week. After that happened, every device would be rendered entirely insecure to anybody who downloaded the leaked escrow keys.

What everybody should take away from this is that the government is willing to put each and every one of us at risk just so it can maintain the power to spy on us with impunity. But its failure to win Crypto War I proved that the world wouldn’t come to an end if it couldn’t spy on us with impunity. After Crypto War I the ability of law enforcement agents to acquire evidence of wrongdoing (as defined by the state) didn’t suddenly disappear, terrorist attacks didn’t suddenly become a nightly occurrence, and children being abducted by pedophiles didn’t suddenly become a fact of everyday life.

Crypto War II is likely inevitable but it can be won just as the last one was. The first step to victory is not allowing yourself to be suckered by government lies.

The Sorry State of E-Mail

As I briefly mentioned last week, I’ve been spending time setting up a new e-mail server. For years I used OS X Server to run my e-mail server because it was easy to set up. But there is a lot I dislike about OS X Server. The biggest problem was the change from 10.6 to 10.7. With that update OS X Server went from being a fairly serious piece of server software that a small business could use to being almost completely broken. Apple slowly improved things in later releases of OS X but its server software remains amateur hour. Another thing I dislike about OS X Server is how unstable it becomes the moment you open a config file and make manual changes. The graphical tool really doesn’t like that, but it also doesn’t give you the options necessary to fine-tune your security settings.

My e-mail server has grown up and now runs on CentOS. I’ve tried to tighten up security as much as possible but I’ve quickly learned what a sorry state e-mail is in. One of my goals was to disable broken Transport Layer Security (TLS) settings. However, this presents a sizable problem because there are a lot of improperly configured e-mail servers out there. Unlike the web, where you can usually safely assume clients will be able to establish a connection with a server using properly configured TLS, no such assumption can be made with e-mail servers. Some e-mail servers don’t support any version of TLS or Secure Sockets Layer (SSL), and those that do often have invalid (expired, self-signed, etc.) certificates. In other words, you can’t disable unsecured connections without cutting yourself off from a large number of e-mail servers. As much as I hate how everybody uses Google, because it makes the government’s surveillance apparatus cheaper to implement, I appreciate that the company actually runs properly configured e-mail servers.
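On the server side this trade-off shows up directly in the configuration. As a sketch, assuming a Postfix server (the parameter names below are real Postfix settings, but the values are only one reasonable choice for this situation), “opportunistic” TLS that still refuses the broken protocol versions looks roughly like this:

```
# /etc/postfix/main.cf (sketch; tune for your own deployment)

# Offer TLS to peers that support it, but still accept plaintext,
# since many remote servers have no working TLS at all.
smtpd_tls_security_level = may

# Refuse the broken protocol versions outright.
smtpd_tls_protocols = !SSLv2, !SSLv3
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
```

Setting the security level to `encrypt` instead of `may` would force TLS for everything, which is exactly the setting you can’t use today without losing mail from all of those improperly configured servers.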

Another problem with securing e-mail servers is their reliance on the STARTTLS protocol. I say this is a problem because the first step in establishing a secure connection via STARTTLS is asking the server, over an unsecured connection, whether it supports it. This has allowed certain unscrupulous Internet service providers (ISPs) to intercept the server’s reply and edit out the mention of STARTTLS support, which causes the client to fall back to an unsecured connection for the entire session. This wouldn’t be a problem if we could safely assume all e-mail servers support TLS, because then servers could be configured to require it.
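The downgrade is easy to see in the SMTP handshake itself. Here is a minimal sketch of the client-side decision (the host name and EHLO replies are illustrative, not taken from any real server; a real client would read them off a socket):

```python
# Sketch of how an SMTP client decides whether to negotiate TLS.

def supports_starttls(ehlo_response: str) -> bool:
    """Return True if the server's EHLO reply advertises STARTTLS."""
    return any(
        line.strip().upper().endswith("STARTTLS")
        for line in ehlo_response.splitlines()
    )

honest_reply = "250-mail.example.com\r\n250-SIZE 10240000\r\n250 STARTTLS\r\n"
# An in-path middlebox rewrites the capability line (in the wild it has
# been seen replaced with junk like "250 XXXXXXXX"), so the client never
# learns the server supports TLS and stays in plaintext.
stripped_reply = "250-mail.example.com\r\n250-SIZE 10240000\r\n250 XXXXXXXX\r\n"

print(supports_starttls(honest_reply))    # True  -> client upgrades to TLS
print(supports_starttls(stripped_reply))  # False -> session stays in the clear
```

Because the capability advertisement itself travels unauthenticated, the only real fix is a client or server policy that requires TLS no matter what the EHLO reply claims.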

What’s the answer? Ultimately, I would say it is to move away from e-mail as we currently know it. But that’s easier said than done, so I will continue to strongly urge people to use Pretty Good Privacy (PGP) to encrypt and sign their e-mails. Even if a PGP-encrypted e-mail is transmitted over an unsecured connection, the amount of data a snoop can collect on you is far less (though since PGP can only really encrypt the contents of the e-mail, a great deal of metadata is still available to anybody observing the communication between e-mail servers).

I also urge people to learn how to set up their own e-mail servers, and to do it. Ars Technica and Sealed Abstract have good guides on setting up a reasonably secure e-mail server. However, many ISPs block the ports used by e-mail servers on their residential packages, so running an e-mail server out of your home could require getting a business account (as well as a static Internet Protocol (IP) address). A slightly less optimal option (because your e-mail won’t be stored on a system you physically control) is to set up your e-mail server on a third-party host, which bypasses this problem. Unless people stop relying on improperly configured e-mail servers there isn’t a lot of hope for salvaging e-mail as a form of secure communication (this should give people in professions that require confidentiality, such as lawyers, a great deal of concern).

Many people will probably become discouraged after reading this post and tell themselves that securing themselves is impossible. That’s not what you should take away from this post. What you should take away is that the problem requires us to roll up our sleeves, further our knowledge, and fix it ourselves. Securing e-mail isn’t hopeless; it just requires us to actually do something about it. For my part, I am willing to answer any questions you have about setting up an e-mail server. Admittedly, I won’t know the answer to every question, but I will do my best to provide you with the knowledge you need to secure yourself.

Is Your App a Benedict Arnold?

Most smartphone users rely on apps to access much of their online data. This can be problematic, though, since many app developers have little or no knowledge of security. A research project has unveiled a number of Android apps, many of them developed by companies with pockets deep enough to hire dedicated security personnel, that communicate user credentials in plaintext:

Researchers have unearthed dozens of Android apps in the official Google Play store that expose user passwords because the apps fail to properly implement HTTPS encryption during logins or don’t use it at all.

The roster of faulty apps have more than 200 million collective downloads from Google Play and have remained vulnerable even after developers were alerted to the defects. The apps include the official titles from the National Basketball Association, the Match.com dating service, the Safeway supermarket chain, and the PizzaHut restaurant chain. They were uncovered by AppBugs, a developer of a free Android app that spots dangerous apps installed on users’ handsets.

By communicating your credentials in plaintext, these apps betray your account security to anybody listening on the network. What makes this particular problem especially worrisome is that it’s difficult for the average user to detect. How many users are going to connect their phone to their wireless network, open up Wireshark, and verify that all of their apps are communicating over HTTPS?

Developers should be expected to understand HTTPS if they’re communicating user credentials back to a server. But the real source of this problem is the fact that plaintext is still allowed at all. We’re well beyond the point where HTTP should be deprecated in favor of HTTPS only; in fact, Mozilla is planning to do exactly that. If HTTP were no longer allowed, we wouldn’t have to worry about apps communicating data over it (we would still have to worry about improperly configured HTTPS, but we have to worry about that now anyway).
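Until plaintext is actually forbidden, the fix is a one-line check in the app itself. A sketch of such a guard (the function name and URLs are mine, not from any of the apps mentioned above; a real app would go on to POST over the verified TLS connection after the check passes):

```python
from urllib.parse import urlparse

def submit_credentials(url: str, username: str, password: str) -> None:
    """Refuse to transmit credentials unless the endpoint uses HTTPS.

    Hypothetical guard for illustration; the actual authenticated
    request would happen after this check.
    """
    scheme = urlparse(url).scheme.lower()
    if scheme != "https":
        raise ValueError(f"refusing to send credentials over {scheme!r}")
    # ... perform the actual login request over TLS here ...

submit_credentials("https://api.example.com/login", "alice", "hunter2")  # fine
try:
    submit_credentials("http://api.example.com/login", "alice", "hunter2")
except ValueError as err:
    print(err)  # refusing to send credentials over 'http'
```

That these apps shipped without even this much suggests nobody ever looked at their traffic on the wire.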

Government Networks Are Too Old to Secure

The quest for answers regarding the recent breach that put every federal employee’s personal information at risk has begun. As with most government investigations into government screw-ups, this one is taking the form of public questionings of mid-level federal employees. Buried within the extensive waste of time that was the most recent public hearing were a few nuggets of pure gold. For starters, Office of Personnel Management (OPM) Director Katherine Archuleta let slip some information that should concern everybody:

During testimony today in a grueling two-hour hearing before the House Oversight and Government Reform Committee, Office of Personnel Management (OPM) Director Katherine Archuleta claimed that she had recognized huge problems with the agency’s computer security when she assumed her post 18 months ago. But when pressed on why systems had not been protected with encryption, she said, “It is not feasible to implement on networks that are too old.” She added that the agency is now working to encrypt data within its networks.

Apparently government networks are too old to secure. The only conclusion one can draw from this is that the government networks involved are running unsupported software. Perhaps most of the computers on those networks are still running Windows XP or something older. Perhaps the hardware is so ancient that it cannot encrypt and decrypt data without a noticeable performance hit. What is clear is that somebody really screwed up. Whether it was network administrators failing to update software and hardware or bean counters failing to set aside funding for modernization, the network that holds the personal information of every federal employee was not properly maintained. And this is the same organization that has a great deal of personal information about every American citizen. The federal government has your name, address, phone number, Social Security number, date of birth, and more sitting in its janky-ass network. Think about that for a moment while you contemplate the importance of privacy from the government.

But old networks aren’t the only problem with the government’s networks:

But even if the systems had been encrypted, it would have likely not mattered. Department of Homeland Security Assistant Secretary for Cybersecurity Dr. Andy Ozment testified that encryption would “not have helped in this case” because the attackers had gained valid user credentials to the systems that they attacked—likely through social engineering. And because of the lack of multifactor authentication on these systems, the attackers would have been able to use those credentials at will to access systems from within and potentially even from outside the network.

Gaining valid user credentials shouldn’t allow one to obtain personal information on every government employee. This admission indicates that either every user on the network has administrative rights or the data isn’t protected in any way against unauthorized access by internal users. Any network administrator worth a damn knows that you give users only the privileges they require. Developers of systems that handle sensitive personal information should know that any access to it should require approval from one or more higher-ups. If I’m a user who wants to access somebody’s Social Security number, there should be some kind of overseer who must approve the request.
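The idea isn’t exotic. A toy sketch of least privilege plus an approval gate (the role names, field names, and exception are all hypothetical, purely for illustration):

```python
# Hypothetical sketch: even a valid login can't read sensitive fields
# without a second sign-off from an overseer.

SENSITIVE_FIELDS = {"ssn", "date_of_birth"}

class ApprovalRequired(Exception):
    """Raised when a sensitive read lacks a supervisor's approval."""

def read_field(record, field, role, approved_by=None):
    # Least privilege: only known roles may read anything at all.
    if role not in ("clerk", "administrator"):
        raise PermissionError(f"unknown role {role!r}")
    # Approval gate: sensitive fields need an explicit second party.
    if field in SENSITIVE_FIELDS and approved_by is None:
        raise ApprovalRequired(f"access to {field!r} needs supervisor approval")
    return record[field]

employee = {"name": "J. Doe", "ssn": "000-00-0000"}
print(read_field(employee, "name", role="clerk"))                           # J. Doe
print(read_field(employee, "ssn", role="clerk", approved_by="supervisor"))  # 000-00-0000
```

With a gate like this in front of the database, a single set of stolen credentials couldn’t bulk-export millions of records unnoticed.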

Many network administrators haven’t implemented multifactor authentication, but the omission is inexcusable for a network containing so much personal information. Relying on user names and passwords to protect massive databases of personal information is gross negligence. With options such as YubiKey, RSA SecurID, and Google Authenticator, there is no excuse for not implementing multifactor authentication on networks with so much sensitive information.
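Nor is this exotic technology. The algorithm behind Google Authenticator, the time-based one-time password (TOTP, RFC 6238, built on RFC 4226’s HOTP), fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (30-second window)."""
    if for_time is None:
        for_time = time.time()
    return hotp(key, int(for_time) // step, digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

A stolen password alone is useless against a system that also demands the current code, which is exactly the protection OPM’s systems lacked.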

We all know governments love oversight, and this is no exception. The systems in question were inspected by a government overseer and deemed not properly secured, and nothing was done about it:

He referred to OPM’s own inspector general reports and hammered Seymour in particular for the eleven major systems out of 47 that had not been properly certified as secure—which were not contractor systems but systems operated by OPM’s own IT department. “They were in your office, which is a horrible example to be setting,” Chaffetz told Seymour. In total, 65 percent of OPM’s data was stored on those uncertified systems.

Chaffetz pointed out in his opening statement that for the past eight years, according to OPM’s own Inspector General reports, “OPM’s data security posture was akin to leaving all your doors and windows unlocked and hoping nobody would walk in and take the information.”

Here we see one of the biggest failures of government oversight: the lack of enforcement. When an inspector deems systems unfit, those systems should be made fit. If they’re not, the people charged with maintaining them should be replaced. There is no point in oversight without follow-through.

When people claim they have nothing to hide from the government, they seldom stop to consider who can gain access to its data. It’s not just the law enforcers. Thanks to general incompetence when it comes to security, it’s potentially anybody with valid user credentials. And valid user credentials are obtainable by exploiting the weakest link in any computer network: the user. According to Dr. Andy Ozment the credentials were likely obtained through social engineering, which is something most people can fall prey to. Because of the lack of multifactor authentication, anybody who can social engineer user credentials out of a government employee potentially has access to all of the data the government has collected on you. Is that something you’re honestly OK with? Do you really want a government this incompetent at protecting the personal data of its own employees holding a lot of personal data about you?

Lazy Libertarians

This weekend several of my friends and I had the privilege of running the CryptoParty at B-Sides MSP. It wasn’t the first CryptoParty I’ve hosted or helped host, but all of the previous ones were for various libertarian groups. I cannot properly express the difference between being part of a CryptoParty with security professionals versus one with libertarians. Unlike at the libertarian CryptoParties I’ve been involved with, none of the people at B-Sides MSP went on a tirade about how the otherwise entirely incompetent government can magically crack all crypto instantly.

Libertarians like to consider themselves paragons of personal responsibility. However, time and again, I see a lot of libertarians putting more effort into making excuses for their laziness than into doing anything productive. Using secure communication tools is one of those areas where supposedly responsible libertarians like to be entirely irresponsible. This is rather ironic because libertarians tend to be the ones bitching the loudest about government surveillance.

It was during the CryptoParty at B-Sides MSP that I made a decision. From now on I’m going to call out lazy libertarians. Whenever I host or otherwise participate in a CryptoParty for libertarians and one of them goes off about the incompetent government suddenly being incredibly competent, I’m just going to tell them to shut the fuck up so the adults can continue talking. If you are a libertarian and you sincerely oppose government surveillance, prove your sincerity by utilizing the really awesome and very effective tools we have available for securing our communications. Use Pretty Good Privacy (PGP) to encrypt your e-mails, call people with RedPhone or Signal, send text messages with TextSecure or Signal, and encrypt your computer’s and mobile device’s storage. Unless you’re doing these things I can’t take any claims you make about hating government surveillance seriously. If you want to be lazy and make up conspiracy theories that’s your business, but I am going to call your ass out for it.

Actual security professionals, some of whom knew a hell of a lot more about cryptography than I do (not that that’s very hard), took these tools seriously, and so should you. The only people claiming that the government can break all cryptography instantly are conspiracy theorists who know absolutely dick about cryptography and people wanting to justify their laziness. Don’t be either of those. Instead, embrace the personal responsibility libertarians like to tout and take measures to make government surveillance more expensive.

When is Discussing Cryptography a Jailable Offense?

A 17-year-old is facing 15 years in a cage because he discussed cryptography. Specifically, he discussed how members of the Islamic State could utilize cryptography to further their goals:

A 17-year-old Virginia teen faces up to 15 years in prison for blog and Twitter posts about encryption and Bitcoin that were geared at assisting ISIL, which the US has designated as a terror organization.

The teen, Ali Shukri Amin, who contributed to the Coin Brief news site, pleaded guilty (PDF) Thursday to a federal charge of providing material support to the Islamic State in Iraq and the Levant.

Dana Boente, the US Attorney for the Eastern District of Virginia, said the youth’s guilty plea “demonstrates that those who use social media as a tool to provide support and resources to ISIL will be identified and prosecuted with no less vigilance than those who travel to take up arms with ISIL.”

According to the defendant’s signed “Admission of Facts” filed Thursday, Amin started the @amreekiwitness Twitter handle last June and acquired some 4,000 followers and tweeted about 7,000 times. (The Twitter handle has been suspended.) Last July, the teen tweeted a link on how jihadists could use Bitcoin “to fund their efforts.”

According to Amin’s court admission (PDF):

The article explained what Bitcoins were, how the Bitcoin system worked and suggested using Dark Wallet, a new Bitcoin wallet, which keeps the user of Bitcoins anonymous. The article included statements on how to set up an anonymous donations system to send money, using Bitcoin, to the mujahedeen.

Some may point out that this is obviously bad because it supports the “enemies of America.” But it brings up a very important question: where is the line drawn between aiding an enemy and simply discussing cryptography? I write a lot of posts about how encryption can be used to defend against the state. That information could very well be read by members of the Islamic State and used to secure their communications against American surveillance. Have I aided the enemy? Has every cryptographer who has written about defending against government surveillance aided the enemy?

Lines get blurry when governments perform widespread surveillance like that being done by the National Security Agency (NSA). Regular people who simply want to protect their privacy, which is supposedly protected by the Constitution in this country, and military enemies of the government suddenly find themselves using the same tools and following the same privacy guides. What works, at least in regard to secure communication and anonymization, is the same for people wanting privacy as it is for military enemies. Therefore, a guide aimed at telling people how to encrypt their e-mail so it can’t be read by the NSA also tells an agent of the Islamic State how to do the same.

Where is the line drawn? Is it the language used? If you specifically mention members of the Islamic State as the intended audience, are you then guilty? If that’s the case, wouldn’t the obvious solution be to write generic guides that explain the same things? Wouldn’t that mean the information written by Ali Shukri Amin would have been perfectly fine if he simply hadn’t tailored it to members of the Islamic State?

As the state uses widespread surveillance to enforce more laws, regular people’s desire to secure their communications will increase (because, after all, we’re all breaking the law even if we don’t intend to, or don’t know we are). They will use the same tools and guides members of the Islamic State could use. Will every cryptographer face the same fate as Ali Shukri Amin?

Anything the Private Sector can Screw Up the Government can Screw Up Better

There have been numerous major data breaches recently that have compromised a lot of credit card numbers. The reaction to those breaches ranged from anger to outright demands that the government get involved to ensure another one never happens. As if trying to teach that last crowd a valuable lesson, fate has shown us once again that anything the private sector can screw up the government can screw up better (which is impressive, because the private sector can really fuck some shit up):

A giant hack of millions of government personnel files is being treated as the work of foreign spies who could use the information to fake their way into more-secure computers and plunder U.S. secrets.

Millions of personnel files, including Social Security numbers, were acquired by an unknown attacker. This makes the compromise of credit card numbers look like amateur hour by comparison! But it gets better!

Federal employees were told in a video Friday to change all their passwords, put fraud alerts on their credit reports and watch for attempts by foreign intelligence services to exploit them. That message came from Dan Payne, a senior counterintelligence official for the Director of National Intelligence.

Emphasis mine. How in the hell is a regular low-level federal employee supposed to watch for attempts by foreign intelligence services to exploit them? Does the United States government honestly think other intelligence agencies are so inept as to have a guy with a strong foreign accent call up federal employees and say, “Hello, I’m a Nigerian prince…”? The average person has no idea how to defend themselves against a specialized spook (if they did, spooks wouldn’t be very effective at their jobs).

Both the breach and the response are ridiculous. But this points to something more concerning. If the government can’t keep its own personnel files safe or detect a major breach for months (the story notes the breach occurred in December but wasn’t discovered until this month), why should we have any confidence in its ability to keep our personal information secure? Everything from tax records to our phone calls (thanks, National Security Agency) is being held by the federal government and could be up for grabs for any competent attacker. Imagine the wealth of information that could be acquired if an attacker managed to breach one of the NSA’s databases. This is another reason why allowing the government to store personal information is so dangerous.

Full Video of the Panel Discussion with William Binney, Todd Pierce, and Myself

I said I’d post video of the panel discussion once it was available. Robin Hensel was good enough to upload the video to YouTube very quickly. There are two videos. Here’s part one:

Here’s part two:

Now if you’ll excuse me, I have an e-mail server to beat with a wrench. Do you want some valuable life advice? Ubuntu Server is not a good base to build an e-mail server on. The repository still has Dovecot 2.2.9 even though the latest version is 2.2.18. I also had a hell of a time getting it to actually disable SSLv3 (I disabled it in the config file, restarted the service, and found that I could still connect via SSLv3 with openssl s_client -connect).
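For anyone fighting the same fight: in Dovecot 2.2 the relevant knob is ssl_protocols, and you can verify the result from another machine with the same openssl s_client trick. A sketch (the host name is a placeholder; make sure you’re editing the config file Dovecot actually loads, which was the root of my trouble):

```
# conf.d/10-ssl.conf (Dovecot 2.2 syntax)
ssl_protocols = !SSLv2 !SSLv3

# After restarting Dovecot, this handshake should now fail:
#   openssl s_client -ssl3 -connect mail.example.com:993
```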

Thwarting Cellular Interceptors

The United States government has been using planes equipped with cell phone interceptors to surveil large areas. Recently planes have been spotted around the Twin Cities circling areas of interest for hours and it appears that they’re equipped with surveillance equipment:

The plane’s flight path, recorded by the website flightradar24.com, would eventually show that it circled downtown Minneapolis, the Mall of America and Southdale Center at low altitude for hours starting at 10:30 p.m., slipping off radar just after 3 a.m.

“I thought, ‘Holy crap,’ ” said Zimmerman.

Bearing the call sign N361DB, the plane is one of three Cessna 182T Skylanes registered to LCB Leasing of Bristow, Va., according to FAA records. The Virginia secretary of state has no record of an LCB Leasing. Virtually no other information could be learned about the company.

Zimmerman’s curiosity might have ended there if it weren’t for something he heard from his aviation network recently: A plane registered to NG Research — also located in Bristow — that circled Baltimore for hours after recent violent protests there was in fact an FBI plane that’s part of a widespread but little known surveillance program, according to a report by the Washington Post.

[…]

Zimmerman, who spotted the plane over Bloomington, said he pored through FAA records to find the call letters for each plane and then searched for images of them. He found photographs that show the planes outfitted with “external pods” that could house imagery equipment. He also found some of the planes modified with noise-muffling capability. That’s not common for a small plane, he said.

[…]

Other devices known as “dirtboxes,” “Stingrays” or “IMSI catchers” can capture cellphone data. Stanley said it’s still unclear what technologies have been used in the surveillance flights.

It’s unknown whether these planes are equipped with cell phone interceptors, but the evidence that they’re surveillance craft is strong, and the government’s documented use of such aircraft for cell phone interception suggests they are. That being the case, I feel it’s a good time to discuss a few tools you can use to communicate more securely with your cell phone.

Modern cellular protocols utilize cryptography. What many people don’t realize is that, at least in the case of the Global System for Mobile Communications (GSM), the cryptography in use (the A5/1 cipher) is broken, which is why cell phone interceptors work. Furthermore, encryption is only applied between the cell phone and the tower. This means your cellular provider, and therefore law enforcement agents, can listen to your calls and read your text messages.

What you really want is end-to-end encryption for your calls. Fortunately, tools that do that already exist. Three tools I highly recommend are Signal, RedPhone, and TextSecure from Open Whisper Systems. Signal is an iOS application that encrypts both voice calls and text communications. RedPhone is an Android app for encrypting calls and TextSecure is an Android app for encrypting text communications. Signal, RedPhone, and TextSecure are all compatible with one another, so iOS users can securely communicate with Android users. All three applications are also easy to use. When you install the applications you register your number with Open Whisper Systems’ servers. Anybody using the applications will be able to see that you have them installed and can therefore communicate with you securely. Since the encryption is end-to-end, your cellular provider cannot listen to or read your calls and text messages. It also means cell phone interceptors, which rely on the weak algorithms used between cell phones and towers, will be unable to surveil your communications.

As the world becomes more hostile towards unencrypted communications we must make greater use of cryptographic tools. They’re the only defense we have against the surveillance state. Fortunately, secure communication tools are becoming easier to use. Communicating securely with friends using iOS and Android devices is as simple as installing an app (granted, these apps won’t protect your communications if the devices themselves are compromised, but that’s outside the threat model of planes with cell phone interceptors).