Secure E-Mail is an Impossibility

A while back I wrote a handful of introductory guides on using Pretty Good Privacy (PGP) to encrypt the content of your e-mails. They were well-intentioned guides. After all, everybody uses e-mail, so we might as well try to secure it as much as possible, right? What I didn’t stop to consider was the fact that PGP is a dead-end technology for securing e-mails, not because the initial learning curve is steep but because the implementation itself is flawed.

I recently came across a blog post by Filippo Valsorda that sums up the biggest issue with PGP:

But the real issues I realized are more subtle. I never felt confident in the security of my long term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long term key is as secure as the minimum common denominator of your security practices over its lifetime. It’s the weak link.

Worse, long term keys patterns like collecting signatures and printing fingerprints on business cards discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. It actually encourages expanding the attack surface by making backups of the key.

PGP, in fact the entire web of trust model, assumes that your private key will be more or less permanent. This assumption leads to a lot of implementation issues. What happens if you lose your private key? If you have an effective backup system you may laugh at this concern, but lost private keys are the most common issue I’ve seen PGP users run into. When you lose your key you have to generate a new one and distribute it to everybody you communicate with. In addition to that, you also have to re-sign people’s existing keys. But worst of all, without your private key you can’t even revoke the corresponding published public key.
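That last problem has a partial mitigation worth knowing about. Here is a minimal sketch, assuming GnuPG is installed and that KEY_ID is a placeholder for one of your own keys: generate a revocation certificate while you still hold the private key, so losing the key later doesn’t leave the public key permanently unrevokable. (Recent versions of GnuPG also generate one automatically at key-creation time, but it’s worth knowing how to do it explicitly.)

```python
# A minimal sketch, assuming GnuPG is installed and KEY_ID names one of your
# own keys. gpg prompts interactively on the terminal for a revocation
# reason and confirmation, then writes an ASCII-armored certificate.

import subprocess

KEY_ID = "alice@example.com"  # placeholder; use your own key's ID or e-mail

subprocess.run(
    ["gpg", "--output", "revoke.asc", "--gen-revoke", KEY_ID],
    check=True,  # raise CalledProcessError if gpg exits nonzero
)
print("Store revoke.asc somewhere safe; anyone holding it can revoke the key.")
```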

Another issue is that you cannot control the security practices of other PGP users. What happens when somebody who signed your key has their private key compromised? Their signature, which is used by others to decide whether or not to trust you, becomes meaningless because their private key is no longer confidential. Do you trust the security practices of your friends enough to make your own security practices reliant on them? I sure don’t.

PGP was a jury-rigged solution to provide some security for e-mail. Because of that it has many limitations. For starters, while PGP can be used to encrypt the contents of a message, it cannot encrypt the e-mail headers or the subject line. That means anybody snooping on the e-mail knows who the communicating parties are, what the subject is, and any other information stored in the headers. As we’ve learned from Edward Snowden’s leaks, metadata is very valuable. E-mail was never designed to be a secure means of communicating and can never be made secure. The only viable solution for secure communications is to find an alternative to e-mail.
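To make the limitation concrete, here is a minimal sketch using Python’s standard library. The “ciphertext” is a placeholder string rather than real PGP output, but the shape of the message is accurate: the body is opaque while every header travels in the clear.

```python
# A minimal sketch of why PGP-encrypted mail still leaks metadata: only the
# body is ciphertext; every header travels in the clear. The ciphertext
# below is a placeholder string, not real PGP output.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"         # visible to anyone on the wire
msg["To"] = "bob@example.com"             # visible
msg["Subject"] = "Meet at the safehouse"  # visible; PGP cannot protect this
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "hQEMA...placeholder ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

# Everything above the blank line in the output is readable plaintext.
print(msg.as_string())
```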

With that said, PGP itself isn’t a bad technology. It’s still useful for signing binary packages, encrypting files for transfer between parties, and other similar tasks. But for e-mail it’s at best a bandage over a bigger problem and at worst a false sense of security.

Pebble Goes Bankrupt

Pebble was an interesting company. While it didn’t invent the smartwatch concept (I have a Fossil smartwatch running Palm OS that came out well before the Pebble), it did popularize the market. But making a product concept popular doesn’t mean you’re going to be successful. Pebble has filed for bankruptcy and, effective immediately, will no longer sell products, honor warranties, or provide any support beyond the material already posted on the Pebble website.

But what really got me was how the announcement was handled. If you read the announcement you may be led to believe that Fitbit has purchased Pebble. The post talks about this being Pebble’s “next step” and the e-mail announcement sent out yesterday even said that Pebble was joining Fitbit.

It’s no surprise that a lot of Pebble users were quite upset with Fitbit since, based on the information released by Pebble, it appeared that Fitbit had decided not to honor warranties, to stop releasing software updates for current watches, and to discontinue the newly announced watches. But Fitbit didn’t buy Pebble, it only bought some of its assets:

Fitbit Inc., the fitness band maker, has acquired software assets from struggling smartwatch startup Pebble Technology Corp., a move that will help it better compete with Apple Inc.

The purchase excludes Pebble’s hardware, Fitbit said in a statement Wednesday. The deal is mainly about hiring the startup’s software engineers and testers, and getting intellectual property such as the Pebble watch’s operating system, watch apps, and cloud services, people familiar with the matter said earlier.

While Fitbit didn’t disclose terms of the acquisition, the price is less than $40 million, and Pebble’s debt and other obligations exceed that, two of the people said. Fitbit is not taking on the debt, one of the people said. The rest of Pebble’s assets, including product inventory and server equipment, will be sold off separately, some of the people said.

I bring this up partially because I was a fan of Pebble’s initial offering and did enjoy the fact that the company offered a unique product (a smartwatch with an always-on display that only needed to be charged every five to seven days), but mostly because I found the way Pebble handled this announcement rather dishonest. If your company is filing for bankruptcy you should just straight up admit it instead of trying to make it sound like you’ve been bought out by the first company to come by and snap up some of your assets. Since you’re already liquidating the company there’s nothing to be gained by pussyfooting around the subject.

So Much for Farook’s Phone

Shortly after the attack in San Bernardino the Federal Bureau of Investigation (FBI) tried to exploit the tragedy in order to force Apple to assist it in unlocking Syed Rizwan Farook’s iPhone. According to the FBI, Farook’s phone likely contained information that would allow it to identify his accomplices, establish his motives, and basically solve the case. Apple refused to give the FBI the power to unlock any iPhone 5C willy-nilly, but the agency eventually found a third party with an exploit that could bypass the phone’s built-in security.

One year later the FBI hasn’t solved the case even with access to Farook’s iPhone:

They launched an unprecedented legal battle with Apple in an effort to unlock Farook’s iPhone and deployed divers to scour a nearby lake in search of electronic equipment the couple might have dumped there.

But despite piecing together a detailed picture of the couple’s actions up to and including the massacre, federal officials acknowledge they still don’t have answers to some of the critical questions posed in the days after the Dec. 2, 2015, attack at the Inland Regional Center.

Most important, the FBI said it is still trying to determine whether anyone was aware of the couple’s plot or helped them in any way. From the beginning, agents have tried to figure out whether others might have known something about Farook and Malik’s plans, since the couple spent months gathering an arsenal of weapons and building bombs in the garage of their Redlands home.

Officials said they don’t have enough evidence to charge anyone with a crime but stressed the investigation is still open.

This shouldn’t be surprising to anybody. Anyone who had the ability to plan out an attack like the one in San Bernardino without being discovered probably had enough operational security to avoid using an easily surveilled device such as a cellular phone for the planning. Too many people, including those who should know better, assume only technological wizards have the know-how to plan things without using commonly surveilled communication methods. But that’s not the case. People who are committed to pulling off a planned attack that includes coordination with third parties are usually smart enough to do their research and use communication methods that are unlikely to be accessible to prying eyes. It’s not wizardry, it’s a trick as old as human conflict itself.

Humans are both unpredictable and adaptable, which is what makes mass surveillance useless. When an agency such as the National Security Agency (NSA) performs mass surveillance it collects vastly more noise than signal. We’re not even talking about a 100:1 ratio; it’s probably closer to 1,000,000,000,000:1. Furthermore, people with enough intelligence to pull off coordinated attacks are usually paranoid enough to assume the most commonly available communication mechanisms are being surveilled, so they adapt. Mass surveillance works well if you want a lot of grandmothers’ recipes, Internet memes, and insults about mothers made by teenagers. But it’s useless if you’re trying to identify individuals who pose a significant threat. Sure, the NSA may get lucky once in a while and catch somebody, but that’s by far the exception, not the rule. The rule, when it comes to identifying and thwarting significant threats, is that old-fashioned investigative techniques must be employed.
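You can see why with some back-of-the-envelope arithmetic. The numbers below are made up and deliberately generous to the surveillance state, yet the false positives still swamp the real hits:

```python
# Back-of-the-envelope base-rate arithmetic with made-up but generous
# numbers: even a wildly optimistic 99.9%-accurate "threat detector"
# scanning everyone buries real threats under false positives.

population = 300_000_000      # people under surveillance
real_threats = 300            # actual attack planners among them
true_positive_rate = 0.999    # detector flags a real threat 99.9% of the time
false_positive_rate = 0.001   # and wrongly flags 0.1% of innocents

flagged_threats = real_threats * true_positive_rate
flagged_innocents = (population - real_threats) * false_positive_rate

print(f"real threats flagged: {flagged_threats:,.0f}")
print(f"innocents flagged:    {flagged_innocents:,.0f}")
# ~300 real hits drowned in ~300,000 false alarms: roughly 1,000:1 noise,
# with a detector far better than anything that actually exists.
```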

The Real Life Ramifications of Software Glitches

When people think of software glitches they generally think of annoyances such as their application crashing and losing any changes since their last save, their smart thermostat failing to kick on the furnace, or the graphics in their game displaying abnormally. But as software has become more and more integrated into our lives, the real-life implications of software glitches have become more severe:

OAKLAND, Calif.—Most pieces of software don’t have the power to get someone arrested—but Tyler Technologies’ Odyssey Case Manager does. This is the case management software that runs on the computers of hundreds and perhaps even thousands of court clerks and judges in county courthouses across the US. (Federal courts use an entirely different system.)

Typically, when a judge makes a ruling—for example, issuing or rescinding a warrant—those words said by a judge in court are entered into Odyssey. That information is then relied upon by law enforcement officers to coordinate arrests and releases and to issue court summons. (Most other courts, even if they don’t use Odyssey, use a similar software system from another vendor.)

But, just across the bay from San Francisco, one of Alameda County’s deputy public defenders, Jeff Chorney, says that since the county switched from a decades-old computer system to Odyssey in August, dozens of defendants have been wrongly arrested or jailed. Others have even been forced to register as sex offenders unnecessarily. “I understand that with every piece of technology, bugs have to be worked out,” he said, practically exasperated. “But we’re not talking about whether people are getting their paychecks on time. We’re talking about people being locked in cages, that’s what jail is. It’s taking a person and locking them in a cage.”

First, let me commend Jeff Chorney for stating that jails are cages. Too many people like to pretend that isn’t the case. Second, he has a point. Case management software, as we’ve seen here, can have severe ramifications if bugs are left in the code.

The threat of bugs causing significant real-life consequences isn’t a new one. A lot of software manages equipment that can kill people if it malfunctions. In response, many industries have gone to great lengths to select tools and develop procedures that minimize the chances of major bugs making it into released code. The National Aeronautics and Space Administration (NASA), for example, has an extensive history of writing code where malfunctions can cost millions of dollars or even kill people, and its programmers have developed tools and standards to minimize those risks. Most industrial equipment manufacturers likewise spend a significant amount of time developing tools and standards to minimize coding errors, because their software mistakes can lead to millions of dollars being lost or people dying.
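As an illustration of what those standards look like in practice, here is a short sketch loosely in the spirit of safety-critical coding rules such as JPL’s “Power of Ten” (the rules themselves target C, so this Python rendering is only illustrative): every loop is bounded, every input is validated before use, and assertions document the assumptions the code depends on.

```python
# Illustrative only: a few habits from safety-critical coding guidelines
# (e.g., JPL's "Power of Ten" rules for C) rendered in Python. The point is
# the style, not the toy task.

MAX_RETRIES = 3  # rule of thumb: no unbounded loops in critical paths

def read_sensor(raw_values: list[float]) -> float:
    assert raw_values, "caller must supply at least one reading"
    for attempt in range(MAX_RETRIES):            # bounded, never while True
        candidate = raw_values[min(attempt, len(raw_values) - 1)]
        if 0.0 <= candidate <= 150.0:             # validate before use
            return candidate
    raise ValueError("no plausible reading after bounded retries")

# The first reading is garbage; the bounded retry loop rejects it.
assert read_sensor([9999.0, 72.5]) == 72.5
```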

Software developers working on products that can have severe real-life consequences need to focus on writing reliable code. Case management software isn’t Facebook. When a bug exists in Facebook the consequences are annoying to users but nobody is harmed. When a bug exists in case management software, innocent people can end up in cages or on a sex offender registry, which can ruin their entire lives.

Likewise, people purchasing and using critical software need to thoroughly test it before putting it into production. Do you think there are many companies that buy multi-million dollar pieces of equipment and don’t test them thoroughly before putting them on the assembly line? That would be foolish, and any company that did so would end up facing millions of dollars in downtime or even bankruptcy if the machine didn’t perform as needed. The governments that are using the Odyssey Case Manager software should have thoroughly tested the product before using it in any court. But since the governments themselves don’t face any risks from bad case management software, they likely did, at best, basic testing before rushing the product into production.
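What would that testing look like? One time-honored approach is a parallel run: before cutting over, replay a batch of historical records through both the legacy system and its replacement and flag every disagreement for human review. The sketch below is hypothetical; the two “systems” are stand-in functions with made-up logic, standing in for the old software and the vendor’s new one.

```python
# A hypothetical sketch of a parallel run. Both "systems" are stand-in
# functions with made-up logic; imagine the real versions calling the old
# software and the vendor's new API respectively.

def legacy_status(record: dict) -> str:
    # Stand-in for the decades-old system's disposition logic.
    return "release" if record["warrant_rescinded"] else "hold"

def new_status(record: dict) -> str:
    # Stand-in for the new system, with a deliberate toy bug: it ignores
    # rescinded warrants that were entered before the migration date.
    if record["entered_before_migration"]:
        return "hold"
    return "release" if record["warrant_rescinded"] else "hold"

def shadow_test(records: list[dict]) -> list[dict]:
    """Return every record where the two systems disagree."""
    return [r for r in records if legacy_status(r) != new_status(r)]

history = [
    {"case_id": "A-100", "warrant_rescinded": True, "entered_before_migration": False},
    {"case_id": "A-101", "warrant_rescinded": True, "entered_before_migration": True},
    {"case_id": "A-102", "warrant_rescinded": False, "entered_before_migration": False},
]

for rec in shadow_test(history):
    # Every disagreement is a potential wrongful arrest; a human reviews it
    # before anyone relies on the new system's answer.
    print(f"review case {rec['case_id']}: "
          f"legacy={legacy_status(rec)}, new={new_status(rec)}")
```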

Take Care of Yourself

Anybody who has worked in system administration, software development, or information security is probably familiar with the stereotypical “rockstar” employee. These are the employees who are too busy to eat, work ridiculously long hours, and replace sleep with caffeine. They’re often put on a pedestal by other “rockstars” and sometimes even admired by their coworkers and managers. Unfortunately, they’re also the model many computer science students strive to be.

The problem with these “rockstars” is that they have a short shelf life. You can only keep up that lifestyle for so long before you start facing major health issues, which is why I was happy to see Lesley Carhart write a short post aimed at hackers offering advice on taking care of themselves. Computer science disciplines need more people discussing the importance of taking care of yourself.

I’ve never been much for the “rockstar” lifestyle. I like getting a decent amount of sleep (which is about six hours for me) each night, socializing, eating decent food, exercising (which I’ve started to take very seriously this year), and not dealing with work during my off hours. While this lifestyle hasn’t made me a millionaire I can say that my quality of life is pretty awesome.

Don’t spend every waking hour working. Take time off for lunch. Eat a decent supper. Try to work out at least a few times per week. Go out with friends and do something not related to work. Go to bed at a decent hour so you can get some actual sleep. Not only will your quality of life improve, but so will your ability to handle stress on those days when you absolutely have to put in long hours at work or when you get sick.

Karma is a Bitch

A few months back Geofeedia was discovered to be buying user data from social networking sites and selling it to law enforcers. Needless to say, this didn’t go over well with anybody but law enforcers. Most of the social networking sites cut Geofeedia off. Apparently surveillance was the company’s only revenue stream, because it has since announced that it laid off half of its staff:

Chicago-based Geofeedia, a CIA-backed social-media monitoring platform that drew fire for enabling law enforcement surveillance, has let go 31 of its approximately 60 employees, a spokesman said Tuesday.

[…]

Geofeedia cut the jobs, mostly in sales in the Chicago office, in the third week of October, the spokesman said. It has offices in Chicago, Indianapolis and Naples, Fla. The cuts were first reported by Crain’s Chicago Business.

An emailed statement attributed to CEO Phil Harris said Geofeedia wasn’t “created to impact civil liberties,” but in the wake of the public debate over their product, they’re changing the company’s direction.

You have to love the claim that Geofeedia wasn’t created to impact civil liberties even though the company’s only product was selling data to law enforcers. When you make yourself part of the police state you implicitly involve yourself in impacting civil liberties. I really hope the company goes completely bankrupt over this.

It’s also nice to see services like Facebook and Twitter cut off companies involved in surveillance. One of my biggest concerns is the way private surveillance becomes public surveillance. This issue is exacerbated by the fact that private surveillance companies stand to profit heavily by handing over their data to the State.

Concealing a Cellular Interceptor in a Printer

As a rule, technology improves. Processors become faster, storage becomes more plentiful, and components become smaller. We’ve seen computers go from slow machines with very little storage that were as big as a room to tiny little powerhouses with gigabytes of storage that fit in your pocket. Cellular technology is no different. Cellular interceptors, for example, can now be concealed in a printer:

Stealth Cell Tower is an antagonistic GSM base station in the form of an innocuous office printer. It brings the covert design practice of disguising cellular infrastructure as other things – like trees and lamp-posts – indoors, while mimicking technology used by police and intelligence agencies to surveil mobile phone users.

[…]

Stealth Cell Tower is a Hewlett Packard Laserjet 1320 printer modified to contain and power components required to implement a GSM 900 Base Station.

These components comprise:

  • BladeRF x40
  • Raspberry Pi 3
  • 2x short GSM omnidirectional antennae with magnetic base
  • 2x SMA cable
  • Cigarette-lighter-to-USB-charger circuit (converting 12-24v to 5v)
  • 1x USB Micro cable (cut and soldered to output of USB charger)
  • 1x USB A cable (cut and soldered to printer mainboard)

The HP Laserjet 1320 was chosen not only for its surprisingly unmentionable appearance but also because it had (after much trial and error) the minimal unused interior volumes required to host the components. No cables, other than the one standard power-cord, are externally visible. More so, care has been taken to ensure the printer functions normally when connected via USB cable to the standard socket in the rear.

It’s an impressive project that illustrates a significant problem. Cellular interceptors work because the protocols used by the Global System for Mobile Communications (GSM) standard are insecure. At one time this probably wasn’t taken seriously because it was believed that very few actors had the resources necessary to build equipment that could exploit the weaknesses in GSM. Today a hobbyist can buy such equipment for a very low price and conceal it in a printer, which means inserting an interceptor into an office environment is trivial.
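The core weakness is the asymmetry of GSM authentication: the handset proves itself to the network, but the network never proves itself to the handset, and the network chooses the cipher. The toy model below is my own illustration, not real GSM crypto; the names are made up and SHA-256 stands in for the SIM’s A3 algorithm. It shows why a rogue base station needs no secrets at all.

```python
# Toy model (not real GSM crypto) of why cellular interceptors work: in GSM
# the handset proves itself to the network, but the network never proves
# itself to the handset, and the network picks the cipher. Names are made up
# and SHA-256 stands in for the SIM's A3 algorithm.

import hashlib
import os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    # Stand-in for the SIM's A3 algorithm: derive a response from Ki + RAND.
    return hashlib.sha256(ki + rand).digest()[:4]

class Handset:
    def __init__(self, imsi: str, ki: bytes):
        self.imsi, self.ki = imsi, ki

    def attach(self, tower) -> str:
        rand = tower.challenge(self.imsi)      # network challenges phone...
        tower.respond(a3_sres(self.ki, rand))  # ...phone proves knowledge of Ki
        # Fatal asymmetry: the phone never challenges the tower, and the
        # tower dictates the cipher, which may be A5/0 (none at all).
        return tower.cipher

class RogueTower:
    """An interceptor: needs no Ki, just accepts whatever the phone sends."""
    cipher = "A5/0"  # order the handset to disable encryption

    def challenge(self, imsi: str) -> bytes:
        print(f"captured IMSI: {imsi}")  # identity leaks before any auth
        return os.urandom(16)

    def respond(self, sres: bytes) -> None:
        pass  # a fake network never verifies; it doesn't care

phone = Handset("310150123456789", ki=os.urandom(16))
print("negotiated cipher:", phone.attach(RogueTower()))
```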

Fortunately, Long-Term Evolution (LTE) is a more secure protocol. Unfortunately, most cell phones don’t use LTE for phone calls and text messages. Until everything is switched over to LTE, the threat posed by current cellular interceptors should not be taken lightly.

You’re the Product, Not the Customer

If you’re using an online service for free then you’re the product. I can’t drive this fact home enough. Social media sites such as Facebook and Twitter make their money by selling the information you post. And, unfortunately, they’ll sell to anybody, even violent gangs:

The FBI is using a Twitter tool called Dataminr to track criminals and terrorist groups, according to documents spotted by The Verge. In a contract document, the agency says Dataminr’s Advanced Alerting Tool allows it “to search the complete Twitter firehose, in near real-time, using customizable filters.” However, the practice seems to violate Twitter’s developer agreement, which prohibits the use of its data feed for surveillance or spying purposes.

This isn’t the first time that a company buying access to various social media feeds has been caught selling that information to law enforcers. Earlier this year Geofeedia was caught doing the same thing. Stories like this show that there’s no real divide between private and government surveillance. You should guard yourself against private surveillance as readily as you guard against government surveillance, because the former becomes the latter with either a court order or a bit of money exchanging hands.

Will Dataminr have its access revoked like Geofeedia did? Let’s hope so. But simply cutting off Dataminr won’t fix the problem since I guarantee there are a bunch of other companies providing the same service. The only way to fix this problem is to stop using social media sites for activities you want to keep hidden from law enforcers. Don’t plan your protests on Facebook, don’t try to coordinate protest activity using Twitter, and don’t post pictures of your protest planning sessions on Instagram. Doing any of those things is a surefire way for law enforcers to catch wind of what you’re planning before you can execute your plan.

Propagandizing Against Secure Communications

It’s no secret that the State is at odds with effective cryptography. The State prefers to keep tabs on all of its subjects, and that’s harder to do when they can talk confidentially amongst themselves. What makes matters worse is that the subjects like their confidentiality and seek out tools that provide it. So the State must first convince its subjects that confidentiality is bad, which means putting out propaganda. Fortunately for it, many journalists are more than happy to produce propaganda for the State:

The RCMP gave the CBC’s David Seglins and the Toronto Star’s Robert Cribb security clearance to review the details of 10 “high priority” investigations—some of which are ongoing—that show how the police is running into investigative roadblocks on everything from locked devices to encrypted chat rooms to long waits for information. The Toronto Star’s headline describes the documents as “top-secret RCMP files.”

The information sharing was stage-managed, however. Instead of handing over case files directly to the journalists, the federal police provided vetted “detailed written case summaries,” according to a statement from Seglins and Cribb. These summaries “[formed] the basis of our reporting,” they said. The journalists were given additional information on background, and allowed to ask questions, according to the statement, but “many details were withheld.”

The stories extensively quote RCMP officials, but also include comment from privacy experts who are critical of the police agency’s approach.

“On the one hand, the [RCMP] do have a serious problem,” said Jeffrey Dvorkin, former vice president of news for NPR and director of the University of Toronto Scarborough’s journalism program. “But to give information in this way to two respected media organizations does two things: it uses the media to create moral panic, and it makes the media look like police agents.”

The line between journalism and propaganda is almost nonexistent anymore. This story is an example of a more subtle form of journalist-created propaganda. It’s not so much a case of a journalist writing outright propaganda as it is a journalist failing to question the information being provided by the police.

Journalists, like product reviewers, don’t like to rock the boat because it might jeopardize their access. The police, like product manufacturers, are more than happy to provide product (which, in the case of police, is information) to writers who show them in a good light. They are much less apt to provide product to somebody who criticizes them (which is why critics have to rely on the Freedom of Information Act). If journalists want to keep getting the inside scoop from the police, they need to show the police in a good light, which means they must not question the information they’re being fed too closely.

Be wary of what you read in news sources. The information being printed is not always as it appears, especially when the writer wants to maintain their contacts within the State to get the inside scoop.

Too Good to be True

If something sounds like it’s too good to be true it probably is. For example, if you come across a decently specced Android phone that costs $50 chances are the manufacturer is making money on it in some other way, such as surveilling the user to sell their information:

WASHINGTON — For about $50, you can get a smartphone with a high-definition display, fast data service and, according to security contractors, a secret feature: a backdoor that sends all your text messages to China every 72 hours.

Security contractors recently discovered preinstalled software in some Android phones that monitors where users go, whom they talk to and what they write in text messages. The American authorities say it is not clear whether this represents secretive data mining for advertising purposes or a Chinese government effort to collect intelligence.

Is the data being used for advertising or for the Chinese government? Why not both? If the Chinese government is anything like the United States government, it’s willing to pay a pretty penny to coax companies into spying on users. I doubt this scam is solely for intelligence gathering, since it’s a high-cost strategy (manufacturing lots of handsets) with no guarantee of return (how do you convince people with intelligence worth harvesting to use one of these unknown Android phones over an iPhone?), but the collected data very well may be sent off to the Chinese government.

This story goes along with the There Ain’t No Such Thing as a Free Lunch (TANSTAAFL) principle. If you’re using a product or service for free then chances are that you’re the product. Likewise, if you’re using a product or service that appears to be subsidized, then the provider is making its money back some other way. In the case of cellular network providers, subsidized phones were a convenient way to lock customers into two-year contracts. In the case of handset manufacturers, phones can be subsidized by collecting user data to sell to advertisers.