Degrees of Anonymity

When a service describes itself as anonymous, how anonymous is it? Users of Yik Yak may soon have a chance to find out:

Yik Yak has laid off 70 percent of employees amid a downturn in the app’s growth prospects, The Verge has learned. The three-year-old anonymous social network has raised $73.5 million from top-tier investors on the promise that its young, college-age network of users could one day build a company to rival Facebook. But the challenge of growing its community while moving gradually away from anonymity has so far proven to be more than the company could muster.

[…]

But growth stalled almost immediately after Sequoia’s investment. As with Secret before it, the app’s anonymous nature created a series of increasingly difficult problems for the business. Almost from the start, Yik Yak users reported incidents of bullying and harassment. Multiple schools were placed on lockdown after the app was used to make threats. Some schools even banned it. Yik Yak put tools in place designed to reduce harassment, but growth began to slow soon afterward.

Yik Yak claimed it was an anonymous social network, and on the front end the data did appear anonymous. However, the back end may be an entirely different matter. How much information did Yik Yak regularly keep about its users? Internet Protocol (IP) addresses, Global Positioning System (GPS) coordinates, unique device identifiers, phone numbers, and much more can be easily collected and transmitted by an application running on your phone.

Bankruptcy is looking like a very real possibility for Yik Yak. If the company ends up filing, then its assets will be liquidated, and in this day and age user data is considered a valuable asset. Somebody will almost certainly end up buying Yik Yak’s user data, and when they do they may discover that it wasn’t as anonymous as users thought.

Not all forms of anonymity are created equal. If you access a web service without using some kind of anonymity service, such as Tor or I2P, then the service already has some identifiable information, such as your IP address and a browser fingerprint. If you access the service through a phone application then that application may have collected and transmitted your phone number, contacts list, and other identifiable information (assuming, of course, the application has permission to access all of that data, which it may not depending on your platform and privacy settings). While you may appear to be anonymous on the front end of the service, the same may not hold true for the back end.
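
To make that concrete, here is a minimal sketch, written in Python with nothing but the standard library, of what the operator of a “front-end anonymous” service can log from an ordinary page request. The handler and log fields are hypothetical and a real service would typically record far more, but none of this requires the user to type in a single piece of identifying information.

```python
# Hypothetical illustration: what an "anonymous" service sees on every request.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class AnonymousFrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        # The visitor never volunteered any of this, yet the back end has it.
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "ip": self.client_address[0],                   # network identity
            "user_agent": self.headers.get("User-Agent"),   # fingerprinting material
            "path": self.path,                              # what was requested
        }
        print(record)  # a real service would write this to a database
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Posted anonymously!\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AnonymousFrontEnd).serve_forever()
```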

This issue becomes much larger when you consider that even if your data is currently being held by a benevolent company that does care about your privacy, that may not always be the case. Your data is just a bankruptcy filing away from falling into the hands of somebody else.

Secure E-Mail is an Impossibility

A while back I wrote a handful of introductory guides on using Pretty Good Privacy (PGP) to encrypt the contents of your e-mails. They were well-intentioned guides. After all, everybody uses e-mail, so we might as well try to secure it as much as possible, right? What I didn’t stop to consider was the fact that PGP is a dead-end technology for securing e-mails, not because the initial learning curve is steep but because the implementation itself is flawed.

I recently came across a blog post by Filippo Valsorda that sums up the biggest issue with PGP:

But the real issues I realized are more subtle. I never felt confident in the security of my long term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long term key is as secure as the minimum common denominator of your security practices over its lifetime. It’s the weak link.

Worse, long term keys patterns like collecting signatures and printing fingerprints on business cards discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. It actually encourages expanding the attack surface by making backups of the key.

PGP, and in fact the entire web of trust model, assumes that your private key will be more or less permanent. This assumption leads to a lot of implementation issues. What happens if you lose your private key? If you have an effective backup system you may laugh at this concern, but lost private keys are the most common issue I’ve seen PGP users run into. When you lose your key you have to generate a new one and distribute it to everybody you communicate with. In addition to that, you also have to re-sign people’s existing keys. But worst of all, without your private key you can’t even revoke the corresponding published public key.
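
One partial hedge against the lost-key problem is to generate a revocation certificate at the same time you generate the key, while the private key is still in hand, and stash it somewhere safe offline. Here is a rough sketch of that workflow; it assumes GnuPG’s gpg binary is installed, wraps the interactive commands in Python purely for illustration, and uses a placeholder user ID.

```python
# Sketch only: pre-generate a revocation certificate while you still control the key.
# Assumes GnuPG (the `gpg` binary) is installed; the user ID below is a placeholder.
import subprocess

KEY_UID = "alice@example.com"  # hypothetical key owner

# 1. Generate a key pair (gpg prompts interactively for name, e-mail, and passphrase).
subprocess.run(["gpg", "--full-generate-key"], check=True)

# 2. Immediately create a revocation certificate and store it offline. If the
#    private key is later lost, this certificate is what lets you revoke the
#    public key that is already floating around on keyservers.
subprocess.run(
    ["gpg", "--output", "revoke.asc", "--gen-revoke", KEY_UID],
    check=True,
)
```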

Another issue is that you cannot control the security practices of other PGP users. What happens when somebody who signed your key has their private key compromised? Their signature, which is used by others to decide whether or not to trust you, becomes meaningless because their private key is no longer confidential. Do you trust the security practices of your friends enough to make your own security practices reliant on them? I sure don’t.

PGP was a jury-rigged solution to provide some security for e-mail. Because of that it has many limitations. For starters, while PGP can be used to encrypt the contents of a message, it cannot encrypt the e-mail headers or the subject line. That means anybody snooping on the e-mail knows who the communicating parties are, what the subject is, and any other information stored in the headers. As we’ve learned from Edward Snowden’s leaks, metadata is very valuable. E-mail was never designed to be a secure means of communicating and can never be made secure. The only viable solution for secure communications is to find an alternative to e-mail.
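
To see just how much leaks, consider a toy example built with Python’s standard email module; the addresses, subject, and ciphertext below are placeholders. Even if the body were a real PGP message, everything outside of it travels in the clear, and every mail server along the way stamps its own Received headers on top.

```python
# Illustration: a PGP-encrypted e-mail still exposes its metadata.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"    # visible to anybody snooping the connection
msg["To"] = "bob@example.com"        # visible
msg["Subject"] = "Meeting location"  # visible -- PGP does not cover the subject line
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...only this part is ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

# Everything except the body prints in plaintext.
print(msg.as_string())
```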

With that said, PGP itself isn’t a bad technology. It’s still useful for signing binary packages, encrypting files for transfer between parties, and other similar tasks. But for e-mail it’s at best a bandage on a bigger problem and at worst a false sense of security.

A Beginner’s Guide to Privacy and Security

I’m always on the lookout for good guides on privacy and security for beginners. Ars Technica posted an excellent beginner’s guide yesterday. It covers the basics, such as installing operating system and browser updates, enabling two-factor authentication, and using a password manager to enable you to use strong, unique passwords for your accounts, all of which even less computer-savvy users can follow to improve their security.

If you’re not sure where to begin when it comes to security and privacy, take a look at Ars’ guide.

The Surveillance State Hidden in Plain Sight

Everybody should have been suspicious of the giant unadorned building in New York City that looks like something ripped right out of the movie 1984. As it turns out, the building’s appearance betrays its purpose, as it is part of the Orwellian surveillance state:

THEY CALLED IT Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States — the world’s largest center for processing long-distance phone calls, operated by the New York Telephone Company, a subsidiary of AT&T.

[…]

Documents obtained by The Intercept from the NSA whistleblower Edward Snowden do not explicitly name 33 Thomas Street as a surveillance facility. However — taken together with architectural plans, public records, and interviews with former AT&T employees conducted for this article — they provide compelling evidence that 33 Thomas Street has served as an NSA surveillance site, code-named TITANPOINTE.

Inside 33 Thomas Street there is a major international “gateway switch,” according to a former AT&T engineer, which routes phone calls between the United States and countries across the world. A series of top-secret NSA memos suggest that the agency has tapped into these calls from a secure facility within the AT&T building. The Manhattan skyscraper appears to be a core location used for a controversial NSA surveillance program that has targeted the communications of the United Nations, the International Monetary Fund, the World Bank, and at least 38 countries, including close U.S. allies such as Germany, Japan, and France.

TITANPOINTE? Again, we have a National Security Agency (NSA) codename that sounds really stupid. Considering how obvious they were being with the building’s design, were I the NSA I’d have just called the project BIGBROTHER.

TITANPOINTE appears to be another example of the public-private surveillance partnership I periodically bring up. While all of the cellular providers are in bed with the State to some extent, AT&T appears to have a very special relationship with the NSA. From Room 641A to 33 Thomas Street, we have seen AT&T grant the NSA complete access to its services. This means that any surveillance performed by AT&T, which is often considered “safe” surveillance by many libertarians because it’s done by a private entity, becomes NSA surveillance without so much as a court order. Since your phone calls and text messages are available to AT&T, they’re also available to the NSA.

Fortunately, you can take some measures to reduce the information available to AT&T and the NSA. While standard phone calls and text messages are insecure, there are several secure communication tools available to you. Apple’s iMessage is end-to-end encrypted (although if you back up to iCloud your messages are included in a backup that Apple can decrypt, and are therefore available to Apple), as are WhatsApp and Signal. I generally recommend Signal for secure messaging because it’s easy to use, the developers are focused on providing a secure service, and it has a desktop application so you can use it from your computer. None of these applications are magic bullets that will fix all of your privacy woes but they will reduce the amount of information AT&T and the NSA can harvest from their position in the communication routing system.

Propagandizing Against Secure Communications

It’s no secret that the State is at odds with effective cryptography. The State prefers to keep tabs on all of its subjects and that’s harder to do when they can talk confidentially amongst themselves. What makes matters worse is that the subjects like their confidentiality and seek out tools that provide that to them. So the State has to first convince its subjects that confidentiality is bad, which means it needs to put out propaganda. Fortunately, many journalists are more than happy to produce propaganda for the State:

The RCMP gave the CBC’s David Seglins and the Toronto Star’s Robert Cribb security clearance to review the details of 10 “high priority” investigations—some of which are ongoing—that show how the police is running into investigative roadblocks on everything from locked devices to encrypted chat rooms to long waits for information. The Toronto Star’s headline describes the documents as “top-secret RCMP files.”

The information sharing was stage-managed, however. Instead of handing over case files directly to the journalists, the federal police provided vetted “detailed written case summaries,” according to a statement from Seglins and Cribb. These summaries “[formed] the basis of our reporting,” they said. The journalists were given additional information on background, and allowed to ask questions, according to the statement, but “many details were withheld.”

The stories extensively quote RCMP officials, but also include comment from privacy experts who are critical of the police agency’s approach.

“On the one hand, the [RCMP] do have a serious problem,” said Jeffrey Dvorkin, former vice president of news for NPR and director of the University of Toronto Scarborough’s journalism program. “But to give information in this way to two respected media organizations does two things: it uses the media to create moral panic, and it makes the media look like police agents.”

The line between journalism and propaganda is almost nonexistent anymore. This story is an example of a more subtle form of journalist-created propaganda. It’s not so much a case of a journalist writing outright propaganda as it is a journalist not questioning the information being provided by the police.

Journalists, like product reviewers, don’t like to rock the boat because it might jeopardize their access. The police, like product manufacturers, are more than happy to provide product (which, in the case of the police, is information) to writers who show them in a good light. They are much less apt to provide product to somebody who criticizes them (which is why critics have to rely on the Freedom of Information Act). If journalists want to keep getting the inside scoop from the police they need to show the police in a good light, which means they must not question too closely the information they’re being fed.

Be wary of what you read in news sources. The information being printed is not always as it appears, especially when the writer wants to maintain their contacts within the State to get the inside scoop.

LastPass Opts to Release Ad Supported “Free” Version

My hatred of using advertisements to fund “free” services is pretty well known at this point. However, it seems that a lot of people prefer the business model where they’re the product instead of the customer. Knowing that, and knowing that password reuse is still a significant security problem for most people, I feel the need to inform you that LastPass, which remains a solid password manager despite being bought by LogMeIn, now has an ad-supported “free” version:

I’m thrilled to announce that, starting today, you can use LastPass on any device, anywhere, for free. No matter where you need your passwords – on your desktop, laptop, tablet, or phone – you can rely on LastPass to sync them for you, for free. Anything you save to LastPass on one device is instantly available to you on any other device you use.

Anything that may convince more people to start using password managers is a win in my book. People who don’t utilize password managers tend to reuse the same credentials on multiple sites, which significantly increases the damage that a password database leak can cause. Furthermore, using a password manager lowers the hurdle for using strong passwords. Instead of being limited to passwords they can memorize, users of a password manager can use long strings of pseudorandom characters, which means that if a password database is breached the time it takes to recover their password from its stored hash is significantly increased (because the attacker has to rely on brute force instead of a time-saving method such as rainbow tables).
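
As a rough illustration, here is a short Python sketch, using only the standard library, of the kind of credential a password manager can generate for you and the hash a site might store. The bare SHA-256 at the end is just for demonstration; a well-run site would use a slow, salted algorithm such as bcrypt or Argon2.

```python
# Sketch: a password manager can hand you credentials like this for every site.
import hashlib
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

# 24 pseudorandom characters -- far too large a space to precompute in a
# rainbow table, so an attacker who steals the hash is stuck brute forcing it.
password = "".join(secrets.choice(ALPHABET) for _ in range(24))
print("generated password:", password)

# What a site would store instead of the password itself (demonstration only).
print("stored hash:", hashlib.sha256(password.encode()).hexdigest())
```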

If money is the only thing that has held you back from using a password manager, you should take a look at LastPass’s “free” version. While ads are a potential vector for malware, they can be blocked with an ad blocker, and the risk of being infected through ads is significantly smaller than the risks involved in not using a password manager.

We’re All Terrorists Now

In many governmental circles I’m considered a terrorist sympathizer. Why? It’s not because I’ve sold arms to terrorists or provided them logistical support. It’s because I teach people how to use secure communication tools, which can get you arrested in certain parts of the world:

Samata Ullah, 33, was charged with six terrorism offences after being arrested in a street in Cardiff on September 22 by officers from Scotland Yard’s counter-terrorism squad.

The charge sheet includes one count of preparation of terrorism “by researching an encryption programme, developing an encrypted version of his blog site, and publishing the instructions around the use of [the] programme on his blog site.”

Ullah is also accused of knowingly providing “instruction or training in the use of encryption programmes” in relation to “the commission or preparation of acts of terrorism or for assisting the commission or preparation by others of such acts.”

He has additionally been charged with being in possession of a “Universal Serial Bus (USB) cufflink that had an operating system loaded on to it for a purpose connected with the commission, preparation, or instigation of terrorism.”

This is the nightmare Orwell warned about in Animal Farm and Nineteen Eighty-Four. The State has become so controlling that merely providing an encrypted version of your blog, which I am currently doing since my blog is served exclusively over HTTPS, can be considered noteworthy enough to mention on a list of charges. The same goes for USB cufflinks. We are at a point where even mundane activities can be labeled criminal offenses if the State decides to thrust the word terrorism upon you.

I have no doubt that this will come to the United States. The United Kingdom seems to be where new tyrannies are birthed, and the United States seems to be where tyrannies go to grow up. And anybody who watched the hearings surrounding Farook’s iPhone, which the Federal Bureau of Investigation (FBI) wanted to force Apple to break into, knows that the United States government is already at war with cryptography. If it passes a law mandating that all domestic encryption include a government-accessible back door, I’ll be a criminal for teaching people how to use secure foreign encryption.

Apparently CNC Machines Don’t Exist

Cody Wilson stirred up a lot of controversy when he released designs for the Liberator, a single-shot pistol constructed with a 3D printer. Why did a pistol constructed of materials guaranteed to fail after firing relatively few shots, and that couldn’t be scaled up to a more powerful caliber, cause so much controversy? Because most gun control advocates have no concept of how guns work. That leads them to fear imaginary devices such as the mythical Glock 7 from Die Hard, which led to the passage of the Undetectable Firearms Act. Another reason is that most gun control advocates are apparently unaware that computer numerical control (CNC) machines are a thing:

Even after reading his book, I’m still not sure what he means by this. Sure, plenty of open-source zealots favor software that can be edited, freely, by anyone. However, there is a crucial distinction here: no software, until the one created by Wilson and his followers, has ever been used to create a physical device that fires lethal bullets.

The Liberator was not the first gun created using software. In fact, most modern guns are initially designed using computer-aided design (CAD) software, frequently simulated in software before being manufactured, and sometimes built using a CNC machine. Software has been used to create guns for a while now. What Cody Wilson did wasn’t revolutionary, it was evolutionary. He managed to make a firearm with inferior equipment and materials that met the most basic requirements to qualify as a firearm. I don’t mean to understate his contribution to firearms manufacturing, but his real revolution, in my opinion, was to illustrate how irrelevant gun control is, especially as we march into a future where home fabrication will become easier and be able to utilize better materials.

Technology has always been the death knell of centralized control. While gun control advocates cling to their belief that a powerful central government can make all of the bad things go away the rest of the world is moving on and doing what it damn well pleases. I don’t fear gun control because I realize it’s a lost cause. Cody Wilson helped illustrate that to the world with the Liberator.

Confidentiality Versus Anonymity

The Intercept has started a bit of a shit storm by pointing out that iMessage doesn’t encrypt metadata:

APPLE PROMISES THAT your iMessage conversations are safe and out of reach from anyone other than you and your friends. But according to a document obtained by The Intercept, your blue-bubbled texts do leave behind a log of which phone numbers you are poised to contact and shares this (and other potentially sensitive metadata) with law enforcement when compelled by court order.

Every time you type a number into your iPhone for a text conversation, the Messages app contacts Apple servers to determine whether to route a given message over the ubiquitous SMS system, represented in the app by those déclassé green text bubbles, or over Apple’s proprietary and more secure messaging network, represented by pleasant blue bubbles, according to the document. Apple records each query in which your phone calls home to see who’s in the iMessage system and who’s not.

Is this an affront to privacy? Is Apple showing bad faith in its promise to deliver a more secure communication system? No and no. The issue at hand here is that Apple has promised confidentiality but hasn’t promised anonymity, which are two different things.

Confidentiality means that a communication isn’t accessible to unauthorized parties. In other words what was communicated is secret. Anonymity means that the parties communicating are secret. A confidential message isn’t necessarily anonymous and an anonymous message isn’t necessarily confidential.

iMessage and other secure communication applications such as WhatsApp and Signal use an identifier that is tied to your real-life persona: your phone number. Using phone numbers as identifiers allows these apps to easily scan your contacts list to see who does and doesn’t have the application. While they do keep what is being communicated secret they make no attempt to keep who is communicating secret.
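
To illustrate the general technique, contact discovery can be as simple as hashing the phone numbers in your address book and asking the server which of them belong to registered users. This is a simplified sketch and not the actual protocol used by iMessage, WhatsApp, or Signal; the numbers and the server-side set are made up.

```python
# Simplified illustration of phone-number-based contact discovery.
import hashlib

def normalize(number: str) -> str:
    """Strip formatting so "+1 (555) 010-0001" and "15550100001" match."""
    return "".join(ch for ch in number if ch.isdigit())

def discover(contacts, registered_hashes):
    """Return which of my contacts already use the service."""
    found = []
    for number in contacts:
        digest = hashlib.sha256(normalize(number).encode()).hexdigest()
        if digest in registered_hashes:  # in reality this lookup happens server-side
            found.append(number)
    return found

# Hypothetical server-side set of hashed numbers belonging to registered users.
server_side = {hashlib.sha256(b"15550100001").hexdigest()}
print(discover(["+1 (555) 010-0001", "+1 (555) 010-0002"], server_side))
```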

Tor, on the other hand, attempts to provide anonymity but doesn’t necessarily provide confidentiality. With the exception of hidden services, every website you access through Tor goes through an exit node. Unless the site you’re accessing utilizes Transport Layer Security (TLS) the contents of the site are accessible to the exit node operator. On Tor the content being communicated isn’t necessarily confidential but the parties communicating are.
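
As a concrete illustration, here is a small Python sketch that routes requests through a local Tor client’s SOCKS proxy. It assumes the Tor daemon is listening on its default port of 9050 and that the requests library is installed with SOCKS support (pip install requests[socks]). The plain HTTP request is readable by whichever exit node carries it; the HTTPS request is not, although the exit node still learns which site you are visiting.

```python
# Sketch: sending traffic through a locally running Tor client.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# Over plain HTTP the exit node can read (and tamper with) everything in this request.
plain = requests.get("http://example.com/", proxies=TOR_PROXY, timeout=60)

# Over HTTPS the contents are encrypted between you and the site, so the exit
# node only learns which host you are talking to, not what was said.
secure = requests.get("https://example.com/", proxies=TOR_PROXY, timeout=60)
print(plain.status_code, secure.status_code)
```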

Applications such as Ricochet attempt (I use this qualifier because Ricochet is still experimental) to provide both confidentiality and anonymity. Not only are the communications themselves kept secret, but the parties who are communicating are also kept secret. But since Ricochet users are anonymous by default, the application can’t go through your contacts list and automatically inform you who does and doesn’t have the application.

There’s nothing sinister afoot here. Apple, WhatsApp, and Signal never claimed to deliver anonymity. Even if they didn’t use phone numbers as identifiers they still wouldn’t deliver anonymity since they make no attempt to conceal your IP address. Everybody that is freaking out about this is freaking out about the fact that Apple isn’t providing something it never claimed to provide.

There are no magic bullets. Before choosing the right tool for the job you need to develop a threat model. Unless you know what you are guarding against, you can’t effectively guard against it. Confidentiality works well to protect against certain types of snoops. Law enforcers wanting to dig through the contents of messages to find evidence of illegal activities, and advertisers wanting to do the same to acquire information to better sell you products, are threats where confidentiality is important but anonymity may not be required. Law enforcers wanting to build a social graph so they can target friends of specific individuals, and censors wanting to learn who is putting out unapproved material, are threats where anonymity is important but confidentiality may not be required. And depending on your threat model, all of the above may be threats where both confidentiality and anonymity are required.

Know your threats and know your tools. Make sure your tools address your threats. But don’t get upset because a tool doesn’t address your threat when it never claimed to do so.

The Signal Desktop App Now Works with iOS

The developers behind Signal, an application that allows you to send secure text messages and make secure phone calls, released a Chrome app some time ago. The Chrome app allowed you to link your Android device with it so you could use Signal on a desktop or laptop computer. iOS users were left out in the cold, which annoyed me because I spend more time on my laptop than on my phone (also, because I hate typing on my phone). Fortunately, Signal for iOS now supports linking with the Chrome app.

It’s simple to setup and works well. If you, like me, don’t use Chrome as your primary browser and don’t want to open it just to use Signal you can right-click on the Signal App in Chrome and create a shortcut. On macOS the shortcut will be created in your ~/Applications/Chrome Apps/ folder (I have no idea where it puts it on Windows or Linux). Once created you can drag the Signal shortcut to the dock.