What’s Your Score

Police, even more so than most people, tend to be lazy. And like other lazy people, police are trying to replace everything with algorithms. But there is a difference between police relying on algorithms and private entities relying on them: algorithms in private hands seldom get people killed. A higher death rate is the only outcome I can see coming from this:

FRESNO, Calif. — While officers raced to a recent 911 call about a man threatening his ex-girlfriend, a police operator in headquarters consulted software that scored the suspect’s potential for violence the way a bank might run a credit report.

The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social media postings. It calculated his threat level as the highest of three color-coded scores: a bright red warning.

Algorithms that try to model human behavior are notoriously unreliable. Part of this is due to humanity’s lack of homogeneity and part of it is due to data limitations. An algorithm is only as good as the data it is fed, and what data is fed into an algorithm is determined by the developers, which means the results often reflect their biases. In this case, if the developers viewed gun owners as being prone to violence, the algorithm would end up reflecting that.
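
To make the point concrete, here’s a hypothetical sketch of how a developer’s assumptions get baked into a threat score. Every feature name and weight below is invented for illustration and has nothing to do with the actual software Fresno uses:

```python
# Hypothetical threat-scoring sketch. Features and weights are invented
# for illustration; this is not based on any real product.
WEIGHTS = {
    "prior_arrests": 2.0,
    "gun_owner": 1.5,  # a developer who views gun owners as dangerous
                       # bakes that bias in right here
    "angry_social_media_posts": 1.0,
}

def threat_level(person: dict) -> str:
    """Map a person's data to a color-coded threat level."""
    score = sum(weight * person.get(feature, 0)
                for feature, weight in WEIGHTS.items())
    if score >= 5:
        return "red"
    if score >= 2:
        return "yellow"
    return "green"

# A lawful gun owner with a single old arrest already rates "yellow".
print(threat_level({"gun_owner": 1, "prior_arrests": 1}))
```

The algorithm faithfully computes whatever the developers decided mattered: garbage assumptions in, bright red warnings out.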

Usually we don’t pay much attention when an algorithm screws up and recommends a product we have no interest in based on our previous purchasing history. But an algorithm that tries to estimate a person’s threat level to police is going to carry much more dire consequences. There is already a chronic problem with police being too trigger happy. Imagine how much more trigger happy your average cop would be if they were told the suspect is rated high by the threat assessment algorithm. Chances are the officer will take a shoot-first-and-ask-questions-later approach.

Theoretically this type of algorithm wouldn’t have to result in such severe consequences, but it is being utilized by individuals who are generally not held accountable for their actions. If an officer, for example, received notification that a suspect was rated as highly likely to be violent but knew gunning them down without cause would result in charges, they would likely act more cautiously but still not resort to shooting without justification. But that’s not how things are, so this will likely end badly for anybody facing off with an officer employed by a department that utilizes this system.

The Networks Have Ears

As a general rule I avoid local networks I don’t personally administer. If I’m at an event with free Wi-Fi, I still use my cell phone’s data connection and tethering mode when I need to access the Internet on my laptop. For those times I cannot avoid using a local network, I route my data through a Virtual Private Network (VPN) connection. Although these measures won’t stop my Internet Service Provider (ISP) and its partners from snooping on me, they do prevent malicious actors on the local network from doing so. Attendees at the Consumer Electronics Show (CES) who opted into the free Wi-Fi became excellent demonstrations of the lack of privacy you have when using a local Wi-Fi network without a VPN connection:

This week, more than 170,000 tech and media professionals converged on the city of Las Vegas to see the latest in technology at the Consumer Electronics Show, and––inevitably––some of them used their smart, connected devices to try to get laid.

Vector Media offered attendees free WiFi at major hotels, shuttle buses, and convention centers throughout the week in exchange for collecting anonymized app usage data. More than 1,800 people opted in, and Vector found a whopping 61 percent of attendees used Tinder while at CES––nearly five times more than productivity app Slack, which only 12.8 percent of attendees on Vector’s network used. Facebook Messenger came in first place with 74.3 percent, and Grindr also made an appearance on its list of apps in use, at 16 percent.

The amount of information a local network administrator can obtain about you would likely surprise most people. In addition, the number of attacks a malicious actor on a local network can perform is notable. If you value your privacy or security, I recommend avoiding Wi-Fi networks you don’t personally control as much as possible (granted, even your own network isn’t necessarily trustworthy, but in most cases you have far more control over it than over other networks).
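
To give a sense of how little effort passive snooping takes, here is a minimal sketch using the scapy library (assuming it’s installed and the script has packet-capture privileges) that logs every DNS lookup made on the local network segment:

```python
# Minimal passive-snooping sketch using scapy (pip install scapy).
# Requires root/administrator privileges to capture packets.
from scapy.all import DNS, DNSQR, IP, sniff

def log_dns(packet):
    # DNS queries are plaintext on most networks: who asked for what.
    if packet.haslayer(IP) and packet.haslayer(DNSQR) and packet[DNS].qr == 0:
        print(f"{packet[IP].src} looked up {packet[DNSQR].qname.decode()}")

# Capture DNS traffic on the default interface until interrupted.
sniff(filter="udp port 53", prn=log_dns, store=False)
```

Every app check-in at CES would show up in output like this. A VPN defeats it by wrapping all traffic, DNS lookups included, inside an encrypted tunnel before it touches the local network.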

David Chaum Becomes A Quisling

Online anonymity is important. In fact it’s the difference between life and death for many political dissidents around the world. Recognizing this, many developers have put their efforts into developing effective anonymity tools such as Tor and I2P. But what makes an anonymity tool effective? An effective anonymity tool is one designed in such a way that a third party cannot utilize the tool itself to discover the identity of a user (no tool, however, can stop a user from voluntarily revealing identifiable information about themselves).

One of the downsides of the current slew of popular anonymity tools is that they tend to be slower than tools that don’t attempt to maintain anonymity. Accessing a website over Tor usually takes longer than accessing that same site over the regular Internet. David Chaum, a well-known and previously (I’ll get to that in a second) well-respected cryptographer, is promising a new “anonymity” tool that doesn’t suffer from the performance issues of popular tools such as Tor:

With PrivaTegrity, Chaum is introducing a new kind of mix network he calls cMix, designed to be far more efficient than the layered encryption scheme he created decades ago. In his cMix setup, a smartphone communicates with PrivaTegrity’s nine servers when the app is installed to establish a series of keys that it shares with each server. When the phone sends a message, it encrypts the message’s data by multiplying it by that series of unique keys. Then the message is passed around all nine servers, with each one dividing out its secret key and multiplying the data with a random number. On a second pass through the nine servers, the message is put into a batch with other messages, and each server shuffles the batch’s order using a randomized pattern only that server knows, then multiplies the messages with another random number. Finally, the process is reversed, and as the message passes through the servers one last time, all of those random numbers are divided out and replaced with keys unique to the message’s intended recipient, who can then decrypt and read it.
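
To make the multiply-and-divide-out description concrete, here is a toy sketch of the blinding arithmetic using small numbers (a single pass, no shuffling or random re-blinding, and nowhere near a faithful cMix implementation):

```python
# Toy sketch of cMix-style blinding arithmetic, not a real implementation.
# All operations are multiplication modulo a prime; real systems use a
# large, carefully chosen group.
import random

P = 2**61 - 1       # a Mersenne prime standing in for a real group modulus
NUM_SERVERS = 9

# Each server holds a secret key it established with the sender at setup.
server_keys = [random.randrange(2, P) for _ in range(NUM_SERVERS)]

message = 424242    # the plaintext, encoded as a number

# The sender blinds the message with the product of all nine shared keys.
blinded = message
for key in server_keys:
    blinded = (blinded * key) % P

# Each server divides out its key by multiplying by its modular inverse,
# never learning any other server's key.
for key in server_keys:
    blinded = (blinded * pow(key, -1, P)) % P

assert blinded == message   # only all nine cooperating recovers the message
```

In the real protocol each server also multiplies in fresh random values and shuffles message batches so no single server can link inputs to outputs, but the arithmetic backbone is this simple.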

Sounds good, doesn’t it? Chaum even claims PrivaTegrity is more secure than Tor. But as it turns out this “anonymity” tool isn’t effective because it allows third parties to unveil the identity of users:

On top of those security and efficiency tricks, PrivaTegrity’s nine-server architecture—with a tenth that works as a kind of “manager” without access to any secret keys—also makes possible its unique backdoor decryption feature. No single server, or even eight of the nine servers working together, can trace or decrypt a message. But when all nine cooperate, they can combine their data to reconstruct a message’s entire path and divide out the random numbers they used to encrypt it. “It’s like a backdoor with nine different padlocks on it,” Chaum says.

[…]

“It’s like the UN,” says Chaum. “I don’t think a single jurisdiction should be able to covertly surveil the planet…In this system, there’s an agreement on the rules, and then we can enforce them.”

One Key to rule them all, One Key to find them, One Key to bring them all and in the darkness spy on them.

You know who else had an agreement on the rules? The Nazis! Put down the Godwin brand pitchforks, that was purposeful hyperbole. My point is that an agreement on the rules is meaningless fluff, just like his claim that no single jurisdiction should be able to surveil the planet. By implementing a backdoor he has made his network a single jurisdiction capable of surveilling everybody who uses it. His network is also the rule maker. The only reason I would shy away from calling PrivaTegrity a government is that it still outsources enforcement to the State by handing over identifiable information of users deemed guilty by the Nazgûl. PrivaTegrity isn’t about protecting the identity of every user, it’s about protecting the identity of favored users.
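
Mechanically, the “nine padlocks” arrangement amounts to a nine-of-nine secret sharing scheme. Here is a minimal sketch using XOR shares (my stand-in for the general idea, not PrivaTegrity’s actual construction):

```python
# Nine-of-nine XOR secret sharing: any eight shares reveal nothing, but
# all nine together reconstruct the key. A sketch of the general idea,
# not PrivaTegrity's actual construction.
import secrets

def xor_all(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for chunk in chunks:
        out = bytes(a ^ b for a, b in zip(out, chunk))
    return out

def split(secret: bytes, n: int = 9):
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The last share is chosen so that XORing all n shares yields the secret.
    last = bytes(a ^ b for a, b in zip(secret, xor_all(shares)))
    return shares + [last]

tracing_key = secrets.token_bytes(32)
shares = split(tracing_key)
assert xor_all(shares) == tracing_key      # all nine padlocks open the door
assert xor_all(shares[:8]) != tracing_key  # eight padlocks tell you nothing
```

The math does exactly what Chaum says it does. The problem is that all nine padlocks hang on the same door, so anyone who can lean on nine server operators gets the key.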

This backdoor capability also means PrivaTegrity is less secure than Tor, since Tor doesn’t have a built-in method to reveal the identity of users. Every major government in the world will try to compromise PrivaTegrity if it ever comes into wide usage. And due to the existence of a backdoor those efforts will bear fruit. Whether by compromising the servers themselves, buying off their administrators, or some other means, it will only be a matter of time until governments find a way to utilize the built-in backdoor for their own purposes. That is why the mere existence of a backdoor renders an anonymity tool ineffective.

The only upside to PrivaTegrity is that the existence of a backdoor almost guarantees nobody will adopt it and therefore when it’s compromised nobody will be put in danger.

I’m A Good Little Slave And You Should Be One Too

The Federal Aviation Administration (FAA) has decreed that anybody who owns a drone must register. Sally French, a reporter for Forbes, registered herself and wrote an opinion piece encouraging others to do the same. It’s titled “I registered my drone. Here’s why you should too” but it might as well be titled “I’m a good little slave who rolls over on command and you should too!”

I logged onto the site and entered my name, home address and email address.

There is a registration fee, so I also had to enter my credit card information. The registration fee is $5 per drone owner — the same $5 processing fee charged for any aircraft registration — but the FAA says it will refund the $5 fee for drones registered through Jan. 20 to encourage participation.

Once I hit the “next” button, I received a personal identification number and certificate to print out (though like most millennials, I don’t have a printer). I did write the identification number on a sticker, which I then pasted on my drone, an original DJI Phantom that I have been flying since early 2013.

[…]

Registration is intended to force some education upon pilots who may not have malicious intent, but also may not have read the “Know Before You Fly” guidelines included with most drone purchases in the U.S. It also means that government and law enforcement officials will be able to track down reckless drone operators — something that, until now, they haven’t been able to do.

The fool! Registration is not intended to educate drone pilots, it’s meant to rake in a little extra cash for the FAA. Although $5 per operator, a fee that’s being refunded until January 20th, doesn’t sound like much, consider that the FAA estimates one million drones will be sold this Christmas alone: $5 × 1,000,000 comes to $5 million, a tidy sum that requires the FAA to do almost nothing to collect. And anybody familiar with how government extortion works knows that the initial $5 fee is just the bait and the price will only go up. But the registration fee isn’t the real money maker. There is a fine of up to $250,000 for anybody who flies a drone without registering with the FAA by February 19th. Since a lot of drone owners will likely remain unaware of the FAA regulation, there is a large pool of suckers the FAA is going to be able to extort some money out of.

Now let me explain why you shouldn’t register your drone. If you do your name and home address will be made publicly available:

The FAA finally confirmed this afternoon that model aircraft registrants’ names and home addresses will be public. In an email message, the FAA stated: “Until the drone registry system is modified, the FAA will not release names and address. When the drone registry system is modified to permit public searches of registration numbers, names and addresses will be revealed through those searches.”

Sounds like a public wall of shame to me. But you know this list will be abused. Most likely drone manufacturers will use it to send you unwanted advertisements via snail mail (hey, look, the registration system raises some money for the Post Office too). And anybody looking to steal a drone knows exactly where to go.

In this day and age it has become obvious that publicly releasing personal information is dangerous. The fact that the FAA’s official policy is to publicly release the names and home addresses of every registered drone pilot is reason enough not to register. If the FAA isn’t willing to protect the privacy of its “customers” then nobody should do business with it.

So instead of being a good little slave who rolls over on command think about giving the FAA a giant middle finger.

Political Victories Are Only Temporary Victories

I hate redoing work. This is part of the reason I don’t pursue politics. Any political victory is only a temporary victory. At some future point the victory you achieved will be undone. The Cybersecurity Information Sharing Act (CISA) is just the latest example of this. If you go through the history of the bill you will see it was introduced and shut down several times:

The Cybersecurity Information Sharing Act was introduced on July 10, 2014 during the 113th Congress, and was able to pass the Senate Intelligence Committee by a vote of 12-3. The bill did not reach a full senate vote before the end of the congressional session.

The bill was reintroduced for the 114th Congress on March 12, 2015, and the bill passed the Senate Intelligence Committee by a vote of 14-1. Senate Majority Leader Mitch McConnell, (R-Ky) attempted to attach the bill as an amendment to the annual National Defense Authorization Act, but was blocked 56-40, not reaching the necessary 60 votes to include the amendment. Mitch McConnell hoped to bring the bill to senate-wide vote during the week of August 3–7, but was unable to take up the bill before the summer recess. The Senate tentatively agreed to limit debate to 21 particular amendments and a manager’s amendment, but did not set time limits on debate. In October 2015, the US Senate took the bill back up following legislation concerning sanctuary cities.

If at first you don’t succeed, try, try again. This time the politicians attached CISA to the budget, which as we all know is a must-pass bill:

Congress on Friday adopted a $1.15 trillion spending package that included a controversial cybersecurity measure that only passed because it was slipped into the US government’s budget legislation.

House Speaker Paul Ryan, a Republican of Wisconsin, inserted the Cybersecurity Information Sharing Act (CISA) into the Omnibus Appropriations Bill—which includes some $620 billion in tax breaks for business and low-income wage earners. Ryan’s move was a bid to prevent lawmakers from putting a procedural hold on the CISA bill and block it from a vote. Because CISA was tucked into the government’s overall spending package on Wednesday, it had to pass or the government likely would have had to cease operating next week.

Sen. Ron Wyden, a Democrat of Oregon, said the CISA measure, which backers say is designed to help prevent cyber threats, got even worse after it was slipped into the 2,000-page budget deal (PDF, page 1,728). He voted against the spending plan.

All those hours invested in the political process to fight CISA were instantly rendered meaningless with the passage of this bill. However, the bill can be rendered toothless. CISA removes any potential liability from private companies that share customer data with federal agencies. So long as private companies don’t have actionable information to share, the provisions outlined in CISA are inconsequential. As with most privacy related issues, effective cryptography is key. Tools like Off-the-Record (OTR) messaging, OTR’s successor Multi-End Message and Object Encryption (OMEMO), Pretty Good Privacy (PGP), Transport Layer Security (TLS), Tor, and other cryptographic tools designed to keep data private and/or anonymous can go a long way toward preventing private companies from having any usable data to give to federal agencies.
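
As an example of what that looks like in practice, here is a minimal end-to-end encryption sketch using the PyNaCl library (assuming pip install pynacl); a provider relaying messages in this form has nothing actionable to hand over:

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
from nacl.public import Box, PrivateKey

# Alice and Bob each generate a keypair; only public keys are exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob; any service relaying this sees only ciphertext.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"
```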

In addition to effective cryptography it’s also important to encourage businesses not to cooperate with federal agencies. The best way to do this is to buy products and services from companies that have fought attempts by federal agencies to acquire customer information and utilize cryptographic tools that prevent themselves from viewing customer data. As consumers we must make it clear that quislings will not be rewarded while those who stand with us will be.

Effective cryptography, unlike politics, offers a permanent solution to the surveillance problem. It’s wiser, in my opinion, to invest the time you’d otherwise waste with politics in learning how to properly utilize tools that protect your privacy. While your political victories may be undone nobody can take your knowledge from you.

If You Don’t Want To Be Treated Like A Criminal Don’t Buy A Blackberry

I know what you’re thinking: you weren’t planning to buy a Blackberry anyway. The company is so far behind the technological curve that it has become almost entirely irrelevant. But I know two people who purchased Blackberry phones within the last five years, so I assume there may be a few other people who have been using the platform for ages and want to continue doing so. For them this post is a warning. Don’t buy a Blackberry unless you want to be treated like a criminal:

John Chen, the Blackberry chairman and CEO, is ripping Apple’s position that granting the authorities access to a suspected criminal’s mobile device would “tarnish” the iPhone maker’s image.

“We are indeed in a dark place when companies put their reputations above the greater good. At BlackBerry, we understand, arguably more than any other large tech company, the importance of our privacy commitment to product success and brand value: privacy and security form the crux of everything we do. However, our privacy commitment does not extend to criminals,” Chen wrote in a blog post titled “The encryption Debate: a Way Forward.”

What Apple has promised customers is that it is unable to gain access to user data under any circumstances. In other words, Apple is promising users that it utilizes cryptography that isn’t compromised in such a way as to allow a third party access. Blackberry, on the other hand, is stating it will cooperate with law enforcement requests for user data. To do that it must utilize cryptography that is compromised in such a way as to allow third party access. Such a scheme, if used under the auspices of giving law enforcers access to criminal data, necessarily treats all users as potential criminals.
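
The difference between the two designs is easy to sketch in code. The following is my own illustration of the two general shapes, not either company’s actual implementation:

```python
# Sketch of no-access encryption vs. key escrow; an illustration of the
# two designs, not Apple's or Blackberry's actual code.
from nacl.public import PrivateKey, SealedBox

user = PrivateKey.generate()
escrow = PrivateKey.generate()   # a key the vendor keeps for "lawful access"

secret = b"user data"

# Apple's promise: encrypt so that only the user can ever decrypt.
no_access_design = SealedBox(user.public_key).encrypt(secret)

# Blackberry's position requires a second copy a third party can open,
# which means every user's data is readable without the user's key.
escrow_design = (
    SealedBox(user.public_key).encrypt(secret),
    SealedBox(escrow.public_key).encrypt(secret),
)

# The escrow holder decrypts without ever touching the user's key.
assert SealedBox(escrow).decrypt(escrow_design[1]) == secret
```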

Furthermore, what is the “greater good”? That’s such a nonsensical term. It requires the person uttering it to be so egotistical that they believe they know what’s best for everybody. I doubt anybody has knowledge so perfect that they know what is best for all seven billion people on this planet. Realistically it’s just a euphemism for what is best for the State, which is always at odds with what is best for the individual.

You don’t have to take my word for it though. The people have a voice in this matter through the market. Anybody who truly believes Apple is being detrimental to society by not cooperating with law enforcers can buy a Blackberry device. Something tells me this statement by Chen isn’t going to cause an uptick in Blackberry sales. If anything it will likely cause a drop (if it’s even possible for Blackberry sales to drop any lower) since most people don’t seem overly enthusiastic about being spied on.

Tools Of Your Subjugation

Some fools believe domestic surveillance is about fighting terrorists. Everybody else realizes it’s about subjugation. People are more easily kept in line when they believe they’re constantly being watched. Although much of the State’s surveillance capabilities are shrouded in secrecy The Intercept managed to get its hands on a rather interesting catalogue of government surveillance tools:

The Intercept has obtained a secret, internal U.S. government catalogue of dozens of cellphone surveillance devices used by the military and by intelligence agencies. The document, thick with previously undisclosed information, also offers rare insight into the spying capabilities of federal law enforcement and local police inside the United States.

The catalogue includes details on the Stingray, a well-known brand of surveillance gear, as well as Boeing “dirt boxes” and dozens of more obscure devices that can be mounted on vehicles, drones, and piloted aircraft. Some are designed to be used at static locations, while others can be discreetly carried by an individual. They have names like Cyberhawk, Yellowstone, Blackfin, Maximus, Cyclone, and Spartacus. Within the catalogue, the NSA is listed as the vendor of one device, while another was developed for use by the CIA, and another was developed for a special forces requirement. Nearly a third of the entries focus on equipment that seems to have never been described in public before.

[…]

A few of the devices can house a “target list” of as many as 10,000 unique phone identifiers. Most can be used to geolocate people, but the documents indicate that some have more advanced capabilities, like eavesdropping on calls and spying on SMS messages. Two systems, apparently designed for use on captured phones, are touted as having the ability to extract media files, address books, and notes, and one can retrieve deleted text messages.

The catalogue is full of very interesting gadgets. In fact it demonstrates that technology in the hands of government is a bad thing. While the market has used cellular technology to bring us wonderful gadgets that improve our lives, the State only sees cellular technology as another means to subjugate its people.

Lightbulbs With DRM Are Here

There’s a lot to love about this crazy future we live in but there are also some downright bizarre things. For example, how many of you thought your lightbulbs needed some kind of mechanism to lock you into a particular manufacturer’s bulbs? Through the wonderful world of ZigBee-enabled bulbs, Philips has made your dream a reality:

Philips just released firmware for the Philips Hue bridge that may permanently sever access to any “non-approved” ZigBee bulbs. We previously covered third party support in January 2015, when Philips indicated it was not blocked – and have since benefited.

The recent change seems to suggest any non-Philips bulbs from manufacturers such as Cree, GE, and Osram will not be supported in many situations, whereas “Friends of Hue” branded product are. At the time of publication, it’s unclear whether 3rd party bulbs will stop working immediately after the firmware update or if they may only become inaccessible after the bridge is reset. We’re also not sure if being “reset” means rebooted or factory reset. This appears to apply to both the round v1 bridge and square v2 HomeKit-compatible bridge after the latest firmware update is applied.

I’m not going to be a cranky curmudgeon and bitch about lightbulbs with new functionality. But I will bitch about how companies utilize new technology as a means of baiting and switching. Philips originally stated it would support third-party bulbs. I’m guessing the reason behind that was so it didn’t have to foot the entire bill to encourage adoption of ZigBee-enabled bulbs. Now it has changed the rules and locked out third-party manufacturers. In all likelihood this is because ZigBee-enabled bulbs are now sufficiently popular that Philips wants to enjoy all of the profits. It wouldn’t surprise me if somebody at Philips also assumed owners of third-party bulbs would rather purchase Philips’ hardware than lose the functionality offered by ZigBee-enabled bulbs.

There is an important lesson here. Never be entirely reliant on a third party for your business. If, for example, you are utilizing a third party’s software package for your hardware, you should have an alternative standing by in case you’re locked out. Were I one of these third-party manufacturers I would release an open source client on GitHub that works with any ZigBee-enabled bulb.

Why Magnetic Strips On Credit And Debit Cards Need To Die

I’ve been harping on backwards compatibility as it relates to computer security for a while but that’s not the only place backwards compatibility bites us in the ass. Let’s consider credit and debit cards.

Chip and PIN cards have been the standard in Europe for ages now. The United States is finally thinking about getting onboard. But in true American tradition, the move to improve credit and debit card security is being done in the dumbest way possible. First of all, the United States is adopting chip and signature, not chip and PIN. Second, and this is even worse, the old legacy system of magnetic strips is still being supported. Because of this, constantly improving card skimmers remain a viable means of stealing credit and debit card information:

Virtually all European banks issue chip-and-PIN cards (also called Europay, Mastercard and Visa or EMV), which make it far more expensive for thieves to duplicate and profit from counterfeit cards. Even still, ATM skimming remains a problem for European banks mainly because several parts of the world — most notably the United States and countries in Asia and South America — have not yet adopted this standard.

For reasons of backward compatibility with ATMs that aren’t yet in line with EMV, many EMV-compliant cards issued by European banks also include a plain old magnetic stripe. The weakness here, of course, is that thieves can still steal card data from Europeans using skimmers on European ATMs, but they need not fabricate chip-and-PIN cards to withdraw cash from the stolen accounts: They simply send the card data to co-conspirators in the United States who use it to fabricate new cards and to pull cash out of ATMs here, where the EMV standard is not yet in force.

This is another example of a case where a hard cutoff, dropping all backwards compatibility at once, should be implemented. So long as magnetic strips are still supported it’s trivial to steal credit and debit card numbers and use them to pull cash out of people’s accounts.
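
The underlying weakness is that a magnetic strip is static data: whatever a skimmer reads once can be replayed forever, whereas a chip computes a fresh cryptogram for every transaction using a secret that never leaves the card. A rough sketch of the difference, using an HMAC as a stand-in for the EMV application cryptogram:

```python
# Sketch of static stripe data vs. a chip's dynamic cryptogram.
# HMAC stands in here for the EMV application cryptogram.
import hashlib
import hmac
import secrets

# Magnetic strip: the same bytes every swipe. Skim once, replay forever.
stripe_track = b"4111111111111111=2512101000000000000"

# Chip: a secret key embedded in the card that terminals never see.
card_key = secrets.token_bytes(16)

def chip_cryptogram(amount: bytes, terminal_nonce: bytes) -> bytes:
    """Each transaction gets a unique MAC over unpredictable terminal data."""
    return hmac.new(card_key, amount + terminal_nonce, hashlib.sha256).digest()

# Two identical purchases produce two different cryptograms; a skimmed
# cryptogram is useless because the next terminal sends a fresh nonce.
first = chip_cryptogram(b"24.99", secrets.token_bytes(8))
second = chip_cryptogram(b"24.99", secrets.token_bytes(8))
assert first != second
```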

Security, in general, does not lend itself well to backwards compatibility. Once a system is broken it should be dumped entirely. The credit card companies here in the United States should have required all banks to issue chip cards and all retailers to use readers that only support chip and PIN, Apple Pay, Android Pay, and other such modern payment methods. Instead everybody decided that the average American is too stupid to adapt to a new system and rewarded this perceived stupidity by continuing to support a completely broken standard. Because of that we’re all being put at unnecessary risk.

The Plague Of Backwards Compatibility Continues

SHA1 is a cryptographic hashing algorithm the Internet has relied on for quite some time. As things tend to go in the technology field, the old workhorse is showing its age. Attacks against it are quickly becoming more feasible so it needs to be put out to pasture.

Because of this, certificates issued starting in 2016 will use SHA256. Although all modern browsers support SHA256, older browsers do not. Unfortunately this has convinced Facebook and CloudFlare to create a jury-rigged process to allow people running out-of-date browsers to access their services:

Facebook said as many as seven percent of the world’s browsers are unable to support the SHA256 function that serves as the new minimum requirement starting at the beginning of 2016. That translates into tens of millions of end users, and a disproportionate number of them are from developing countries still struggling to get online or protect themselves against repressive governments. CloudFlare, meanwhile, estimated that more than 37 million people won’t be able to access encrypted sites that rely on certificates signed with the new algorithm.

Both companies went on to unveil a controversial fallback mechanism that uses SHA1-based certificates to deliver HTTPS-encrypted webpages to people who still rely on outdated browsers. The remaining, much larger percentage of end users with modern browsers would be served HTTPS pages secured with SHA256 or an even stronger function. The mechanisms, which both companies are making available as open-source software, will allow websites to provide weaker HTTPS protection to older browsers while giving newer ones the added benefits of SHA256. Facebook is deploying the plan on most or all of the sites it operates, while CloudFlare will enable it by default for all of its customers. CloudFlare said other sites, including those run by Chinese portal Alibaba, are also implementing it.
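
The mechanism boils down to fingerprinting the client and picking a certificate chain accordingly. Here is a simplified sketch of the selection logic (my paraphrase of the approach described above, with invented names, not Facebook’s or CloudFlare’s actual code):

```python
# Simplified sketch of serving certificates by client capability.
# The marker strings and file names are invented for illustration.

# Cipher suites only advertised by clients too old to handle SHA256 certs.
LEGACY_ONLY_MARKERS = {"SSLv3", "EXPORT-RC4", "DES-CBC-SHA"}

def pick_certificate_chain(client_hello: dict) -> str:
    """Serve the strong chain unless the handshake marks a legacy client."""
    advertised = set(client_hello.get("cipher_suites", []))
    if advertised & LEGACY_ONLY_MARKERS:
        return "sha1_fallback_chain.pem"  # weaker cert for outdated browsers
    return "sha256_chain.pem"             # the post-2015 default

# A modern browser gets the SHA256 chain.
print(pick_certificate_chain({"cipher_suites": ["ECDHE-RSA-AES128-GCM"]}))
```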

I’m of the opinion that there needs to be a cutoff date for software, a date after which everybody agrees that support ends. After that cutoff date anybody who refuses to upgrade will just have to suffer the consequences. The reason I believe this is that continuing to support legacy software puts both users and service providers at risk.

Just this year we were all bitten in the ass by legacy support. The FREAK and Logjam exploits were the result of continued support for the old export-grade cryptographic algorithms once mandated under United States law. Both exploits allowed downgrading the encryption algorithms used by clients and servers to communicate securely with one another. By downgrading the algorithms being used, the communications, although encrypted, could be feasibly broken.
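
A toy sketch of why merely continuing to support the export suites was enough to be dangerous (invented suite names and a deliberately simplified negotiation):

```python
# Toy sketch of a FREAK/Logjam-style downgrade. Cipher suite names are
# invented and the negotiation is deliberately simplified.
SERVER_SUPPORTED = ["AES256-GCM", "EXPORT-RSA-512"]  # legacy kept "just in case"

def negotiate(client_offer):
    """Pick the first cipher suite both sides support."""
    for suite in client_offer:
        if suite in SERVER_SUPPORTED:
            return suite
    raise ConnectionError("no common cipher suite")

honest_offer = ["AES256-GCM", "EXPORT-RSA-512"]
# A man-in-the-middle strips everything but the weak suite from the offer.
tampered_offer = [s for s in honest_offer if "EXPORT" in s]

print(negotiate(honest_offer))    # AES256-GCM
print(negotiate(tampered_offer))  # EXPORT-RSA-512: 512-bit keys, breakable
```

Because the server still accepts the weak option, both endpoints believe they negotiated honestly; dropping export support entirely is the only fix.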

By supporting older browsers, Facebook and CloudFlare are giving users another excuse to continue using vulnerable software instead of finally upgrading to something safe. In addition to not supporting effective cryptographic algorithms, out-of-date browsers contain numerous unpatched security holes that are actively exploited. Using an out-of-date browser is unsafe and shouldn’t be encouraged in my opinion.