Once You Post Something Online It Exists Forever

The University of California, Berkeley posted 20,000 lectures online for free and there was great joy. Unfortunately, two students from another university decided to ruin the jovial atmosphere by bringing a lawsuit against the university, claiming that the videos weren’t accessible to everybody and therefore posting them was a violation of the Americans with Disabilities Act. The university ended up pulling the videos offline. While the two little bitches may have high-fived each other after their apparent victory, they were obviously too stupid to realize that the Internet is forever:

Today, the University of California at Berkeley has deleted 20,000 college lectures from its YouTube channel. Berkeley removed the videos because of a lawsuit brought by two students from another university under the Americans with Disabilities Act.

We copied all 20,000 and are making them permanently available for free via LBRY.

This makes the videos freely available and discoverable by all, without reliance on any one entity to provide them (even us!).

The full catalog is over 4 TB and will be synced over the next several days.

And that, ladies and gentlemen, is how the Internet works.

Let’s Encrypt

Most of you probably didn’t notice but over the weekend I changed this blog over to Let’s Encrypt. There really aren’t any changes for you but this is a project that I’ve been planning to do for a while now.

Since I changed this site over to HTTPS only, I’ve been using StartSSL certificates. However, when it was announced that StartCom, the owner of StartSSL, had been bought by WoSign, I was wary of renewing my certificates through them. When it was later announced that StartCom and WoSign were backdating certificates to get around the SHA-1 deprecation deadline, I knew it was time to move on. The good news is that Let’s Encrypt is far easier than StartSSL was. Setting it up took a bit of time because Nginx support in Let’s Encrypt is still experimental and the other options for pulling certificates without shutting down the server required some server customizations. But once everything was set up it was simple to pull certificates.
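
For anybody curious what “pulling certificates without shutting down the server” actually involves, here’s a minimal Python sketch of the HTTP-01 webroot idea that ACME clients such as certbot use: the client drops a token file into a directory the running web server already serves, and Let’s Encrypt fetches it to prove you control the domain. The paths and token values below are placeholders, not my actual setup.

```python
# Sketch of the ACME HTTP-01 "webroot" approach. The running web server
# (Nginx in my case) keeps serving traffic; the ACME client only needs to
# write a file where the CA expects to find it. All values are made up.
from pathlib import Path

WEBROOT = Path("/var/www/example.com")                    # hypothetical site root
CHALLENGE_DIR = WEBROOT / ".well-known" / "acme-challenge"

def publish_challenge(token: str, key_authorization: str) -> Path:
    """Write the challenge response so the CA can fetch it from
    http://example.com/.well-known/acme-challenge/<token>."""
    CHALLENGE_DIR.mkdir(parents=True, exist_ok=True)
    challenge_file = CHALLENGE_DIR / token
    challenge_file.write_text(key_authorization)
    return challenge_file

if __name__ == "__main__":
    # A real ACME client generates these values during the certificate order.
    path = publish_challenge("example-token", "example-token.example-thumbprint")
    print(f"Challenge published at {path}; no server downtime required.")
```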

While I was changing over my certificates I also took the opportunity to implement a Content Security Policy (CSP). Now when you load my page your browser is given a whitelist of locations that content can come from. This reduces the threat of potential code injection attacks. Unfortunately, due to WordPress, I had to enable some unsafe options such as inline JavaScript and eval() statements. I’ll be looking for ways to get rid of those in the future though.
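
To give you an idea of what the browser is actually handed, here’s a rough Python sketch that builds the kind of Content-Security-Policy header value described above. The directives are illustrative rather than my exact policy, and 'unsafe-inline' and 'unsafe-eval' are the WordPress concessions I mentioned.

```python
# Illustrative CSP construction; the whitelist below is an example policy,
# not the exact one used on this site. 'unsafe-inline' and 'unsafe-eval'
# are the concessions WordPress currently forces.
CSP_DIRECTIVES = {
    "default-src": ["'self'"],
    "script-src": ["'self'", "'unsafe-inline'", "'unsafe-eval'"],
    "style-src": ["'self'", "'unsafe-inline'"],
    "img-src": ["'self'", "data:"],
    "object-src": ["'none'"],
}

def build_csp_header(directives: dict) -> str:
    """Serialize the whitelist into the header value the browser enforces."""
    return "; ".join(f"{name} {' '.join(sources)}" for name, sources in directives.items())

print("Content-Security-Policy:", build_csp_header(CSP_DIRECTIVES))
```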

So you can breathe easy knowing that your browsing experience is even safer now than it was before.

Vault 7 isn’t the End of Privacy

There have been a lot of bad stories and comments about Vault 7, the trove of Central Intelligence Agency (CIA) documents WikiLeaks recently posted. Claims that the CIA has broken Signal, that it can use any Samsung smart television to spy on people, and a whole bunch of other unsubstantiated or outright false claims have been circulating. Basically, idiots who speak before they think have been claiming that Vault 7 is proof that privacy is dead. But that’s not the case. The tools described in the Vault 7 leak appear to be aimed at targeted surveillance:

Perhaps a future cache of documents from this CIA division will change things on this front, but an admittedly cursory examination of these documents indicates that the CIA’s methods for weakening the privacy of these tools all seem to require attackers to first succeed in deeply subverting the security of the mobile device — either through a remote-access vulnerability in the underlying operating system or via physical access to the target’s phone.

As Bloomberg’s tech op-ed writer Leonid Bershidsky notes, the documentation released here shows that these attacks are “not about mass surveillance — something that should bother the vast majority of internet users — but about monitoring specific targets.”

The threats of mass surveillance and targeted government surveillance are very different. Let’s consider Signal. If the CIA had broken Signal it would be able to covertly collect Signal packets as they traveled from source to destination, decrypt the packets, and read the messages. This would enable mass surveillance like the National Security Agency (NSA) has been doing. But the CIA didn’t break Signal, it found a way to attack Android (most likely a specific version of Android). This type of attack doesn’t lend itself well to mass surveillance because it requires targeting specific devices. However, if the CIA wants to surveil a specific target then this attack works well.

Avoiding mass surveillance is much easier than defending yourself against an organization with effectively limitless funds and a massive military to back it up that specifically wants your head on a platter. But unlike mass surveillance, very few people actually have to deal with the latter. And so far the data released as part of Vault 7 indicates that the surveillance tools the CIA has developed are aimed at targeted surveillance, so you most likely won’t have to deal with them.

Privacy isn’t dead, at least so long as you’re not being specifically targeted by a three letter agency.

Taking Down the 911 System

911 is the go-to number for most people when there’s an emergency. But 911 is an old system and old systems are often vulnerable to distributed denial of service attacks:

For over 12 hours in late October, 911 lines across the country were ringing so much that they nearly went down. Nobody knew why this was happening, until Phoenix police discovered that 18-year-old Meetkumar Hitesbhai Desai tweeted a link that caused iPhones to repeatedly dial 911. Now, more details have emerged about how the Twitter prank spiraled out of control.

Desai claimed the attack was a joke gone wrong, telling police he only meant for the link to cause annoying pop-ups, The Wall Street Journal reports. However, he posted the wrong code. It started when, from his @SundayGavin Twitter account, he tweeted the link and wrote, “I CANT BELIEVE PEOPLE ARE THIS STUPID.” When clicked, the URL, which was condensed by Google’s link shortener, launched an iOS-based JavaScript attack that caused iPhones to dial 911 repeatedly. When users hung up, the phone would keep redialing until it was restarted.

This story touches on a lot of different topics. First, it shows how dangerous software glitches can be. Since most people only think to dial 911 when there’s an emergency, a software glitch that allows a section of the 911 system to be taken down could cost people their lives. Second, it shows why URL shorteners are a pet peeve of mine. You never know where they’re going to take you until you’ve already clicked them. Third, it shows how easily a distributed denial of service attack can be created. One tweet with a link to a malicious piece of JavaScript was enough to bring a section of the 911 system to its knees.
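
That second point deserves a practical note: you can expand a shortened link before clicking it. Here’s a small Python sketch using the third-party requests library; the short URL below is a placeholder, not the link from the story.

```python
# Defensive sketch: follow a short link's redirects without rendering the
# destination page, so you can see where it actually leads. Requires the
# third-party "requests" library; the URL below is a placeholder.
import requests

def expand_url(short_url: str, timeout: float = 5.0) -> str:
    """Issue a HEAD request, follow redirects, and return the final URL."""
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return response.url

if __name__ == "__main__":
    print("This link actually points at:", expand_url("https://short.example/abc123"))
```

It’s not bulletproof, since a malicious page can still be waiting at the far end, but at least you know where you’re going before you go there.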

The lessons to take away from this story are not to click random links and to have a backup plan in case 911 is overwhelmed.

Vault 7

WikiLeaks dropped a large archive of Central Intelligence Agency (CIA) leaks. Amongst the archive are internal communications and documents related to various exploits the CIA had or has on hand for compromising devices ranging from smartphones to smart televisions.

I haven’t had a chance to dig through the entire archive yet but there’s one thing that everybody should keep in mind.

The government that claims to protect you, that many people mistakenly believe protects them, has been hoarding vulnerabilities and that has put you directly in harm’s way. Instead of reporting discovered vulnerabilities so they could be patched, the CIA, like the NSA, kept them secret so it could exploit them. Since discovery of a vulnerability doesn’t grant a monopoly on its use, the vulnerabilities discovered by the CIA may very well have been discovered by other malicious hackers. Those malicious hackers could, for example, be exploiting those vulnerabilities to spread a botnet that can be used to perform distributed denial of service attacks against websites to extort money from their operators.

Remember this the next time some clueless fuckstick tells you that the government is there to keep you safe.

While I haven’t had a chance to read through the archive, I have had a chance to read various comments and reports regarding the information in the archive. By doing this I’ve learned two things. First, the security advice posted by most random Internet denizens is reminiscent of the legal advice posted by most sovereign citizens. Second, the media remains almost entirely clueless about information security.

Case in point, a lot of comments and stories have said that the archive contains proof that the CIA has broken Signal and WhatsApp. But that’s not true:

It’s that second sentence that’s vital here: It’s not that the encryption on Signal, WhatsApp (which uses the same encryption protocol as Signal), or Telegram has been broken, it’s that the CIA may have a way to break into Android devices that are using Signal and other encrypted messaging apps, and thus be able see what users are typing and reading before it becomes encrypted.

There is a significant difference between breaking the encryption protocol used by a secure messaging app and breaking into the underlying operating system. The first would allow the CIA to sit in the middle of Signal or WhatsApp connections, collect the packets being sent to and from Signal and WhatsApp clients, decrypt them, and read their contents. This would allow the CIA to potentially surveil every WhatsApp and Signal user. The second would allow the CIA to target individual devices, compromise the operating system, and surveil everything the user is doing on that device. Not only would this compromise the security of Signal and WhatsApp, it would also compromise the security of virtual private networks, Tor, PGP, and every other application running on the device. But the attack would only allow the CIA to surveil specific targeted users, not every single user of an app.
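
Here’s a toy Python illustration of that difference, using the cryptography package’s Fernet cipher rather than Signal’s actual protocol: an eavesdropper performing mass collection only ever sees ciphertext on the wire, while a compromised endpoint reads the message before any encryption happens.

```python
# Toy example only; Fernet (from the third-party "cryptography" package) is a
# stand-in for Signal's protocol. The point is where the plaintext is exposed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # known only to the two endpoints
cipher = Fernet(key)

message = b"meet at noon"

# A compromised operating system (the Vault 7 scenario) can read `message`
# right here, before encryption is applied: keylogging, screen capture, etc.
ciphertext = cipher.encrypt(message)

# Mass surveillance of the network only ever sees this token:
print("On the wire:", ciphertext)

# Without the key the ciphertext is useless; breaking the protocol itself
# would mean recovering the plaintext from the wire for every user at once.
print("Recipient reads:", cipher.decrypt(ciphertext))
```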

The devil is in the details and a lot of random Internet denizens and journalists are getting the details wrong. It’s going to take time for people with actual technical knowhow to dig through the archive and report on the information they find. Until then, don’t panic.

Uber’s Self-Defense Strategy

Last week it was revealed that Uber developed a self-defense strategy against the State. Needless to say, this upset a lot of statists who were posting the #DeleteUber hashtag even harder than they were before. But those of us who don’t subscribe to the insanity that is statism can learn a lot from Uber’s example:

“SAN FRANCISCO — Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was being resisted by law enforcement or, in some instances, had been outright banned.

The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials. Uber used these methods to evade the authorities in cities such as Boston, Paris and Las Vegas, and in countries like Australia, China, Italy and South Korea.

[…]

Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown as part of a sting operation against the company.

[…]

But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app populated with ghost cars, to evade capture.”

How brilliant is that? The company identified a significant threat, government goons who were working to extort the company, and then screwed with them, which made their job of extortion more difficult.
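
For illustration only, and obviously not Uber’s actual code, here’s a minimal Python sketch of the general shape of the scheme the article describes: tag accounts suspected of belonging to enforcement officials and serve them a fake view of the service.

```python
# Conceptual sketch of the Greyball idea described above. This is not Uber's
# code, and the flagging signal is reduced to a single boolean; per the
# report, the real tagging drew on app data and other collection methods.
import random
from dataclasses import dataclass

@dataclass
class Rider:
    user_id: str
    flagged_as_official: bool

def nearby_cars(rider: Rider) -> list:
    if rider.flagged_as_official:
        # "Ghost cars": plausible-looking vehicles that never show up.
        return [{"car_id": f"ghost-{i}", "eta_minutes": random.randint(3, 9)} for i in range(4)]
    return real_dispatch(rider)

def real_dispatch(rider: Rider) -> list:
    # Stand-in for the real matching system.
    return [{"car_id": "real-42", "eta_minutes": 4}]

print(nearby_cars(Rider("inspector-001", flagged_as_official=True)))
print(nearby_cars(Rider("regular-rider", flagged_as_official=False)))
```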

This is a strategy more companies need to adopt. Imagine a world where services such as Facebook, Gmail, Google Maps, iCloud, SoundCloud, and other online services identified government goons and refused to work for them. It would be a tremendous strike against the quality of life of many government employees. In fact, the hit might be powerful enough to convince them to seek productive employment.

Companies like Facebook and Google have built their fortunes on surveilling customers. Why not use that massive store of data for good by identifying government employees, or at least the regulators that make their lives difficult, and either screwing with them or outright refusing to do business with them? There’s no reason anybody should be expected to do business with extortionists.

Is Your Child’s Toy a Snitch?

The Internet of Things (IoT) should be called the idiotic attempt to connect every mundane device to the Internet whether there’s a good reason or not. I admit that my more honest version is a mouthful but I believe it would remind people about what they’re actually buying and that could avoid fiascos like this:

Since Christmas day of last year and at least until the first week of January, Spiral Toys left customer data of its CloudPets brand on a database that wasn’t behind a firewall or password-protected. The MongoDB was easy to find using Shodan, a search engine that makes it easy to find unprotected websites and servers, according to several security researchers who found and inspected the data.

The exposed data included more than 800,000 emails and passwords, which are secured with the strong, and thus supposedly harder to crack, hashing function bcrypt. Unfortunately, however, a large number of these passwords were so weak that it’s possible to crack them, according to Troy Hunt, a security researcher who maintains Have I Been Pwned and has analyzed the CloudPets data.
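
To see why bcrypt alone didn’t save these accounts, here’s a small Python sketch using the bcrypt package. The password and wordlist are made up, but the point stands: bcrypt makes every guess slow, yet a weak password still falls to a handful of common guesses.

```python
# Why hashing doesn't rescue weak passwords: an attacker with the leaked hash
# simply tries likely passwords. Requires the third-party "bcrypt" package;
# the example password and wordlist are invented.
import bcrypt

# What the leaked database would store for a user who chose "qwerty123".
stored_hash = bcrypt.hashpw(b"qwerty123", bcrypt.gensalt())

common_guesses = [b"password", b"123456", b"qwerty123", b"letmein"]

for guess in common_guesses:
    if bcrypt.checkpw(guess, stored_hash):
        print("Cracked:", guess.decode())
        break
else:
    print("Not in the wordlist; a strong password survives this kind of attack.")
```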

When you buy something you should ask yourself what the benefits and costs are. People often make the mistake of thinking that the cost is purely the amount you have to pay at the store. But there are always other hidden costs. In the case of these IoT stuffed animals one of the costs is bringing a surveillance apparatus into your home. Sure, most people probably aren’t too worried about toy manufacturers having a bug in their home. But another cost is the risk of the remotely accessible surveillance device being accessed by an unauthorized party, which is what happened here.

The sordid history of security failures that plagues the IoT market should be considered whenever you’re buying an IoT product.

Convenient Technology… For the Police

Axon, Taser’s body camera division, has announced a new product. It’s a holster sensor that activates all nearby body cameras when an officer draws their firearm:

The Signal Sidearm, despite its slightly confusing name and provided artwork, isn’t a pricey, complex smart weapon, but rather a sensor that can be retrofitted into “most existing firearm holsters.” The sensor is powered by a coin cell battery that lasts approximately 1.5 years. It sounds like the sensor is technologically very simple, which hopefully means it’s also very reliable.

When a weapon is drawn from the holster, the Signal Sidearm tells any Axon camera within 30 feet to start recording. If there are multiple Axon cameras present, they all start recording, providing video footage from a variety of angles.
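
Here’s a conceptual Python model of that trigger, not Axon’s implementation: a holster event starts every camera within range, with the range check simplified to a straight-line distance in feet.

```python
# Conceptual model only; Axon hasn't published its implementation. The sensor
# event simply tells every camera within roughly 30 feet to start recording.
import math
from dataclasses import dataclass

TRIGGER_RANGE_FEET = 30.0

@dataclass
class BodyCamera:
    camera_id: str
    position: tuple          # (x, y) in feet, in some local coordinate frame
    recording: bool = False

def holster_drawn(sensor_position: tuple, cameras: list) -> None:
    for camera in cameras:
        if math.dist(sensor_position, camera.position) <= TRIGGER_RANGE_FEET:
            camera.recording = True

cameras = [BodyCamera("officer-1", (0.0, 5.0)), BodyCamera("officer-2", (50.0, 0.0))]
holster_drawn((0.0, 0.0), cameras)
print([(camera.camera_id, camera.recording) for camera in cameras])
```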

The sensor activates nearby body cameras only after a gun has been drawn, so they won’t record whether the police unnecessarily escalated the situation to deadly force. That’s convenient.

As one of my friends commented, “Every technology deployed by the state will benefit the state, which is why we need our own technology.” If the State is willing to issue technology to police officers, such as body cameras, you know that technology will be of significant benefit to the State while being a significant detriment to you and me. Body cameras sound like a great technology for holding officers accountable but since the State controls all footage it’s trivial to disappear any inconvenient evidence while keeping evidence that allows the State to prosecute somebody.

Insurance. You Keep Using That Word, I Do Not Think It Means What You Think It Means

Some health insurance companies have started pilot programs where customers can receive a discount for wearing a fitness tracker and sharing the data with the company. This seems like a pretty straightforward idea. But according to Bloomberg it’s a sinister ploy:

Think about what that means for insurance. It’s meant to be a mechanism to pool risk — that is, to equalize the cost of protecting against unforeseen health problems. But once the big data departments of insurance companies have enough information — including about online purchases and habits — they can build a minute profile about each and every person’s current and future health. They can then steer “healthy” people to cheaper plans, while leaving people who have higher-risk profiles — often due to circumstances beyond their control — to pay increasingly unaffordable rates.

Whenever health insurance comes up I’m forced to explain what insurance is. I shouldn’t have to do this but nobody seems to know what it means.

Insurance is a way for multiple people to pool their resources for risk mitigation. Take homeowner’s insurance for example. When you buy a homeowner’s policy you’re paying some money into a common pool. Any paying customer can withdraw from the pool if they experience a situation, such as a house fire, that is covered by the insurance policy. Those with higher risks are more likely to withdraw from the pool so they pay a higher premium. Those with lower risks pay less.

Automobile insurance works the same way. Higher risk drivers, such as young males, people who have been found guilty of driving while intoxicated, and people who have been found guilty of reckless driving, pay a higher premium because they’re more likely to withdraw from the community pool.

Most people accept that higher risk people should pay a higher premium for homeowner’s or automobile insurance. But when they’re talking about health insurance they suddenly have a change of heart and think that higher risk people should pay the same as lower risk people. That makes no sense. Health insurance, like any other form of insurance, is pooled risk mitigation. If you live an unhealthy lifestyle you’re more likely to withdraw from the pool so you pay a higher premium. Oftentimes these risks are outside of your control, which sucks. However, if the pool empties, that is to say there are more withdrawals than deposits over a long enough period of time to completely drain the accounts, everybody loses their coverage. That being the case, higher risk people have to pay more to ensure the pool remains solvent even if the risks are outside of their control.
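
Here’s a toy Python calculation, with numbers invented purely for illustration, that shows why that has to be the case: charging a member less than their expected withdrawal drains the pool.

```python
# Invented numbers for illustration. A pool stays solvent only if premiums
# cover expected withdrawals, so the break-even premium scales with risk.
def break_even_premium(claim_probability: float, average_claim: float) -> float:
    """Expected yearly withdrawal for one member; charging less drains the pool."""
    return claim_probability * average_claim

low_risk = break_even_premium(claim_probability=0.01, average_claim=200_000)
high_risk = break_even_premium(claim_probability=0.05, average_claim=200_000)

print(f"Low-risk member must pay at least ${low_risk:,.0f} per year")    # $2,000
print(f"High-risk member must pay at least ${high_risk:,.0f} per year")  # $10,000
```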

It’s not a sinister scheme, it’s exactly how insurance works.

Not All Anonymity is Created Equal

Whenever I discuss secure communications I try to hammer home the difference between confidentiality and anonymity. Most popular secure communication services such as Signal and WhatsApp provide the former but not the latter. This means unauthorized users cannot read the communications but they can find out which parties are communicating.

Another thing I try to hammer home is that not all forms of anonymity are equal. Several services claim to offer anonymous communications. These services don’t claim to offer confidentiality (the posts are public), but they do claim to conceal your identity. However, they tend to use a loose definition of anonymity:

On Sunday, a North Carolina man named Garrett Grimsley made a public post on Whisper that sounded an awful lot like a threat. “Salam, some of you are alright,” the message read, “don’t go to [Raleigh suburb] Cary tomorrow.”

When one user asked for more information, Grimsley (who is white) responded with more Islamic terms. “For too long the the kuffar have spit in our faces and trampled our rights,” he wrote. “This cannot continue. I cannot speak of anything. Say your dua, sleep, and watch the news tomorrow.”

Within 24 hours, Grimsley was in jail. Tipped off by the user who responded, police ordered Whisper to hand over all IP addresses linked to the account. When the company complied, the IP address led them to Time Warner, Grimsley’s ISP, which then provided Grimsley’s address.

There’s a great deal of difference between anonymity as it pertains to other users and anonymity as it pertains to service providers. Whisper’s definition of anonymity is that users of the service can’t identify other users. Whisper itself can identify users. This is different than a Tor hidden service where the user can’t identify the service provider and the service provider can’t identify the user.
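
Here’s a small Python sketch, emphatically not Whisper’s code, of how a service can be “anonymous” to other users while knowing exactly who you are: other users only ever see a pseudonym, but the operator keeps an IP log, and that log is what gets handed over when the police come knocking.

```python
# Sketch of pseudonymity-to-other-users versus anonymity-to-the-provider.
# This is not Whisper's code; it just shows that nothing stops the operator
# from logging who posted what.
import secrets
from datetime import datetime, timezone

public_feed = []   # what other users see
ip_log = []        # what the operator keeps (and can be compelled to hand over)

def submit_post(client_ip: str, text: str) -> None:
    pseudonym = f"user-{secrets.token_hex(3)}"
    public_feed.append({"author": pseudonym, "text": text})
    ip_log.append({
        "pseudonym": pseudonym,
        "ip": client_ip,
        "time": datetime.now(timezone.utc).isoformat(),
    })

submit_post("203.0.113.7", "totally anonymous, right?")
print("What other users see:", public_feed)
print("What the operator keeps:", ip_log)
```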

When you’re looking at communication services make sure you understand what is actually being offered before relying on it.