Being A Good Skeptic

I enjoy a good conspiracy theory. While I don’t subscribe to the idea that the 9/11 attack was really perpetrated by shape-shifting lizard people cleverly using high explosives and holograms, I enjoy hearing about it. But I seldom enjoy the presence of hardcore conspiracy theorists. This is because of their religious belief in questioning everything.

Questioning things is a great practice but often a futile one when you don’t know what you don’t know. A classic example is the “all natural” crowd. You know the type. They can’t help but bitch about whatever you’re eating because it’s not organic, fair-trade, all natural, non-GMO, grass-fed, and locally grown. According to them all of humanity’s problems are caused by “unnatural” foods. Unnatural, in this case, means pretty much anything that has been genetically modified. Credit is deserved for not taking the statements of geneticists at face value. After all, no geneticist takes the claims of another geneticist at face value. But most of the “all natural” crowd has almost no background in genetics or biology so they tend to base their claims on pseudoscience. Because they lack a background in genetics and biology they don’t know what they don’t know.

This is a characteristic common amongst the “question everything” crowd. More often than not they lack even a basic understanding of the science behind what they’re questioning. Because of this their attempt to question everything quickly becomes an exercise in making up an alternative explanation for commonly accepted beliefs. Meaningfully questioning things requires having an understanding of the topics being questioned.

So how does one become a good skeptic? By asking meaningful questions. How does one ask meaningful questions? By researching and experimenting. If you question whether genetically modified crops cause cancer you should research biology, particularly genetics and oncology. Without that knowledge you will likely make assumptions that subject matter experts refuted ages ago.

Being skeptical is good but there’s a world of difference between somebody whose skepticism is based on a scientific understanding of the subject matter and people who just want an alternative explanation to be true so they can feel superior to all the “sheep” who are too dumb to know the truth. Be the former. If you want to question something, which you should, spend time researching it instead of parroting some bullshit vomited out by Alex Jones.

Regal Theaters Searching Bags For Fun And Profit

I seldom go to movie theaters anymore and when I do it’s usually to second-run theaters. Paying $15.00 or more to subject myself to sitting in a cramped, uncomfortable seat in a crowded theater full of people playing with their brightly backlit smartphones for two hours doesn’t appeal to me. So Regal’s announcement that it will assume all paying customers are violent criminals doesn’t really impact me but you should probably know about it if you frequently go to theaters:

One of America’s largest cinema chains, Regal, is now searching bags of film-goers following several attacks on movie theatres across the US.

Regal’s updated policy says it wants customers and staff “to feel comfortable and safe” in its cinemas.

[…]

“Security issues have become a daily part of our lives in America,” Regal Entertainment Group’s admission policy now reads on the company’s website. The company has not yet commented publicly on the new regulations.

“To ensure the safety of our guests and employees, backpacks and bags of any kind are subject to inspection prior to admission,” it continues.

While this policy is being implemented under the guise of safety I think it has more to do with profits. Tickets aren’t the only expensive part of going to a movie theater; the food and drink are also expensive. If you read Regal’s admission policy you’ll see what is probably the real reason bag searches are now being performed:

Outside Food or Drink:
No outside food or drink is permitted in the theatre.

Because of the price of movie theater food and drinks a lot of people smuggle their own in. Accusing paying customers of smuggling in food and drinks probably won’t sit well but claiming the searches are for safety may sit well enough (after all, it works for sporting events).

Searching bags for weapons isn’t effective anyway. I (as well as most people I know) always carry my weapons on my person. My knives are in my pockets and my handgun is in a tuckable in-the-waistband holster. Carrying weapons in a bag that can be easily separated from my person is bad form.

So keep in mind if you’re going to go to a movie that Regal will treat you like a criminal in the hopes of making more money off of you.

You Have Something To Hide Even If You Don’t Do Anything Illegal

The federal government’s non-military networks are a mess, which is why attackers have been focusing their efforts on hacking them. One of the agencies bitten in the ass was the Internal Revenue Service (IRS). Personal information for 100,000 people was leaked through one of the IRS’s online services. I’m sorry, did I say 100,000? I meant 334,000:

WASHINGTON (AP) — A computer breach at the IRS in which thieves stole tax information from thousands of taxpayers is much bigger than the agency originally disclosed.

An additional 220,000 potential victims had information stolen from an IRS website as part of a sophisticated scheme to use stolen identities to claim fraudulent tax refunds, the IRS said Monday. The revelation more than doubles the total number of potential victims, to 334,000.

The breach also started earlier than investigators initially thought. The tax agency first disclosed the breach in May.

The thieves accessed a system called “Get Transcript,” where taxpayers can get tax returns and other filings from previous years. In order to access the information, the thieves cleared a security screen that required knowledge about the taxpayer, including Social Security number, date of birth, tax filing status and street address, the IRS said.

We again see why even if you have nothing to hide you have plenty to worry about. You may not have done anything wrong, although that’s highly improbable, but any data collected on you can easily wind up in the wrong hands. In this case Social Security numbers, birth dates, street addresses, and tax filing statuses for 334,000 people ended up in unknown hands. Had that data not been collected in the first place it wouldn’t have been available to steal.

Manufacturer Included Malware

When we buy a computer we are necessarily trusting the manufacturer to some extent. One of the things we trust the manufacturer to do is deliver a system free of malware. This trust isn’t always properly placed since many manufacturers include a lot of software that is indistinguishable from malware, but we usually trust the manufacturer not to make that malware persistent. What happens when the manufacturer not only includes malware but also makes it so persistent that a clean installation of Windows won’t remove it?

Windows 8 and Windows 10 contain a surprising feature that many users will find unwelcome: PC OEMs can embed a Windows executable in their system firmware. Windows 8 and 10 will then extract this executable during boot time and run it automatically. In this way, the OEM can inject software onto a Windows machine even if the operating system was cleanly installed.

The good news is that most OEMs fortunately do not seem to take advantage of this feature. The bad news is that “most” is not “all.” Between October 2014 and April of this year, Lenovo used this feature to preinstall software onto certain Lenovo desktop and laptop systems, calling the feature the “Lenovo Service Engine.”

[…]

Making this rather worse is that LSE and/or OKO appear to be insecure. Security issues, including buffer overflows and insecure network connections, were reported to Lenovo and Microsoft by researcher Roel Schouwenberg in April. In response, Lenovo has stopped including LSE on new systems (the company says that systems built since June should be clean). It has provided firmware updates for affected laptops and issued instructions on how to disable the option on desktops and clean up the LSE files.

This is an example of a manufacturer using a legitimate feature for nefarious purposes. The feature, as far as Microsoft intended it, was meant to be an anti-theft measure:

And in its own awful way, it’s a feature that makes sense. The underlying mechanism is simple enough; the firmware constructs tables of system information when the machine boots. The operating system then examines these tables to, for example, learn what hardware is installed in the machine and how it is connected. This is all governed by a specification called ACPI, Advanced Configuration and Power Interface. Microsoft defined a new ACPI table, the Windows Platform Binary Table (WPBT), that contains information about a firmware-embedded executable. When it boots, Windows looks for a WPBT. If it finds one, it copies the executable onto the filesystem and runs it.

The primary purpose of WPBT is the automatic installation of anti-theft software. This kind of software typically does a couple of things that require online connectivity: it can phone home to check if it’s been reported stolen (and brick or otherwise disable itself if it has), and it can phone home to simply report where it is to aid recovery of lost or stolen hardware.

Instead, Lenovo used it to ensure that the preinstalled software that comes with the laptop, which was insecure, would always be installed even if the user did a clean install from a Windows disc. That’s pretty scummy behavior. Fortunately Lenovo appears to have stopped doing this but trust, as far as I’m concerned, has already been breached.
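If you run Windows and are curious whether your own firmware is injecting a WPBT binary, a quick check is possible. The sketch below looks for the file Windows drops when it extracts a firmware-embedded executable; the `wpbbin.exe` path comes from Microsoft’s WPBT documentation, but treat it as an assumption and verify against the current spec before relying on it:

```python
import os
import platform


def wpbt_binary_present() -> bool:
    """Check for the file Windows creates when firmware supplies a WPBT binary.

    Windows extracts a firmware-embedded WPBT executable to
    %SystemRoot%\\System32\\wpbbin.exe before running it, so the file's
    presence suggests the OEM is using the mechanism. On non-Windows
    systems this simply returns False, which proves nothing either way.
    """
    system_root = os.environ.get("SystemRoot", r"C:\Windows")
    return platform.system() == "Windows" and os.path.exists(
        os.path.join(system_root, "System32", "wpbbin.exe")
    )


print(wpbt_binary_present())
```

A positive result isn’t proof of malice, since Microsoft intended the mechanism for anti-theft software, but it’s worth investigating what the binary actually does.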

There Is No Free Web

Ad blockers are wonderful plugins that save bandwidth (and therefore money for people paying by usage) and protect computers against malware. But a lot of people, namely website operators that rely on advertisements for revenue, hate them:

This is an exciting and chaotic time in digital news. Innovators like BuzzFeed and Vox are rising, old stalwarts like The New York Times and The Washington Post are finding massive new audiences online, and global online ad revenue continues to rise, reaching nearly $180 billion last year. But analysts say the rise of ad blocking threatens the entire industry—the free sites that rely exclusively on ads, as well as the paywalled outlets that rely on ads to compensate for the vast majority of internet users who refuse to pay for news.

[…]

Sean Blanchfield certainly doesn’t share Carthy’s views. He worries that ad blocking will decimate the free Web.

As the war between advertisers and ad blockers rages there’s something we need to address: the use of the phrase “free web.” There is no “free web.” There has never been a “free web.” Websites have always required servers, network connectivity, developers, content producers, and other costs. This war isn’t between a “free web” and a pay web; it’s between a revenue model where viewers are the product and a revenue model where the content is the product.

If you’re using a service and not paying for it the content isn’t the product, you are. The content exists only to get you to access the website to either increase the number of page clicks and therefore give the owners a good argument for why advertisers should advertise on their sites or hand over your personal information so it can be sold to advertisers. In exchange for being the product other costs are also pushed onto you such as bandwidth and the risk of malware infection.

Ad blockers can’t decimate the “free web” because it doesn’t exist. What they will likely do is force website operators to find alternate means of generating revenue. Several content providers have started experimenting with new revenue models. The Wall Street Journal, for example, puts a lot of articles behind a paywall and the New York Times gives readers access to a certain number of articles per month for free but expects payment after that. Other content providers like Netflix charge a monthly subscription for access to any content. There are a lot of ways to make money off of content without relying on viewers as a product.

As this war continues always remember TANSTAAFL (there ain’t no such thing as a free lunch) otherwise you might get suckered into believing there is a “free web” and let that color your perception.

Peripherals Are Potentially Dangerous

Some auto insurance companies are exploring programs where customers can receive reduced rates in exchange for attaching a dongle to their vehicle’s on-board diagnostics (OBD) port. The dongles then use the diagnostics information provided by the vehicle to track your driving habits. If you’re a “good” driver you can get a discount (and if you’re a “bad” driver you’ll probably get charged more down the road). It seems like a good deal for drivers who always obey speed limits and such, but the OBD port has access to everything in the vehicle, which means any dongle plugged into it could cause all sorts of havoc. Auto insurance companies are understandably unlikely to use such dongles for evil, but that doesn’t mean somebody else won’t:

At the Usenix security conference today, a group of researchers from the University of California at San Diego plan to reveal a technique they could have used to wirelessly hack into any of thousands of vehicles through a tiny commercial device: A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency. By sending carefully crafted SMS messages to one of those cheap dongles connected to the dashboard of a Corvette, the researchers were able to transmit commands to the car’s CAN bus—the internal network that controls its physical driving components—turning on the Corvette’s windshield wipers and even enabling or disabling its brakes.

“We acquired some of these things, reverse engineered them, and along the way found that they had a whole bunch of security deficiencies,” says Stefan Savage, the University of California at San Diego computer security professor who led the project. The result, he says, is that the dongles “provide multiple ways to remotely…control just about anything on the vehicle they were connected to.”
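To make the attack surface concrete: once a dongle is compromised, injecting a command is just a matter of writing a small packed frame onto the CAN bus. Below is a minimal sketch of building a raw frame in the Linux SocketCAN layout; the arbitration ID and payload are made up for illustration, since real body-control IDs vary by manufacturer and model:

```python
import struct

# Linux SocketCAN frame layout: 32-bit arbitration ID, 8-bit data length,
# 3 padding bytes, then 8 data bytes -- "<IB3x8s", 16 bytes in total.
CAN_FRAME_FMT = "<IB3x8s"


def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a raw classic CAN frame as expected by a Linux SocketCAN socket."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, arbitration_id, len(data),
                       data.ljust(8, b"\x00"))


# Hypothetical example: 0x2F0 and the one-byte payload are invented here;
# a real attacker would first reverse engineer the target vehicle's IDs.
frame = build_can_frame(0x2F0, b"\x01")
print(len(frame))  # a SocketCAN frame is always 16 bytes
```

The point is how little stands between a compromised dongle and the bus: there is no authentication on classic CAN, so any node that can write frames can impersonate any other node.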

I guarantee any savings you get from your insurance company from attaching one of these dongles to your OBD port will be dwarfed in comparison to the cost of crashing your vehicle due to your brakes suddenly being disabled.

This is a perfect example of two entities with little experience in security compounding their failures to create a possible catastrophe. Automotive manufacturers are finally experiencing the consequences of having paid no attention to the security of their on-board systems. Insurance agencies now have a glimpse of what can happen when you fail to understand the technology you’re working with. While a dongle that tracks the driving behavior of customers seems like a really good idea, if that dongle is remotely accessible and insecure it can actually be a far bigger danger than benefit.

I wouldn’t attach such a device to my vehicle because it creates a remote connection to the vehicle (if it didn’t, the insurance companies wouldn’t have any reliable way of acquiring the data from the unit) and that is just asking for trouble, as this story shows.

The Dangers Of Centralization

Markets tend to have redundancies. We generally refer to this characteristic as “competition.” When there is demand for a good or service everybody wants a piece of the action so monopolies are almost nonexistent in free markets. Statism, on the other hand, tends towards centralization. Where markets have competitors trying to provide you with the best good or service possible states actively try to push out any competition and establish monopolies.

The problem with centralization is that when a system fails any dependent system necessarily fails along with it. The State Department, which has a monopoly on issuing visas, recently experienced what it referred to as a computer glitch that effectively stopped the issuance of any visas:

The State Department says it is working around the clock on a computer problem that’s having widespread impact on travel into the U.S. The glitch has practically shut down the visa application process.

Of the 50,000 visa applications received every day, only a handful of emergency visas are getting issued.

I’m sure this news made the neocons and neoliberals giddy because it meant foreign workers couldn’t enter the country. But this news should give everybody cause for concern because it gives us another glimpse into how fragile statism is. In a free market this kind of failure would be a minor annoyance, as customers could simply seek out another provider.

Centralization is the antithesis of robustness, which is one reason statism is so dangerous. Under statism a single failure can hurt millions of people whereas failures in a free market tend to be limited in scope, usually amounting to nothing more than forcing customers to seek alternate providers.

Why You Want Paranoid People To Comment On Features

When discussing security with the average person I’m usually accused of being paranoid. I carry a gun in case I have to defend myself? I must be paranoid! I only allow guests at my dwelling to use a separate network isolated from my own? I must be paranoid! I encrypt my hard drive? I must be paranoid! It probably doesn’t help that I live by the motto, just because you’re paranoid doesn’t mean they’re not out to get you.

Paranoid people aren’t given enough credit. They see things that others fail to see. Consider all of the application programming interface (API) calls the average browser has available to website developers. To the average person, and even to many engineers, the API calls available to website developers aren’t particularly threatening to user privacy. After all, what does it matter if a website can see how much charge is left in your battery? But a paranoid person would point out that such information is dangerous because it gives website developers more data to uniquely identify users:

The battery status API is currently supported in the Firefox, Opera and Chrome browsers, and was introduced by the World Wide Web Consortium (W3C, the organisation that oversees the development of the web’s standards) in 2012, with the aim of helping websites conserve users’ energy. Ideally, a website or web-app can notice when the visitor has little battery power left, and switch to a low-power mode by disabling extraneous features to eke out the most usage.

W3C’s specification explicitly frees sites from needing to ask user permission to discover their remaining battery life, arguing that “the information disclosed has minimal impact on privacy or fingerprinting, and therefore is exposed without permission grants”. But in a new paper from four French and Belgian security researchers, that assertion is questioned.

The researchers point out that the information a website receives is surprisingly specific, containing the estimated time in seconds that the battery will take to fully discharge, as well the remaining battery capacity expressed as a percentage. Those two numbers, taken together, can be in any one of around 14 million combinations, meaning that they operate as a potential ID number. What’s more, those values only update around every 30 seconds, however, meaning that for half a minute, the battery status API can be used to identify users across websites.

The people who developed the W3C specification weren’t paranoid enough. It was ignorant to claim that reporting battery information to websites would have only a minimal impact on privacy, especially when you combine it with all of the other uniquely identifiable data websites can obtain about users.

Uniquely identifying users becomes easier with each piece of data you can obtain. Being able to obtain battery information alone may not be terribly useful but combining it with other seemingly harmless data can quickly give a website enough data points to identify a specific user. Although that alone may not be enough to reveal their real identity it is enough to start following them around on the web until enough personal information has been tied to them to reveal who they are.
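The quoted figure of roughly 14 million combinations can be translated into fingerprinting terms with a quick back-of-the-envelope calculation. This is an idealized upper bound assuming every combination is equally likely, not a number taken from the paper itself:

```python
import math

# The article's figures: dischargingTime (in whole seconds) combined with
# the battery level percentage yields roughly 14 million distinct
# (time, level) pairs, and both values stay constant for ~30 seconds.
combinations = 14_000_000

# Shannon entropy if every combination were equally likely. Real battery
# states are not uniformly distributed, so this is an upper bound.
bits = math.log2(combinations)
print(round(bits, 1))  # ~23.7 bits
```

For comparison, uniquely distinguishing every human on Earth requires about 33 bits, so nearly 24 bits from a single permissionless API call goes a very long way once it’s combined with other data points like screen resolution, time zone, and installed fonts.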

The moral of this story is that paranoia isn’t properly appreciated.

Ad Blockers Are Security Tools

If you’re not already running an ad blocker I highly recommend you start. In addition to reducing bandwidth usage ad blockers also protect against ad network delivered malware. Because they span so many separate websites ad networks are common targets for malicious hackers. When they find an exploit they usually use the compromised network to deliver malware to users who access websites that rely on the ad network. Yahoo’s network is the most recent example of this scenario:

June and July have set new records for malvertising attacks. We have just uncovered a large scale attack abusing Yahoo!’s own ad network.

As soon as we detected the malicious activity, we notified Yahoo! and we are pleased to report that they took immediate action to stop the issue. The campaign is no longer active at the time of publishing this blog.

This latest campaign started on July 28th, as seen from our own telemetry. According to data from SimilarWeb, Yahoo!’s website has an estimated 6.9 Billion visits per month making this one of the largest malvertising attacks we have seen recently.

When a single ad network can see almost 7 billion visits per month it’s easy to see why malware distributors try to exploit them.
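At their core ad blockers are simple: every outgoing request is checked against a blocklist before it ever leaves the browser, which is why they stop malvertising payloads from loading at all rather than trying to detect them afterward. Here is a minimal sketch of a domain-based matcher in the spirit of EasyList’s `||domain^` rules; the domains are hypothetical examples, not entries from a real filter list:

```python
from urllib.parse import urlparse

# A toy blocklist. Real ad blockers load tens of thousands of rules from
# community-maintained lists such as EasyList.
BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}


def is_blocked(url: str) -> bool:
    """Block a request if its hostname, or any parent domain, is listed."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check "cdn.ads.example.com", then "ads.example.com", and so on.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))


print(is_blocked("https://ads.example.com/banner.js"))   # True
print(is_blocked("https://news.example.org/article"))    # False
```

Because the request is dropped before any connection is made, a compromised ad server never even gets the chance to hand the browser an exploit.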

Many websites rely on advertisements for revenue so they understandably get upset when users visit their pages while using ad blockers. But their revenue model requires their users to put themselves at risk so I don’t have any sympathy. If you run a website that relies on ads you should be looking at different revenue models, preferably ones that don’t put your users in harm’s way.

Security Is A Growing Threat To Security

Where a person stands on the subject of effective cryptography is a good litmus test for how technically knowledgeable they are. Although any litmus test is limited you can tell immediately that an individual doesn’t understand cryptography if they in any way support state mandated weaknesses. Mike Rogers, a former Michigan politician, expressed his ignorance of cryptography in an editorial that should demonstrate to everybody why his opinion on this matter can be safely discarded:

Back in the 1970s and ’80s, Americans asked private companies to divest from business dealings with the apartheid government of South Africa. In more recent years, federal and state law enforcement officials have asked — and required — Internet service providers to crack down on the production and distribution of child pornography.

You know where this is going when the magical words “child pornography” appear in the first paragraph.

Take another example: Many communities implement landlord responsibility ordinances to hold them liable for criminal activity on their properties. This means that landlords have certain obligations to protect nearby property owners and renters to ensure there isn’t illicit activity occurring on their property. Property management companies are typically required to screen prospective tenants.

Because of the title of the editorial I know this is supposed to be about encryption. By using the words “child pornography” I know this article is meant to argue against effective cryptography. However, I have no bloody clue how landlords play into this mess.

The point of all these examples?

There’s a point?

That state and federal laws routinely act in the interest of public safety at home and abroad. Yet now, an emerging technology poses a serious threat to Americans — and Congress and our government have failed to address it.

Oh boy, this exercise in mental gymnastics is going to be good. Rogers could be going for the gold!

Technology companies are creating encrypted communication that protects their users’ privacy in a way that prevents law enforcement, or even the companies themselves, from accessing the content. With this technology, a known ISIS bomb maker would be able to send an email from a tracked computer to a suspected radicalized individual under investigation in New York, and U.S. federal law enforcement agencies would not be able to see ISIS’s attack plans.

Child pornography and terrorism in the same editorial? He’s pulling out all the stops! Do note, however, that he was unable to cite a single instance where a terrorist attack would have been thwarted if only effective encryption hadn’t been in the picture. If you’re going to opt for fear mongering it’s best not to create hypothetical scenarios that can be shot down. Just drop the boogeyman’s name and move on; otherwise you look like an even bigger fool than you already do.

What could a solution look like? The most obvious one is that U.S. tech companies keep a key to that encrypted communication for legitimate law enforcement purposes. In fact, they should feel a responsibility and a moral obligation to do so, or else they risk upending the balance between privacy and safety that we have so carefully cultivated in this country.

Here is where his entire argument falls apart. First he claims “state and federal laws routinely act in the interest of public safety” and now he’s claiming that state and federal laws should work against public safety.

Let’s analyze what a hypothetical golden key would do. According to Rogers it would allow law enforcement agents to gain access to a suspect’s encrypted data. This is true. In fact it would allow anybody with a copy of that key to gain access to the encrypted data of anybody using that company’s products. Remember when Target and Home Depot’s networks were breached and all of their customers’ credit card data was compromised? Or that time Sony’s PlayStation Network was breached and its customers’ credit card data was compromised? How about the recent case of that affair website getting breached and its customers’ personal information ending up in unknown hands? And then there was the breach that exposed all of Hacking Team’s dirty secrets and many of its private keys to the Internet. These are not hypothetical scenarios cooked up by somebody trying to scare you into submission but real world examples of company networks being breached and customer data being compromised.

Imagine the same thing happening to a company that held a golden key that could decrypt any customer’s encrypted data. Suddenly a single breach would not only compromise personal information but also every device every one of the company’s customers possessed. If Apple, for example, were to implement Rogers’ proposed plan and its golden key was compromised every iOS user, which includes government employees I might add, would be vulnerable to having their encrypted data decrypted by anybody who acquired a copy of the key (and let’s not lie to ourselves, in the case of such a compromise the key would be posted publicly on the Internet).

Network breaches aren’t the only risk. Any employee with access to the golden key would be able to decrypt any customer’s device. Even if you trust law enforcement do you trust one or more random employees at a company to protect your data? A key with that sort of power would be worth a lot of money to a foreign government. Do you trust somebody to not hand a copy of the key over to the Chinese government for a few billion dollars?
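The single-point-of-failure argument can be made concrete in a few lines of code. The sketch below uses a toy XOR stream cipher (for illustration only, NOT real cryptography; a real product would use an authenticated cipher such as AES-GCM) and made-up users to show why one escrowed “golden key” turns every customer’s data into a single target:

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256 counter-mode keystream.

    A toy cipher for illustration only. All user names and data below
    are hypothetical.
    """
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


# Each customer encrypts their data under their own key...
alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)
ciphertexts = {
    "alice": keystream_xor(alice_key, b"alice's tax records"),
    "bob": keystream_xor(bob_key, b"bob's medical history"),
}

# ...but the vendor escrows every customer key, wrapped under one
# company-wide golden key.
golden_key = secrets.token_bytes(32)
escrow = {
    "alice": keystream_xor(golden_key, alice_key),
    "bob": keystream_xor(golden_key, bob_key),
}

# Whoever obtains that single key -- through a network breach or a bribed
# employee -- unwraps every customer key and decrypts everything at once.
for user, wrapped in escrow.items():
    recovered_key = keystream_xor(golden_key, wrapped)
    print(user, keystream_xor(recovered_key, ciphertexts[user]))
```

Compromise `alice_key` and only Alice’s data is exposed; compromise `golden_key` and the entire customer base is exposed in one stroke. That asymmetry is exactly the liability companies refuse to take on.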

There is no way a scenario involving a golden key can end well, which brings us to our next point.

Unfortunately, the tech industry argues that Americans have an absolute right to absolute privacy.

How is that unfortunate? More to the point, based on what I wrote above, we can see that the reason companies don’t implement cryptographic backdoors isn’t because they believe in some absolute right to privacy but because the risks of doing so are too great of a liability.

The only thing Rogers demonstrated in his editorial was his complete ignorance of the subject of cryptography. Generally the opinions of people who are entirely ignorant of a topic are discarded and this should be no exception.