Security Exists As A Spectrum

When I discuss security, be it online or offline, I often mention threat models and cost-benefit analysis. Unless you understand what you’re defending against, it’s impossible to develop an effective defense. And if you don’t perform a cost-benefit analysis, you may end up investing far more into securing something than it’s worth. The thing about threat models and cost-benefit analysis is that, like security in general, they’re subjective. This fact is lost on many people, as Tam so eloquently explained:

People buy into safety. It’s important for people to feel safe. For some reason, people view safety as a binary state and not an ongoing process. Therefore, when something comes along to remind us that we might not be as safe as we think we are, or there’s an optional activity we could undertake to improve our safety, it rustles our jimmies and we get all upset and fling poo at that thing and wave branches at it until it goes away and we can return to feeling safe. It’s why people who ride without helmets come up with all kinds of BS excuses about hearing and wind drag rather than just admitting “Hey, I’m comfortable with the extra risk of skull fractures in order to feel the wind in my hair.”

[…]

And here’s the thing: It’s okay to not wear a helmet. It’s okay to not carry a gun. It’s okay to not like the Gadget. It’s okay to open carry and not take thirty-eleven years of BJJ and weapons retention training. It’s still (mostly) a free country… *but own the types of risk you’re assuming*. Don’t hand-wave them away and shoot the messengers who point them out. Say “Look, I’m comfortable with these risks and don’t want to make the life commitments it would take to mitigate them” and most people will totally understand that.

People often get caught up in their binary view of security. This phenomenon has led to countless discussions that were ultimately pointless. Motorcycle helmets are a classic example. Before donning a helmet, a motorcycle rider first does some threat modeling. Usually the threats involve the large four-wheeled vehicles the motorcyclist has to share the road with. After identifying potential threats, they add the perceived risks of encountering those threats to the model. Then they do a cost-benefit analysis. Many feel the costs of a helmet (the lack of feeling wind on their face, for example) outweigh the benefits when applied to their threat model. You can bitch at them all you want, but security is subjective.

Carrying a gun is another example. I carry a gun because the costs, to me, are lower than the benefits. My manner of dress lends itself to carrying and concealing a firearm and my setup is comfortable. The benefits, for me, are having a tool available if I should happen to be attacked. Although my threat model indicates the risk of me being attacked is very low it’s still high enough to offset the low costs of carrying a gun. Somebody else may look at their threat model, which also sees the risk of being attacked as very low, and compare it to the costs of completely changing their manner of dress to conclude carrying a gun is more costly than the benefits provided. They’re not right or wrong; security isn’t binary.
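The cost-benefit reasoning in both examples boils down to simple expected-value arithmetic. Here is a minimal Python sketch; every number is invented purely for illustration:

```python
# Toy expected-cost comparison for the helmet example. All figures are
# made up; the point is that two people with different subjective costs
# can reach opposite, equally valid conclusions from the same risk.
def yearly_costs(crash_risk, injury_cost, mitigation_cost, mitigated_risk):
    """Return (expected yearly cost without mitigation, with mitigation)."""
    without = crash_risk * injury_cost
    with_mitigation = mitigated_risk * injury_cost + mitigation_cost
    return without, with_mitigation

# Rider A barely values the feeling of wind: subjective helmet cost 50.
# Rider B values it enormously: subjective helmet cost 5000.
print("Rider A:", yearly_costs(0.001, 1_000_000, 50, 0.0005))
print("Rider B:", yearly_costs(0.001, 1_000_000, 5_000, 0.0005))
```

With these made-up inputs the helmet wins for Rider A (550 versus 1000) and loses for Rider B (5500 versus 1000). Neither rider is wrong; only the subjective inputs differ.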

As a general rule, unless I’m asked, I try to avoid critiquing other people’s security plans. There’s just no point unless I know what criteria they used to develop them. While the lack of a home alarm system may seem incredibly stupid to some people, it may cost more than it’s worth to somebody who has really good theft insurance.

The Best Argument For Encryption Yet

I’ve made a lot of good arguments favoring effective encryption. Effective encryption protects at-risk people from oppressors by concealing their identities and communications, ensures data integrity by preventing third parties from altering data undetected, provides a way to verify the authenticity and identity of content creators, etc. Ironically, though, Jeb Bush may have inadvertently made the best argument for effective encryption:

“If you create encryption, it makes it harder for the American government to do its job—while protecting civil liberties—to make sure that evildoers aren’t in our midst,” Bush said in South Carolina at an event sponsored by Americans for Peace, Prosperity, and Security, according to The Intercept.

Effective encryption makes the American government’s job harder?


Assault, murder, theft, extortion, and kidnapping should be hard and anything that makes those criminal activities harder is a good thing.

You Have Something To Hide Even If You Don’t Do Anything Illegal

The federal government’s non-military networks are a mess, which is why attackers have been focusing their efforts on hacking them. One of the agencies bitten in the ass was the Internal Revenue Service (IRS). Personal information for 100,000 people was leaked through one of the IRS’s online services. I’m sorry, did I say 100,000? I meant 334,000:

WASHINGTON (AP) — A computer breach at the IRS in which thieves stole tax information from thousands of taxpayers is much bigger than the agency originally disclosed.

An additional 220,000 potential victims had information stolen from an IRS website as part of a sophisticated scheme to use stolen identities to claim fraudulent tax refunds, the IRS said Monday. The revelation more than doubles the total number of potential victims, to 334,000.

The breach also started earlier than investigators initially thought. The tax agency first disclosed the breach in May.

The thieves accessed a system called “Get Transcript,” where taxpayers can get tax returns and other filings from previous years. In order to access the information, the thieves cleared a security screen that required knowledge about the taxpayer, including Social Security number, date of birth, tax filing status and street address, the IRS said.

We again see why even if you have nothing to hide you have plenty to worry about. You may not have done anything wrong, although that’s highly improbable, but any data collected on you can easily wind up in the wrong hands. In this case Social Security numbers, birth dates, street addresses, and tax filing statuses for 334,000 people ended up in unknown hands. Had that data not been collected in the first place it wouldn’t have been available to steal.
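The “security screen” described above is knowledge-based authentication, and its guess space is tiny even before any data leaks. A back-of-the-envelope Python sketch; the figures are my own rough assumptions, not the IRS’s actual parameters:

```python
import math

# Hypothetical guess spaces for knowledge-based authentication fields.
# In practice the attacker often already knows the answers outright
# from prior breaches, so even these numbers are optimistic.
guess_space = {
    "ssn_last_four": 10_000,    # four digits, assuming the rest is known
    "date_of_birth": 366 * 80,  # day of year times a plausible age range
    "filing_status": 5,         # single, married filing jointly, etc.
}

total = math.prod(guess_space.values())
print(f"{total:,} combinations (~{math.log2(total):.1f} bits)")
```

That works out to roughly thirty bits of secrecy, compared to the 128 bits expected of a modern cryptographic key, and every one of those “secrets” is static and sold in bulk after breaches like this one.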

Manufacturer Included Malware

When we buy a computer, we necessarily trust the manufacturer to some extent. One of the things we trust the manufacturer to do is deliver a system free of malware. This trust isn’t always well placed, since many manufacturers include software that is indistinguishable from malware, but we usually trust the manufacturer not to make that malware persistent. What happens when a manufacturer not only includes malware but makes it so persistent that a clean installation of Windows won’t remove it?

Windows 8 and Windows 10 contain a surprising feature that many users will find unwelcome: PC OEMs can embed a Windows executable in their system firmware. Windows 8 and 10 will then extract this executable during boot time and run it automatically. In this way, the OEM can inject software onto a Windows machine even if the operating system was cleanly installed.

The good news is that most OEMs fortunately do not seem to take advantage of this feature. The bad news is that “most” is not “all.” Between October 2014 and April of this year, Lenovo used this feature to preinstall software onto certain Lenovo desktop and laptop systems, calling the feature the “Lenovo Service Engine.”

[…]

Making this rather worse is that LSE and/or OKO appear to be insecure. Security issues, including buffer overflows and insecure network connections, were reported to Lenovo and Microsoft by researcher Roel Schouwenberg in April. In response, Lenovo has stopped including LSE on new systems (the company says that systems built since June should be clean). It has provided firmware updates for affected laptops and issued instructions on how to disable the option on desktops and clean up the LSE files.

This is an example of a manufacturer using a legitimate feature for nefarious purposes. The feature, as far as Microsoft intended it, was meant to be an anti-theft measure:

And in its own awful way, it’s a feature that makes sense. The underlying mechanism is simple enough; the firmware constructs tables of system information when the machine boots. The operating system then examines these tables to, for example, learn what hardware is installed in the machine and how it is connected. This is all governed by a specification called ACPI, Advanced Configuration and Power Interface. Microsoft defined a new ACPI table, the Windows Platform Binary Table (WPBT), that contains information about a firmware-embedded executable. When it boots, Windows looks for a WPBT. If it finds one, it copies the executable onto the filesystem and runs it.

The primary purpose of WPBT is the automatic installation of anti-theft software. This kind of software typically does a couple of things that require online connectivity: it can phone home to check if it’s been reported stolen (and brick or otherwise disable itself if it has), and it can phone home to simply report where it is to aid recovery of lost or stolen hardware.
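Since WPBT is just another ACPI table, its presence can be checked from an operating system that ignores it. A minimal sketch assuming a Linux machine, where the kernel exposes firmware ACPI tables under sysfs:

```python
from pathlib import Path

# On Linux, firmware ACPI tables appear under sysfs. A vendor-shipped
# WPBT (Windows Platform Binary Table) shows up here even though Linux
# never executes the binary embedded in it.
WPBT_PATH = Path("/sys/firmware/acpi/tables/WPBT")

def has_wpbt(path: Path = WPBT_PATH) -> bool:
    """Return True if the firmware exposes a WPBT table."""
    return path.is_file()

if has_wpbt():
    print("WPBT present: firmware can inject an executable into Windows")
else:
    print("No WPBT table exposed by this firmware")
```

Booting a live Linux image and running something like this is one way to find out whether your firmware carries an injector, regardless of what the vendor says.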

Instead, Lenovo used it to ensure the preinstalled software that comes with the laptop, which was insecure, would always be present even if the user did a clean install from a Windows disc. That’s pretty scummy behavior. Fortunately Lenovo appears to have stopped doing this, but trust, as far as I’m concerned, has already been breached.

Another Infected Ad Network, Another Reason To Use An Ad Blocker

As many website publishers whine about ad blockers destroying their revenue source, we have yet another story demonstrating that ad blockers are actually security tools. Another ad network was exploited, and the exploit led to malware being distributed to visitors of the Drudge Report (which, in addition to delivering malware, also delivers brain cancer to visitors) and Wunderground:

Millions of people visiting drudgereport.com, wunderground.com, and other popular websites were exposed to attacks that can surreptitiously hijack their computers, thanks to maliciously manipulated ads that exploit vulnerabilities in Adobe Flash and other browsing software, researchers said.

The malvertising campaign worked by inserting malicious code into ads distributed by AdSpirit.de, a network that delivers ads to Drudge, Wunderground, and other third-party websites, according to a post published Thursday by researchers from security firm Malwarebytes. The ads, in turn, exploited security vulnerabilities in widely used browsers and browser plugins that install malware on end-user computers. The criminals behind the campaign previously carried out a similar attack on Yahoo’s ad network, exposing millions more people to the same drive-by attacks.

There are really two lessons to learn from this story. First, run an ad blocker. Second, uninstall Adobe Flash. But some people are unwilling to do the latter so they, even more than the rest of us, need to run a good ad blocker.

Personally, I recommend using a tool such as NoScript to block all JavaScript from domains that haven’t been expressly whitelisted. But that’s a pain in the ass for many people, and ad blockers act as a nice middle ground that blocks most of the crap without requiring a lot of fine-tuning.
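The whitelist model NoScript uses can be sketched in a few lines. A toy Python version; the domains are placeholders, and a real blocker also handles IP literals, ports, and wildcard rules:

```python
from urllib.parse import urlparse

# Scripts run only if their origin domain was explicitly approved.
WHITELIST = {"example.com", "mybank.example"}

def origin_allowed(url: str) -> bool:
    """Allow a whitelisted domain and any of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

print(origin_allowed("https://example.com/app.js"))
print(origin_allowed("https://ads.tracker.example/serve.js"))
```

Default-deny is the key design choice here: a compromised ad network you never approved simply never gets to run code, no matter how many sites embed it.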

Cat And Mouse Game

Since they want to revolutionize the world, you would think libertarians would be hard to beat down. But so many of them, at least in my experience, are willing to roll over if the alternative requires too much work. Computer security is one of those things that tends to require too much work for the average libertarian.

Libertarianism is about wresting power away from the state. One way of doing this is exploiting economics. The more resources you can make the state misallocate, the less it will have available for maintaining and expanding its power. That being the case, cryptography should be every libertarian’s best friend. Cryptography, even when it’s not entirely effective, still forces the state to allocate more resources to its surveillance apparatus. Even data secured with weak cryptography requires more effort to snoop on than plaintext data. When you start using effective cryptography, the amount of resources you force the state to invest increases greatly.

Learning how to use cryptographic tools requires quite a bit of initial effort. Instead of investing their time into learning these tools a lot of libertarians invest their time in creating excuses to justify not learning these tools. One of the excuses I hear frequently is that current cryptographic tools will be broken in a few years anyways.

It’s certainly possible, but that’s not an excuse. Cryptography is a cat and mouse game. As cryptographic tools improve, the tools used to break them must improve, and as those tools improve, cryptographic tools must improve again. The key to this cycle is that the tools to break cryptography have to improve as cryptography improves. In other words, adopting better cryptography forces the state to allocate more of its resources to breaking it. Using effective cryptography today forces the state to invest resources today. If you don’t use it, the state doesn’t have to invest resources to break it and has that much more available to solidify its power further.
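The resource asymmetry is easy to put numbers on. A back-of-the-envelope sketch; the guesses-per-second figure is a deliberately generous assumption about the attacker:

```python
# Encrypting costs the sender milliseconds; exhausting a keyspace costs
# the attacker time exponential in key length. One trillion guesses per
# second is an assumed, very generous attacker capability.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int) -> float:
    """Worst-case years to try every key of the given length."""
    return (2 ** key_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (40, 64, 128):
    print(f"{bits:3d}-bit key: ~{years_to_exhaust(bits):.3g} years")
```

Even a weak 64-bit key costs that attacker months per message, and a 128-bit key costs longer than the age of the universe. Every layer of cryptography, weak or strong, moves the state’s cost curve upward.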

Libertarians have to accept the fact that they’re in a big cat and mouse game anyways. As libertarians work to seize power from the state the state develops new ways to maintain its power. Surveillance is one way it maintains its power and effective cryptography turns it into a cat and mouse game instead of a mouse and mousetrap game. So stop making excuses and start learning about these tools.

Peripherals Are Potentially Dangerous

Some auto insurance companies are exploring programs where customers receive reduced rates in exchange for attaching a dongle to their vehicle’s on-board diagnostics (OBD) port. The dongles use the diagnostic information provided by the vehicle to track your driving habits. If you’re a “good” driver you can get a discount (and if you’re a “bad” driver you’ll probably get charged more down the road). It seems like a good deal for drivers who always obey speed limits, but the OBD port has access to everything in the vehicle, which means any dongle plugged into it could cause all sorts of havoc. Auto insurance companies are, understandably, unlikely to use such dongles for evil, but that doesn’t mean somebody else won’t:

At the Usenix security conference today, a group of researchers from the University of California at San Diego plan to reveal a technique they could have used to wirelessly hack into any of thousands of vehicles through a tiny commercial device: A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency. By sending carefully crafted SMS messages to one of those cheap dongles connected to the dashboard of a Corvette, the researchers were able to transmit commands to the car’s CAN bus—the internal network that controls its physical driving components—turning on the Corvette’s windshield wipers and even enabling or disabling its brakes.

“We acquired some of these things, reverse engineered them, and along the way found that they had a whole bunch of security deficiencies,” says Stefan Savage, the University of California at San Diego computer security professor who led the project. The result, he says, is that the dongles “provide multiple ways to remotely…control just about anything on the vehicle they were connected to.”

I guarantee any savings you get from your insurance company for attaching one of these dongles to your OBD port will be dwarfed by the cost of crashing your vehicle because your brakes were suddenly disabled.

This is a perfect example of two entities with little experience in security compounding their failures to create a possible catastrophe. Automotive manufacturers are finally experiencing the consequences of having paid no attention to the security of their on-board systems. Insurance agencies now have a glimpse of what can happen when you fail to understand the technology you’re working with. A dongle that tracks the driving behavior of customers seems like a really good idea, but if that dongle is remotely accessible and insecure it can be a far bigger danger than benefit.

I wouldn’t attach such a device to my vehicle because it creates a remote connection to the vehicle (if it didn’t, the insurance companies wouldn’t have any reliable way of acquiring the data from the unit), and that is just asking for trouble, as this story shows.

Why I Generally Recommend iOS Over Android

As I’m sure many of you are, I’m the guy friends and family come to when seeking advice on what electronic device to purchase. When somebody asks me whether they should get an iOS or Android device, I generally point them towards iOS. It’s not because Android is bad; it’s a very good operating system. Unfortunately, in most cases, when you get an Android device you’re not so much dealing with Android as with the manufacturer and carrier. Because of their meddling in an otherwise great operating system, it’s difficult to know when or for how long you’ll get updates, and that creates a security nightmare:

Now, though, Android has around 75-80 percent of the worldwide smartphone market—making it not just the world’s most popular mobile operating system but arguably the most popular operating system, period. As such, security has become a big issue. Android still uses a software update chain-of-command designed back when the Android ecosystem had zero devices to update, and it just doesn’t work. There are just too many cooks in the kitchen: Google releases Android to OEMs, OEMs can change things and release code to carriers, carriers can change things and release code to consumers. It’s been broken for years.

The Android ecosystem’s reaction to the “Stagefright” vulnerability is an example of how terrible things are. An estimated 95 percent of Android devices have a remote arbitrary code execution vulnerability, triggered just by receiving a malicious video MMS. Android has other protections in place to stop this vulnerability from running amok on your smartphone, but it’s still really scary. As you might expect, Google, Samsung, and LG have all pledged to “Take Security Seriously” and issue a fix as soon as possible.

Their “fix” is going to be to patch 2.6 percent of all active Android devices. Tops. That’s the percentage of Android devices that are running Android 5.1 today, nearly five months after the OS was released.

This isn’t a new problem. Manufacturers and carriers have been interfering with software updates for phones for ages. My first cell phone was a Palm Treo 700p running on Sprint’s network. Sprint, compared to other carriers who also had the 700p, would take forever to approve updates for the device and sometimes wouldn’t approve them at all. That meant I was stuck with unpatched software much of the time because Palm was at the mercy of Sprint.

Apple refused to allow carriers any control over iOS. Although this is likely part of why the iPhone was available only on AT&T for a long time, the decision paid off in the long run. When a vulnerability is discovered in iOS, Apple can push out the patch and no carrier can interfere. Google, on the other hand, gave almost all control to manufacturers and carriers. Because of that, it can’t push out Android updates to all of its users, and that leaves many Android users with insecure devices.

I hope Google changes this and at least requires manufacturers to use Android’s official update channel in order to gain access to its proprietary apps (which is what most people use Android for anyways). The current situation is untenable, which is sad because Android really is a good operating system.

Why You Want Paranoid People To Comment On Features

When discussing security with the average person, I’m usually accused of being paranoid. I carry a gun in case I have to defend myself? I must be paranoid! I only allow guests at my dwelling to use a separate network isolated from my own? I must be paranoid! I encrypt my hard drive? I must be paranoid! It probably doesn’t help that I live by the motto: just because you’re paranoid doesn’t mean they’re not out to get you.

Paranoid people aren’t given enough credit. They see things that others fail to see. Consider all of the application programming interface (API) calls the average browser makes available to website developers. To the average person, and even to many engineers, these API calls aren’t particularly threatening to user privacy. After all, what does it matter if a website can see how much charge is left in your battery? But a paranoid person would point out that such information is dangerous because it gives website developers more data with which to uniquely identify users:

The battery status API is currently supported in the Firefox, Opera and Chrome browsers, and was introduced by the World Wide Web Consortium (W3C, the organisation that oversees the development of the web’s standards) in 2012, with the aim of helping websites conserve users’ energy. Ideally, a website or web-app can notice when the visitor has little battery power left, and switch to a low-power mode by disabling extraneous features to eke out the most usage.

W3C’s specification explicitly frees sites from needing to ask user permission to discover their remaining battery life, arguing that “the information disclosed has minimal impact on privacy or fingerprinting, and therefore is exposed without permission grants”. But in a new paper from four French and Belgian security researchers, that assertion is questioned.

The researchers point out that the information a website receives is surprisingly specific, containing the estimated time in seconds that the battery will take to fully discharge, as well the remaining battery capacity expressed as a percentage. Those two numbers, taken together, can be in any one of around 14 million combinations, meaning that they operate as a potential ID number. What’s more, those values only update around every 30 seconds, however, meaning that for half a minute, the battery status API can be used to identify users across websites.

The people who developed the W3C specification weren’t paranoid enough. It was ignorant to claim that reporting battery information to websites would have only a minimal impact on privacy, especially when it can be combined with all of the other uniquely identifiable data websites can obtain about users.

Uniquely identifying users becomes easier with each piece of data you can obtain. Being able to obtain battery information alone may not be terribly useful but combining it with other seemingly harmless data can quickly give a website enough data points to identify a specific user. Although that alone may not be enough to reveal their real identity it is enough to start following them around on the web until enough personal information has been tied to them to reveal who they are.
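The combination attack the researchers describe is simple to sketch: hash several individually harmless data points into one identifier. A toy Python version; the reading values are fabricated, and real trackers use far more signals:

```python
import hashlib

def fingerprint(data_points: dict) -> str:
    """Hash sorted key/value pairs into a short, stable identifier."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(data_points.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# Two battery readings alone give ~14 million combinations; paired with
# other "harmless" data points they narrow quickly toward one user.
reading = {
    "battery_level_pct": 87,
    "battery_secs_to_empty": 11_340,
    "user_agent": "ExampleBrowser/1.0",
    "screen": "1920x1080",
}
print(fingerprint(reading))
```

For the half-minute window in which the battery values are stable, any two sites computing a hash like this see the same visitor, with no cookie required.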

The moral of this story is paranoia isn’t properly appreciated.

Ad Blockers Are Security Tools

If you’re not already running an ad blocker, I highly recommend you start. In addition to reducing bandwidth usage, ad blockers protect against malware delivered through ad networks. Because they span so many separate websites, ad networks are common targets for malicious hackers. When attackers find an exploit, they usually use the compromised network to deliver malware to users of the websites that rely on it. Yahoo’s network is the most recent example of this scenario:

June and July have set new records for malvertising attacks. We have just uncovered a large scale attack abusing Yahoo!’s own ad network.

As soon as we detected the malicious activity, we notified Yahoo! and we are pleased to report that they took immediate action to stop the issue. The campaign is no longer active at the time of publishing this blog.

This latest campaign started on July 28th, as seen from our own telemetry. According to data from SimilarWeb, Yahoo!’s website has an estimated 6.9 Billion visits per month making this one of the largest malvertising attacks we have seen recently.

When a single ad network sees almost 7 billion visits per month, it’s easy to see why malware distributors try to exploit it.

Many websites rely on advertisements for revenue, so they understandably get upset when users visit their pages with ad blockers enabled. But their revenue model requires users to put themselves at risk, so I don’t have any sympathy. If you run a website that relies on ads, you should be looking at different revenue models, preferably ones that don’t put your users in harm’s way.