Denial of Service Attacks are Cheap to Perform

How expensive is it to perform a denial of service attack in the real world? More often than not it’s nearly free. The trick is to exploit the target’s own security concerns:

A flight in America was delayed and almost diverted on Tuesday after a passenger changed the name of their wi-fi device to ‘Samsung Galaxy Note 7’.

An entire flight was screwed up by simply changing the SSID of a device.

Why did this simple trick cause any trouble whatsoever? Because the flight crew was more concerned about enforcing the rules than actual security. There was no evidence of a Galaxy Note 7 being onboard. Since anybody can change their device’s SSID to anything they want, the presence of the SSID “Samsung Galaxy Note 7” shouldn’t have been enough to cause any issues. But the flight crew allowed that, at best, flimsy evidence to spur them into a hunt for the device.

This is why performing denial of service attacks in the real world is often very cheap. Staffers, such as flight crew, seldom have any real security training, so they tend to overreact. They’re trying to cover their asses (and I don’t mean that as an insult; if they don’t cover their asses they very well could lose their jobs), which means you have an easy exploit sitting there for you.

Russia

Russia hasn’t occupied this much airtime on American news channels since the Cold War. But everywhere you look it’s Russia this and Russia that. Russia is propping up the Assad regime in Syria! Russia rigged the election! Russia stole my lunch money!

Wait, let’s step back to the second one. A lot of charges are being made that Russia “hacked” the election, which allowed Trump to win. And there’s some evidence that shenanigans were taking place regarding the election:

Georgia’s secretary of state says the state was hit with an attempted hack of its voter registration database from an IP address linked to the federal Department of Homeland Security.

Well that’s embarrassing. Apparently the Department of Motherland Fatherland Homeland Security (DHS) is a Russian agency. Who would have guessed?

Could Russia have influenced the election? Of course. We live in an age of accessible real-time global communications. Anybody could influence anybody else’s voting decision. A person in South Africa could influence a voter in South Korea to opt for one choice over another. This global communication system also means that malicious hackers in one nation could compromise any connected election equipment in another country.

However, the biggest check against Russian attempts to rig the election is all of the other forces that would be trying to do the exact same thing. People have accused both the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) (admittedly, rigging elections is what the CIA does) of trying to rig the election. Likewise, there are some questions about what exactly the DHS was doing in regards to Georgia. Major media companies were working overtime to influence people’s voting decisions. Countries in Europe had a vested interest in the election going one way or another, as did pretty much every other country on Earth.

I have no evidence one way or another but that’s never stopped me from guessing. My guess as to why these accusations against Russia are being made so vehemently is that a lot of voters are looking for answers as to why Trump won but are unwilling to consider that their preferred candidate was terrible. When you convince yourself that the candidate you oppose is Satan incarnate then you lose the ability to objectively judge your own candidate because in your head it’s now a battle between evil and good, not a battle between two flawed human beings.

Security Implications of Destructive Updates

It should be becoming more and more apparent that you don’t own your smartphone. Sure, you paid for it and you physically control it but if the device itself can be disabled by a third-party without your authorization can you really say that you own it? This is a question Samsung Galaxy Note 7 owners should be asking themselves right now:

Samsung’s Galaxy Note 7 recall in the US is still ongoing, but the company will release an update in a couple of weeks that will basically force customers to return any devices that may still be in use. The company announced today that a December 19th update to the handsets in the States will prevent them from charging at all and “will eliminate their ability to work as mobile devices.” In other words, if you still have a Note 7, it will soon be completely useless.

One could argue that this ability to push an update to a device to disable it is a good thing in the case of the Note 7 since the device has a reputation for lighting on fire. But it has rather frightening ownership and security implications.

The ownership implications should be obvious. If the device manufacturer can disable your device at its whim, then you can’t really claim to own it. You can only claim that you’re borrowing it for as long as the manufacturer deems you worthy of doing so. However, in regards to ownership, nothing has really changed. Since copyright and patent laws were applied to software, your ability to own your devices has been basically nonexistent.

The security implications may not be as obvious. Sure, the ability for a device manufacturer to push implicitly trusted software to its devices carries risks, but the alternative, relying on users to apply security updates themselves, also carries risks. But this particular update being pushed out by Samsung has the ability to destroy users’ trust in manufacturer updates. Many users are currently happy to allow their devices to update themselves automatically because those updates tend to improve the device. It only takes a single bad update to make those users unhappy with automatic updates. If they become unhappy with automatic updates they will seek ways of disabling updates.

The biggest weakness in any security system tends to be the human component. Part of this is due to the difficulty of training humans to be secure. It takes a great deal of effort to train somebody to follow even basic security principles but it takes very little to undo all of that training. A single bad experience is all that generally stands between that effort and having all of it undone. If Samsung’s strategy becomes more commonplace I fear that years of getting users comfortable with automatic updates may be undone and we’ll be looking at a world where users jump through hoops to disable updates.

Degrees of Anonymity

When a service describes itself as anonymous how anonymous is it? Users of Yik Yak may soon have a chance to find out:

Yik Yak has laid off 70 percent of employees amid a downturn in the app’s growth prospects, The Verge has learned. The three-year-old anonymous social network has raised $73.5 million from top-tier investors on the promise that its young, college-age network of users could one day build a company to rival Facebook. But the challenge of growing its community while moving gradually away from anonymity has so far proven to be more than the company could muster.

[…]

But growth stalled almost immediately after Sequoia’s investment. As with Secret before it, the app’s anonymous nature created a series of increasingly difficult problems for the business. Almost from the start, Yik Yak users reported incidents of bullying and harassment. Multiple schools were placed on lockdown after the app was used to make threats. Some schools even banned it. Yik Yak put tools in place designed to reduce harassment, but growth began to slow soon afterward.

Yik Yak claimed it was an anonymous social network and on the front end the data did appear anonymous. However, the backend may be an entirely different matter. How much information did Yik Yak regularly keep about its users? Internet Protocol (IP) addresses, Global Positioning System (GPS) coordinates, unique device identifiers, phone numbers, and much more can be easily collected and transmitted by an application running on your phone.
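
To make the front end versus back end distinction concrete, below is a minimal sketch of how a service that looks anonymous to its users can still record identifying metadata with every post. It’s written in Python using only the standard library; the endpoint, the X-Device-Id header, and the field names are all hypothetical and aren’t meant to represent Yik Yak’s actual backend.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AnonymousPostHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the "anonymous" post submitted by the client application.
        length = int(self.headers.get("Content-Length", 0))
        post = json.loads(self.rfile.read(length) or b"{}")

        # Metadata the server sees whether or not it ever shows it to other users.
        record = {
            "message": post.get("message", ""),            # the visible, "anonymous" content
            "ip": self.client_address[0],                  # always available to the server
            "user_agent": self.headers.get("User-Agent"),  # sent by the client
            "device_id": self.headers.get("X-Device-Id"),  # hypothetical app-supplied header
            "gps": post.get("location"),                   # only if the app has permission
        }
        print("stored:", record)  # a real service would write this to a database

        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"posted anonymously (or so it appears)\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), AnonymousPostHandler).serve_forever()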

Bankruptcy is looking like a very real possibility for Yik Yak. If the company ends up filing then its assets will be liquidated. In this day and age user data is considered a valuable asset. Somebody will almost certainly end up buying Yik Yak’s user data and when they do they may discover that it wasn’t as anonymous as users may have thought.

Not all forms of anonymity are created equal. If you access a web service without using some kind of anonymity service, such as Tor or I2P, then the service has some identifiable information already such as your IP address and a browser fingerprint. If you access the service through a phone application then that application may have collected and transmitted your phone number, contacts list, and other identifiable information (assuming, of course, the application has permission to access all of that data, which it may not depending on your platform and privacy settings). While on the front end of the service you may appear to be anonymous, the same may not hold true for the back end.

This issue becomes much larger when you consider that even if your data is currently being held by a benevolent company that does care about your privacy, that may not always be the case. Your data is just a bankruptcy filing away from falling into the hands of somebody else.

Secure E-Mail is an Impossibility

A while back I wrote a handful of introductory guides on using Pretty Good Privacy (PGP) to encrypt the content of your e-mails. They were well-intentioned guides. After all, everybody uses e-mail so we might as well try to secure it as much as possible, right? What I didn’t stop to consider was the fact that PGP is a dead end technology for securing e-mails not because the initial learning curve is steep but because the very implementation itself is flawed.

I recently came across a blog post by Filippo Valsorda that sums up the biggest issue with PGP:

But the real issues I realized are more subtle. I never felt confident in the security of my long term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long term key is as secure as the minimum common denominator of your security practices over its lifetime. It’s the weak link.

Worse, long term keys patterns like collecting signatures and printing fingerprints on business cards discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. It actually encourages expanding the attack surface by making backups of the key.

PGP, in fact the entire web of trust model, assumes that your private key will be more or less permanent. This assumption leads to a lot of implementation issues. What happens if you lose your private key? If you have an effective backup system you may laugh at this concern but lost private keys are the most common issue I’ve seen PGP users run into. When you lose your key you have to generate a new one and distribute it to everybody you communicate with. In addition to that, you also have to re-sign people’s existing keys. But worst of all, without your private key you can’t even revoke the corresponding published public key.

Another issue is that you cannot control the security practices of other PGP users. What happens when somebody who signed your key has their private key compromised? Their signature, which is used by others to decide whether or not to trust you, becomes meaningless because their private key is no longer confidential. Do you trust the security practices of your friends enough to make your own security practices reliant on them? I sure don’t.

PGP was a jury-rigged solution to provide some security for e-mail. Because of that it has many limitations. For starters, while PGP can be used to encrypt the contents of a message, it cannot encrypt the e-mail headers or the subject line. That means anybody snooping on the e-mail knows who the communicating parties are, what the subject is, and any other information stored in the headers. As we’ve learned from Edward Snowden’s leaks, metadata is very valuable. E-mail was never designed to be a secure means of communicating and can never be made secure. The only viable solution for secure communications is to find an alternative to e-mail.
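
To illustrate the metadata problem, here’s a small Python sketch that builds a PGP/MIME-style message using only the standard library. The armored block is just a placeholder standing in for real ciphertext and the addresses are made up; the point is that everything around the encrypted body still travels in plaintext.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"           # visible to every server that relays the mail
msg["To"] = "bob@example.com"               # visible
msg["Subject"] = "Meeting location change"  # visible, and often the most revealing part
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext placeholder...\n"
    "-----END PGP MESSAGE-----\n"
)

# Only the armored block is protected; everything above it is plaintext metadata.
print(msg.as_string())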

With that said, PGP itself isn’t a bad technology. It’s still useful for signing binary packages, encrypting files for transferring between parties, and other similar tasks. But for e-mail it’s at best a bandage to a bigger problem and at worst a false sense of security.

A Beginner’s Guide to Privacy and Security

I’m always on the lookout for good privacy and security guides for beginners. Ars Technica posted an excellent beginner’s guide yesterday. It covers the basics (installing operating system and browser updates, enabling two-factor authentication, and using a password manager so you can use strong, unique passwords for your accounts) that even less computer savvy users can follow to improve their security.

If you’re not sure where to begin when it comes to security and privacy take a look at Ars’ guide.

It was Going to Happen Eventually

Whenever there is an attack on a school or college campus most people tend to focus on the tool used by the attacker. So far we’ve been fortunate that a majority of these attackers have preferred firearms to explosives, which have the potential to cause far more damage and are only addressed in a limited capacity by current security measures. Unfortunately, yesterday an attacker decided to utilize an automobile and knife to attack the Ohio State University:

Police are investigating whether an attack at Ohio State University which left 11 injured was an act of terror.

Abdul Razak Ali Artan, 18, rammed his car into a group of pedestrians at the college and then began stabbing people before police shot him dead on Monday.

This is the second major incident where a knife was one of the weapons used by the attacker. A few months ago a guy went on a rampage with a knife in St. Cloud (and the police were good enough to lock down the mall so people were trapped inside with the attacker). But this is the first time, at least in recent history, that this type of attack was perpetrated in part with one of the most dangerous commonly available weapons, an automobile.

The amount of energy something has is based on its mass and velocity. A 230 grain .45 bullet traveling at 900 feet per second will give you 414 foot pounds of energy. A 124 grain 9mm bullet traveling at 1,200 feet per second will give you roughly 396 foot pounds of energy. A 1.5 ton vehicle moving at 30 miles per hour will give you 90,259 foot pounds of energy. As you can see, a vehicle can deliver a tremendous amount of energy and therefore can deliver a tremendous amount of damage. On top of that a vehicle provides the driver with some amount of protection against police weapons (in part because it’s capable of moving fast, in part because part of the driver is concealed, and in part because the engine block can protect the driver from a lot of types of commonly used ammunition). And then there’s the fact that an automobile contains combustible fuel.
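
For anybody who wants to check those numbers, here’s a short Python sketch of the arithmetic. It uses the standard kinetic energy formula, KE = 1/2 × m × v², with pounds converted to slugs so the result comes out in foot pounds; the function names and constants are mine, not from any particular ballistics reference.

G = 32.174           # ft/s^2, standard gravity, used to convert pounds to slugs
GRAINS_PER_LB = 7000

def bullet_energy_ftlb(grains: float, fps: float) -> float:
    """Muzzle energy in foot pounds for a bullet of the given weight and velocity."""
    mass_slugs = grains / GRAINS_PER_LB / G
    return 0.5 * mass_slugs * fps ** 2

def vehicle_energy_ftlb(pounds: float, mph: float) -> float:
    """Kinetic energy in foot pounds for a vehicle of the given weight and speed."""
    mass_slugs = pounds / G
    fps = mph * 5280 / 3600  # miles per hour to feet per second
    return 0.5 * mass_slugs * fps ** 2

print(round(bullet_energy_ftlb(230, 900)))   # ~414 ft-lbs (.45)
print(round(bullet_energy_ftlb(124, 1200)))  # ~396 ft-lbs (9mm)
print(round(vehicle_energy_ftlb(3000, 30)))  # ~90,259 ft-lbs (1.5 ton vehicle at 30 mph)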

So far people have been fortunate that most of these attackers have opted for firearms on foot rather than using a vehicle. Even in this case the amount of damage the attacker could have caused was reduced because he opted to exit the vehicle and continue his rampage on foot with a knife.

Fortunately, it doesn’t appear as though the attacker had much success. He did manage to injure 11 people but so far it appears that he didn’t kill anybody. However, if the next attacker decides to study previous attacks to learn from them they could leave a bodycount in their wake. So the big question is, what can be done?

Of course colleges can try to hinder automobiles from entering the campus by erecting concrete pillars akin to those in front of many stores. But maintenance and delivery people often need to get vehicles on campus so some means of access has to remain. And blocking vehicle traffic will only cause an attacker to seek another tool. The only real defense against these kinds of attacks is a decentralized response system. One of the biggest weaknesses that allows these attacks to meet a high degree of success is the highly centralized security measures currently in place. When one of these attacks starts an alert is sent to the police. The police then need to get to the location of the attack, find the attacker, and engage them. This usually means that the attacker has several minutes of free rein. The faster the attacker can be engaged the less time they have to perpetrate their indiscriminate attack. Any further centralized security measures will meet with limited success. At most they will force an attacker to change their strategy to something not addressed by the centralized system.

Obviously legalizing the carrying of firearms on campus is a good start. Permit holders add a great deal of uncertainty for attackers because anybody could potentially engage them. Since permit holders don’t wear obvious uniforms an attacker also can’t know which individuals to take out first (and by surprise so the uniformed security person doesn’t have a chance to respond). Another thing that can be done to make these attacks more difficult is getting rid of the shelter in place concept. Sheltering in place can be an effective defensive strategy if the people sheltering have a means of defending themselves. If they don’t then they’re basically fish in a barrel if the attacker finds them and gains entry to their shelter (although in the case of a vehicle sheltering in place can be effective, especially in a relatively hardened building like those on many college campuses).

LastPass Opts to Release Ad Supported “Free” Version

My hatred of using advertisements to fund “free” services is pretty well known at this point. However, it seems that a lot of people prefer the business model where they’re the product instead of the customer. Knowing that, and knowing that password reuse is still a significant security problem for most people, I feel the need to inform you that LastPass, which still remains a solid password manager despite being bought by LogMeIn, now has an ad-supported “free” version:

I’m thrilled to announce that, starting today, you can use LastPass on any device, anywhere, for free. No matter where you need your passwords – on your desktop, laptop, tablet, or phone – you can rely on LastPass to sync them for you, for free. Anything you save to LastPass on one device is instantly available to you on any other device you use.

Anything that may convince more people to start using password managers is a win in my book. People who don’t utilize password managers tend to reuse the same credentials on multiple sites, which significantly increases the damage that a password database leak can cause. Furthermore, using a password manager lowers the hurdle for using strong passwords. Instead of being limited to passwords they can memorize, users can rely on long strings of pseudorandom characters, which means that if a password database is breached, the time it takes to recover their password from its stored hash is significantly increased (because the attacker has to rely on brute force instead of a time saving method such as rainbow tables).
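
As an illustration, here’s a minimal Python sketch of what a password manager is doing for you behind the scenes: generating a long, pseudorandom, per-site password using the standard library’s secrets module. The function name and the example site names are mine; real password managers obviously do a lot more (storage, syncing, encryption) than this.

import secrets
import string

def generate_password(length: int = 32) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, so a breach of one database exposes nothing else.
for site in ("example-bank.com", "example-mail.com", "example-forum.com"):
    print(site, generate_password())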

If money has been the only thing that has held you back from using a password manager you should take a look at LastPass’s “free” version. While ads are a potential vector for malware they can be blocked with an ad blocker and the risk of being infected through ads is significantly less than the risks involved in not using a password manager.

More Malware Spreading Through Advertising

My biggest gripe with the advertisement based model most Internet services have opted to use is that ads can easily be used to spread malware. Because of that I view ad blockers as security software more than anything. And the Internet seems to enjoy proving my point every few weeks:

As a security researcher, it’s always exciting to discover new vulnerabilities and techniques used by malicious actors to deliver malware to unsuspecting users. These moments are actually quite rare, and it’s increasingly frustrating from a researcher’s perspective to watch the bad guys continue to use the same previously exposed methods to conduct their malicious operations.

Today’s example is no different. We discovered a malvertising campaign on Google AdWords for the search term “Google Chrome”, where unsuspecting MacOS users were being tricked into downloading a malicious installer identified as ‘OSX/InstallMiez’ (or ‘OSX/InstallCore’).

In this case the malware didn’t spread through a browser exploit. Instead it exploited the weakest component of any security system: the human. The malware developers bought ads from Google so that their link, which was cleverly titled “Get Google Chrome”, would appear at the very top of the page. This malware was targeted at macOS users so if you were a Windows user and clicked on the link you’d be redirected to a nonexistent page but macOS users would be taken to a page to download the malware installer. After running the installer the malware opens a browser page to a scareware site urging you to “clean your Mac” and then downloads more malware that opens automatically and urges the user to copy it to their Applications folder.

As operating systems have become more secure malware producers have begun relying on exploiting the human component. Unfortunately, it’s difficult to train mom, dad, grandpa, and grandma on proper computer security practices. Explaining the difference between Google advertisement links and Google search result links to your grandparents is often a hopeless cause. The easiest way of dealing with that situation is to hide the ads, and therefore any malware that tries to spread via ads, from their view and ad blockers are the best tools for that job.

Unfortunately, the advertisement based model isn’t going away anytime soon. Too many people think that web services are free because, as Bastiat explained way back when, they’re not seeing the unseen factors. Since they’re not paying money to access a service they think that the service is free. What remains unseen are the other costs such as being surveilled for the benefit of advertisers, increased bandwidth and battery usage for sending and displaying advertisements, the risk of malware infecting their system via advertisements, etc. So long as the advertisement based model continues to thrive you should run ad blockers on all of your devices to protect yourself.

The Weakest Link in a Security System is Usually the Human Component

No matter how secure you make your network you will always have one significant weakness: the users. Humans are terrible at risk management, and if somebody doesn’t understand the risks involved in specific actions it is almost impossible to train them not to do those actions. Consider phishing scams. They often rely on e-mails that look like they’re from a specific site, say Gmail, that include a scary message about your account being unlawfully accessed and a link to a site where you can log in to change your password. Of course that link actually goes to a site controlled by the phisher and exists solely to steal your password so they can log into your account. But most people don’t understand the risks of trusting any official-looking e-mail, visiting whatever link it provides, and entering their password, so training people not to fall for phishing scams is a significant challenge.
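
One of the few technical mitigations that helps here is flagging links whose visible text doesn’t match where they actually point, which is the classic phishing tell. Below is a rough Python sketch of that heuristic using only the standard library; the class name and the example link are mine, and this is an illustration of the idea rather than a complete phishing filter.

import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Flag links whose visible text names a domain the href doesn't actually belong to."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag != "a" or self._href is None:
            return
        text = "".join(self._text)
        href_domain = urlparse(self._href).netloc.lower()
        # Domains mentioned in the visible link text, e.g. "accounts.google.com".
        for claimed in re.findall(r"[\w-]+(?:\.[\w-]+)+", text.lower()):
            if href_domain != claimed and not href_domain.endswith("." + claimed):
                self.suspicious.append((text, self._href))
                break
        self._href = None

# Example: the text claims to be a Google login page but the href points elsewhere.
checker = LinkChecker()
checker.feed('<a href="http://accounts.google.com.evil.example/login">'
             'https://accounts.google.com</a>')
print(checker.suspicious)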

Even people who are in positions where they should expect to be targets of hackers fall for phishing scams:

On March 19 of this year, Hillary Clinton’s campaign chairman John Podesta received an alarming email that appeared to come from Google.

The email, however, didn’t come from the internet giant. It was actually an attempt to hack into his personal account. In fact, the message came from a group of hackers that security researchers, as well as the US government, believe are spies working for the Russian government. At the time, however, Podesta didn’t know any of this, and he clicked on the malicious link contained in the email, giving hackers access to his account.

While the United States government and some security researchers point the finger at Russia it should be noted that this kind of scam is trivial to execute. So trivial that anybody could do it. For all we know the e-mail could have been sent by a 13-year-old in Romania who wanted to cause a bunch of chaos for shits and giggles.

But speculating about who did this at this point is unimportant. What is important is the lesson that can be taught, which is that even people in high positions, people who should expect to be targets for malicious hackers, screw up very basic security practices.

If you want to make waves in the security field I suggest investing your time into researching ways to deal with the human component of a security system. Anybody who finds a more effective way to either train people or reduce the damage they can do to themselves (and by extension whatever organizations they’re involved in) while still being able to do their jobs will almost certainly gain respect, fame, and fortune.