FBI Found Nothing Significant On Farook’s iPhone

After all that fuss over Farook’s iPhone the Federal Bureau of Investigation (FBI) finally managed to unlock it without conscripting Apple. So did the agency find information that will allow it to arrest the next terrorists before they can attack? Did the phone contain the secret to destroying the Islamic State? No and no. It turns out, as most people expected, there wasn’t anything significant on the phone:

A law enforcement source tells CBS News that so far nothing of real significance has been found on the San Bernardino terrorist’s iPhone, which was unlocked by the FBI last month without the help of Apple.

It was stressed that the FBI continues to analyze the information on the cellphone seized in the investigation, senior investigative producer Pat Milton reports.

All that hullabaloo over nothing. This is a recurring trend with the State. It makes a big stink about something to justify a demand for additional powers. Eventually it’s revealed that the justification for those additional powers was nothing more than fear mongering. Why anybody takes the State seriously is beyond me.

Free Apps Aren’t Free But Dumb Phones Won’t Protect Your Privacy

I have a sort of love/hate relationship with John McAfee. The man has a crazy history and isn’t so far up his own ass that he can’t recognize it and poke fun at it. He’s also a very nonjudgmental person, which I appreciate. With the exception of Vermin Supreme, I think McAfee is currently the best person running for president. However, his views on security seem to be stuck in the previous decade at times. This wouldn’t be so bad but he seems to take every opportunity to speak on the subject and his statements are often taken as fact by many. Take the recent video of him posted by Business Insider:

It opens strong. McAfee refutes something that’s been a pet peeve of mine for a while: the mistaken belief that there’s such a thing as free. TANSTAAFL, there ain’t no such thing as a free lunch, is a principle I wish everybody learned in school. If an app or service is free then you’re the product and the app only exists to extract salable information from you.

McAfee also discusses the surveillance threat that smartphones pose, which should receive more airtime. But then he follows up with a ridiculous statement. He says that he uses dumb phones when he wants to communicate privately. I hear a lot of people spout this nonsense and it’s quickly becoming another pet peeve of mine.

Because smartphones make it easy to install applications, the threat of malware exists. In fact there have been several cases of malware making its way into both Google’s and Apple’s app stores. That doesn’t make smartphones less secure than dumb phones though.

The biggest weakness in dumb phones as far as privacy is concerned is their complete inability to encrypt communications. Dumb phones rely on standard cellular protocols for both making phone calls and sending text messages. In both cases the only encryption that exists is between the device and the cell tower. And the encryption there is weak enough that any jackass with an IMSI-catcher can render it meaningless. Furthermore, because the data is available in plaintext to the phone companies, it is likely collected by the National Security Agency (NSA) and is always available to law enforcers via a court order.

The second biggest weakness in dumb phones is the general lack of software updates. Dumb phones still run software, which means they can still have security vulnerabilities and are therefore also vulnerable to malware. How often do dumb phone manufacturers update software? Rarely, which means security vulnerabilities remain unpatched for extensive periods of time and oftentimes indefinitely.

Smartphones can address both of these weaknesses. Encrypted communications are available on most smartphone platforms. Apple includes iMessage, which utilizes end-to-end encryption. Signal and WhatsApp, two applications that also utilize end-to-end encryption, are available for both iOS and Android (WhatsApp is available for Windows Phone as well). Unless your communications are end-to-end encrypted they are not private. With smartphones you can have private communications; with dumb phones you cannot.
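To make concrete what end-to-end encryption buys you, here is a minimal sketch using the PyNaCl library. This is not how iMessage, Signal, or WhatsApp are actually implemented (they use considerably more elaborate protocols with key ratcheting and verification) but it demonstrates the core property: the keys live only on the endpoints, so everything in between, carrier included, sees nothing but ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; only public keys are exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message that only Bob can decrypt.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Anybody relaying this message sees only opaque bytes.
print(ciphertext.hex())

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'
```

Contrast that with a dumb phone’s text message, where the carrier holds the plaintext and can hand it to anybody with a court order.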

Smartphone manufacturers also address the problem of security vulnerabilities by releasing periodic software updates (although access to timely updates can vary from manufacturer to manufacturer for Android users). When a vulnerability is discovered it usually doesn’t remain unpatched forever.

When you communicate using a smartphone there is the risk of being surveilled. When you communicate with a dumb phone there is a guarantee of being surveilled.

As I said, I like a lot of things about McAfee. But much of the security advice he gives is flawed. Don’t make the mistake of assuming he’s correct on security issues just because he was involved in the antivirus industry ages ago.

How The Government Protects Your Data

Although I oppose both public and private surveillance I especially loathe public surveillance. Any form of surveillance results in data about you being stored and oftentimes that data ends up leaking to unauthorized parties. When the data is leaked from a private entity’s database I at least have some recourse. If, for example, Google leaks my personal information to unauthorized parties I can choose not to use the service again. The State is another beast entirely.

When the State leaks your personal information your only recourse is to vote harder, which is the same as saying your only recourse is to shut up and take it. This complete lack of consequences for failing to implement proper security is why the State continues to ignore security:

FRANKFORT, Ky. (AP) — Federal investigators found significant cybersecurity weaknesses in the health insurance websites of California, Kentucky and Vermont that could enable hackers to get their hands on sensitive personal information about hundreds of thousands of people, The Associated Press has learned. And some of those flaws have yet to be fixed.

[…]

The GAO report examined the three states’ systems from October 2013 to March 2015 and released an abbreviated, public version of its findings last month without identifying the states. On Thursday, the GAO revealed the states’ names in response to a Freedom of Information request from the AP.

According to the GAO, one state did not encrypt passwords, potentially making it easy for hackers to gain access to individual accounts. One state did not properly use a filter to block hostile attempts to visit the website. And one state did not use the proper encryption on its servers, making it easier for hackers to get in. The report did not say which state had what problem.

Today hashing passwords, which is what reports like this usually mean by encrypting them, is something even beginning web developers understand is necessary (even if they often fail to do it properly). Most content management systems do this by default and most web development frameworks do this if you use their built-in user management features. The fact that a state paid developers to implement its health insurance exchange and didn’t require passwords to be hashed is ridiculous.
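For reference, here is roughly what getting it right looks like; a minimal sketch using Python’s bcrypt library, which handles per-user salting and slow hashing for you. Only the hash is stored, so a leaked database doesn’t directly hand attackers anybody’s password.

```python
import bcrypt

# At signup: hash the password. gensalt() produces a random salt and
# embeds it, along with the cost factor, in the resulting hash.
stored_hash = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())

# At login: re-hash the submitted password and compare.
print(bcrypt.checkpw(b"correct horse battery staple", stored_hash))  # True
print(bcrypt.checkpw(b"letmein", stored_hash))                       # False
```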

Filtering hostile attempts to visit a website is a very subjective requirement. What constitutes a hostile attempt to visit a website? Some websites try to block all Tor users under the assumption that Tor has no legitimate uses, a viewpoint I strongly disagree with. Other websites utilize blacklists that contain IP addresses of supposedly hostile devices. These blacklists can be very hit or miss and often block legitimate devices. Without knowing what the Government Accountability Office (GAO) considered effective filtering I’ll refrain from commenting.

I’m also not entirely sure what the GAO means by using proper encryption on servers. Usually I’d assume it means a lack of HTTP connections secured by TLS. But that doesn’t necessarily impact a malicious hacker’s ability to get into a web server. Still, it’s not uncommon for government websites to either not implement TLS or implement it improperly, which puts user data at risk.

But what happens next? If we were talking about websites operated by private entities I’d believe the next step would be fixing the security holes. Since the websites are operated by government entities though it’s anybody’s guess what will happen next. There will certainly be hearings where politicians will try to point the finger at somebody for these security failures but finger pointing doesn’t fix the problem and governments have a long history of never actually fixing problems.

FBI Claims Its Method Of Accessing Farook’s Phone Doesn’t Work On Newer iPhones

So far the Federal Bureau of Investigation (FBI) hasn’t given any specific details on how it was able to access the data on Farook’s phone. But the agency’s director did divulge a bit of information regarding the scope of the method:

The FBI’s new method for unlocking iPhones won’t work on most models, FBI Director Comey said in a speech last night at Kenyon University. “It’s a bit of a technological corner case, because the world has moved on to sixes,” Comey said, describing the bug in response to a question. “This doesn’t work on sixes, doesn’t work on a 5s. So we have a tool that works on a narrow slice of phones.” He continued, “I can never be completely confident, but I’m pretty confident about that.” The exchange can be found at 52:30 in the video above.

Since he specifically mentioned the iPhone 5S, 6, and 6S it’s possible the Secure Enclave feature present in those phones thwarts the exploit. This does make sense assuming the FBI used a method to brute force the password. On the iPhone 5C the user password is combined with a hardware key to decrypt the phone’s storage. Farook used a four-digit numerical password, which means there were only 10,000 possible passwords. With such a small pool of possible passwords it would have been trivial to brute force the correct one. What stood in the way were two iOS security features. The first is a delay between password attempts that increases with each incorrect password. The second is a feature that erases the decryption keys (which effectively renders all data stored on the phone useless) after 10 incorrect passwords have been entered.
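To put 10,000 possibilities in perspective, here is a toy sketch. The key derivation below is a stand-in of my own (real iPhones entangle the passcode with a device-unique hardware key, and the actual parameters aren’t public), but it shows how quickly a four-digit space falls once nothing throttles the guesses:

```python
import hashlib
import os
import time

# Stand-in for the phone's key derivation: PBKDF2 over the passcode
# plus a fixed salt playing the role of the hardware key.
SALT = os.urandom(16)

def derive_key(pin: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, 10_000)

target = derive_key("7391")  # the "unknown" passcode

start = time.time()
for i in range(10_000):
    candidate = f"{i:04d}"
    if derive_key(candidate) == target:
        print(f"recovered {candidate} in {time.time() - start:.1f} seconds")
        break
```

Even this unoptimized Python finishes in on the order of a minute; dedicated hardware would be far faster.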

On the 5C these features are implemented entirely in software. If an attacker can bypass the software and combine passwords with the hardware key they can try as many passwords as they want without any artificial delay and without triggering the erasure of the decryption keys. On the iPhone 5S, 6, and 6S the Secure Enclave coprocessor handles all cryptographic operations, including enforcing a delay between incorrect passwords. Although this is entirely speculation, I’m guessing the FBI found a way to bypass the software security features on Farook’s phone and the method wouldn’t work on any device utilizing Secure Enclave.

Even though Secure Enclave makes four-digit numerical passwords safer they’re still dependent on outside security measures to protect against brute force attacks. I encourage everybody to set a complex password on their phone. On iPhones equipped with Touch ID this is a simple matter since you only have to enter your password after rebooting the phone or after not unlocking it for 48 hours. Otherwise you can use your fingerprint to unlock the phone (just make sure you reboot the phone, which you can do at any time by holding the power and home buttons down for a few seconds, if you interact with law enforcement so they can’t force you to unlock the phone with your fingerprint). With a strong password brute force attacks become infeasible even if the software or hardware security enhancements are bypassed.
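Some rough arithmetic shows why. Assume, purely for illustration, that an attacker who has bypassed the delay and wipe protections can test 100 passcodes per second on the device:

```python
# Back-of-the-envelope time to exhaust various passcode spaces.
GUESSES_PER_SECOND = 100  # assumed on-device rate, illustration only

for name, keyspace in [
    ("4-digit PIN", 10 ** 4),
    ("6-digit PIN", 10 ** 6),
    ("10-char alphanumeric", 62 ** 10),
]:
    seconds = keyspace / GUESSES_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{name}: {keyspace:.2e} passcodes, ~{years:.2e} years to exhaust")
```

At that rate a four-digit PIN falls in under two minutes while a ten-character alphanumeric password holds out for hundreds of millions of years.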

A New Hero Arises

Setting aside my general hatred of intellectual property, I want to discuss an especially heinous abuse of intellectual property laws. A lot of research done in the United States is funded by tax dollars. We’re told this is necessary because the research wouldn’t be done if it were left to the market and that we shouldn’t complain because the research benefits all of us. But the research fueled by tax funding seldom benefits all of us because the findings are locked away behind the iron curtain of publisher paywalls. We may have been forced to fund it but we don’t get to read it unless we’re willing to pay even more to get a copy of the research papers.

Aaron Swartz fought against this and was ruthlessly pursued by the State for his actions. Now that he has left us a new hero has risen to answer the call. Alexandra Elbakyan is the creator and operator of Sci-Hub, a website created to distribute research papers currently secured behind paywalls:

But suddenly in 2016, the tale has new life. The Washington Post decries it as academic research’s Napster moment, and it all stems from a 27-year-old bioengineer turned Web programmer from Kazakhstan (who’s living in Russia). Just as Swartz did, this hacker is freeing tens of millions of research articles from paywalls, metaphorically hoisting a middle finger to the academic publishing industry, which, by the way, has again reacted with labels like “hacker” and “criminal.”

Meet Alexandra Elbakyan, the developer of Sci-Hub, a Pirate Bay-like site for the science nerd. It’s a portal that offers free and searchable access “to most publishers, especially well-known ones.” Search for it, download, and you’re done. It’s that easy.

“The more known the publisher is, the more likely Sci-Hub will work,” she told Ars via e-mail. A message to her site’s users says it all: “SCI-HUB…to remove all barriers in the way of science.”

I fear many libertarians will be quick to dismiss Alexandra because she espouses anti-capitalist ideals. But it’s important to focus on her actions, which are very libertarian indeed. She is basically playing the role of Robin Hood by liberating stolen wealth from the State and returning it to the people. The money has already been spent so it cannot be retrieved, but what it bought, the research, is still there and should be returned to the people as compensation for the original theft. That is all freely releasing tax-funded research is, and for her part Alexandra should be treated as the hero she is.

Don’t Stick Just Anything In Your Port

Universal Serial Bus (USB) flash drives are ubiquitous and it’s easy to see why. For a few dollars you can get a surprising amount of storage in a tiny package that can be connected to almost any computer. Their ubiquity is also the reason they annoy me. A lot of people wanting to give me a file to work on will hand me a USB drive, to which I respond, “E-mail it to me.” USB drives are convenient for moving files between local computers but they’re also hardware components, which means you can do even more malicious things with them than with malicious software alone.

The possibility of using malicious USB drives to exploit computers isn’t theoretical. And it’s a good vector for targeted malware since the devices are cheap and a lot of fools will plug any old USB drive into their computer:

Using booby-trapped USB flash drives is a classic hacker technique. But how effective is it really? A group of researchers at the University of Illinois decided to find out, dropping 297 USB sticks on the school’s Urbana-Champaign campus last year.

As it turns out, it really works. In a new study, the researchers estimate that at least 48 percent of people will pick up a random USB stick, plug it into their computers, and open files contained in them. Moreover, practically all of the drives (98 percent) were picked up or moved from their original drop location.

Very few people said they were concerned about their security. Sixty-eight percent of people said they took no precautions, according to the study, which will appear in the 37th IEEE Symposium on Security and Privacy in May of this year.

Leaving USB drives lying around for an unsuspecting sucker to plug into their computer is an evolution of the old trick of leaving a floppy disk labeled “Payroll” lying around. Eventually somebody’s curiosity will get the better of them and they’ll plug it into their computer and helpfully load your malware onto their network. The weakest link in any security system is the user.

A lot of energy has been invested in warning users against opening unexpected e-mail attachments and visiting questionable websites, and in nagging them to update their operating systems. While it seems this advice has mostly fallen on deaf ears it has at least been followed by some. I think it’s important to spend time warning about other threats, such as malicious hardware peripherals, as well. Since it’s something that seldom gets mentioned almost nobody thinks about it and that helps ensure experiments like this will show disappointing results.

But They’ll Keep A Master Key Safe

We’re constantly being told by the State and its worshippers that cryptographic backdoors are necessary for the safety and security of all. The path to security Nirvana, we’re told, lies in mandating cryptographic backdoors in all products that can be unlocked by the State’s master key. This path is dangerous and idiotic on two fronts. First, if the master key is compromised every system implementing the backdoor is also compromised. Second, the State can’t even detect when its networks are compromised so there’s no reason to believe it can keep a master key safe:

The feds warned that “a group of malicious cyber actors,” whom security experts believe to be the government-sponsored hacking group known as APT6, “have compromised and stolen sensitive information from various government and commercial networks” since at least 2011, according to an FBI alert obtained by Motherboard.

The alert, which is also available online, shows that foreign government hackers are still successfully hacking and stealing data from US government’s servers, their activities going unnoticed for years.

[…]

This group of “persistent cyber criminals” is especially persistent. The group is none other than the “APT6” hacking group, according to sources within the antivirus and threat intelligence industry. There isn’t much public literature about the group, other than a couple of old reports, but APT6, which stand for Advanced Persistent Threat 6, is a codename given to a group believed to be working for the Chinese government.

Even if somebody believes the United States government is a legitimate entity that can be trusted with a cryptographic master key, they probably don’t believe the likes of Iran, China, and North Korea can be as well. But those are the sorts of governments that would likely get their hands on the master key and enjoy exploiting it for years before anybody was any the wiser.

And the impact of such a master key being leaked, even if you mistakenly believe the United States government can be trusted to only use it for good, is hard to overstate. Assuming a law was passed mandating that all devices manufactured or sold in the United States implement the backdoor, a leak of the master key would effectively render every American device unencrypted.
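A hypothetical sketch makes the failure mode concrete. Suppose every device’s key were escrowed under a single master key (the scheme, the names, and the use of Fernet here are mine, purely for illustration, not any actual proposal):

```python
from cryptography.fernet import Fernet

# The State's master key, plus each device's own key escrowed under it.
master = Fernet(Fernet.generate_key())

devices = {}
for device_id in ("alice-phone", "bob-laptop", "carol-tablet"):
    device_key = Fernet.generate_key()
    devices[device_id] = {
        "escrowed_key": master.encrypt(device_key),
        "data": Fernet(device_key).encrypt(b"private data"),
    }

# Whoever holds (or steals) the one master key can unwrap every device
# key and read every device's data. One leak compromises everything.
for device_id, record in devices.items():
    device_key = master.decrypt(record["escrowed_key"])
    print(device_id, Fernet(device_key).decrypt(record["data"]))
```

There is no partial failure here: the master key is an all-or-nothing secret, and its value to attackers grows with every device that implements the backdoor.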

So the real question is, do you trust a government that cannot detect threats within its network for years on end to secure a master key that can unlock all of your sensitive information? Only a fool would answer yes.

If You Don’t Own It, It’s Not Yours

If you don’t own it, it’s not yours. A lot of people are learning that lesson today after Google announced that it would be disabling customers’ Revolv smart-home hubs in spite of the promised lifetime subscription:

As we reported on Tuesday, shutting down the Revolv smart-home hubs does not mean Nest is ceasing to support its products, leaving them vulnerable to bugs and other unpatched issues. It means that the $300 (£211) devices and accompanying apps will stop working completely.

[…]

And the decision to deliberately disable the smart-home hubs comes despite the fact they were previously advertised as having a “lifetime subscription.”

Do you own the devices you purchase? If you read most license agreements, which you usually can’t read until you’ve purchased and opened the product, you’re not buying the product but a license to use the product. This is especially true with products that include software, which are regulated under easily abused copyright laws. John Deere, for example, claims you don’t own your tractor, you’re merely licensing it. Because of that John Deere argues that you’re not allowed to fix the tractor as that is a violation of the license you agreed to.

The problem with licenses is that they can be revoked. In this case Google is not only ceasing online services for the Revolv but is entirely bricking the devices themselves, which is likely allowed under the device’s license agreement (those agreements basically read, “We can do whatever we want and you agree to like it.”) regardless of any marketing promises of a “lifetime subscription.”

Had the Revolv been a device that ran open source software with a permissive license its fate wouldn’t be so bleak. At least the option would exist for developers to continue updating the software and creating an alternate online service. That’s the type of freedom ownership allows but licensing usually doesn’t.

As more devices are needlessly tied to “the cloud” we’re going to see more bullshit like this. In my eyes it’s the “in-app purchases” economy brought into the physical world. Many applications used to sell for a one-time fee only for the developers to change their minds and start relying on in-app purchases. An example of this is Cyclemeter. When I first purchased the app it included everything. Now you need to pay a yearly subscription fee via the in-app purchase feature to unlock most of the features. The same bait and switch is coming to our physical world via the Internet of Things. Manufacturers will brick older devices to persuade customers to buy the latest model. Since these devices are almost exclusively licensed instead of owned there will be little recourse for customers. It’s going to be a large-scale demonstration of the principle that if you don’t own it, it’s not yours.

An Encrypted Society Is A Polite Society

Playing off of my post from earlier today, I feel that it’s time to update Heinlein’s famous phrase. Not only is an armed society a polite society but an encrypted society is a polite society.

This article in Vice discusses the importance of encryption to the Lesbian, Gay, Bisexual, and Transgender (LGBT) communities but it’s equally applicable to any oppressed segment of a society:

Despite advances over the last few decades, LGBTQ people, particularly transgender folks and people of color, face alarming rates of targeted violence, housing and job discrimination, school and workplace bullying, and mistreatment by law enforcement. In the majority of US states, for example, you can still be legally fired just for being gay.

So while anyone would be terrified about the thought of their phone in the hands of an abusive authority figure or a jealous ex-lover, the potential consequences of a data breach for many LGBTQ people could be far more severe.

[…]

LGBTQ people around the world depend on encryption every day to stay alive and to protect themselves from violence and discrimination, relying on the basic security features of their phones to prevent online bullies, stalkers, and others from prying into their personal lives and using their sexuality or gender identity against them.

In areas where being openly queer is dangerous, queer and trans people would be forced into near complete isolation without the ability to connect safely through apps, online forums, and other venues that are only kept safe and private by encryption technology.

These situations are not just theoretical. Terrifying real life examples abound, like the teacher who was targeted for being gay, and later fired, after his Dropbox account was hacked and a sex video was posted on his school’s website. Or the time a Russian gay dating app was breached, likely by the government, and tens of thousands of users received a message threatening them with arrest under the country’s anti-gay “propaganda” laws.

Systematic oppression requires information. In order to oppress a segment of the population an oppressor must be able to identify members of that segment. A good, albeit terrifying, example of this fact is Nazi Germany. The Nazis made heavy use of IBM tabulating machines to identify and track individuals they declared undesirable.

Today pervasive surveillance is used by state and non-state oppressors to identify those they wish to oppress. Pervasive surveillance is made possible by the lack of effective encryption. Encryption allows individuals to maintain the integrity and confidentiality of their information and can be used to anonymize information as well.
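Confidentiality and integrity are exactly what modern authenticated encryption provides. Here is a minimal sketch with PyNaCl’s SecretBox (the message and key handling are illustrative only): without the key the ciphertext is opaque, and any tampering with it is detected rather than silently accepted.

```python
import nacl.exceptions
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

ciphertext = box.encrypt(b"membership roster")

# Integrity: flipping even a single bit makes decryption fail loudly.
tampered = bytearray(ciphertext)
tampered[-1] ^= 0x01
try:
    box.decrypt(bytes(tampered))
except nacl.exceptions.CryptoError:
    print("tampering detected, message rejected")

# Confidentiality: only a key holder recovers the plaintext.
print(box.decrypt(ciphertext))  # b'membership roster'
```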

For example, without encryption it’s trivial for the State to identify transgender individuals. A simple unencrypted text message, e-mail, or Facebook message containing information that identifies an individual as transgender can either be read by an automated surveillance system or acquired through a court order. Once identified, an agent or agents can be tasked with keeping tabs on that individual and waiting for them to perform an act that justifies law enforcement involvement. Say, for example, violating North Carolina’s idiotic bathroom law. After the violation occurs the law enforcement agents can be sent in to kidnap the individual so they can be made an example of, which would serve to send a message of terror to other transgender individuals.

When data is properly encrypted the effectiveness of surveillance is greatly diminished. That prevents oppressors from identifying targets, which prevents the oppressors from initiating interactions entirely. Manners are good when one may have to back up his acts with his life. Manners are better when one doesn’t have to enter into conflict in the first place.

Compromising Self-Driving Vehicles

The difficult part about being a technophile and an anarchist is that the State often hijacks new technologies to further its own power. These hijackings are always done under the auspices of safety and the groundwork is already being laid for the State to get its fingers into self-driving vehicles:

It is time to start thinking about the rules of the new road. Otherwise, we may end up with some analog to today’s chaos in cyberspace, which arose from decisions in the 1980s about how personal computers and the Internet would work.

One of the biggest issues will be the rules under which public infrastructures and public safety officers may be empowered to override how autonomous vehicles are controlled.

When should law enforcers and safety officers be empowered to override another person’s self-driving vehicle? Never. Why? Setting aside the obvious abuses such empowerment would lead to, we have the issue of security, which the article alludes to towards the end:

Last, but by no means least, is whether such override systems could possibly be made hack-proof. A system to allow authorized people to control someone else’s car is also a system with a built-in mechanism by which unauthorized people — aka hackers — can do the same.

Even if hackers are kept out, if every police officer is equipped to override AV systems, the number of authorized users is already in the hundreds of thousands — or more if override authority is extended to members of the National Guard, military police, fire/EMS units, and bus drivers.

No system can be “hacker-proof,” especially when that system has hundreds of thousands of authorized users. Each system is only as strong as its weakest user. It only takes one careless authorized user leaking their key for the entire world to have a means of gaining access to everything locked by that key.

In order to implement a system in self-driving cars that would allow law enforcers and safety officers to override them there would need to be a remote access option that allowed anybody employed by a police department, fire department, or hospital to log into the vehicle. Every vehicle would either have to be loaded with every law enforcer’s and safety officer’s credentials or, more likely, rely on a single master key. In the case of the former it would only take one careless law enforcer or safety officer posting their credentials somewhere an unauthorized party could access them, including the compromised network of a hospital, for every self-driving car to be compromised. In the case of the latter the only thing that would be required to compromise every self-driving car is the master key being leaked. Either way, the integrity of the system would be dependent on hundreds of thousands of people maintaining perfect security, which is an impossible goal.
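The scale alone dooms it. Even if each individual credential holder is extremely careful, the odds that at least one of them slips approach certainty as the number of holders grows. A quick illustration with made-up numbers:

```python
# Probability that at least one credential leaks, assuming each of N
# holders independently has a small chance p of leaking per year.
# The numbers are invented purely for illustration.
p = 0.0001  # 0.01% chance per holder per year

for n in (1, 1_000, 100_000, 500_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>7} holders: {at_least_one:.1%} chance of a leak per year")
```

With 500,000 authorized users, even a one-in-ten-thousand individual failure rate makes a leak in any given year a near certainty.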

If self-driving cars are set up to allow law enforcers and safety officers to override them then they will become useless due to being constantly compromised by malicious actors.