The FBI Cares More About Maintaining Browser Exploits Than Fighting Child Pornography

Creating and distributing child pornography are two things that most people seem to agree should be ruthlessly pursued by law enforcers. Law enforcers, on the other hand, don’t agree. The Federal Bureau of Investigation (FBI) would rather toss out a child pornography case than reveal one stupid browser exploit:

A judge has thrown out evidence obtained by the FBI via hacking, after the agency refused to provide the full code it used in the hack.

The decision is a symptom of the FBI using investigative techniques that are usually reserved for intelligence agencies, such as the NSA. When those same techniques are used in criminal cases, they have to stack up against the rights of defendants and are subject to court processes.

The evidence that was thrown out includes child pornography allegedly found on devices belonging to Jay Michaud, a Vancouver public schools worker.

Why did the FBI even bring the case against Michaud if it wasn’t willing to reveal the exploit that the defense was guaranteed to demand technical information about?

This isn’t the first case the FBI has allowed to be thrown out due to the agency’s desperate desire to keep an exploit secret. In allowing these cases to be thrown out, the FBI has told the country that it isn’t serious about pursuing these crimes and that it would rather all of us remain at the mercy of malicious hackers than reveal the exploits that it, and almost certainly those hackers, rely on.

I guess the only crimes the FBI actually cares to fight are the ones it creates.

Free Apps Aren’t Free But Dumb Phones Won’t Protect Your Privacy

I have a sort of love/hate relationship with John McAfee. The man has a crazy history and isn’t so far up his own ass that he can’t recognize it and poke fun at it. He’s also a very nonjudgmental person, which I appreciate. With the exception of Vermin Supreme, I think McAfee is currently the best person running for president. However, his views on security seem to be stuck in the previous decade at times. This wouldn’t be so bad but he seems to take any opportunity to speak on the subject, and his statements are often taken as fact by many. Take the recent video of him posted by Business Insider:

It opens strong. McAfee refutes something that’s been a pet peeve of mine for a while: the mistaken belief that anything is truly free. TANSTAAFL, there ain’t no such thing as a free lunch, is a principle I wish everybody learned in school. If an app or service is free, then you’re the product and the app only exists to extract salable information from you.

McAfee also discusses the surveillance threat that smartphones pose, which should receive more airtime. But then he follows up with a ridiculous statement. He says that he uses dumb phones when he wants to communicate privately. I hear a lot of people spout this nonsense and it’s quickly becoming another pet peeve of mine.

Because smartphones have the built-in ability to easily install applications, the threat of malware exists. In fact, there have been several cases of malware making its way into both Google’s and Apple’s app stores. That doesn’t make smartphones less secure than dumb phones, though.

The biggest weakness in dumb phones as far as privacy is concerned is their complete inability to encrypt communications. Dumb phones rely on standard cellular protocols for making phone calls and sending text messages. In both cases the only encryption that exists is between the devices and the cell towers. And the encryption there is weak enough that any jackass with an IMSI-catcher can render it meaningless. Furthermore, because the data is available in plaintext to the phone companies, it is likely collected by the National Security Agency (NSA) and is always available to law enforcers via a court order.

The second biggest weakness in dumb phones is the general lack of software updates. Dumb phones still run software, which means they can still have security vulnerabilities and are therefore also vulnerable to malware. How often do dumb phone manufacturers update software? Rarely, which means security vulnerabilities remain unpatched for extensive periods of time and oftentimes indefinitely.

Smartphones can address both of these weaknesses. Encrypted communications are available on most smartphone platforms. Apple includes iMessage, which utilizes end-to-end encryption. Signal and WhatsApp, two applications that also utilize end-to-end encryption, are available for both iOS and Android (WhatsApp is available for Windows Phone as well). Unless your communications are end-to-end encrypted they are not private. With smartphones you can have private communications; with dumb phones you cannot.
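To make the distinction concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library (`pip install pynacl`). It is purely illustrative; Signal, WhatsApp, and iMessage use their own, far more sophisticated protocols with ratcheting keys, and nothing below reflects their actual implementations:

```python
# Minimal end-to-end encryption sketch using PyNaCl. Illustrative only.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; the private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The carrier (or anybody with an IMSI-catcher) sees only this ciphertext.
print(ciphertext.hex())

# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```

Contrast that with a dumb phone, where the carrier itself holds the plaintext of every message.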

Smartphone manufacturers also address the problem of security vulnerabilities by releasing periodic software updates (although access to timely updates can vary from manufacturer to manufacturer for Android users). When a vulnerability is discovered it usually doesn’t remain unpatched forever.

When you communicate using a smartphone there is the risk of being surveilled. When you communicate with a dumb phone there is a guarantee of being surveilled.

As I said, I like a lot of things about McAfee. But much of the security advice he gives is flawed. Don’t make the mistake of assuming he’s correct on security issues just because he was involved in the antivirus industry ages ago.

How The Government Protects Your Data

Although I oppose both public and private surveillance, I especially loathe public surveillance. Any form of surveillance results in data about you being stored, and oftentimes that data ends up leaking to unauthorized parties. When the data is leaked from a private entity’s database I at least have some recourse. If, for example, Google leaks my personal information to unauthorized parties, I can choose not to use the service again. The State is another beast entirely.

When the State leaks your personal information your only recourse is to vote harder, which is the same as saying your only recourse is to shut up and take it. This complete lack of consequences for failing to implement proper security is why the State continues to ignore security:

FRANKFORT, Ky. (AP) — Federal investigators found significant cybersecurity weaknesses in the health insurance websites of California, Kentucky and Vermont that could enable hackers to get their hands on sensitive personal information about hundreds of thousands of people, The Associated Press has learned. And some of those flaws have yet to be fixed.

[…]

The GAO report examined the three states’ systems from October 2013 to March 2015 and released an abbreviated, public version of its findings last month without identifying the states. On Thursday, the GAO revealed the states’ names in response to a Freedom of Information request from the AP.

According to the GAO, one state did not encrypt passwords, potentially making it easy for hackers to gain access to individual accounts. One state did not properly use a filter to block hostile attempts to visit the website. And one state did not use the proper encryption on its servers, making it easier for hackers to get in. The report did not say which state had what problem.

Today encrypting passwords is something even beginning web developers understand is necessary (even if they often fail to properly encrypt passwords). Most content management systems do this by default and most web development frameworks do this if you use their built-in user management features. The fact that a state paid developers to implement its health insurance exchange and didn’t require encrypted passwords is ridiculous.
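As an aside, the standard practice is actually to hash passwords with a unique salt rather than encrypt them (“encrypting” is the report’s wording). A minimal sketch using nothing but Python’s standard library, with the iteration count an illustrative assumption:

```python
# Salted password hashing with Python's standard library. The iteration
# count is an illustrative assumption; tune it for your hardware.
import hashlib
import hmac
import os

ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))                       # False
```

The salt ensures two users with the same password get different hashes, and the slow key derivation makes offline guessing expensive.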

Filtering hostile attempts to visit websites is a very subjective requirement. What constitutes a hostile attempt to visit a website? Some websites try to block all Tor users under the assumption that Tor has no legitimate uses, a viewpoint I strongly disagree with. Other websites utilize blacklists that contain IP addresses of supposedly hostile devices. These blacklists can be very hit-or-miss and often block legitimate devices. Without knowing what the Government Accountability Office (GAO) considered effective filtering I’ll refrain from commenting.

I’m also not entirely sure what the GAO means by using proper encryption on servers. Usually I’d assume it means HTTP connections not secured by TLS. But that doesn’t necessarily impact a malicious hacker’s ability to get into a web server. Still, it’s not uncommon for government websites either to not implement TLS or to implement it improperly, which puts user data at risk.
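For what it’s worth, a basic TLS sanity check is easy to script. A sketch using Python’s standard library; the hostname is a hypothetical placeholder, and a server with a broken certificate or protocol configuration would fail the handshake here:

```python
# Connect to a server and report its TLS configuration. The default
# context verifies the certificate chain and hostname, so a misconfigured
# server raises an ssl.SSLError instead of completing the handshake.
import socket
import ssl

hostname = "exchange.example.gov"  # hypothetical placeholder

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["notAfter"])  # certificate expiration
```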

But what happens next? If we were talking about websites operated by private entities I’d believe the next step would be fixing the security holes. Since the websites are operated by government entities, though, it’s anybody’s guess what will happen next. There will certainly be hearings where politicians try to point the finger at somebody for these security failures, but finger pointing doesn’t fix the problem and governments have a long history of never actually fixing problems.

FBI Claims Its Method Of Accessing Farook’s Phone Doesn’t Work On Newer iPhones

So far the Federal Bureau of Investigation (FBI) hasn’t given any specific details on how it was able to access the data on Farook’s phone. But the agency’s director did divulge a bit of information regarding the scope of the method:

The FBI’s new method for unlocking iPhones won’t work on most models, FBI Director Comey said in a speech last night at Kenyon University. “It’s a bit of a technological corner case, because the world has moved on to sixes,” Comey said, describing the bug in response to a question. “This doesn’t work on sixes, doesn’t work on a 5s. So we have a tool that works on a narrow slice of phones.” He continued, “I can never be completely confident, but I’m pretty confident about that.” The exchange can be found at 52:30 in the video above.

Since he specifically mentioned the iPhone 5S, 6, and 6S it’s possible the Secure Enclave feature present in those phones thwarts the exploit. This does make sense assuming the FBI used a method to brute force the password. On the iPhone 5C the user password is combined with a hardware key to decrypt the phone’s storage. Farook used a four digit numerical password, which means there were only 10,000 possible passwords. With such a small pool of possible passwords it would have been trivial to brute force the correct one. What stood in the way were two iOS security features. The first is a delay between entering passwords that increases with each incorrect password. The second is a feature that erases the decryption keys (which effectively renders all data stored on the phone useless) after 10 incorrect passwords have been entered.
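The arithmetic here is worth spelling out. A back-of-the-envelope sketch, where the guess rate is an assumption (Apple has cited roughly 80 milliseconds per key derivation on this hardware) and nothing below is a claim about the FBI’s actual method:

```python
# Brute forcing a four-digit PIN: trivial without the software protections,
# hopeless with them. All timing figures are assumptions for illustration.
pins = 10_000                # four digits: 0000 through 9999
seconds_per_guess = 0.08     # ~80 ms per hardware key derivation (assumed)

worst_case = pins * seconds_per_guess
print(f"Protections bypassed: {worst_case / 60:.0f} minutes worst case")

# With iOS-style escalating delays (approximate schedule, in seconds),
# the tenth failure erases the keys, so an attacker only ever gets to
# try ten of the 10,000 possibilities.
delays = [0, 0, 0, 0, 60, 300, 900, 900, 3600]
print(f"Protections intact: nine failures cost {sum(delays) / 3600:.1f} hours, "
      f"and one more failure wipes the keys")
```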

On the 5C these features are implemented entirely in software. If an attacker can bypass the software and combine passwords with the hardware key, they can try as many passwords as they want without any artificial delay and without the decryption keys ever being erased. On the iPhone 5S, 6, and 6S the Secure Enclave coprocessor handles all cryptographic operations, including enforcing a delay between incorrect passwords. Although this is entirely speculation, I’m guessing the FBI found a way to bypass the software security features on Farook’s phone and that the method wouldn’t work on any device utilizing Secure Enclave.

Even though Secure Enclave makes four digit numerical passwords safer, they’re still dependent on outside security measures to protect against brute force attacks. I encourage everybody to set a complex password on their phone. On iPhones equipped with Touch ID this is a simple matter since you only have to enter your password after rebooting the phone or after not unlocking it for 48 hours. Besides those cases you can use your fingerprint to unlock the phone (just make sure you reboot the phone before interacting with law enforcement, which you can do at any time by holding the power and home buttons down for a few seconds, so they can’t force you to unlock the phone with your fingerprint). With a strong password brute force attacks become infeasible even if the software or hardware security enhancements are bypassed.
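A strong password changes the arithmetic entirely. Using the same assumed 80 millisecond per-guess hardware rate:

```python
# Keyspace arithmetic: the same assumed hardware guess rate against
# progressively stronger passwords.
seconds_per_guess = 0.08
seconds_per_year = 60 * 60 * 24 * 365

candidates = [
    ("4-digit PIN", 10**4),
    ("6-digit PIN", 10**6),
    ("10-character mixed-case alphanumeric", 62**10),
]

for name, keyspace in candidates:
    years = keyspace * seconds_per_guess / seconds_per_year
    print(f"{name}: ~{years:,.6f} years to exhaust")
```

A four-digit PIN falls in minutes; a ten-character alphanumeric password takes billions of years at the same rate.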

Don’t Stick Just Anything In Your Port

Universal Serial Bus (USB) flash drives are ubiquitous and it’s easy to see why. For a few dollars you can get a surprising amount of storage in a tiny package that can be connected to almost any computer. Their ubiquity is also the reason they annoy me. A lot of people wanting to give me a file to work on will hand me a USB drive, to which I respond, “E-mail it to me.” USB drives are convenient for moving files between local computers but they’re also hardware components, which means you can do even more malicious things with them than with malicious software alone.

The possibility of using malicious USB drives to exploit computers isn’t theoretical. And it’s a good vector for targeted malware since the devices are cheap and a lot of fools will plug any old USB drive into their computer:

Using booby-trapped USB flash drives is a classic hacker technique. But how effective is it really? A group of researchers at the University of Illinois decided to find out, dropping 297 USB sticks on the school’s Urbana-Champaign campus last year.

As it turns out, it really works. In a new study, the researchers estimate that at least 48 percent of people will pick up a random USB stick, plug it into their computers, and open files contained in them. Moreover, practically all of the drives (98 percent) were picked up or moved from their original drop location.

Very few people said they were concerned about their security. Sixty-eight percent of people said they took no precautions, according to the study, which will appear in the 37th IEEE Symposium on Security and Privacy in May of this year.

Leaving USB drives lying around for an unsuspecting sucker to plug into their computer is an evolution of the old trick of leaving a floppy disk labeled “Payroll” lying around. Eventually somebody’s curiosity will get the better of them and they’ll plug it into their computer and helpfully load your malware onto their network. The weakest link in any security system is the user.

A lot of energy has been invested in warning users not to open unexpected e-mail attachments or visit questionable websites and to keep their operating systems updated. While it seems this advice has mostly fallen on deaf ears, it has at least been followed by some. I think it’s important to spend time warning about other threats, such as malicious hardware peripherals, as well. Since it’s something that seldom gets mentioned, almost nobody thinks about it, and that helps ensure experiments like this will show disappointing results.

But They’ll Keep A Master Key Safe

We’re constantly being told by the State and its worshippers that cryptographic backdoors are necessary for the safety and security of all. The path to security Nirvana, we’re told, lies in mandating cryptographic backdoors in all products, backdoors that can be unlocked by the State’s master key. This path is dangerous and idiotic on two fronts. First, if the master key is compromised, every system implementing the backdoor is also compromised. Second, the State can’t even detect when its networks are compromised, so there’s no reason to believe it can keep a master key safe:

The feds warned that “a group of malicious cyber actors,” whom security experts believe to be the government-sponsored hacking group known as APT6, “have compromised and stolen sensitive information from various government and commercial networks” since at least 2011, according to an FBI alert obtained by Motherboard.

The alert, which is also available online, shows that foreign government hackers are still successfully hacking and stealing data from US government’s servers, their activities going unnoticed for years.

[…]

This group of “persistent cyber criminals” is especially persistent. The group is none other than the “APT6” hacking group, according to sources within the antivirus and threat intelligence industry. There isn’t much public literature about the group, other than a couple of old reports, but APT6, which stand for Advanced Persistent Threat 6, is a codename given to a group believed to be working for the Chinese government.

Even if somebody believes the United States government is a legitimate entity that can be trusted with a cryptographic master key, they probably don’t believe the likes of Iran, China, and North Korea can be trusted with one as well. But those are the governments that would likely get the master key and enjoy exploiting it for years before anybody became any the wiser.

And the impact of such a master key being leaked, even if you mistakenly believe the United States government can be trusted to only use it for good, is hard to overstate. Assuming a law was passed mandating that all devices manufactured or sold in the United States implement the backdoor, a leak of the master key would effectively render every American device unencrypted.
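A minimal sketch of the failure mode, using the PyNaCl library; the escrow scheme below is a hypothetical illustration of the general idea, not any specific proposal:

```python
# Key escrow in miniature: every device encrypts its storage key to one
# government master key. One leak of that key unlocks everything.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

master_key = PrivateKey.generate()         # held by the government
escrow = SealedBox(master_key.public_key)  # baked into every device

# Each mandated device escrows its own storage key at manufacture time.
device_keys = {name: random(SecretBox.KEY_SIZE)
               for name in ("alice", "bob", "carol")}
escrowed = {name: escrow.encrypt(key) for name, key in device_keys.items()}

# The master private key leaks once, and every device key is recoverable.
leaked = SealedBox(master_key)
for name, blob in escrowed.items():
    assert leaked.decrypt(blob) == device_keys[name]
    print(f"{name}'s storage key recovered")
```

There is no way to revoke the leaked key without replacing or updating every device that embeds it.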

So the real question is, do you trust a government that cannot detect threats within its network for years on end to secure a master key that can unlock all of your sensitive information? Only a fool would answer yes.

An Encrypted Society Is A Polite Society

Playing off of my post from earlier today, I feel that it’s time to update Heinlein’s famous phrase. Not only is an armed society a polite society but an encrypted society is a polite society.

This article in Vice discusses the importance of encryption to the Lesbian, Gay, Bisexual, and Transgender (LGBT) communities but it’s equally applicable to any oppressed segment of a society:

Despite advances over the last few decades, LGBTQ people, particularly transgender folks and people of color, face alarming rates of targeted violence, housing and job discrimination, school and workplace bullying, and mistreatment by law enforcement. In the majority of US states, for example, you can still be legally fired just for being gay.

So while anyone would be terrified about the thought of their phone in the hands of an abusive authority figure or a jealous ex-lover, the potential consequences of a data breach for many LGBTQ people could be far more severe.

[…]

LGBTQ people around the world depend on encryption every day to stay alive and to protect themselves from violence and discrimination, relying on the basic security features of their phones to prevent online bullies, stalkers, and others from prying into their personal lives and using their sexuality or gender identity against them.

In areas where being openly queer is dangerous, queer and trans people would be forced into near complete isolation without the ability to connect safely through apps, online forums, and other venues that are only kept safe and private by encryption technology.

These situations are not just theoretical. Terrifying real life examples abound, like the teacher who was targeted for being gay, and later fired, after his Dropbox account was hacked and a sex video was posted on his school’s website. Or the time a Russian gay dating app was breached, likely by the government, and tens of thousands of users received a message threatening them with arrest under the country’s anti-gay “propaganda” laws.

Systematic oppression requires information. In order to oppress a segment of the population an oppressor must be able to identify members of that segment. A good, albeit terrifying, example of this fact is Nazi Germany. The Nazis made heavy use of IBM tabulating machines to identify and track individuals the regime declared undesirable.

Today pervasive surveillance is used by state and non-state oppressors to identify those they wish to oppress. Pervasive surveillance is made possible by the lack of effective encryption. Encryption allows individuals to maintain the integrity and confidentiality of information and can be used to anonymize information as well.
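Both properties are easy to demonstrate. A minimal sketch using PyNaCl’s authenticated encryption, which provides confidentiality and integrity in one operation:

```python
# Authenticated encryption: the ciphertext is unreadable without the key,
# and any tampering is detected rather than silently accepted.
from nacl.exceptions import CryptoError
from nacl.secret import SecretBox
from nacl.utils import random

box = SecretBox(random(SecretBox.KEY_SIZE))
ciphertext = box.encrypt(b"my private message")

# Flip one bit of the ciphertext to simulate tampering in transit.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
try:
    box.decrypt(tampered)
except CryptoError:
    print("tampering detected, message rejected")

print(box.decrypt(ciphertext))  # b'my private message'
```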

For example, without encryption it’s trivial for the State to identify transgender individuals. A simple unencrypted text message, e-mail, or Facebook message containing information that identifies an individual as transgender can either be read by an automated surveillance system or acquired through a court order. Once the individual is identified, an agent or agents can be tasked with keeping tabs on them, waiting for them to perform an act that justifies law enforcement involvement. Say, for example, violating North Carolina’s idiotic bathroom law. After the violation occurs the law enforcement agents can be sent in to kidnap the individual so they can be made an example of, which would serve to send a message of terror to other transgender individuals.

When data is properly encrypted the effectiveness of surveillance is greatly diminished. That prevents oppressors from identifying targets, which prevents the oppressors from initiating interactions entirely. Manners are good when one may have to back up his acts with his life. Manners are better when one doesn’t have to enter into conflict in the first place.

Compromising Self-Driving Vehicles

The difficult part about being a technophile and an anarchist is that the State often hijacks new technologies to further its own power. These hijackings are always done under the auspices of safety, and the groundwork is already being laid for the State to get its fingers into self-driving vehicles:

It is time to start thinking about the rules of the new road. Otherwise, we may end up with some analog to today’s chaos in cyberspace, which arose from decisions in the 1980s about how personal computers and the Internet would work.

One of the biggest issues will be the rules under which public infrastructures and public safety officers may be empowered to override how autonomous vehicles are controlled.

When should law enforcers and safety officers be empowered to override another person’s self-driving vehicle? Never. Why? Setting aside the obvious abuses such empowerment would lead to, we have the issue of security, which the article alludes to towards the end:

Last, but by no means least, is whether such override systems could possibly be made hack-proof. A system to allow authorized people to control someone else’s car is also a system with a built-in mechanism by which unauthorized people — aka hackers — can do the same.

Even if hackers are kept out, if every police officer is equipped to override AV systems, the number of authorized users is already in the hundreds of thousands — or more if override authority is extended to members of the National Guard, military police, fire/EMS units, and bus drivers.

No system can be “hacker-proof,” especially when that system has hundreds of thousands of authorized users. Each system is only as strong as its weakest user. It only takes one careless authorized user leaking their key for the entire world to have a means of gaining access to everything locked by that key.

In order to implement a system in self-driving cars that would allow law enforcers and safety officers to override them, there would need to be a remote access option that allowed anybody employed by a police department, fire department, or hospital to log into the vehicle. Every vehicle would either have to be loaded with every law enforcer’s and safety officer’s credentials or, more likely, rely on a single master key. In the case of the former, it would only take one careless law enforcer or safety officer posting their credentials somewhere an unauthorized party could access them, including the compromised network of a hospital, for every self-driving car to be compromised. In the case of the latter, the only thing required to compromise every self-driving car would be the master key being leaked. Either way, the integrity of the system would be dependent on hundreds of thousands of people maintaining perfect security, which is an impossible goal.
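The “weakest user” problem is easy to put numbers on. Assuming, purely for illustration, that each authorized user has a tiny independent chance of being compromised in a given year:

```python
# With enough authorized users, "at least one leak" becomes a certainty
# even when each individual user is careful. The per-user probability
# is an assumption for illustration.
p_user = 0.0001  # 0.01% chance a given user leaks credentials per year

for users in (1, 1_000, 100_000, 500_000):
    p_any_leak = 1 - (1 - p_user) ** users
    print(f"{users:>7,} authorized users: "
          f"{p_any_leak:.2%} chance of at least one leak per year")
```

At half a million users the probability of at least one leak per year is effectively 100 percent.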

If self-driving cars are set up to allow law enforcers and safety officers to override them then they will become useless due to being constantly compromised by malicious actors.

How The State Makes Us Less Secure Part MLVII

The State, by claiming to provide for the common defense and declaring a monopoly on justice, has a conflict of interest. Providing for the common defense would require it to disclose any vulnerabilities it discovers, but it’s reliant on those vulnerabilities to obtain evidence to prosecute individuals accused of crimes.

Adding a new chapter to this ongoing saga is the Federal Bureau of Investigation’s (FBI) decision to fight a court order to reveal a vulnerability it used to uncover the identity of Tor users:

Last month, the FBI was ordered to reveal the full malware code used to hack visitors of a dark web child pornography site. The judge behind that decision, Robert J. Bryan, said it was a “fair question” to ask how exactly the FBI caught the defendant.

But the agency is pushing back. On Monday, lawyers for the Department of Justice filed a sealed motion asking the judge to reconsider, and also provided a public declaration from an FBI agent involved in the investigation.

In short, the FBI agent says that revealing the exploit used to bypass the protections offered by the Tor Browser is not necessary for the defense and their case. The defense, in previous filings, has said they want to determine whether the network investigative technique (NIT)—the FBI’s term for a hacking tool—carried out additional functions beyond those authorised in the warrant.

People around the world rely on Tor to protect themselves from tyrannical regimes. Journalists living in countries such as Iran, China, and Thailand are only able to continue reporting on human rights violations because Tor protects their identities. Sellers and consumers of verboten drugs, neither of whom are causing involuntary harm to anybody, have successfully used Tor hidden services to make their trade safer. Victims of domestic abuse rely on Tor to get access to help without being discovered by their abusers. By refusing to publish the vulnerability it used, the FBI is putting all of these individuals in danger.

On another point, I must also emphasize that the FBI is claiming the defense doesn’t need to know this information, which speaks volumes about the egotistical nature of the agency. Who is the FBI to decide what the defense does and doesn’t need to know? Being the prosecuting party should already disqualify the FBI’s opinion on the matter due to its obvious conflict of interest.

Checkpoints All The Way Down

The investigation into the Brussels attack hasn’t concluded yet but politicians are already calling for actions to be taken to prevent such an attack from happening here:

Security experts, politicians and travelers alike say the Brussels bombings exposed a weak spot in airport security, between the terminal entrance and the screening checkpoint.

“If you think about the way things were done in Brussels — and have been done in other places — literally people only have to only walk in, and they can attack at will,” said Daniel Wagner, CEO of security consulting firm Country Risk Solutions.

These idiots will be putting security checkpoints before the security checkpoints if we let them:

Wagner suggests U.S. airports establish pre-terminal screening before travelers enter the facility.

“That is a common approach in many countries around the world — you cannot even get in the terminal until your bags and your person have been pre-screened,” he said. “That is, through an X-ray machine both for the bags and for the individual.”

It’ll be checkpoints all the way down. What none of these stooges have stopped to consider is that the checkpoints themselves are attractive targets. Checkpoints are chokepoints. They force large numbers of people to gather in a single place so they can slowly (very slowly in the case of Minneapolis’ airport) be filtered through by security. If a suicide bomber wants to kill a lot of people they need only step into the checkpoint line.

Adding an additional chokepoint or moving the current one doesn’t fix the problem. Reducing the amount of damage a terrorist can cause in an airport requires dispersing people, which means making major changes to current airport security practices. The long security lines have to go. This can be done by simplifying the screening process, making it consistent (anybody who travels frequently knows that the orders barked by Transportation Security Administration (TSA) goons can change drastically from day to day), and increasing the number of checkpoints. None of those measures will be taken, though, because the idiots who make the policies know nothing about security.
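Little’s Law (the number of people queued equals the arrival rate times the average wait) makes the case for more checkpoints plain. A rough sketch, with every number an assumption:

```python
# Little's Law: crowd size at a checkpoint = arrival rate x average wait.
# More screening capacity shrinks the crowd a bomber could target.
# All rates below are assumptions for illustration.
arrival_rate = 30   # passengers arriving per minute
per_lane_rate = 3   # passengers screened per minute, per lane

for lanes in (5, 11, 20):
    capacity = lanes * per_lane_rate
    if capacity <= arrival_rate:
        print(f"{lanes:>2} lanes: demand exceeds capacity; the line grows without bound")
        continue
    avg_wait = 1 / (capacity - arrival_rate)  # crude single-queue approximation
    crowd = arrival_rate * avg_wait           # Little's Law: L = lambda * W
    print(f"{lanes:>2} lanes: roughly {crowd:.0f} people bunched at the checkpoint")
```

Even with these crude assumptions the point stands: spare screening capacity shrinks the crowd, and a crowd is exactly what a bomber is looking for.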