Having Your Surveillance Cake And Eating It Too

At one point it wasn’t uncommon for employers to issue company devices to employees. Things have changed, however, and now it is common for employers to expect employees to use their personal devices for work. It seems like a win-win: employees don’t have to carry two cell phones or use whatever shitty devices their company issues, and employers save money by not having to buy devices. However, it leads to an interesting situation. What happens when the employer wants to surveil an employee’s personal device? That’s the battle currently being waged between Minnesota’s state colleges and their employees:

Two faculty unions are up in arms over a new rule that would allow Minnesota’s state colleges and universities to inspect employee-owned cellphones and mobile devices if they’re used for work.

The unions say the rule, which is set to take effect on Friday, would violate the privacy of thousands of faculty members, many of whom use their own cellphones and computers to do their jobs.

“[It’s] a free pass to go on a fishing expedition,” said Kevin Lindstrom, president of the Minnesota State College Faculty.

But college officials say they have an obligation under state law to protect any “government data” that may be on such devices, and that as public employees, faculty members could be disciplined if they refuse to comply.

If the universities have such a legal obligation then they damn well should be issuing devices. Data is at the mercy of the security measures implemented on whatever device it is copied to. When businesses allow employees to use personal devices for work, any data that ends up on those devices is secured primarily by whatever measures the employee has put into place. While an employer can require certain security measures, such as mandating a lock screen password on the employee’s phone, employees are still generally free to install any application, visit any website, and add any personal account to the device. All of those things can compromise proprietary company data.

By issuing centrally managed devices, the universities could restrict which applications are installed, which websites the devices can visit, and which accounts can be added.
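To make that concrete, here’s a fragment of the sort of restrictions payload a centrally managed, supervised iOS device might receive from an MDM server. The keys come from Apple’s com.apple.applicationaccess restrictions payload, but treat this as an illustrative sketch rather than a complete profile, since exact key support varies by OS version and supervision state:

```xml
<!-- Hypothetical excerpt of an MDM restrictions payload for a
     supervised, employer-issued iOS device. Not a complete profile. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.applicationaccess</string>
    <key>allowAppInstallation</key>
    <false/> <!-- employees cannot install arbitrary applications -->
    <key>allowAccountModification</key>
    <false/> <!-- no adding personal accounts to the device -->
    <key>allowSafari</key>
    <false/> <!-- browsing limited to a managed browser instead -->
</dict>
```

None of this is possible on a personal device the employer doesn’t manage, which is exactly the problem.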

There is also the issue of property rights. What right does an employer have to surveil employee-owned devices, and how far does that power extend? Does an employer have the right to surveil an employee’s home if they work from home or ever take work home? Does an employer have the right to surveil an employee’s vehicle if they use that vehicle to drive to work or travel for work? When employers purchase and issue devices these questions go away because the issued devices are the employer’s property to do with as they please.

If an employer wants to surveil employee devices then it should issue devices. If an employer is unwilling to issue devices then it should accept the fact that it can’t surveil employee devices. And if an employer is under a legal obligation to protect data then it needs to issue devices.

How The State Makes Us Less Secure Part MLVII

The State, by claiming to provide for the common defense and declaring a monopoly on justice, has a conflict of interest. Providing for the common defense would require it to disclose any vulnerabilities it discovers, but it relies on those same vulnerabilities to obtain evidence to prosecute individuals accused of crimes.

Adding a new chapter to this ongoing saga is the Federal Bureau of Investigation’s (FBI) decision to fight a court order to reveal a vulnerability it used to uncover the identity of Tor users:

Last month, the FBI was ordered to reveal the full malware code used to hack visitors of a dark web child pornography site. The judge behind that decision, Robert J. Bryan, said it was a “fair question” to ask how exactly the FBI caught the defendant.

But the agency is pushing back. On Monday, lawyers for the Department of Justice filed a sealed motion asking the judge to reconsider, and also provided a public declaration from an FBI agent involved in the investigation.

In short, the FBI agent says that revealing the exploit used to bypass the protections offered by the Tor Browser is not necessary for the defense and their case. The defense, in previous filings, has said they want to determine whether the network investigative technique (NIT)—the FBI’s term for a hacking tool—carried out additional functions beyond those authorised in the warrant.

People around the world rely on Tor to protect themselves from tyrannical regimes. Journalists living in countries such as Iran, China, and Thailand are only able to continue reporting on human rights violations because Tor protects their identities. Sellers and consumers of verboten drugs, neither of whom are causing involuntary harm to anybody, use Tor hidden services to make their trade safer. Victims of domestic abuse rely on Tor to get access to help without being discovered by their abusers. By refusing to publish the vulnerability it used, the FBI is putting all of these individuals in danger.

On another point, I must also emphasize that the FBI is claiming the defense doesn’t need to know this information, which speaks volumes about the egotistical nature of the agency. Who is the FBI to decide what the defense does and doesn’t need to know? Being the prosecuting party should already disqualify the FBI’s opinion on the matter due to its obvious conflict of interest.

Paranoia I Appreciate

My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop usually meant giving up sleep mode, Wi-Fi, the additional function buttons most manufacturers added to their keyboards, and a slew of power management features that made the already pathetic battery life even worse. Since OS X was (and still is) Unix based and didn’t involve the headaches of trying to get Linux to run on a laptop, the PowerBook fit my needs perfectly.

Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple, on the other hand, has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that the company doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:

Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.

“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”

Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact, some companies, such as Cisco, have taken measures to mitigate such threats.

Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.

This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in house. My e-mail, calendar and contact syncing, and even this website are all hosted on servers running in my dwelling. Although part of the reason I did this was for the experience, another reason was to guard against certain forms of surveillance. National Security Letters (NSL), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers are sitting in my dwelling, any NSL would necessarily require me to inform myself of receiving it.

How The State Makes Us Less Secure Part MLVI

Statists often claim that the State is necessary for the common defense. If this were the case I would expect it to do what it can to make everybody safer. Instead it does the opposite. In its pursuit of power the State continues to take actions that make everybody under its rule less safe.

The latest chapter in this ongoing saga revolves around the iPhone of Syed Farook. After trying to get a court to force Apple to write custom firmware for Farook’s iPhone that would allow the Federal Bureau of Investigation (FBI) to brute force the passcode, the agency postponed the hearing because it claimed to have found another method to get the data it wants. That method appears to be an exploit of some sort, but the Justice Department has classified the matter so we may never know:

A new method to crack open locked iPhones is so promising that US government officials have classified it, the Guardian has learned.

The Justice Department made headlines on Monday when it postponed a federal court hearing in California. It had been due to confront Apple over an order that would have forced it to write software that would make it easier for investigators to guess the passcode for an iPhone used by San Bernardino gunman Syed Farook.

The government now says it may have figured out a way to get into the phone without Apple’s help. But it wants that discovery to remain secret, in an effort to prevent criminals, security researchers and even Apple itself from reengineering smartphones so that the tactic would no longer work.

By classifying this method the Justice Department is putting, at minimum, every iPhone 5C user running the same firmware as Farook’s phone at risk. But the exploit likely reaches further and may even put every user of every iOS device at risk.

Since Farook’s iPhone is in the State’s possession there is no risk of its firmware being upgraded. That being the case, there’s no reason for the Justice Department not to disclose the vulnerability it’s exploiting. Even if the exploit is disclosed the agency will still be able to use it to gain access to the data on Farook’s phone (assuming the exploit works as implied). But disclosing it would allow Apple to patch it so it couldn’t be used against the millions of innocent people using iOS devices.

There is a conflict of interest inherent in statism. The State is supposed to provide for the common defense of those within its territory. At the same time it’s charged with investigating crimes and dispensing justice. In order to fulfill the latter goal it must be able to gain access to whatever information it deems pertinent to an investigation. Ensuring that access is available conflicts with providing for a common defense since an effective defense against foreign aggressors, especially as it relates to protecting data, is also an effective defense against the State.

What The Paris Attackers Used Instead Of Encryption

Our overlords are still trying to make us believe that the reason the Paris attackers weren’t discovered before the attack is because they used effective cryptography. That is a blatant lie though. So what did the attackers use to avoid detection? A lot of cell phones:

New details of the Paris attacks carried out last November reveal that it was the consistent use of prepaid burner phones, not encryption, that helped keep the terrorists off the radar of the intelligence services.

As an article in The New York Times reports: “the three teams in Paris were comparatively disciplined. They used only new phones that they would then discard, including several activated minutes before the attacks, or phones seized from their victims.”

The article goes on to give more details of how some phones were used only very briefly in the hours leading up to the attacks. For example: “Security camera footage showed Bilal Hadfi, the youngest of the assailants, as he paced outside the stadium, talking on a cellphone. The phone was activated less than an hour before he detonated his vest.” The information comes from a 55-page report compiled by the French antiterrorism police for France’s Interior Ministry.

I hesitate to say the attackers used burner phones because the term usually implies phones that were purchased in convenience stores with cash. In reality this type of evasion is possible with any type of cell phone so long as a group has enough of them. The trick is to use a particular cell phone for only one or two messages before disposing of it. With numbers changing constantly it’s difficult for the spooks to build a reliable social graph and therefore uncover a plot.
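To illustrate why, here’s a minimal sketch of the problem rotating handsets creates for an analyst building a social graph from call records. All the identifiers and records below are made up for illustration:

```python
# Each observed (caller, callee) pair becomes an edge in a contact graph.
# If every participant discards a handset after a call or two, edges never
# accumulate on any single identifier and the cell structure stays hidden.
from collections import defaultdict

def build_graph(call_records):
    graph = defaultdict(set)
    for caller, callee in call_records:
        graph[caller].add(callee)
        graph[callee].add(caller)
    return graph

# Stable numbers: repeated contact makes the cell's structure obvious.
stable = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")]
# Rotated numbers: the same four calls, but every call uses fresh handsets.
rotated = [("a1", "b1"), ("a2", "b2"), ("a3", "c1"), ("b3", "c2")]

for name, records in (("stable", stable), ("rotated", rotated)):
    graph = build_graph(records)
    print(name, "max contacts per identifier:",
          max(len(contacts) for contacts in graph.values()))
# stable  -> 2 (A is linked to both B and C across repeated calls)
# rotated -> 1 (no identifier ever links more than one contact)
```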

This news will likely have the undesirable effect of inspiring legislators to write bills prohibiting the purchase of cell phones with cash, but such legislation won’t hinder this kind of strategy.

FBI Versus Apple Court Hearing Postponed

It appears that the Federal Bureau of Investigation (FBI) is finally following the advice of every major security expert and pursuing alternate means of acquiring the data on Farook’s iPhone, which means the agency’s crusade against Apple is temporarily postponed:

A magistrate in Riverside, CA has canceled a hearing that was scheduled for Tuesday afternoon in the Apple v FBI case, at the FBI’s request late Monday. The hearing was part of Apple’s challenge to the FBI’s demand that the company create a new version of its iOS, which would include a backdoor to allow easier access to a locked iPhone involved in the FBI’s investigation into the 2015 San Bernardino shootings.

The FBI told the court that an “outside party” demonstrated a potential method for accessing the data on the phone, and asked for time to test this method and report back. This is good news. For now, the government is backing off its demand that Apple build a tool that will compromise the security of millions, contradicts Apple’s own beliefs, and is unsafe and unconstitutional.

This by no means marks the end of Crypto War II. The FBI very well could continue its legacy of incompetence and fail to acquire the data from the iPhone through whatever means it’s pursuing now. But this will buy us some time before a court rules that software developers are slave laborers whenever some judge issues a court order.

I’m going to do a bit of speculation here. My guess is that the FBI didn’t suddenly find somebody with a promising method of extracting data from the iPhone. After reading the briefs submitted by both Apple and the FBI it was obvious that the FBI either had incompetent lawyers or didn’t have a case. Given that, I’m guessing the FBI decided to abandon its current strategy because it foresaw the court creating a precedent against it. It would be far better to abandon its current efforts and try again later, maybe against a company less competent than Apple, than to pursue what would almost certainly be a major defeat.

Regardless of the FBI’s reasoning, we can take a short breath and wait for the State’s next major attack against our rights.

iOS 9.3 With iMessage Fix Is Out

In the ongoing security arms race, researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:

Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.

It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.

To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.

Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.

“And we kept doing that,” Green said, “until we had the key.”

A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.

With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.

There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right, so it often turns into an arms race. Third, designing secure software is hard even for a large company with a lot of talented employees.
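To appreciate how devastating the digit-by-digit confirmation described above is, consider this toy sketch. The oracle function is a hypothetical stand-in for the behavior the researchers exploited (the target accepting a partially correct key), and I’m assuming a 64-character hexadecimal key for illustration:

```python
# A per-position oracle collapses a 64-hex-digit keyspace from 16**64
# possibilities to at most 64 * 16 = 1,024 guesses.
import secrets

HEX = "0123456789abcdef"
secret = "".join(secrets.choice(HEX) for _ in range(64))

def oracle(guess: str) -> bool:
    # Stand-in for the target phone accepting a correct key prefix.
    return secret.startswith(guess)

recovered = ""
attempts = 0
for _ in range(64):
    for digit in HEX:
        attempts += 1
        if oracle(recovered + digit):
            recovered += digit
            break

print(recovered == secret, attempts)  # True, and attempts <= 1024
```

Guessing positions independently is why an otherwise enormous key offered so little protection here: the flaw isn’t the key length, it’s that the system leaked confirmation one digit at a time.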

Christopher Soghoian also made a good point in the article:

Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.

The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and user, remain secret.

Open source isn’t a magic bullet but it does allow independent third-party verification of your code. This advantage often goes unrealized, as even very popular open source projects like OpenSSL have contained numerous notable security vulnerabilities for years without anybody being the wiser. But it’s unlikely something like iMessage would have been ignored so thoroughly.

The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. Since iOS, and by extension iMessage, is so prominent in the public eye, it’s likely a lot of security researchers would have looked through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly follow. So open sourcing iMessage would likely have gained Apple a lot of third-party verification.

In fact, this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with both Android and iOS, it’s also open source, so it’s available for third-party verification.

It Was Snowden All Along

In 2013 the Federal Bureau of Investigation (FBI) demanded Ladar Levison hand over the TLS keys to his Lavabit service. He did comply, by providing the key printed out in small text, but he also shut down his service rather than let the key be used to snoop on his customers. The FBI threw a hissy fit over this and even threatened to kidnap Levison for shutting down his business. But one question always remained: who was the FBI after? Everybody knew it was Edward Snowden but there was no hard evidence… until now.

Court documents related to the Lavabit case have been released. The documents are naturally heavily redacted, but the censors missed a page:

In court papers related to the Lavabit controversy, the target of the investigation was redacted, but it was widely assumed to be Edward Snowden. He was known to have used the service, and the charges against the target were espionage and theft of government property, the same charges Snowden faced.

Now, what was widely assumed has been confirmed. In documents posted to the federal PACER database this month, the government accidentally left his e-mail, “Ed_snowden@lavabit.com,” unredacted for all to see. The error was noted by the website Cryptome earlier this week, and Wired covered it yesterday.

This revelation didn’t tell us anything we didn’t know before, but it’s nice to have hard evidence in hand. Now we know with certainty that the FBI completely destroyed a business as retaliation for having Snowden as a customer. I say this was retaliatory because the court documents [PDF] clearly show that Levison was willing to cooperate with the FBI by surveilling the single target of the order. However, the FBI decided it would accept nothing less than the surrender of Lavabit’s TLS key.
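It’s worth spelling out why handing over one key endangers everybody. With the RSA key exchange that was common at the time (that is, without forward secrecy), the client encrypts each session’s pre-master secret under the server’s public key, so whoever holds the private key can decrypt recorded traffic wholesale, for every user, past and future. Here’s a minimal sketch of that property with toy keys, using Python’s cryptography package:

```python
# Why surrendering a TLS private key is catastrophic absent forward secrecy:
# with RSA key exchange, the pre-master secret that protects a session is
# encrypted under the server's public key, so the private key unlocks any
# recorded handshake after the fact. Toy keys stand in for a real capture.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# What a client does during an RSA-key-exchange handshake:
pre_master_secret = os.urandom(48)
recorded_ciphertext = server_key.public_key().encrypt(
    pre_master_secret, padding.PKCS1v15()
)

# What anyone holding the surrendered key can do to old captures:
recovered = server_key.decrypt(recorded_ciphertext, padding.PKCS1v15())
assert recovered == pre_master_secret  # session keys, and all traffic, follow
```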

Had the FBI been reasonable it would have had its tap. Instead its agents decided to be unreasonable fuckheads, which forced Levison to shut down his business entirely instead of putting thousands of innocent users at risk. This case is also a lesson in never cooperating with terrorists. Levison offered to cooperate and still had his business destroyed. When the FBI comes to your door you should refuse to cooperate in any way. Cooperating will not save you. The only difference between cooperating and refusing to cooperate is that in the latter case your business will be shut down before innocent users are put at risk.

Let Me Emphasize That Ad-Blockers Are Security Tools

Once again ad networks have been utilized to serve up malware:

According to a just-published post from Malwarebytes, a flurry of malvertising appeared over the weekend, almost out of the blue. It hit some of the biggest publishers in the business, including msn.com, nytimes.com, bbc.com, aol.com, my.xfinity.com, nfl.com, realtor.com, theweathernetwork.com, thehill.com, and newsweek.com. Affected networks included those owned by Google, AppNexus, AOL, and Rubicon. The attacks are flowing from two suspicious domains, including trackmytraffic[.]biz and talk915[.]pw.

The ads are also spreading on sites including answers.com, zerohedge.com, and infolinks.com, according to SpiderLabs. Legitimate mainstream sites receive the malware from domain names that are associated with compromised ad networks. The most widely seen domain name in the current campaign is brentsmedia[.]com. Whois records show it was owned by an online marketer until January 1, when the address expired. It was snapped up by its current owner on March 6, a day before the malicious ad onslaught started.

In this case the attacks appear to have originated from ad network domains that had been allowed to expire. After expiring, the domains were snapped up by malware distributors, which allowed them to distribute malware to visitors of sites that still accepted ads from those expired domains.

Ad networks have become an appealing target for malware distributors. By compromising a single ad network a malware distributor can successfully target users across many websites, which offers a much better return on investment than compromising a single large website such as the New York Times or the BBC. Compromising ad networks is often easier than compromising large websites as well, since operators of large websites often have skilled administrators on hand who keep things fairly locked down. The fact that advertising companies come and go with notable frequency also makes life difficult for site administrators. In this case the purchased domains likely belonged to legitimate ad networks at one time and simply vanished without anybody noticing. Since nobody noticed, they weren’t removed from any of the ad distribution networks and could therefore still serve up ads to legitimate sites.
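There’s no complete fix for this on the publisher’s side, but one partial mitigation is periodically auditing whitelisted ad domains for recent (re)registration, which is the tell-tale of this campaign. Here’s a rough sketch using the third-party python-whois package; the domain list and threshold are illustrative assumptions, not a vetted blocklist:

```python
# Flag ad domains whose WHOIS creation date is suspiciously recent, the
# pattern behind expired-and-reregistered ad network domains.
import datetime

import whois  # pip install python-whois

AD_DOMAINS = ["brentsmedia.com", "example-ad-network.com"]  # illustrative

def recently_registered(domain: str, days: int = 90) -> bool:
    try:
        record = whois.whois(domain)
    except Exception:
        return True  # a failed lookup is itself worth manual review
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    if created is None:
        return True  # missing registration data is suspicious too
    return (datetime.datetime.now() - created).days < days

for domain in AD_DOMAINS:
    if recently_registered(domain):
        print(f"{domain}: registered recently -- review before serving ads")
```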

This event, if nothing else, should serve as a reminder that ad blockers are security tools.

Threat Posed By Personally Owned Drones Overblown, Water Is Wet

Last year the Federal Aviation Administration (FAA) announced it would require all drone owners to register so their personal information, including home address, could be published for all to see. This requirement was justified by the claim that personally owned drones posed a major threat to other forms of aviation traffic. A lot of people, myself included, called bullshit on that, and now research exists backing up our accusation:

That research, shown in a study just published by George Mason University’s Mercatus Center, was based on damage to aircraft from another sort of small, uncrewed aircraft—flying birds.

Much of the fear around drones hitting aircraft has been driven by FAA reports from pilots who have claimed near-misses with small drones. But an investigation last year by the Academy of Model Aeronautics (AMA) found that of the 764 near-miss incidents with drones recorded by the FAA, only 27 of them—3.5 percent—actually were near misses. The rest were just sightings, and those were often sightings that took place when drone operators were following the rules. The FAA also overcounted, including reports where the pilot said explicitly that there was no near miss and some where the flying object wasn’t identified, leading the AMA to accuse the FAA of exaggerating the threat in order to get support for its anti-drone agenda.

So for starters, all the “near misses” we’ve read about in the media weren’t near misses. The vast majority of them were mere sightings. But the FAA’s bullshit doesn’t stop there:

There hasn’t yet been an incident in which a drone has struck an aircraft. But bird strikes (and bat strikes) do happen, and there’s a rich data set to work from to understand how often they do. Researchers Eli Dourado and Samuel Hammond reasoned that the chances of a bird strike remain much higher than that of an aircraft hitting a drone because “contrary to sensational media headlines, the skies are crowded not by drones but by fowl.”

The researchers studied 25 years of FAA “wildlife strike” data, reports voluntarily filed by pilots after colliding with birds. The data included over 160,000 reported incidents of collisions with birds, of which only 14,314 caused damage—and 80 percent of that number came from collisions with large or medium-sized birds such as geese and ducks.

No drone has struck a plane yet, which means the threat drones pose to existing aviation traffic is still entirely unrealized. Hell, combine this with the fact that most reported near misses weren’t near misses and we should actually take a moment to recognize what a nonissue personally owned drones have been so far. Drone operators by and large have been very well behaved.

The data on wildlife strikes is also valuable because it indicates that when a drone finally does strike a plane there probably won’t be much damage. Most personally owned drones are more fragile than the large or medium-sized birds that managed to cause damage when colliding with a plane.
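For perspective, the figures quoted above are easy to sanity check:

```python
# Sanity checking the numbers quoted from the AMA and Mercatus analyses.
near_miss_reports = 764
actual_near_misses = 27
print(f"{actual_near_misses / near_miss_reports:.1%} were real near misses")  # 3.5%

bird_strike_reports = 160_000
damaging_strikes = 14_314
print(f"{damaging_strikes / bird_strike_reports:.1%} of strikes caused damage")  # ~8.9%
```

In other words, under 9 percent of a quarter century of reported bird strikes caused any damage at all, and the damaging strikes skewed heavily toward birds far heavier than a typical consumer drone.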

What we have here is another example of a government money grab justified by a manufactured crisis. With the FAA’s new rules in place the agency can extract $5 from every registered drone operator and up to $250,000 from anybody operating a drone without registering. Furthermore, the FAA can raise those fees and fines as it sees fit.