When Karma Bites You In The Ass

The National Security Agency (NSA), which is supposedly tasked with securing domestic networks in addition to exploiting foreign networks, has caused a lot of damage to overall computer security. It appears one of its efforts, inserting a backdoor into the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) algorithm, may have bitten the State in the ass:

The government may have used compromised software for up to three years, exposing national security secrets to foreign spies, according to lawmakers and security experts.

Observers increasingly believe the software defect derived from an encryption “back door” created by the National Security Agency (NSA). Foreign hackers likely repurposed it for their own snooping needs.

[…]

The software vulnerability was spotted in December, when Juniper Networks, which makes a variety of IT products widely used in government, said it had found unauthorized code in its ScreenOS product.

[…]

The case is especially frustrating to security experts because it may have been avoidable. The hackers, they say, likely benefited from a flaw in the encryption algorithm that was inserted by the NSA.

For years, the NSA was seen as the standard-bearer on security technology, with many companies relying on the agency’s algorithms to lock down data.

But some suspected the NSA algorithms, including the one Juniper used, contained built-in vulnerabilities that could be used for surveillance purposes. Documents leaked by former NSA contractor Edward Snowden in 2013 appeared to confirm those suspicions.

Karma can be a real bitch.

This story does bring up a point many people often ignore: the State relies on a great deal of commercial hardware. Its infrastructure isn’t built of custom hardware and software free of the defects agencies such as the NSA introduce into commercial products. Much of its infrastructure is built on the exact same hardware and software the rest of us use. That means, contrary to what many libertarians claim as a pathetic justification not to learn proper computer security practices, the State is just as vulnerable to many of the same issues as the rest of us and is therefore not as powerful as it seems.
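The backdoor’s structure is simple enough to sketch. Below is a toy model of Dual_EC_DRBG that uses modular exponentiation in place of elliptic curve point multiplication; every number is made up and wildly insecure, but the relationship between the two public constants, and what knowing the secret link between them buys an attacker, mirrors the real design:

```python
# Toy model of the Dual_EC_DRBG backdoor. The real algorithm uses elliptic
# curve points P and Q; the backdoor is that whoever chose them may know a
# secret d with P = d*Q, which lets them recover the generator's internal
# state from its public output. Here exponentiation mod p stands in for
# point multiplication.

p = 2 ** 127 - 1           # a Mersenne prime; our toy group
Q = 7                      # public constant Q
d = 1234567                # the secret backdoor value
P = pow(Q, d, p)           # public constant P, secretly related to Q

def step(state):
    """One round of the toy DRBG: output derived from Q, next state from P."""
    output = pow(Q, state, p)      # attacker-visible "random" output
    new_state = pow(P, state, p)   # hidden next state
    return output, new_state

state = 42                 # secret seed
output, next_state = step(state)

# Anyone holding d recovers the next state from the public output alone:
# (Q^s)^d = (Q^d)^s = P^s, which is exactly the new state.
recovered = pow(output, d, p)
assert recovered == next_state
```

Once the state is recovered, every subsequent “random” number the generator produces can be predicted, which is what makes the backdoor so valuable to whoever planted it.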

Microsoft Makes Windows 10 A Recommended Upgrade For Users Of Older Versions Of Windows

File this under things that really annoy me:

From Monday, Windows Update will start making the upgrade to version 10 of the operating system a recommended update, rather than an optional one, a spokesperson for the software giant confirmed.

So if you’ve got Windows Update set up to automatically fetch and install recommended items – and the vast majority of people do because it’s the default setting – expect to, well, download and install a few gigabytes of Windows 10.

I understand Microsoft’s position. It’s getting tired of sinking resources into supporting older versions of its operating system. Moving more people to Windows 10 reduces the amount of resources it has to invest in older versions. At the same time, this makes my life difficult.

One of the simplest pieces of security advice that can be given is to tell users to turn on automatic updates. A lot of malware infections are the result of a user failing to apply the latest security patches for their operating system. Turning on automatic updates ensures the latest security patches are automatically downloaded and installed soon after they’re released.

But a lot of users don’t want to upgrade to Windows 10. By moving Windows 10 into the recommended updates category, users with automatic updates turned on will, unless they jump through a few hoops, find themselves running Windows 10.

This is an awkward position for me because I feel as though I must continue recommending people use automatic updates but I don’t want to force them into using the latest version of Windows if they don’t want to.

The Networks Have Ears

Can you trust a network you don’t personally administer? No. The professors at the University of California are learning that lesson the hard way:

“Secret monitoring is ongoing.”

Those ominous words captured the attention of many faculty members at the University of California at Berkeley’s College of Natural Resources when they received an email message from a colleague on Thursday telling them that a new system to monitor computer networks had been secretly installed on all University of California campuses months ago, without letting any but a few people know about it.

“The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of *all* this data (‘full packet capture’). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus,” said the email from Ethan Ligon, associate professor of agricultural and resource economics. He is one of six members of the Academic Senate-Administration Joint Committee on Campus Information Technology.

When you control a network it’s a trivial matter to set up monitoring tools. This is made possible by the fact that many network connections don’t utilize encryption. E-mail is one of the biggest offenders. Many e-mail servers don’t encrypt traffic in transit, so network monitoring tools can read the contents. Likewise, many websites still utilize unencrypted connections, so monitoring tools can easily read what is being sent and received between a browser and a web server. Instant messaging protocols often transmit data in the clear as well, so monitoring tools can read entire conversations.
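To make the point concrete, here’s a minimal Python sketch of what “full packet capture” sees when a message crosses the wire unencrypted. The socket pair stands in for the campus network, and the message and addresses are invented:

```python
import socket

# A toy "full packet capture": an unencrypted message crossing a socket is
# readable, byte for byte, by anything sitting on the wire in between.

sender, wire = socket.socketpair()

# An unencrypted e-mail, as it might cross the campus network.
sender.sendall(b"From: alice@example.com\r\n"
               b"To: bob@example.com\r\n"
               b"\r\n"
               b"The grant review is attached.\r\n")

captured = wire.recv(4096)   # what a network monitor would record
print(captured.decode())     # full contents, no decryption required
assert b"grant review" in captured

sender.close()
wire.close()
```

With encryption in place the monitor still records the bytes, but they’re ciphertext rather than readable headers and body, which is the whole argument for tools like VPNs and TLS.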

It’s not feasible to only use networks you control. A network that doesn’t connect to other networks is very limited in use. But there are tools to mitigate the risks associated with using a monitored network. For example, I run a Virtual Private Network (VPN) server that encrypts traffic between itself and my devices. When I connect to it all of my traffic goes through the encrypted connection so local network monitoring tools can’t snoop on my connections. Another tool that works very well for websites is the Tor Browser. The Tor Browser sends all traffic through an encrypted connection to an exit node. While the exit node can snoop on any unencrypted connections, local monitoring tools cannot.

Such tools wouldn’t be as necessary to maintain privacy, though, if all connections utilized effective encryption. E-mail servers, websites, instant messengers, etc. can encrypt traffic and often do. But the lack of ubiquitous encryption means monitoring tools can still collect some data on you.

Everything Is Becoming A Snitch

The Internet of Things promises many wonderful benefits but the lack of security focus guarantees there will be severe detriments. A column in the New York Times inadvertently explains how dire some of these detriments could be:

WASHINGTON — For more than two years the F.B.I. and intelligence agencies have warned that encrypted communications are creating a “going dark” crisis that will keep them from tracking terrorists and kidnappers.

Now, a study in which current and former intelligence officials participated concludes that the warning is wildly overblown, and that a raft of new technologies — like television sets with microphones and web-connected cars — are creating ample opportunities for the government to track suspects, many of them worrying.

“ ‘Going dark’ does not aptly describe the long-term landscape for government surveillance,” concludes the study, to be published Monday by the Berkman Center for Internet and Society at Harvard.

The study argues that the phrase ignores the flood of new technologies “being packed with sensors and wireless connectivity” that are expected to become the subject of court orders and subpoenas, and are already the target of the National Security Agency as it places “implants” into networks around the world to monitor communications abroad.

The products, ranging from “toasters to bedsheets, light bulbs, cameras, toothbrushes, door locks, cars, watches and other wearables,” will give the government increasing opportunities to track suspects and in many cases reconstruct communications and meetings.

Encryption is only part of the electronic security puzzle. Even if your devices properly implement encryption to secure the data they store, transmit, or receive, they may not properly enforce credentials. Authorized users are expected to be able to gain access to plaintext data, so bypassing the security offered by encryption can be done by gaining access to an authorized user account.

Let’s consider the Amazon Echo. The Echo relies heavily on voice commands, which means it has a built-in microphone that’s always listening. Even if the data it transmits to and receives from Amazon is properly encrypted an unauthorized user who gains access to the device as an authorized user could use the microphone to record conversations. In this case cryptography hasn’t failed, the device is merely providing expected access.

Internet of Things devices, due to the lack of security focus, often fail to enforce authorization. Some devices require no authorization at all, some have vulnerabilities that allow an unauthorized user to gain access to an authorized user’s account, and others include built-in backdoor administrative accounts with hardcoded passwords. That gives the State potential access to a great many sensors in a targeted person’s household.
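As a hypothetical illustration of that last failure, here’s a sketch of a device whose login routine ships with a hardcoded maintenance password. The class, the user’s credential, and the backdoor string are all invented (the last is modeled on the sort of strings found in real firmware dumps):

```python
import hmac

# Hypothetical sketch of a hardcoded-backdoor authentication flaw. The
# legitimate credential check is done correctly, and is then undone
# entirely by a maintenance password baked into the firmware.

BACKDOOR_PASSWORD = "7ujMko0admin"   # invented; anyone who reads the firmware has it

class ToyCamera:
    def __init__(self, user_password):
        self.user_password = user_password

    def login(self, password):
        # Constant-time comparison for the legitimate credential...
        if hmac.compare_digest(password, self.user_password):
            return True
        # ...rendered pointless by the hardcoded service account.
        if password == BACKDOOR_PASSWORD:
            return True
        return False

cam = ToyCamera(user_password="correct horse battery staple")
assert cam.login("correct horse battery staple")   # the authorized user
assert cam.login(BACKDOOR_PASSWORD)                # anyone with the firmware
assert not cam.login("guess")
```

No amount of strong encryption on the device’s network traffic helps here; the attacker simply logs in as an authorized user and the device hands over the plaintext.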

I’m not against the idea behind the Internet of Things per se. But I’m wary of such devices at the moment because the manufacturers are, in my opinion, being sloppy with security. In time I’m sure the hard lessons will be learned just as they were learned by operating system developers in the past. When that finally happens and I can be reasonably assured the security of my smart television isn’t nonexistent I may become more willing to buy such products.

Mandatory Tracking

Fitness trackers are convenient devices for tracking health related information. Unfortunately many organizations see genuinely good ideas and decide they must be mandatory. That’s what Oral Roberts University in Oklahoma has decided:

Oral Roberts University in Tulsa, Oklahoma, is requiring incoming freshmen to wear Fitbit fitness trackers to record 10,000 steps per day, with the information being made available to professors.

“ORU offers one of the most unique educational approaches in the world by focusing on the Whole Person — mind, body and spirit,” ORU President William M. Wilson said in a statement, a local CBS News affiliate reported.

“The marriage of new technology with our physical fitness requirements is something that sets ORU apart,” he said. “In fact, when we began this innovative program in the fall of 2015, we were the first university in the world to offer this unique approach to a fitness program.”

The Fitbit device uses GPS technology to track how and where students exercise, eat and sleep, as well as the calories they burn, how much they weigh and other personal information, EAGNews reported.

This raises so many privacy related questions. How does the university verify each student has taken the right number of steps per day? Is the information synced to the student’s smartphone (assuming the student has a smartphone)? If so, is the data collected by an app created by the university or Fitbit’s app? If the latter does the university demand students hand over their Fitbit account credentials? Is the health data accessible at any time to the university?

More concerning is how this technology will be mandated in the future. Will health insurance companies begin mandating that customers wear Fitbits and meet a certain number of daily steps? While one can choose not to attend the Orwell, err, Oral Roberts University, they cannot decide to forgo health insurance lest they be fined by the State. Could businesses require employees to wear Fitbits as part of a wellness program (one of my friends works at a place where wearing a Fitbit is required to receive a health insurance discount, but it’s not mandatory yet)?

Technology is great so long as it remains voluntary. It’s when organizations start mandating the use of a technology that things become frightening.

There The Market Goes Again, Solving Problems Without Threats Of Force

Humans aren’t very good drivers. We’re unable to watch everything that’s going on around us at all times, we’re easily distracted, and many of us seem utterly incapable of putting the cell phone down even when we’re driving. Not surprisingly, especially when you consider the number of vehicles on the road, a lot of collisions happen every day. The State benefits from this because it has created numerous laws that allow it to rake in cash when people crash into one another but do fuck all for safety. Fortunately the market is here to help and it doesn’t even need a gang of armed agents to shoot our pets:

In what may not come as a surprise, vehicles with automatic braking systems are involved in rear-end crashes (that is, accidents in which a vehicle hits a car directly in front of them) at lower rates than vehicles not equipped with the systems, says the Insurance Institute for Highway Safety, or IIHS.

The research focused on Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB), as well as the suite of systems made by Volvo called City Safety, which includes advanced versions of those two technologies. The research examined vehicles from a number of different automakers including Acura, Honda, Mercedes-Benz, Subaru and Volvo, which were equipped with FCW and AEB, as well as vehicles that included just FCW or no crash prevention tech at all.

According to the IIHS research, equipping vehicles with both warning and autobraking systems reduced the rate of rear-end crashes by 39 percent and rear end crashes with injuries by 42 percent. That’s an overall reduction in crashes by 12 percent and a reduction in injury crashes by 15 percent.
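A quick back-of-the-envelope check shows those numbers hang together: a 39 percent drop in rear-end crashes producing a 12 percent drop overall implies rear-end crashes account for roughly a third of all crashes.

```python
# Sanity check on the IIHS figures quoted above: if cutting rear-end
# crashes by 39 percent cuts all crashes by 12 percent, rear-end crashes
# must make up about 12/39 of the total.

rear_end_reduction = 0.39
overall_reduction = 0.12
implied_share = overall_reduction / rear_end_reduction
print(f"Implied rear-end share of all crashes: {implied_share:.0%}")
assert 0.30 < implied_share < 0.32
```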

Machines can be far better drivers than humans. With the right sensors they can watch everything that’s going on around them, they don’t get distracted, and they can multitask so sending information over a cellular connection doesn’t hinder their ability to drive. Adding automation to automobiles has been improving safety since, at least, power brakes became a thing. As the number of tasks an automobile can do itself increases we will likely continue to see a correlating increase in safety.

What’s beautiful about these safety systems is that they don’t require the threat of violence to create. I’m sure the State will take credit for these automated braking systems by making them mandatory but the State didn’t invent them, the market did. Automobile manufacturers have voluntarily developed these systems to make their vehicles safer and therefore, they hope, more appealing to customers.

Meanwhile the State will continue passing laws to needlessly change the roadways and highways, make more things a finable offense, and other such nonsense under the false claims of increasing safety while really increasing its revenue.

Police Body Cameras Won’t Save Us

Setting aside the severe privacy implications of pervasive police body cameras, the biggest issue is that the police remain in sole control of the devices and data. Even in cities that require police to wear body cameras I still urge people to record any and all police interactions they’re either a party to or come across. When individuals record the police the footage isn’t in the police’s control so there are barriers that make it more difficult for them to use it to prosecute somebody. Footage recorded by individuals is also more resilient to the body camera memory hole:

Chicago Police Department officers stashed microphones in their squad car glove boxes. They pulled out batteries. Microphone antennas got busted or went missing. And sometimes, dashcam systems didn’t have any microphones at all, DNAinfo Chicago has learned.

Police officials last month blamed the absence of audio in 80 percent of dashcam videos on officer error and “intentional destruction.”

When the only footage of a police encounter comes from a police controlled device it’s a simple matter for the officer to disable it. The best way to counter such a threat is to record police interactions yourself.

Most people carry smartphones, which usually come equipped with a decent camera. You can use the built-in video recording app but there are better options in my opinion. A friend of mine who spends a lot of time recording the police uses and recommends Bambuser. The American Civil Liberties Union has region specific apps for recording the police. Both options are good because they upload the video to a remote server so a cop cannot destroy the footage by confiscating or destroying your recording device.
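The design principle those apps share is easy to sketch: upload the footage in small chunks as it’s recorded rather than as one file at the end. Everything sent before the phone is seized already lives on the remote server. The `upload` function below is a stand-in; a real app would POST each chunk over HTTPS to its own servers:

```python
# Sketch of chunked streaming upload. Frames are pushed out in small
# batches as they are captured, so confiscating or destroying the phone
# only loses the final, unsent chunk.

CHUNK_FRAMES = 2  # upload after every couple of frames

def stream_recording(frames, upload):
    """Send the recording out in chunks instead of as one file at the end."""
    chunk = []
    for frame in frames:
        chunk.append(frame)
        if len(chunk) >= CHUNK_FRAMES:
            upload(b"".join(chunk))   # already safe on the server
            chunk = []
    if chunk:
        upload(b"".join(chunk))       # flush whatever remains

# Simulate an upload target with a plain list.
server = []
frames = [b"frame%d " % i for i in range(5)]
stream_recording(frames, server.append)
print(server)   # three chunks reached the "server"
assert b"".join(server) == b"".join(frames)
```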

Police body cameras sound like a great idea on paper but as with most things in life if you want something done right you should do it yourself.

The Public-Private Surveillance Partnership

Between government and corporate surveillance I would, nominally, agree that government surveillance is more dangerous. This is because corporations aren’t in the practice of sending armed goons to your home to kick in your door, shoot your dog, and kidnap you based on what their surveillance has uncovered. But the distinction is only nominal because the data collected from corporate surveillance often finds its way into the government’s hands:

Throughout the United States—outside private houses, apartment complexes, shopping centers, and businesses with large employee parking lots—a private corporation, Vigilant Solutions, is taking photos of cars and trucks with its vast network of unobtrusive cameras. It retains location data on each of those pictures, and sells it.

It’s happening right now in nearly every major American city.

The company has taken roughly 2.2 billion license-plate photos to date. Each month, it captures and permanently stores about 80 million additional geotagged images. They may well have photographed your license plate. As a result, your whereabouts at given moments in the past are permanently stored. Vigilant Solutions profits by selling access to this data (and tries to safeguard it against hackers). Your diminished privacy is their product. And the police are their customers.

The company counts 3,000 law-enforcement agencies among its clients. Thirty thousand police officers have access to its database. Do your local cops participate?

One of the biggest risks of corporate surveillance is that the collected data, either through sale or warrant, ends up in the hands of the State. While I have no real concerns about Facebook using my social graph to justify sending armed goons to kidnap me, I do have concerns about a judge granting a warrant to a law enforcement agency to obtain that data as justification for kidnapping me.

Security Is Critical Even If You Think You Have Nothing To Hide

In my position as a discount security advisor to the proles one of the hardest challenges I face is convincing people how important security is. Most people assume they have nothing to hide. They usually claim they won’t lose anything of importance if an unauthorized party gains access to their online accounts. I can’t remember how many times I’ve heard, “If they get into my Facebook they’ll just learn how boring I am.”

Even if you are the most boring person in the world, preventing unauthorized persons from accessing your accounts is critically important. Failing to do so can lead to severe real life ramifications:

In one nasty spurt in May, a hacker gained control of Amy’s Twitter account, which she had used only twice before, and posted a series of racist and antisemitic messages. (See if you can tell where Amy’s tweets end and the hacker’s begin in the timeline below.)

That same day, a hacker used Amy’s email account to post a message to a Yahoo Groups list of about 300 residents of the Straters’ subdivision, including many parents of students at the elementary school that the family’s youngest daughter attends. According to local news reports, the message carried a chilling subject line—“I Will Shoot Up Your School”—and detailed a planned attack on the school. Oswego police quickly verified that Amy’s account had been hacked and that the message was a hoax, but the damage had been done.

Later that day, Amy discovered that her LinkedIn profile had been hacked, too. The hacker posted a message calling her employer, Ingalls Health System, “A TERRIBLE COMPANY RAN [sic] BY JEWS.”

Amy, who had worked at Ingalls for seven months as a director of decision support, had suspected that the trolls might target her employer. She says she had previously alerted the company’s IT department that the company’s systems might be compromised by the same people who were attacking her and her son.

She expected support—after all, if it was her house that was being repeatedly robbed, rather than her social media accounts, wouldn’t the company be sympathetic? But none came. Shortly after the hack, Ingalls fired Amy from her six-figure job, giving her 12 weeks of severance pay. Amy says she got no satisfactory explanation for her dismissal, other than a hint that she was “too much of a liability.” (A spokeswoman for Ingalls Health System declined to comment.)

[…]

She hasn’t been able to get another job in hospital administration because for months, her first page of Google results has included her LinkedIn profile and her Twitter account, both of which were filled with racist and anti-semitic language. (She recently regained access to her LinkedIn account after contacting the company’s fraud division, but her defaced Twitter account is still up, since the attacker changed the password to prevent her from restoring it.)

I won’t lie to you and claim proper security practices will thwart a dedicated attacker such as the ones preying on the Straters. What proper security practices will do is make you a harder target. The cost of attacking you will go up and when it comes to self-defense, whether it’s online or offline, the goal is to raise the cost of attacking you high enough to dissuade your attackers. If you can’t dissuade your attacker entirely you can still reduce the amount of damage they cause.

Twitter, Yahoo, Google, LinkedIn, Facebook, and many other websites now offer two factor authentication. Two factor authentication requires both a password and an additional authentication token, usually tied to a physical device such as your phone, to log into an account. Enabling it is a relatively easy way to notably raise the cost of gaining unauthorized access to your accounts. If nothing else you should make sure your primary e-mail account supports two factor authentication and that it is enabled. E-mail accounts are a common method used by websites to reset passwords so gaining access to your e-mail account often allows an attacker to gain access to many of your other online accounts.
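For the curious, the time-based codes most of these sites use follow TOTP (RFC 6238), which is small enough to sketch with Python’s standard library. The server and your phone share a secret, and both derive a short code from it and the current time, so a stolen password alone is no longer enough to log in:

```python
import hashlib
import hmac
import struct
import time

# Sketch of TOTP (RFC 6238), the algorithm behind most phone-based
# second factors. It is HOTP (RFC 4226) with the counter taken from
# the clock in 30-second steps.

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238: HOTP with the counter derived from the clock."""
    if for_time is None:
        for_time = time.time()
    return hotp(secret, int(for_time) // step, digits)

# RFC 6238's published test vector: at t=59s this secret yields 94287082.
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
print(totp(b"12345678901234567890"))  # a live six-digit code
```

The codes rotate every 30 seconds, so even an attacker who intercepts one has a very narrow window in which to use it.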

I also recommend using a password manager. There are many to choose from. I use 1Password. LastPass is still a manager I’m willing to recommend, with the caveat that I don’t trust the new owners and therefore am wary of it as a long-term solution. Password managers allow you to use a unique, complex password for each of your accounts. If you use a common password for all of your accounts, which is a sadly common practice, and an unauthorized party learns that password they will have access to all of those accounts. Using a password manager allows you to limit damage by securing accounts with complex passwords that are difficult to guess and ensures an unauthorized party cannot gain access to any additional accounts by learning the password to one of them.

I must note that there is the potential threat of an unauthorized party compromising your password manager. In general the risk of this is lower than the risks involved with not using a password manager. There are also ways to mitigate the risk of unauthorized parties gaining access. LastPass, along with many other online password managers, supports two factor authentication. 1Password syncs passwords using iCloud or Dropbox, both of which support two factor authentication. You can also disable syncing in 1Password entirely so your password database never leaves your computer. LastPass, 1Password, and most other password managers also encrypt your password database so even if an unauthorized party does obtain a copy of the database they cannot read it without your decryption key.
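Two of the ideas above can be sketched with Python’s standard library: generating a unique random password per account, and stretching a master password into an encryption key with a deliberately slow derivation function (PBKDF2 here). The parameters and strings are illustrative, not recommendations:

```python
import hashlib
import os
import secrets

# Sketch of two password-manager building blocks: per-site random
# passwords, and a vault key derived from the master password with a
# slow key-derivation function so a stolen database is expensive to
# brute-force.

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%"

def generate_password(length=20):
    """A unique, high-entropy password for a single account."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def derive_vault_key(master_password, salt, iterations=600_000):
    """Stretch the master password into a 32-byte encryption key."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt,
                               iterations, dklen=32)

salt = os.urandom(16)                 # stored alongside the vault, not secret
key = derive_vault_key("correct horse battery staple", salt)
print(generate_password())            # one of these per account
assert len(key) == 32
```

The slow derivation is the point: each guess at the master password costs an attacker hundreds of thousands of hash operations, while you only pay that cost once per unlock.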

Using two factor authentication and a password manager are by no means the only actions you can take. I mention them because they are simple ways for the average person to bolster the security of their online accounts quickly.

Nothing I’ve described above will protect you from social engineering attacks. Due to the lack of authentication inherent in many systems it’s still possible for an attacker to send the police to your home, order pizzas to be delivered to your home, call your employer and harass them enough to convince them to fire you, send anonymous bomb threats in your name, get your utilities disconnected, etc.

What I’ve described can reduce the risk of an attacker gaining access to your social media accounts and posting things that could cost you your job and haunt you for the rest of your life. And regardless of what most people believe, keeping attackers out of these accounts is important. Failure to do so can lead to dire consequences as demonstrated in the linked story.

Judges Don’t Have To Understand Something To Rule On It

In most professions the opinions of those who lack an understanding of a pertinent topic are rightfully ignored. Why would anybody waste time asking somebody who knows nothing about software development about the best method to implement a software feature? But the legal field is not most professions. In the legal field you can lack an understanding of a pertinent topic and still be taken seriously as proven time and again when a judge attempts to rule on a case involving technology:

In short, Judge Byran, despite hearing the views of those who took part in the investigation, and having read the briefs submitted by the defense and prosecution several times, could not fully grasp what the NIT was doing.

“If a smart federal judge still has trouble understanding after hours of expert testimony what is actually going on,” then the average judge signing warrant applications has little hope of truly understanding what the FBI is proposing, Nate Wessler, staff attorney at the American Civil Liberties Union (ACLU), told Motherboard in a phone interview. (The ACLU has agreed to a protective order for the Michaud case, allowing it access to the sealed filings.)

“It appears in this case, and that’s consistent with other cases we’ve seen elsewhere in the country involving use of malware, the government explanations and warrant applications are quite sparse, and do not fully explain to judges how these technologies works,” Wessler added.

As the hearing continued, Judge Byran said “I suppose there is somebody sitting in a cubicle somewhere with a keyboard doing this stuff. I don’t know that. It may be they seed the clouds, and the clouds rain information. I don’t know.”

Emphasis mine. The judge openly admits that he doesn’t know how the Federal Bureau of Investigation’s (FBI) malware works and further emphasizes this fact by saying something entirely nonsensical. In almost any other profession the judge’s rambling would have been dismissed but in the legal profession his ruling, even though he has no idea what he’s ruling on, is respected.

This is yet another item in a long list of problems with the United States legal system. The fate of accused parties is being put into the hands of individuals who are entirely unqualified to make the decisions they’re tasked with making. As soon as Judge Byran said he didn’t know what was going on he should have been replaced by somebody qualified. In any other profession he would have been. But a judge’s power is more important than their knowledge in the courtroom. How anybody can look at such a system and claim it dispenses justice is beyond me.