The Double-Edged Sword of Body Cameras

As the public’s trust in law enforcers diminished, demands were made to monitor working police officers. These demands resulted in calls for officers to wear body cameras that record their actions while they work. In response, many law enforcement agencies started buying body cameras and issuing them to their officers. This satiated many people’s demands for police monitoring, but some of us pointed out the limited utility of body cameras since departments usually control the footage. So long as body camera footage isn’t made available to the public in some manner, it’s far too easy for departments to make any footage that incriminates their officers disappear down a memory hole.

Since no standards exist regarding the availability of police body camera footage, states, counties, and cities are making up their own rules as they go. Locally, a Hennepin County judge recently ruled that police body camera footage is off limits to the public:

So Hennepin prosecutors met with the chief judge and representatives of the Hennepin Public Defender’s Office, which handles 45,000 cases a year. The result was Bernhardson’s order, which asserts that prosecutors and defense attorneys have to follow the guidelines of the law, which save for “certain narrow exceptions,” classifies body camera video as off-limits to the public.

As the article points out, there are some difficult privacy questions regarding police body camera footage. However, body cameras are of limited use if such footage is classified as off-limits to the public. Under such a system, body cameras allow law enforcers to use the footage as evidence against the people they arrest but don’t allow the public to use the footage to hold bad law enforcers accountable.

This lopsided policy shouldn’t surprise anybody. Law enforcement departments wouldn’t willingly adopt body cameras if they could realistically be used to hold officers accountable. But they would jump at the chance to use such devices to prosecute more people because then body cameras are a revenue generator instead of a liability. The State, having an interest in appeasing its revenue generators, has been more than happy to give law enforcers a ruleset that gives them the benefits of body cameras without the pesky downsides.

What does this mean for the general public? It means everybody should record, and preferably livestream, every police encounter they are either a party to or come across.

CNN and Hackers

The media’s portrayal of hackers is never accurate but almost always amusing. From hooded figures stooping over keyboards and looking at green ones and zeros on a black screen to balaclava-clad individuals holding a laptop in one hand while they furiously type with the other, the creative minds behind the scenes at major media outlets always have a way to make hackers appear far more sinister than they really are.

CNN recently aired a segment about Russian hackers. How did the creative minds at CNN portray hackers to the viewing public? By showing a mini-game from a game you may have heard of:

In a recent story about President Obama proposing sanctions against Russia for its role in cyberattacks targeting the United States, CNN grabbed a screenshot of the hacking mini-game from the extremely popular RPG Fallout 4. First spotted by Reddit, the screenshot shows the menacing neon green letters that gamers will instantly recognize as being from the game.

Personally, I would have lifted a screenshot from the hacking mini-game in Deus Ex; it looks far more futuristic.

A lot of electrons have been annoyed by all of the people flipping out about fake news. But almost no attention has been paid to uninformed news. Most major media outlets are woefully uninformed about many (most?) of the subjects they report on. If you know anything about guns or technology, you’re familiar with the amount of inaccurate reporting that occurs because of the media’s lack of understanding. When the outlet reporting on a subject doesn’t know anything about that subject, the information it provides is worthless. Why aren’t people flipping out about that?

The Walls Have Ears

Voice-activated assistants such as the Amazon Echo and Google Home are becoming popular household devices. With a simple voice command these devices can allow you to do anything from turning on your smart lightbulbs to playing music. However, any voice-activated device must necessarily be listening at all times, and law enforcers know that:

Amazon’s Echo devices and its virtual assistant are meant to help find answers by listening for your voice commands. However, police in Arkansas want to know if one of the gadgets overheard something that can help with a murder case. According to The Information, authorities in Bentonville issued a warrant for Amazon to hand over any audio or records from an Echo belonging to James Andrew Bates. Bates is set to go to trial for first-degree murder for the death of Victor Collins next year.

Amazon declined to give police any of the information that the Echo logged on its servers, but it did hand over Bates’ account details and purchases. Police say they were able to pull data off of the speaker, but it’s unclear what info they were able to access.

While Amazon declined to provide any server-side information logged by the Echo, there’s no reason a court order couldn’t compel Amazon to provide such information. In addition, law enforcers managed to pull some unknown data locally from the Echo. Those two points raise questions about what kind of information devices like the Echo and Home collect as they sit passively on your counter awaiting your command.
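
The always-listening point can be illustrated with a toy sketch. Everything here (the frame interface, the detector, the buffer size) is a made-up simplification, not anything specific to the Echo, but it shows why a wake-word device has to examine, and buffer, every frame of audio it hears:

```python
from collections import deque

BUFFER_FRAMES = 50  # hypothetical: ~1 second of audio at 20 ms per frame

def listen(frames, detect_wake_word):
    """Consume a stream of audio frames, always buffering.

    `detect_wake_word` is a stand-in for a real detector; the point is
    that the device must inspect every frame to know when you spoke.
    """
    ring = deque(maxlen=BUFFER_FRAMES)  # rolling pre-wake audio
    for frame in frames:
        ring.append(frame)              # captured whether you spoke or not
        if detect_wake_word(frame):
            # Everything in `ring` -- including speech from *before*
            # the wake word -- is now available to ship to a server.
            return list(ring)
    return None
```

Note that the rolling buffer means speech from before the wake word can end up on the device, and potentially on a server, too.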

As with much of the Internet of Things, I haven’t purchased one of these voice-activated assistants and have no plans to buy one anytime in the near future. They’re too big of a privacy risk for my tastes since I don’t even know what kind of information they’re collecting as they sit there listening.

Bypassing the Censors

What happens when a government attempts to censor people who are using a secure mode of communication? The censorship is bypassed:

Over the weekend, we heard reports that Signal was not functioning reliably in Egypt or the United Arab Emirates. We investigated with the help of Signal users in those areas, and found that several ISPs were blocking communication with the Signal service and our website. It turns out that when some states can’t snoop, they censor.

[…]

Today’s Signal release uses a technique known as domain fronting. Many popular services and CDNs, such as Google, Amazon Cloudfront, Amazon S3, Azure, CloudFlare, Fastly, and Akamai can be used to access Signal in ways that look indistinguishable from other uncensored traffic. The idea is that to block the target traffic, the censor would also have to block those entire services. With enough large scale services acting as domain fronts, disabling Signal starts to look like disabling the internet.
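
The quoted technique can be sketched in a few lines. The domains below are placeholders, and real domain fronting happens inside a TLS connection, but the split is the same: the name visible to the censor differs from the Host header, which travels encrypted:

```python
def fronted_request(front_domain: str, hidden_host: str, path: str = "/"):
    """Build a domain-fronted HTTP GET.

    `front_domain` is what the censor sees during the TLS handshake
    (the SNI); `hidden_host` rides inside the encrypted Host header
    and tells the CDN where the request is really going.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return front_domain, request

# An observer on the wire sees only the front domain:
sni, req = fronted_request("www.example-cdn.com", "blocked.example.org")
```

To block the hidden service, the censor has to block the front domain, which is exactly the collateral damage the Signal post describes.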

Censorship is an arms race between the censors and the people trying to communicate freely. When one side finds a way to bypass the other, the other side responds. Fortunately, each individual government is up against the entire world. Egypt and the United Arab Emirates only have control over their own territories, but the people in those territories can access knowledge from anywhere in the world. With odds like that, the State is bound to fail every time.

This is also why any plans to compromise secure means of communication are doomed to fail. Let’s say the United States passes a law that requires all encryption software used within its borders to include a government backdoor. That isn’t the end of secure communications in the United States. It merely means that people wanting to communicate securely need to obtain tools developed in nations where such rules don’t exist. Since the Internet is global, access to the goods and services of other nations is at your fingertips.

Denial of Service Attacks are Cheap to Perform

How expensive is it to perform a denial of service attack in the real world? More often than not the cost is nearly zero. The trick is to exploit the target’s own security concerns:

A flight in America was delayed and almost diverted on Tuesday after a passenger changed the name of their wi-fi device to ‘Samsung Galaxy Note 7’.

An entire flight was screwed up by simply changing the SSID of a device.

Why did this simple trick cause any trouble whatsoever? Because the flight crew was more concerned with enforcing the rules than with actual security. There was no evidence of a Galaxy Note 7 being onboard. Since anybody can change their device’s SSID to anything they want, the presence of the SSID “Samsung Galaxy Note 7” shouldn’t have been enough to cause any issues. But the flight crew allowed that, at best, flimsy evidence to spur them into a hunt for the device.
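
For a sense of how cheap the trick is: on a Linux laptop running NetworkManager, broadcasting a hotspot with an arbitrary network name takes a single command (the SSID and password here are purely illustrative):

```shell
# Start a Wi-Fi hotspot whose SSID is an arbitrary string. Nearby devices
# scanning for networks will see "Samsung Galaxy Note 7" regardless of
# what hardware is actually broadcasting it.
nmcli device wifi hotspot ssid "Samsung Galaxy Note 7" password "harmless123"
```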

This is why performing denial of service attacks in the real world is often very cheap. Staffers, such as flight crew, seldom have any real security training, so they tend to overreact. They’re trying to cover their asses (and I don’t mean that as an insult; if they don’t cover their asses they very well could lose their jobs), which means you have an easy exploit sitting there for you.

You Have No Right to Privacy, Slave

It’s a good thing we have a right to not incriminate ourselves. Without that right a police officer could legally require us to give them our passcodes to unlock our phones:

A Florida man arrested for third-degree voyeurism using his iPhone 5 initially gave police verbal consent to search the smartphone, but later rescinded permission before divulging his 4-digit passcode. Even with a warrant, they couldn’t access the phone without the combination. A trial judge denied the state’s motion to force the man to give up the code, considering it equal to compelling him to testify against himself, which would violate the Fifth Amendment. But the Florida Court of Appeals’ Second District reversed that decision today, deciding that the passcode is not related to criminal photos or videos that may or may not exist on his iPhone.

‘Merica!

George W. Bush was falsely accused of saying that the Constitution was just a “Goddamn piece of paper!” Those who believed the quote were outraged because that sentiment is heresy against the religion of the State. But it’s also true. The Constitution, especially the first ten amendments, can’t restrict the government in any way. It’s literally just a piece of paper, which is why your supposed rights enshrined by the document keep becoming more and more restricted.

Any sane interpretation of the Fifth Amendment would say that nobody is required to surrender a password to unlock their devices. But what you or I think the Constitution says is irrelevant. The only people who get to decide what it says, according to the Constitution itself, are the men who wear magical muumuus.

Facebook’s Attempt to Combat Scam News Sites

Fake news is the current boogeyman occupying news headlines. Ironically, this boogeyman is being promoted by many organizations that produce fake news, such as CNN, Fox News, and MSNBC. For the most part, fake news isn’t harmful. In fact fake news, which was originally referred to as tabloid journalism, has probably been around as long as real news. But fake news can be harmful when it’s used to scam individuals, which is a problem Facebook is looking to address:

A new suite of tools will allow independent fact checkers to investigate stories that Facebook users or algorithms have flagged as potentially fake. Stories will be mostly flagged based on user feedback. But Mosseri also noted that the company will investigate stories that become viral in suspicious ways, such as by using a misleading URL. The company is also going to flag stories that are shared less than normal. “We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way,” Mosseri wrote.

Mosseri indicated that the company’s new efforts will only target scammers, not sites that push conspiracies like Pizzagate. “Fake news means a lot of different things to a lot of different people, but we are specifically focused on the worst of the worst—clear intentional hoaxes,” he told BuzzFeed. In other words, if a publisher genuinely believes fake news to be true, it will not be fact checked.

On the surface this doesn’t seem like a bad idea. I’ve seen quite a few people repost what they thought was a legitimate news article because the article was posted on a website that looked like CNBC and had a URL very close to CNBC’s but wasn’t actually CNBC. If you caught the slightly malformed URL, you realized that the site was a scam.

However, I don’t have much faith in the method Facebook is using to judge whether an article is legitimate or not:

Once a story is flagged, it will go into a special queue that can only be accessed by signatories to the International Fact-Checkers Network Code of Principles, a project of nonprofit journalism organization Poynter. IFCN Code of Principles signatories in the U.S. will review the flagged stories for accuracy. If the signatory decides the story is fake news, a “disputed” warning will appear on the story in News Feed. The warning will also pop up when you share the story.

I don’t particularly trust many of the IFCN signatories. Websites such as FactCheck.org and Snopes have a very hit or miss record when it comes to fact checking. And I especially don’t trust nonprofit organizations. Any organization that claims that it doesn’t want to make a profit is suspect because, let’s face it, everybody wants to make a profit (although it may not necessarily be a monetary profit).

Either way, it’ll be interesting to see if Facebook’s tactic works for reducing the spread of outright scam sites.

Russia

Russia hasn’t occupied this much airtime on American news channels since the Cold War. But everywhere you look it’s Russia this and Russia that. Russia is propping up the Assad regime in Syria! Russia rigged the election! Russia stole my lunch money!

Wait, let’s step back to the second one. A lot of charges are being made that Russia “hacked” the election, which allowed Trump to win. And there’s some evidence that shenanigans were taking place regarding the election:

Georgia’s secretary of state says the state was hit with an attempted hack of its voter registration database from an IP address linked to the federal Department of Homeland Security.

Well that’s embarrassing. Apparently the Department of Motherland Fatherland Homeland Security (DHS) is a Russian agency. Who would have guessed?

Could Russia have influenced the election? Of course. We live in an age of accessible real-time global communications. Anybody could influence anybody else’s voting decision. A person in South Africa could influence a voter in South Korea to opt for one choice over another. This global communication system also means that malicious hackers in one nation could compromise any connected election equipment in another country.

However, the biggest check against Russian attempts to rig the election is all of the other forces that would be trying to do the exact same thing. People have accused both the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) (admittedly, rigging elections is what the CIA does) of trying to rig the election. Likewise, there are some questions about what exactly the DHS was doing in regards to Georgia. Major media companies were working overtime to influence people’s voting decisions. Countries in Europe had a vested interest in the election going one way or another, as did pretty much every other country on Earth.

I have no evidence one way or another but that’s never stopped me from guessing. My guess as to why these accusations against Russia are being made so vehemently is that a lot of voters are looking for answers as to why Trump won but are unwilling to consider that their preferred candidate was terrible. When you convince yourself that the candidate you oppose is Satan incarnate then you lose the ability to objectively judge your own candidate because in your head it’s now a battle between evil and good, not a battle between two flawed human beings.

Security Implications of Destructive Updates

It should be becoming more apparent that you don’t own your smartphone. Sure, you paid for it and you physically control it, but if the device itself can be disabled by a third party without your authorization, can you really say that you own it? This is a question Samsung Galaxy Note 7 owners should be asking themselves right now:

Samsung’s Galaxy Note 7 recall in the US is still ongoing, but the company will release an update in a couple of weeks that will basically force customers to return any devices that may still be in use. The company announced today that a December 19th update to the handsets in the States will prevent them from charging at all and “will eliminate their ability to work as mobile devices.” In other words, if you still have a Note 7, it will soon be completely useless.

One could argue that this ability to push an update to a device to disable it is a good thing in the case of the Note 7 since the device has a reputation for lighting on fire. But it has rather frightening ownership and security implications.

The ownership implications should be obvious. If the device manufacturer can disable your device at its whim then you can’t really claim to own it. You can only claim that you’re borrowing it for as long as the manufacturer deems you worthy of doing so. However, in regards to ownership, nothing has really changed. Since copyright and patent laws were applied to software your ability to own your devices has been basically nonexistent.

The security implications may not be as obvious. Sure, the ability for a device manufacturer to push implicitly trusted software to its devices carries risks, but the tradeoff, relying on users to apply security updates, also carries risks. But this particular update being pushed out by Samsung has the ability to destroy users’ trust in manufacturer updates. Many users are currently happy to allow their devices to update themselves automatically because those updates tend to improve the device. It only takes a single bad update to make those users unhappy with automatic updates. If they become unhappy with automatic updates, they will seek ways of disabling them.

The biggest weakness in any security system tends to be the human component. Part of this is due to the difficulty of training humans to be secure. It takes a great deal of effort to train somebody to follow even basic security principles but it takes very little to undo all of that training. A single bad experience is all that generally stands between that effort and having all of it undone. If Samsung’s strategy becomes more commonplace I fear that years of getting users comfortable with automatic updates may be undone and we’ll be looking at a world where users jump through hoops to disable updates.

Degrees of Anonymity

When a service describes itself as anonymous how anonymous is it? Users of Yik Yak may soon have a chance to find out:

Yik Yak has laid off 70 percent of employees amid a downturn in the app’s growth prospects, The Verge has learned. The three-year-old anonymous social network has raised $73.5 million from top-tier investors on the promise that its young, college-age network of users could one day build a company to rival Facebook. But the challenge of growing its community while moving gradually away from anonymity has so far proven to be more than the company could muster.

[…]

But growth stalled almost immediately after Sequoia’s investment. As with Secret before it, the app’s anonymous nature created a series of increasingly difficult problems for the business. Almost from the start, Yik Yak users reported incidents of bullying and harassment. Multiple schools were placed on lockdown after the app was used to make threats. Some schools even banned it. Yik Yak put tools in place designed to reduce harassment, but growth began to slow soon afterward.

Yik Yak claimed it was an anonymous social network and on the front end the data did appear anonymous. However, the backend may be an entirely different matter. How much information did Yik Yak regularly keep about its users? Internet Protocol (IP) addresses, Global Positioning System (GPS) coordinates, unique device identifiers, phone numbers, and much more can be easily collected and transmitted by an application running on your phone.

Bankruptcy is looking like a very real possibility for Yik Yak. If the company ends up filing then its assets will be liquidated. In this day and age user data is considered a valuable asset. Somebody will almost certainly end up buying Yik Yak’s user data and when they do they may discover that it wasn’t as anonymous as users may have thought.

Not all forms of anonymity are created equal. If you access a web service without using some kind of anonymity service, such as Tor or I2P, then the service already has some identifiable information, such as your IP address and a browser fingerprint. If you access the service through a phone application, then that application may have collected and transmitted your phone number, contacts list, and other identifiable information (assuming, of course, the application has permission to access all of that data, which it may not depending on your platform and privacy settings). While on the front end of the service you may appear to be anonymous, the same may not hold true for the back end.
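
A toy sketch of the back-end problem, with made-up metadata values: none of these fields is a name, yet hashed together they form a stable identifier that follows one “anonymous” user across every request:

```python
import hashlib

def fingerprint(ip: str, user_agent: str, accept_language: str) -> str:
    """Derive a stable pseudonymous identifier from request metadata.

    Any back end can compute this from data it receives automatically;
    the front end can still display the user as "anonymous."
    """
    raw = "|".join([ip, user_agent, accept_language])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# The same visitor produces the same identifier on every request:
first_visit = fingerprint("203.0.113.7", "Mozilla/5.0 (example)", "en-US")
later_visit = fingerprint("203.0.113.7", "Mozilla/5.0 (example)", "en-US")
```

Real trackers combine far more signals than these three, but the principle is the same.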

This issue becomes much larger when you consider that even if your data is currently being held by a benevolent company that does care about your privacy, that may not always be the case. Your data is just a bankruptcy filing away from falling into the hands of somebody else.