CNN and Hackers

The media’s portrayal of hackers is never accurate but almost always amusing. From hooded figures stooped over keyboards staring at green ones and zeros on a black screen to balaclava-clad individuals holding a laptop in one hand while furiously typing with the other, the creative minds behind the scenes at major media outlets always have a way to make hackers appear far more sinister than they really are.

CNN recently aired a segment about Russian hackers. How did the creative minds at CNN portray hackers to the viewing public? By showing a mini-game from a game you may have heard of:

In a recent story about President Obama proposing sanctions against Russia for its role in cyberattacks targeting the United States, CNN grabbed a screenshot of the hacking mini-game from the extremely popular RPG Fallout 4. First spotted by Reddit, the screenshot shows the menacing neon green letters that gamers will instantly recognize as being from the game.

Personally, I would have lifted a screenshot from the hacking mini-game in Deus Ex; it looks far more futuristic.

A lot of electrons have been annoyed by all of the people flipping out about fake news, but almost no attention has been paid to uninformed news. Most major media outlets are woefully uninformed about many (most?) of the subjects they report on. If you know anything about guns or technology, you’re familiar with the amount of inaccurate reporting that occurs because of the media’s lack of understanding. When the outlet reporting on a subject doesn’t know anything about that subject, the information it provides is worthless. Why aren’t people flipping out about that?

The Walls Have Ears

Voice-activated assistants such as the Amazon Echo and Google Home are becoming popular household devices. With a simple voice command these devices can allow you to do anything from turning on your smart lightbulbs to playing music. However, any voice-activated device must necessarily be listening at all times, and law enforcers know it:

Amazon’s Echo devices and its virtual assistant are meant to help find answers by listening for your voice commands. However, police in Arkansas want to know if one of the gadgets overheard something that can help with a murder case. According to The Information, authorities in Bentonville issued a warrant for Amazon to hand over any audio or records from an Echo belonging to James Andrew Bates. Bates is set to go to trial for first-degree murder for the death of Victor Collins next year.

Amazon declined to give police any of the information that the Echo logged on its servers, but it did hand over Bates’ account details and purchases. Police say they were able to pull data off of the speaker, but it’s unclear what info they were able to access.

While Amazon declined to provide any server-side information logged by the Echo, there’s no reason a court order couldn’t compel Amazon to provide such information. In addition, law enforcers managed to pull some unknown data locally from the Echo. Those two points raise questions about what kind of information devices like the Echo and Home collect as they’re passively sitting on your counter awaiting your command.
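To give a sense of what “passively sitting on your counter” involves, here’s a minimal sketch of the loop any wake-word device has to run. None of these names are Amazon’s actual code; the detector, the microphone, and the upload step are all stand-ins. The structural point is that audio sits in a rolling buffer at all times, and the wake word only decides when some of it leaves the device:

```python
import collections

SAMPLE_RATE = 16000          # samples per second (a typical value)
PREROLL_SECONDS = 2          # audio retained from *before* the wake word

# Rolling buffer: old audio falls off as new audio arrives, but the last
# couple of seconds are always sitting in memory, wake word or not.
preroll = collections.deque(maxlen=SAMPLE_RATE * PREROLL_SECONDS)

def detect_wake_word(chunk):
    """Stand-in for a proprietary on-device wake-word detector."""
    return sum(chunk) > 0    # placeholder heuristic

def upload_to_cloud(audio):
    """Stand-in for shipping audio to the vendor's servers."""
    print(f"uploading {len(audio)} samples, including pre-wake audio")

def microphone(chunks=50):
    """Stand-in microphone yielding 100 ms chunks of dummy audio."""
    for i in range(chunks):
        yield [1 if i == 40 else 0] * 1600

for chunk in microphone():
    preroll.extend(chunk)    # listening happens unconditionally
    if detect_wake_word(chunk):
        upload_to_cloud(list(preroll))  # only now does audio leave the device
        break
```

Note that the buffer necessarily captures audio from before the wake word was spoken, which is exactly why questions about what these devices collect and retain are worth asking.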

As with much of the Internet of Things, I haven’t purchased one of these voice-activated assistants yet and have no plans to buy one anytime in the near future. They’re too big of a privacy risk for my tastes since I don’t even know what kind of information they’re collecting as they sit there listening.

Bypassing the Censors

What happens when a government attempts to censor people who are using a secure mode of communication? The censorship is bypassed:

Over the weekend, we heard reports that Signal was not functioning reliably in Egypt or the United Arab Emirates. We investigated with the help of Signal users in those areas, and found that several ISPs were blocking communication with the Signal service and our website. It turns out that when some states can’t snoop, they censor.

[…]

Today’s Signal release uses a technique known as domain fronting. Many popular services and CDNs, such as Google, Amazon Cloudfront, Amazon S3, Azure, CloudFlare, Fastly, and Akamai can be used to access Signal in ways that look indistinguishable from other uncensored traffic. The idea is that to block the target traffic, the censor would also have to block those entire services. With enough large scale services acting as domain fronts, disabling Signal starts to look like disabling the internet.
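Mechanically, domain fronting exploits the fact that a hostname shows up in two places in an HTTPS request: the DNS lookup and TLS handshake, which the censor can see, and the Host header, which is encrypted. Here’s a minimal sketch in Python; both hostnames are placeholders rather than Signal’s real infrastructure, and the trick only works when the fronted service routes requests by the inner Host header:

```python
import requests

FRONT = "https://www.google.com"   # what the censor sees (DNS and TLS SNI)
HIDDEN = "reflector.example.com"   # the real destination, hidden inside TLS

resp = requests.get(
    FRONT,
    headers={"Host": HIDDEN},      # encrypted, so the censor never sees it
    timeout=10,
)
# The censor observed a connection to www.google.com; the CDN routed the
# request to the service named in the Host header.
print(resp.status_code)
```

From the censor’s perspective this connection is indistinguishable from ordinary Google traffic, which is why blocking it means blocking Google.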

Censorship is an arms race between the censors and the people trying to communicate freely. When one side finds a way to bypass the other, the other side responds. Fortunately, each individual government is up against the entire world. Egypt and the United Arab Emirates only have control over their own territories, but the people in those territories can access knowledge from anywhere in the world. With odds like that, the State is bound to fail every time.

This is also why any plan to compromise secure means of communication is doomed to fail. Let’s say the United States passes a law that requires all encryption software used within its borders to include a government backdoor. That isn’t the end of secure communications in the United States. It merely means that people wanting to communicate securely need to obtain tools developed in nations where such rules don’t exist. Since the Internet is global, access to the goods and services of other nations is at your fingertips.

You Have No Right to Privacy, Slave

It’s a good thing we have a right not to incriminate ourselves. Without that right, a police officer could legally require us to give up our passcodes to unlock our phones:

A Florida man arrested for third-degree voyeurism using his iPhone 5 initially gave police verbal consent to search the smartphone, but later rescinded permission before divulging his 4-digit passcode. Even with a warrant, they couldn’t access the phone without the combination. A trial judge denied the state’s motion to force the man to give up the code, considering it equal to compelling him to testify against himself, which would violate the Fifth Amendment. But the Florida Court of Appeals’ Second District reversed that decision today, deciding that the passcode is not related to criminal photos or videos that may or may not exist on his iPhone.

‘Merica!

George W. Bush was falsely accused of saying that the Constitution was just a “Goddamn piece of paper!” Those who believed the quote were outraged because that sentiment is heresy against the religion of the State. But it’s also true. The Constitution, especially the first ten amendments, can’t restrict the government in any way. It’s literally just a piece of paper, which is why your supposed rights enshrined in the document keep becoming more and more restricted.

Any sane interpretation of the Fifth Amendment would say that nobody is required to surrender a password to unlock their devices. But what you or I think the Constitution says is irrelevant. The only people who get to decide what it says, according to the Constitution itself, are the men who wear magical muumuus.

Facebook’s Attempt to Combat Scam News Sites

Fake news is the current boogeyman occupying news headlines. Ironically, this boogeyman is being promoted by many organizations that produce fake news, such as CNN, Fox News, and MSNBC. For the most part, fake news isn’t harmful. In fact, fake news, which we used to simply call tabloids, has probably been around as long as real news. But fake news can be harmful when it’s used to scam individuals, which is a problem Facebook is looking to address:

A new suite of tools will allow independent fact checkers to investigate stories that Facebook users or algorithms have flagged as potentially fake. Stories will be mostly flagged based on user feedback. But Mosseri also noted that the company will investigate stories that become viral in suspicious ways, such as by using a misleading URL. The company is also going to flag stories that are shared less than normal. “We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way,” Mosseri wrote.

Mosseri indicated that the company’s new efforts will only target scammers, not sites that push conspiracies like Pizzagate. “Fake news means a lot of different things to a lot of different people, but we are specifically focused on the worst of the worst—clear intentional hoaxes,” he told BuzzFeed. In other words, if a publisher genuinely believes fake news to be true, it will not be fact checked.

On the surface this doesn’t seem like a bad idea. I’ve seen quite a few people repost what they thought was a legitimate news article because it was posted on a website that looked like CNBC and had a URL very close to CNBC’s but wasn’t actually CNBC. If you caught the slightly malformed URL, you realized that the site was a scam.
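Catching that kind of lookalike domain is mechanically simple. Here’s a toy illustration of one way to do it (my own sketch, not Facebook’s actual method, which it hasn’t published): flag any domain that sits within an edit or two of a well-known one.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

KNOWN_DOMAINS = ["cnbc.com", "cnn.com", "nytimes.com"]

def looks_like_impostor(domain: str) -> bool:
    """Flag domains one or two edits away from a well-known domain."""
    return any(0 < edit_distance(domain, known) <= 2
               for known in KNOWN_DOMAINS)

print(looks_like_impostor("cnbc.com"))    # False: exact match, not flagged
print(looks_like_impostor("cnnbc.com"))   # True: one edit from cnbc.com
```

A real system would have to be smarter about subdomains, top-level domains, and homoglyphs, but the core idea is that simple.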

However, I don’t have much faith in the method Facebook is using to judge whether an article is legitimate or not:

Once a story is flagged, it will go into a special queue that can only be accessed by signatories to the International Fact-Checkers Network Code of Principles, a project of nonprofit journalism organization Poynter. IFCN Code of Principles signatories in the U.S. will review the flagged stories for accuracy. If the signatory decides the story is fake news, a “disputed” warning will appear on the story in News Feed. The warning will also pop up when you share the story.

I don’t particularly trust many of the IFCN signatories. Websites such as FactCheck.org and Snopes have a very hit-or-miss record when it comes to fact checking. And I especially don’t trust nonprofit organizations. Any organization that claims it doesn’t want to make a profit is suspect because, let’s face it, everybody wants to make a profit (although it may not necessarily be a monetary profit).

Either way, it’ll be interesting to see if Facebook’s tactic works for reducing the spread of outright scam sites.

Security Implications of Destructive Updates

It should be becoming more and more apparent that you don’t own your smartphone. Sure, you paid for it and you physically control it, but if the device itself can be disabled by a third party without your authorization, can you really say that you own it? This is a question Samsung Galaxy Note 7 owners should be asking themselves right now:

Samsung’s Galaxy Note 7 recall in the US is still ongoing, but the company will release an update in a couple of weeks that will basically force customers to return any devices that may still be in use. The company announced today that a December 19th update to the handsets in the States will prevent them from charging at all and “will eliminate their ability to work as mobile devices.” In other words, if you still have a Note 7, it will soon be completely useless.

One could argue that the ability to push an update that disables a device is a good thing in the case of the Note 7, since the device has a reputation for catching on fire. But it has rather frightening ownership and security implications.

The ownership implications should be obvious. If the device manufacturer can disable your device at its whim, then you can’t really claim to own it. You can only claim that you’re borrowing it for as long as the manufacturer deems you worthy of doing so. However, in regards to ownership, nothing has really changed. Since copyright and patent laws were applied to software, your ability to own your devices has been basically nonexistent.

The security implications may not be as obvious. Sure, the ability of a device manufacturer to push implicitly trusted software to its devices carries risks, but the alternative, relying on users to apply security updates themselves, also carries risks. This particular update being pushed out by Samsung, however, has the ability to destroy users’ trust in manufacturer updates. Many users are currently happy to allow their devices to update themselves automatically because those updates tend to improve the device. It only takes a single bad update to make those users unhappy with automatic updates, and if they become unhappy with automatic updates they will seek ways of disabling them.

The biggest weakness in any security system tends to be the human component. Part of this is due to the difficulty of training humans to be secure. It takes a great deal of effort to train somebody to follow even basic security principles, but it takes very little to undo all of that training. A single bad experience is generally all it takes to undo that effort. If Samsung’s strategy becomes more commonplace, I fear that years of getting users comfortable with automatic updates may be undone and we’ll be looking at a world where users jump through hoops to disable them.

Pebble Goes Bankrupt

Pebble was an interesting company. While the company didn’t invent the smartwatch concept (I have a Fossil smartwatch running Palm OS that came out way before the Pebble), it did popularize the market. But making a product concept popular doesn’t mean you’re going to be successful. Pebble has filed for bankruptcy and, effective immediately, will no longer sell products, honor warranties, or provide any support beyond the material already posted on the Pebble website.

But what really got me was how the announcement was handled. If you read the announcement you may be led to believe that Fitbit has purchased Pebble. The post talks about this being Pebble’s “next step” and the e-mail announcement sent out yesterday even said that Pebble was joining Fitbit.

It’s no surprise that a lot of Pebble users were quite upset with Fitbit since, based on the information released by Pebble, it appeared that Fitbit had made the decision to not honor warranties, to stop releasing regular software updates for current watches, and to discontinue the newly announced watches. But Fitbit didn’t buy Pebble; it only bought some of its assets:

Fitbit Inc., the fitness band maker, has acquired software assets from struggling smartwatch startup Pebble Technology Corp., a move that will help it better compete with Apple Inc.

The purchase excludes Pebble’s hardware, Fitbit said in a statement Wednesday. The deal is mainly about hiring the startup’s software engineers and testers, and getting intellectual property such as the Pebble watch’s operating system, watch apps, and cloud services, people familiar with the matter said earlier.

While Fitbit didn’t disclose terms of the acquisition, the price is less than $40 million, and Pebble’s debt and other obligations exceed that, two of the people said. Fitbit is not taking on the debt, one of the people said. The rest of Pebble’s assets, including product inventory and server equipment, will be sold off separately, some of the people said.

I bring this up partially because I was a fan of Pebble’s initial offering and enjoyed the fact that the company offered a unique product (a smartwatch with an always-on display that only needed to be charged every five to seven days), but mostly because I found the way Pebble handled this announcement rather dishonest. If your company is filing for bankruptcy, you should just straight up admit it instead of trying to make it sound like you’ve been bought out by the first company to come by and snap up some of your assets. Since you’re already liquidating the company, there’s nothing to be gained by pussyfooting around the subject.

The Real Life Ramifications of Software Glitches

When people think of software glitches, they generally think of annoyances such as their application crashing and losing any changes since their last save, their smart thermostat causing the furnace not to kick on, or the graphics in their game displaying abnormally. But as software has become more and more integrated into our lives, the real-life implications of software glitches have become more severe:

OAKLAND, Calif.—Most pieces of software don’t have the power to get someone arrested—but Tyler Technologies’ Odyssey Case Manager does. This is the case management software that runs on the computers of hundreds and perhaps even thousands of court clerks and judges in county courthouses across the US. (Federal courts use an entirely different system.)

Typically, when a judge makes a ruling—for example, issuing or rescinding a warrant—those words said by a judge in court are entered into Odyssey. That information is then relied upon by law enforcement officers to coordinate arrests and releases and to issue court summons. (Most other courts, even if they don’t use Odyssey, use a similar software system from another vendor.)

But, just across the bay from San Francisco, one of Alameda County’s deputy public defenders, Jeff Chorney, says that since the county switched from a decades-old computer system to Odyssey in August, dozens of defendants have been wrongly arrested or jailed. Others have even been forced to register as sex offenders unnecessarily. “I understand that with every piece of technology, bugs have to be worked out,” he said, practically exasperated. “But we’re not talking about whether people are getting their paychecks on time. We’re talking about people being locked in cages, that’s what jail is. It’s taking a person and locking them in a cage.”

First, let me commend Jeff Chorney for stating that jails are cages. Too many people like to pretend that isn’t the case. Second, he has a point. Case management software, as we’ve seen here, can have severe ramifications if bugs are left in the code.

The threat of bugs causing significant real-life consequences isn’t a new one. A lot of software manages equipment that can kill people if it malfunctions. In response, many industries have gone to great lengths to select tools and come up with procedures that minimize the chances of major bugs making it into released code. The National Aeronautics and Space Administration (NASA), for example, has an extensive history of writing code where malfunctions can cost millions of dollars or even kill people, and its programmers have developed tools and standards to minimize those risks. Most industrial equipment manufacturers also spend a significant amount of time developing tools and standards to minimize code errors because their software mistakes can lead to millions of dollars being lost or people dying.

Software developers working on products that can have severe real-life consequences need to focus on writing reliable code. Case management software isn’t Facebook. When a bug exists in Facebook, the consequences are annoying to users but nobody is harmed. When a bug exists in case management software, innocent people can end up in cages or on a sex offender registry, which can ruin their entire lives.

Likewise, people purchasing and using critical software need to thoroughly test it before putting it into production. Do you think there are many companies that buy multi-million dollar pieces of equipment and don’t test them thoroughly before putting them on the assembly line? That would be foolish, and any company that did so would end up facing millions of dollars of downtime or even bankruptcy if the machine didn’t perform as needed. The governments that are using the Odyssey Case Manager software should have thoroughly tested the product before using it in any court. But since the governments themselves don’t face any risks from bad case management software, they likely did, at best, basic testing before rushing the product into production.
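Even a modest acceptance test suite, run against a staging copy of the system before go-live, would catch the class of bug described above. As a purely hypothetical illustration (the WarrantLog class below is a stand-in model, not Tyler Technologies’ actual API), the critical property to verify is that a rescinded warrant stops being reported as active:

```python
class WarrantLog:
    """Toy in-memory stand-in for a court's warrant records."""
    def __init__(self):
        self._active = set()

    def issue(self, warrant_id: str):
        self._active.add(warrant_id)

    def rescind(self, warrant_id: str):
        self._active.discard(warrant_id)

    def is_active(self, warrant_id: str) -> bool:
        return warrant_id in self._active

def test_rescinded_warrant_is_not_reported_active():
    log = WarrantLog()
    log.issue("W-1234")
    log.rescind("W-1234")
    # If this assertion fails, officers may arrest someone on a dead warrant.
    assert not log.is_active("W-1234")

test_rescinded_warrant_is_not_reported_active()
print("ok")
```

Tests like this are cheap compared to wrongly jailing even one person, which is what makes skipping them so inexcusable.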

Concealing a Cellular Interceptor in a Printer

As a rule, technology improves. Processors become faster, storage becomes more plentiful, and components become smaller. We’ve seen computers go from slow machines with very little storage that were as big as a room to tiny little powerhouses with gigabytes of storage that fit in your pocket. Cellular technology is no different. Cellular interceptors, for example, can now be concealed in a printer:

Stealth Cell Tower is an antagonistic GSM base station in the form of an innocuous office printer. It brings the covert design practice of disguising cellular infrastructure as other things – like trees and lamp-posts – indoors, while mimicking technology used by police and intelligence agencies to surveil mobile phone users.

[…]

Stealth Cell Tower is a Hewlett Packard Laserjet 1320 printer modified to contain and power components required to implement a GSM 900 Base Station.

These components comprise:

  • BladeRF x40
  • Raspberry Pi 3
  • 2x short GSM omnidirectional antennae with magnetic base
  • 2x SMA cable
  • Cigarette-lighter-to-USB-charger circuit (converting 12-24v to 5v)
  • 1x USB Micro cable (cut and soldered to output of USB charger)
  • 1x USB A cable (cut and soldered to printer mainboard)

The HP Laserjet 1320 was chosen not only for its surprisingly unmentionable appearance but also because it had (after much trial and error) the minimal unused interior volumes required to host the components. No cables, other than the one standard power-cord, are externally visible. More so, care has been taken to ensure the printer functions normally when connected via USB cable to the standard socket in the rear.

It’s an impressive project that illustrates a significant problem. Cellular interceptors work because the protocols used by the Global System for Mobile Communications (GSM) standard are insecure. At one time this probably wasn’t taken seriously because it was believed that very few actors had the resources necessary to build equipment that could exploit the weaknesses in GSM. Today a hobbyist can buy such equipment for a very low price and conceal it in a printer, which means inserting an interceptor into an office environment is trivial.
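The root of the problem is that GSM authentication only runs in one direction: the network challenges the phone, but the phone never challenges the network, so anything transmitting on the right frequencies can play the tower’s role. Here’s a toy sketch of the protocol’s shape; the real A3 algorithm (COMP128 in many SIMs) is replaced with a stand-in hash:

```python
import hashlib
import os

def a3(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for GSM's A3 authentication algorithm."""
    return hashlib.sha256(ki + rand).digest()[:4]  # SRES is 32 bits

ki = os.urandom(16)    # 128-bit secret shared by the SIM and home network

# The network sends a random challenge and the phone proves it knows Ki...
rand = os.urandom(16)
sres_from_phone = a3(ki, rand)
assert sres_from_phone == a3(ki, rand)

# ...but nothing in the protocol makes the network prove anything to the
# phone, which is the opening a fake base station walks through.
print("phone authenticated to network; network never authenticated to phone")
```

A fake tower doesn’t even need to know Ki; it can simply skip authentication and instruct the phone to disable encryption, which GSM also permits.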

Fortunately, Long-Term Evolution (LTE) is a more secure protocol. Unfortunately, most cell phones don’t use LTE for phone calls and text messages. Until everything is switched over to LTE, the threat posed by current cellular interceptors should not be taken lightly.

You’re the Product, Not the Customer

If you’re using an online service for free then you’re the product. I can’t drive this fact home enough. Social media sites such as Facebook and Twitter make their money by selling the information you post. And, unfortunately, they’ll sell to anybody, even violent gangs:

The FBI is using a Twitter tool called Dataminr to track criminals and terrorist groups, according to documents spotted by The Verge. In a contract document, the agency says Dataminr’s Advanced Alerting Tool allows it “to search the complete Twitter firehose, in near real-time, using customizable filters.” However, the practice seems to violate Twitter’s developer agreement, which prohibits the use of its data feed for surveillance or spying purposes.
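There’s nothing exotic about those “customizable filters,” either. Given access to the raw feed, keyword filtering is a few lines of code. Here’s a generic sketch over a stand-in message stream (not Twitter’s actual API):

```python
import json

# Stand-in for a real-time feed of JSON messages; a real firehose consumer
# would read these from a streaming connection instead of a list.
FEED = [
    '{"user": "alice", "text": "Protest at city hall this Saturday"}',
    '{"user": "bob", "text": "Look at this cat"}',
]

WATCHWORDS = {"protest", "march", "riot"}

def matches(message: dict) -> bool:
    """True if the message text contains any watched keyword."""
    words = set(message.get("text", "").lower().split())
    return bool(words & WATCHWORDS)

for raw in FEED:
    message = json.loads(raw)
    if matches(message):
        print(f"ALERT: @{message['user']}: {message['text']}")
```

The hard part of a product like Dataminr isn’t the filtering; it’s having the access. That access is exactly what you surrender the moment you post.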

This isn’t the first time that a company buying access to various social media feeds has been caught selling that information to law enforcers. Earlier this year Geofeedia was caught doing the same thing. Stories like this show that there’s no real divider between private and government surveillance. You should be guarding yourself against private surveillance as readily as you guard against government surveillance because the former becomes the latter with either a court order or a bit of money exchanging hands.

Will Dataminr have its access revoked like Geofeedia did? Let’s hope so. But simply cutting off Dataminr won’t fix the problem since I guarantee there are a bunch of other companies providing the same service. The only way to fix this problem is to stop using social media sites for activities you want to keep hidden from law enforcers. Don’t plan your protests on Facebook, don’t try to coordinate protest activity using Twitter, and don’t post pictures of your protest planning sessions on Instagram. Doing any of those things is a surefire way for law enforcers to catch wind of what you’re planning before you can execute your plan.