Now Your Water Pitcher Can Be A Network Vulnerability


This Internet of Things will get out of control.

Everybody is rushing to either “cloud” enable their products or make them part of the Internet of Things. There are countless examples of this nonsense. Now we even have water pitchers with fucking Wi-Fi capabilities:

Starting today, Brita will sell a sensor-filled, WiFi-connected Brita pitcher (yes, you read that correctly) that will work with Dash Replenishment Service.

The new pitcher, called the Brita Infinity pitcher, will be able to track how much water is flowing through the pitcher. When approximately 40 gallons of water have passed through the pitcher’s purification filter, the pitcher will then send a signal to the Dash Replenishment Service to reorder more filters.

Instead of having a water pitcher whose filter you replace whenever your water starts to taste funky, you can have that plus concerns about battery power, whether the pitcher is accurately measuring water usage and not shaving a bit off the top to increase Brita’s profits, and network security too!
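The replenishment mechanism described in the article boils down to a running counter with a threshold. Here’s a minimal sketch; the class name, method names, and the string standing in for the Dash Replenishment call are all invented for illustration, with only the 40-gallon figure taken from the article:

```python
# Toy sketch of the filter-replenishment logic described above.
class FilterTracker:
    FILTER_LIFE_GALLONS = 40.0  # figure cited in the article

    def __init__(self):
        self.gallons_filtered = 0.0
        self.reorder_placed = False

    def record_pour(self, gallons):
        """Accumulate usage; trigger one reorder once the filter is spent."""
        self.gallons_filtered += gallons
        if self.gallons_filtered >= self.FILTER_LIFE_GALLONS and not self.reorder_placed:
            self.reorder_placed = True
            return "reorder filter"  # stand-in for the Dash Replenishment call
        return None
```

Notice how much trust this design demands: you can’t see the counter, so you’re taking Brita’s word that it only reorders when the filter is actually spent.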

We’re at the point where we need to strongly consider separate wireless networks and VLANs for our Internet-enabled devices. The utter lack of concern for security most Internet of Things manufacturers have shown so far makes these devices too dangerous to let onto our usual networks, but the technology is becoming so pervasive that simply ignoring it will become increasingly difficult.

When Idiots Write About Computer Security

People trying to justify the Federal Bureau of Investigation’s (FBI) demands of Apple are possibly the most amusing thing about the agency’s recent battle with the company. Siding with the FBI requires either being completely ignorant of security or being so worshipful of the State that you believe any compromise made in the name of empowering it is justified.

A friend of mine posted an article that tries to justify the FBI’s demands by claiming Apple is spreading fear, uncertainty, and doubt (FUD). Ironically, the article itself is FUD. In fact it’s quite clear that the author has little to no understanding of security:

In its campaign, Apple is mustering all the fear, uncertainty and doubt it can. In an open letter to its customers, it states that “the government would have us write an entirely new operating system for their use. They are asking Apple to remove security features and add a new ability to the operating system to attack iPhone encryption, allowing a passcode to be input electronically. … It would be wrong to intentionally weaken our products with a government-ordered backdoor.” The FUD factor in that statement is “weaken our products.” It is grossly misleading, the plural suggesting that the FBI wants Apple to make this back door a standard part of iPhones. That’s flat-out false. What the government has asked is that Apple modify software to remove a feature that was not present in earlier versions of the software, and then install that new software on the single phone used by the terrorist. Apple can then destroy the software.

Apple’s statement is entirely accurate. The FBI is demanding a signed version of iOS that removes security features and includes a mechanism to brute force the password used to encrypt the contents of the device. Because the firmware would be signed it could be loaded onto other iPhones. We also know the FBI has about a dozen more phones it wants Apple to unlock so this case isn’t about a single phone. This case is about setting a precedent that will make it easier for the State to coerce companies into bypassing the security features of their own products.

The claim that Apple can destroy the software is also naive. In order to unlock the device the software must be loaded onto the phone. Since the phone is evidence it must be returned to the FBI. That means the FBI will have a signed copy of the custom firmware sitting on the phone and the phone will be unlocked so it would be feasible for the FBI to extract the firmware. Furthermore, the process involved in writing software for a court case will likely involve several third parties receiving access to the firmware:

Once the tool is ready, it must be tested and validated by a third party. In this case, it would be NIST/NIJ (which is where my own tools were validated). NIST has a mobile forensics testing and validation process by which Apple would need to provide a copy of the tool (which would have to work on all of their test devices) for NIST to verify.

[…]

During trial, the court will want to see what kind of scientific peer review the tool has had; if it is not validated by NIST or some other third party, or has no acceptance in the scientific community, the tool and any evidence gathered by it could be rejected.

[…]

If evidence from a device ever leads to a case in a court room, the defense attorney will (and should) request a copy of the tool to have independent third party verification performed, at which point the software will need to be made to work on another set of test devices. Apple will need to work with defense experts to instruct them on how to use the tool to provide predictable and consistent results.

It will likely be impossible for Apple to maintain exclusive control over the firmware.

Once the genie is out of the bottle it can’t be put back in. This is especially true with software since it can be reproduced almost infinitely for costs so small they’re practically free. If Apple produces this firmware it will not be able to make it not exist afterward. Let’s continue with the article in question:

More contradictory to Apple’s claims is that the FBI has specifically stated that it does not intend to cause a weakening of the consumer product, so this case cannot be used as a precedent. Should the government at any time attempt to do that so that back doors to be embedded in products, its own words would be the most compelling argument to counter that.

The FBI claims a lot of things. That doesn’t make those claims true. By merely existing this firmware would make consumer products less secure. Currently the iPhone’s security is quite strong as noted by the fact that the FBI has been unable to break into about a dozen phones in its possession. If Apple releases a firmware that can bypass security features on iPhones it necessarily means the overall security of iPhones, which are consumer products, is weakened. There is no way to logically argue otherwise. When something that couldn’t be broken into can be broken into it is less secure than it was. The fact that I felt the need to write the previous sentence causes me great pain because it speaks so ill of the education of the author.

The FUD continues, with Apple saying, “Law enforcement agents around the country have already said they have hundreds of iPhones they want Apple to unlock if the FBI wins this case.” That might very well be the case. But it has zero relevance. Each of those cases could be resolved only with a court order of its own, regardless of what happens with the San Bernardino iPhone. Even if this case were not in front of the court at the moment, any state, local or federal law enforcement agency could bring a similar case forward.

Actually, it’s entirely relevant. The FBI wants the court precedent so prosecutors in other cases can compel companies to bypass security features on their products. Apple isn’t simply fighting the creation of a purposely broken firmware, it’s fighting a precedent that would allow other courts to coerce companies into performing labor against their will. Obviously the author’s understanding of the legal system, specifically how precedent works, is as lacking as his understanding of security.

Gaining access to locked data is a legitimate law enforcement issue, and whatever your personal beliefs, all law enforcement officers have a responsibility to attempt to collect all information that is legally possible to collect.

While law enforcers may have a responsibility to attempt to collect all information within their power to collect that doesn’t mean they should be able to compel others to assist them at the point of a gun.

In other forums, Apple has been claiming that if the U.S. requires Apple to cooperate in providing access to the phone, all other governments around the world will then expect the same sort of cooperation. It is a bogus claim — more FUD. Do Apple’s lawyers really not know that the law of one country does not apply to another? Apple’s winning its case in the U.S. would do nothing to stop another country from initiating a similar action. Its losing its case should have no influence on whether other countries decide to pursue such matters.

I see the author doesn’t pay attention to world events. Oftentimes when a government sees another government get away with something nasty it decides it can also get away with it. Take Blackberry, for example. India demanded that Blackberry give it access to a backdoor and Blackberry complied. Seeing India getting what it wanted the government of Pakistan demanded the same. Monkey see, monkey do. It should be noted that Blackberry actually left Pakistan but it was obviously for reasons other than the backdoor demands.

Apple knows that if it rolls over it will encourage other governments to demand the same as the FBI. If, however, it digs its heels in it knows that it will discourage other governments from demanding the same. This is the same principle as not negotiating with terrorists. If you give in once it will encourage others to pull the same shit against you.

But of all of Apple’s arguments, the one that is most ludicrous, or perhaps the most damning of its much-touted security prowess, is revealed in this response to the government’s request for a key that could unlock one phone:

“Of course, Apple would do our best to protect that key, but in a world where all of our data is under constant threat, it would be relentlessly attacked by hackers and cybercriminals. As recent attacks on the IRS systems and countless other data breaches have shown, no one is immune to cyberattacks.”

First, Apple is already relentlessly attacked by hackers and criminals. I would like to hope that Apple has better security practices than the IRS. But when you unpack this statement, you are left with the impression that we should not trust any of Apple’s software or products. You have to assume that, should Apple write the software that the FBI wants, it would be among the most protected software in the company. If Apple is concerned about this software being compromised, what does that say about all of its other software?

This is another claim that can only be made by somebody who doesn’t understand security. This firmware wouldn’t be entirely in Apple’s hands. As noted above, the FBI would possess a phone with the firmware installed on it. And anybody who has paid attention to the various congressional hearings on the numerous federal network breaches knows the federal government’s network is incapable of protecting anything of value.

This firmware isn’t like a private key, which can serve its purpose even if you keep it within your exclusive control. It’s a piece of software that must be loaded onto a device that is evidence in a crime, which necessarily means it must leave your exclusive control. So Apple’s security isn’t the only cause for concern here.

Even assuming that a bad guy gets hold of just the software that law enforcement wants created, it would have to be signed by Apple’s security certificate to load on any phone.

Which the copy on the phone and any copies sent out for independent testing would be.

If the criminal gets a copy of the software and it has already been signed with the certificate, Apple could revoke the certificate.

If the author read the Electronic Frontier Foundation’s (EFF) excellent technical overview of this case he would know that the public key is built into the hardware of the iPhone. This is actually a smart security practice because it prevents malware from replacing the public key. If the public key was replaced it would allow malware to load its own code. The downside to this is that Apple can’t revoke the public key to prevent software signed with the corresponding private key from loading.

But if a bad guy gets hold of Apple’s digital certificate, then the whole Apple software base is at risk, and this feature that the FBI wants bypassed is irrelevant. After all, Apple has stated that it is not immune from attack, and it has implied it is a reasonable concern that its most protected software can be compromised.

I’m going to take this opportunity to write about a specific feature of public key cryptography that is relevant here. Public key cryptography relies on two keys: a private key and a public key. The private key, as the name implies, can be kept private. Anything signed with the private key can be verified by the public key. Because of this you only need to hand out the public key.

I have a Pretty Good Privacy (PGP) key that I use to encrypt and sign e-mails. Anybody with my public key can validate my signature but they cannot sign an e-mail as me. If, however, they had my private key they could sign e-mails as me. Because of this I keep my private key very secure. Apple likely keeps its software signing key in a vault on storage media that is only ever connected to a secure computer that has no network connectivity. Under such circumstances an attacker with access to Apple’s network would still be unable to access the company’s software signing key. For reasons I stated earlier, that’s not a model Apple can follow with the firmware the FBI is demanding. Apple’s security concerns in this case are entirely unrelated to the security practices of its private key.
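The sign-with-private, verify-with-public relationship can be demonstrated with textbook RSA and deliberately tiny numbers. This is purely a toy illustration of the math, never something to use in practice:

```python
# Toy RSA signature demo: the private exponent d signs,
# the public exponent e verifies. Numbers this small are
# trivially breakable; real keys are thousands of bits.
p, q = 61, 53
n = p * q    # 3233, part of both keys
e = 17       # public exponent
d = 2753     # private exponent: e * d ≡ 1 (mod (p-1)*(q-1))

def sign(message: int) -> int:
    """Only the holder of d can produce this value."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Anyone with (e, n) can check a signature, but cannot forge one."""
    return pow(signature, e, n) == message

sig = sign(42)
assert verify(42, sig)       # valid signature checks out
assert not verify(43, sig)   # tampered message is rejected
```

This is why baking only the public key into the iPhone’s hardware is sensible: the phone can verify Apple-signed firmware without ever holding anything an attacker could use to forge a signature.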

In addition to his technical incompetence, the author decided to display his argumentative incompetence by closing his article with a pretty pathetic ad hominem:

But Apple, seeming to take a page from Donald Trump’s presidential campaign, is using the situation to promote its brand with free advertising.

If all else fails in your argument just compare your opponent to Trump.

The Abysmal State Of Credit Card Security

Credit card fraud is a major problem. This isn’t surprising since until recently, at least here in the United States, credit cards included no security. Hoping to reduce fraud, the credit card companies developed the Europay, Mastercard, and Visa (EMV) standard. Cards that comply with the EMV standard include a chip, which offers some security. But here in the United States two setbacks have prevented EMV from delivering better credit card security. First, the United States is adopting chip and signature, not chip and PIN. Second, most merchants still aren’t equipped to process EMV credit cards:

This week a management consulting company called The Strawhecker Group (TSG) released the results of a study that found that only 37 percent of US retailers were ready to process chip-embedded credit and debit cards. The slow adoption of chip-embedded cards leaves merchants open to accepting liability for fraud perpetrated with traditional, less-secure magnetic stripe cards.

I attribute this low adoption rate to the credit card companies failing to set a hard cutoff date for magnetic stripes. Even if you get an EMV card it will contain an insecure magnetic stripe so it can be used at merchants that aren’t set up to process EMV cards. Since all EMV cards are equipped with magnetic stripes, merchants aren’t motivated to get set up to process EMV cards.

When it comes to security, hard cutoff dates are necessary. Without them users of the old insecure standard see no reason to upgrade. With them users grumble about having to upgrade but will begrudgingly do it out of necessity. Credit card companies need to set a date and tell merchants that after that date magnetic swipe transactions will be declined; otherwise we’ll never get over this financial fraud fuckery.

Bill Gates Sides With The FBI

Microsoft has always enjoyed a cozy relationship with the State. This isn’t surprising to anybody who has paid attention to Bill Gates and his ongoing love affair with the State. It’s also not surprising that he is siding with the Federal Bureau of Investigation (FBI) against Apple:

Technology companies should be forced to cooperate with law enforcement in terrorism investigations, Gates said, according to a Financial Times story posted late Monday.

“This is a specific case where the government is asking for access to information. They are not asking for some general thing, they are asking for a particular case,” he said.

This statement by Gates is laughable. The FBI is demanding Apple create a custom signed version of iOS that strips several security features and includes built-in software to brute force the decryption key set by the user. That is not a request limited to a particular case; it’s a general tool that can be used on many iPhones.
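Some back-of-the-envelope arithmetic shows why removing the retry limits and escalating delays matters so much. Assuming the commonly cited figure of roughly 80 milliseconds per passcode attempt imposed by the iPhone’s key derivation (real timings may differ), brute forcing becomes trivial once guesses can be entered electronically:

```python
# Back-of-the-envelope: why stripping the retry limits matters.
# ATTEMPT_SECONDS is the commonly cited ~80 ms per passcode attempt
# from the key-derivation step; treat it as an assumption.
ATTEMPT_SECONDS = 0.08

def worst_case_hours(digits: int) -> float:
    """Time to try every numeric passcode of the given length."""
    return (10 ** digits) * ATTEMPT_SECONDS / 3600

print(f"4-digit: {worst_case_hours(4):.2f} hours")  # well under an hour
print(f"6-digit: {worst_case_hours(6):.1f} hours")  # about a day
```

A tool that reduces any 4-digit passcode to minutes of work is about as “general” as it gets.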

What is funny about this though is that Bill Gates tried to backpedal but in so doing only said exactly the same thing over again:

In an interview with Bloomberg, Bill Gates says he was “disappointed” by reports that he supported the FBI in its legal battle with Apple, saying “that doesn’t state my view on this.”

Still, Gates took a more moderate stance than some of his counterparts in the tech industry, not fully backing either the FBI or Apple but calling for a broader “discussion” on the issues. “I do believe that with the right safeguards, there are cases where the government, on our behalf — like stopping terrorism, which could get worse in the future — that that is valuable.” But he called for “striking [a] balance” between safeguards against government power and security.

Any “balance” would require Apple to create firmware that includes a backdoor for government use. In other words, it would require exactly what the FBI is demanding of Apple.

Cryptography is math and math belongs to that very small category of things that are either black or white. Either the cryptography you’re using is effective and only allows authorized parties to access the unencrypted content or it is ineffective. There is no middle ground. You cannot break cryptography just a little bit.

Although the existence of a version of iOS with a backdoor is frightening in and of itself, the idea that a single judge can enslave software developers by issuing a writ is terrifying. That’s an aspect of this case that is getting glossed over a lot. Apple has already publicly stated it has no desire to write a weakened version of iOS. If the court sides with the FBI it will try to force Apple to write software against its will. Why should any individual have the power to legally do that?

Google Releases RCS Client. It’s Backdoored.

With the recent kerfuffle between Apple and the Federal Bureau of Investigation (FBI) the debate between secure and insecure devices is in the spotlight. Apple has been marketing itself as a company that defends users’ privacy and this recent court battle gives merit to its claims. Other companies have expressed support for Apple’s decision to fight the FBI’s demand, including Google. That makes this next twist in the story interesting.

Yesterday Christopher Soghoian posted the following Tweet:

His Tweet linked to a comment on a Hacker News thread discussing Google’s new Rich Communication Services (RCS) client, Jibe. What’s especially interesting about RCS is that it appears to include a backdoor as noted in the Hacker News thread:

When using MSRPoTLS, and with the following two objectives allow compliance with legal interception procedures, the TLS authentication shall be based on self-signed certificates and the MSRP encrypted connection shall be terminated in an element of the Service Provider network providing service to that UE. Mutual authentication shall be applied as defined in [RFC4572].

It’s important to note that this doesn’t really change anything from the current Short Message Service (SMS) and cellular voice protocols, which offer no real security. By using this standard Google isn’t introducing a new security hole. However, Google also isn’t fixing a known security hole.

When Apple created iMessage and FaceTime it made use of strong end-to-end encryption (although that doesn’t protect your messages if you back them up to iCloud). Apple’s replacement for SMS and standard cellular calls addressed a known security hole.

Were I Google, especially with the security debate going on, I would have avoided embracing RCS since it’s insecure by default. RCS may be an industry standard, since it’s managed by the same association that manages Global System for Mobile Communications (GSM), but it’s a bad standard that shouldn’t see widespread adoption.

Political Campaigns Suck At Protecting Your Personal Information

I don’t need more reasons to abandon politics but I realize others do. To that end I feel that it’s important to point out the abysmal security record of political campaigns:

Over the last three months, more than 100 million US voters have had their data exposed online. These data breaches weren’t caused by a sophisticated hack or malware. Instead, political campaigns’ abysmal cybersecurity practices are to blame. Although modern campaigns constantly acquire and purchase massive amounts of data, they often neglect to fully beef up security surrounding it, effectively turning the campaigns into sitting ducks — huge operations with databases left open and vulnerable.

[…]

That might be unsettling, but perhaps more troubling is the fact that political campaigns are terrible at cybersecurity. Not only do the organizations have access to more information than ever before, they’re not able to keep it safe. The incentives to do so just don’t exist, and that’s why we’re seeing so much compromised voter data.

In Iowa last month, the state’s Republican party failed to adequately protect a database containing information on 2 million voters, making it readily available through just a basic scan of the website’s source code. In December, an independent security researcher uncovered a publicly available database of 191 million voter records. Included in that trove was each voter’s full name, home address, mailing address, unique voter ID, state voter ID, gender, date of birth, phone number, date of registration, political affiliation, and voter history since 2000.

I’ve mentioned these sorts of issues to friends before but they always hid behind the “I give campaigns a fake phone number” excuse. But the phone number you gave to a campaign isn’t what’s getting out, it’s your real personal information including your home address.

Politics is continuing to become more polarizing in this country. Both parties have become religions where disagreement with the party is tantamount to heresy. True believers are often willing to shun former friends and family members. Some employers are even willing to avoid hiring, or to terminate, employees based on their form of political worship. There are no signs indicating this trend will cease or reverse so your voting record could become a major problem in the near future.

The amount of personal information many campaigns have on individuals is rather shocking. It’s often enough information for people with access to commit acts of identity theft.

There really isn’t anything to gain from political participation and there’s a lot to lose. Control over your personal information is one of the things you could potentially lose. My advice is to avoid politics since it’s obvious campaigns have no interest in protecting you.

Everything Is Better With Internet Connectivity

I straddle that fine line between an obsessive love of everything technologically advanced and a curmudgeonly attitude that results in me asking why new products ever see the light of day. The Internet of Things (IoT) trend has really put me in a bad place. There are a lot of new “smart” devices that I want to like but they’re so poorly executed that I end up hating their existence. Then there are the products I can’t fathom on any level. This is one of those:

Fisher-Price’s “Smart Toys” are a line of digital stuffed animals, like teddy bears, that are connected to the Internet in order to offer personalized learning activities. Aimed at kids aged 3 to 8, the toys actually adapt to children to figure out their favorite activities. They also use a combination of image and voice recognition to identify the child’s voice and to read “smart cards,” which kick off the various games and adventures.

According to a report released today by security researchers at Rapid7, these Smart Toys could have been compromised by hackers who wanted to take advantage of weaknesses in the underlying software. Specifically, the problem was that the platform’s web service (API) calls were not appropriately verifying the sender of messages, meaning an attacker could have sent requests that should not otherwise have been authorized.
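The flaw Rapid7 describes, API calls that don’t verify the sender, is usually closed by having the sender authenticate each message, for example with an HMAC over the request body that the server recomputes and checks. A minimal sketch, with an invented secret and payload format:

```python
# Minimal sketch of sender verification via HMAC over the request body.
# The shared secret and JSON field names here are invented for illustration.
import hashlib
import hmac

SHARED_SECRET = b"per-device-secret"

def sign_request(body: bytes) -> str:
    """Sender attaches this digest to each request it makes."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def is_authorized(body: bytes, signature: str) -> bool:
    """Server recomputes the digest; constant-time compare resists timing attacks."""
    expected = sign_request(body)
    return hmac.compare_digest(expected, signature)

good = sign_request(b'{"child_id": 1}')
assert is_authorized(b'{"child_id": 1}', good)
assert not is_authorized(b'{"child_id": 2}', good)  # tampered request rejected
```

None of this is exotic; it’s a few lines of standard-library code, which makes skipping the check all the more inexcusable.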

I’m sure somebody can enlighten me on the appeal of Internet connected stuffed animals but I can only imagine these products being the outcome of some high level manager telling a poor underling to “Cloud enable our toys!” In all likelihood no specialists were brought in to properly implement the Internet connectivity features so Fisher-Price ended up releasing a prepackaged network vulnerability. Herein lies the problem with the IoT. Seemingly every company has become entirely obsessed with Internet enabled products but few of them know enough to know that they don’t know what they’re doing. This is creating an Internet of Bad Ideas.

There’s no reason the IoT has to be this way. Companies can bring in people with the knowledge to implement Internet connectivity correctly. But they’re not. Some will inevitably blame each company’s desire to keep overhead as low as possible but I think the biggest part of the problem may be rooted in ignorance. Most of these companies know they want to “cloud enable” their products to capitalize on the new hotness but are so ignorant about network connectivity that they don’t even know they’re ignorant.

Ted Nugent Riding The Crazy Train

I offer this post in the hopes of being helpful to the gun rights community. As with any community the gun rights community has its good and bad members. While many of the old guard rub me the wrong way, specifically because of their socially conservative views, I don’t really hold them in ill regard. However, there are some positively vile members. One of those is Ted Nugent, who not only involves himself in gun rights but is a member of the National Rifle Association’s (NRA) board.

Deciding he hadn’t been in the spotlight for saying vile shit recently enough, Ted decided it would be a jolly good idea to post some anti-Semitic shit on his Facebook page. Here’s a screenshot in case the post is pulled:

[Screenshot: ted-nuget-crazy-train]

I feel it necessary to first point out that Judaism and Israel aren’t synonymous. Unless Ted is implying Israel is behind the gun control movement, which would seem rather odd to me, he can’t even get his bigoted symbolism right.

Speaking of bigots, they really are my least favorite part of, well, pretty much any movement. My support for gun rights stems from my belief that everybody should enjoy a right to self-defense. I don’t care what your race, religion, sexual orientation, gender, or any other defining characteristics are. Hell, I don’t even care what species you are. If you’re an organism you have a right to fight anything that attempts to kill you.

My advice is that individuals involved in the fight for gun rights should strongly consider disassociating themselves from Ted Nugent. He’s a vile piece of shit who contributes absolutely nothing of value.

Registering A Drone Publishes Your Home Address To The Internet

When a handful of drone owners made some poor choices the Federal Aviation Administration (FAA) saw the opportunity to drum up some cash. It mandated that all drones must be registered with the FAA. Registering as a drone pilot costs $5.00 and failing to register can cost up to $250,000 and/or up to three years in a cage. Either way the FAA wins and you lose. Why do you lose? Because a hidden cost of registering your drone is making your home address publicly available on the Internet:

The FAA is delighted that signups for its new drone registry have hit 300,000. But the agency’s buoyant mood is destined for a nosedive. The FAA isn’t warning drone owners their names and addresses are easily searchable and downloadable (47MB) in the agency’s online registry.

To add a bit more insult than usual to public registries, the FAA’s drone pilot registry even includes minors:

While drone owners must be 13 years old to register, the privacy threat posed by this registry is particularly concerning for minors — for obvious reasons.

The poor manner in which this registry program has been handled just adds credence to the idea that the entire thing is a quick cash grab. Even a little bit of thought would have caused the developers to realize how bad of an idea making people’s names and addresses publicly available is. It’s especially damning when it’s so easy to make a more anonymized database.

Democracy Has No Place In The Crypto Wars

AT&T’s CEO, Randall Stephenson, had some choice words for Apple’s CEO, Tim Cook. Namely, Stephenson doesn’t appreciate Cook’s stance on effective encryption:

AT&T CEO Randall Stephenson doesn’t think Apple CEO Tim Cook should be making long-term decisions around encryption that could ripple across the technology industry. “I don’t think it is Silicon Valley’s decision to make about whether encryption is the right thing to do,” he told The Wall Street Journal in an interview on Wednesday. “I understand Tim Cook’s decision, but I don’t think it’s his decision to make,” said Stephenson. “I personally think that this is an issue that should be decided by the American people and Congress, not by companies.”

I’m sure this has everything to do with Stephenson’s strong belief in democracy and nothing at all to do with his company’s surveillance partnership with the National Security Agency (NSA). But let’s address the issue of democracy.

Stephenson says that effective cryptography should be decided by the American people. Unless I’m missing something Tim Cook is an American citizen. His stance on effective cryptography is his decision. Therefore his position is decided by an American person. Furthermore, why should anybody outside of Apple have a voice in the company’s stance? Stephenson is an employee of AT&T so his opinion shouldn’t be relevant to Apple. Congress, likewise, isn’t employed by Apple so their opinions shouldn’t be relevant to Apple either. Democracy, outside of groups voluntarily deciding to vote on matters involving only themselves, is bullshit. It’s a tool for people to inflict their will on others. In fact it may very well be the grossest form of might makes right our species has developed.

I understand Stephenson’s decision, part of his business relies on surveillance, but it’s not his decision to make. This is an issue that should be decided by those creating the tools. If Stephenson wants to insert backdoors into his company’s products that’s fine, I’ll simply avoid using his products. But he has no right to demand other companies follow suit.