Linksys Won’t Lock Out Third-Party Firmware

The Federal Communications Commission (FCC), an agency that believes it has a monopoly on the naturally occurring electromagnetic spectrum, decreed that all Wi-Fi router manufacturers are now responsible for enforcing the agency’s restrictions on spectrum use. Any manufacturer that fails to be the enforcement arm of the FCC will face consequences (being a government agency must be nice; you can just force other people to do your work for you).

Most manufacturers have responded to this decree by taking measures that prevent users from loading third-party firmware of any sort. Such a response is unnecessary and goes beyond the demands of the FCC. Linksys, fortunately, is setting the bar higher and will not lock out third-party firmware entirely:

Next month, the FCC will start requiring manufacturers to prevent users from modifying the RF (radio frequency) parameters on Wi-Fi routers. Those rules were written to stop RF-modded devices from interfering with FAA Doppler weather radar systems. Despite the restrictions, the FCC stressed it was not advocating for device-makers to prevent all modifications or block the installation of third-party firmware.

[…]

Still, it’s a lot easier to lock down a device’s firmware than it is to prevent modifications to the radio module alone. Open source tech experts predicted that router manufacturers would take the easy way out by slamming the door shut on third-party firmware. And that’s exactly what happened. In March, TP-Link confirmed they were locking down the firmware in all Wi-Fi routers.

[…]

Instead of locking down everything, Linksys went the extra mile to ensure owners still had the option to install the firmware of their choice: “Newly sold Linksys WRT routers will store RF parameter data in a separate memory location in order to secure it from the firmware, the company says. That will allow users to keep loading open source firmware the same way they do now,” reports Ars Technica’s Jon Brodkin.

This is excellent news. Not only will it allow users to continue using their preferred firmware, it also sets a precedent for the industry. TP-Link, like many manufacturers, took the easy road. If every other manufacturer followed suit we’d be awash in shitty firmware (at least until bypasses for the firmware blocks were discovered). By saying it will still allow third-party firmware to be loaded on its devices, Linksys has maintained its value for many customers and may have convinced former users of other devices to buy its devices instead. Other manufacturers may find themselves having to follow Linksys’s path to prevent paying customers from defecting to Linksys. By being a voice of reason, Linksys may end up saving Wi-Fi consumers from having only terrible firmware options.

Updating Your Brand New Xbox One When It Refuses To Update

The new Doom finally convinced me to buy a new console. I debated between a PlayStation 4 and an Xbox One. In the end I settled on the Xbox One because I still don’t fully trust Sony (I may never get over the fact that they included malicious rootkits on music CDs to enforce their idiotic copy protection, and I’m still unhappy about them removing the Linux capabilities from the PlayStation 3) and I was able to buy a refurbished unit for $100.00 off (I’m cheap).

When I hooked up the Xbox One and powered it up for the first time, it said it needed to download and apply an update before doing anything else. I let it download the update, since I couldn’t do anything with it until it finished updating, only for it to report that “There was a problem with the update.” That was the entirety of the error message, and the only diagnostic option available was to test the network connection, which reported that everything was fine and I was connected to the Internet. I tried power cycling the device, disconnecting it from power for 30 seconds, and every other magical dance that Microsoft recommended on its useless troubleshooting site. Nothing would convince the Xbox to download and install the update it said it absolutely needed.

After a lot of fucking around I finally managed to update it. If you’re running into this problem you can give this strategy a try. Hopefully it saves you the hour and a half of fucking around I went through. What you will need is a USB flash drive formatted in NTFS (the Xbox One will not read the drive if it’s formatted in a variation of FAT because reasons) and some time to wait for the multi-gigabyte files to download.

Go to Microsoft’s site for downloading the Offline System Update Diagnostic Tool. Scroll down to the downloads. You’ll notice that they’re separated by OS version. Since you cannot do anything on the Xbox One until the update is applied, you can’t look up your OS version (a nice catch-22). What you will want to do is download both OSUDT3 and OSUDT2.

When you have the files, unzip them. Copy the contents of OSUDT3 to the root directory of the flash drive and connect the flash drive to the side USB port on the Xbox One. Hold down the controller sync button on the side and press the power button on the Xbox One (do not turn the Xbox One on with the controller, otherwise this won’t work). Still holding down the sync button, press and hold the DVD eject button as well. You should hear the startup sound play twice. After that you can release the two buttons and the Xbox One should start applying the OSUDT3 update. Once that is finished, the system will boot normally and you will return to the initial update screen that refuses to apply any updates.

Remove the flash drive, erase the OSUDT3 files from it, and copy the contents of the OSUDT2 zip file to the root directory of the flash drive. Insert the flash drive into the side USB port on the Xbox One and perform the above dance all over again. Once the update has applied your Xbox One should boot up and actually be something other than a useless brick.
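The drive-preparation dance above can be sketched in a few lines of Python. This is a hedged illustration, not an official tool: `prepare_update_drive` is a hypothetical helper, the paths are examples (on Windows the drive root might be `E:/`), and it assumes the OSUDT zip’s internal layout is what the console expects at the drive root.

```python
import zipfile
from pathlib import Path

def prepare_update_drive(osudt_zip: str, drive_root: str) -> None:
    """Clear old update files from the flash drive, then unpack an OSUDT
    package so its contents sit at the drive's root directory."""
    root = Path(drive_root)
    for old in root.iterdir():            # remove leftovers from the previous pass
        if old.is_file():
            old.unlink()
    with zipfile.ZipFile(osudt_zip) as zf:
        # extractall preserves the zip's internal layout; the update files
        # must end up at the drive's root or the console won't find them.
        zf.extractall(root)

# prepare_update_drive("OSUDT3.zip", "E:/")   # then repeat with OSUDT2.zip
```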

As an aside, my initial impression of the Xbox One is less than stellar.

I’m Satoshi Nakamoto! No, I’m Satoshi Nakamoto!

The price of Bitcoin was getting a little wonky again, which meant that the media must be covering some story about it. This time around the media has learned the real identity of Satoshi Nakamoto!

Australian entrepreneur Craig Wright has publicly identified himself as Bitcoin creator Satoshi Nakamoto.

His admission follows years of speculation about who came up with the original ideas underlying the digital cash system.

Mr Wright has provided technical proof to back up his claim using coins known to be owned by Bitcoin’s creator.

Prominent members of the Bitcoin community and its core development team say they have confirmed his claims.

Mystery solved, everybody go home! What’s that? Wright provided a technical proof? It’s based on a cryptographic signature? In that case I’m sure the experts are looking into his claim:

SUMMARY:

  1. Yes, this is a scam. Not maybe. Not possibly.
  2. Wright is pretending he has Satoshi’s signature on Sartre’s writing. That would mean he has the private key, and is likely to be Satoshi. What he actually has is Satoshi’s signature on parts of the public Blockchain, which of course means he doesn’t need the private key and he doesn’t need to be Satoshi. He just needs to make you think Satoshi signed something else besides the Blockchain — like Sartre. He doesn’t publish Sartre. He publishes 14% of one document. He then shows you a hash that’s supposed to summarize the entire document. This is a lie. It’s a hash extracted from the Blockchain itself. Ryan Castellucci (my engineer at White Ops and master of Bitcoin Fu) put an extractor here. Of course the Blockchain is totally public and of course has signatures from Satoshi, so Wright being able to lift a signature from here isn’t surprising at all.
  3. He probably would have gotten away with it if the signature itself wasn’t googlable by Redditors.
  4. I think Gavin et al are victims of another scam, and Wright’s done classic misdirection by generating different scams for different audiences.
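Kaminsky’s point 2 is easy to demonstrate with nothing but a hash function (the byte strings below are stand-ins, not Wright’s actual data). A hash only proves something if verifiers can recompute it from the full message, and a signature only proves something if it verifies against that hash:

```python
import hashlib

sartre = b"Man is condemned to be free... " * 100        # stand-in for the full essay
public_tx = b"early, publicly visible Satoshi transaction"  # stand-in for chain data

# A real proof publishes the FULL message so anyone can recompute its hash
# and then check Satoshi's signature against that hash.
honest_hash = hashlib.sha256(sartre).hexdigest()

# What was actually shown: ~14% of the document plus an asserted hash --
# a hash that, per Kaminsky, came from already-public blockchain data.
excerpt = sartre[: len(sartre) * 14 // 100]
claimed_hash = hashlib.sha256(public_tx).hexdigest()

# Nobody can recompute claimed_hash from the excerpt, so the excerpt proves
# nothing; the signature merely re-verifies data Satoshi signed years ago.
assert hashlib.sha256(excerpt).hexdigest() != claimed_hash
assert claimed_hash != honest_hash
```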

Some congratulations should go to Wright — who will almost certainly claim this was a clever attempt to troll people so he doesn’t feel like a schmuck for being too stupid to properly pull off a scam — for trolling so many people. Not only did the media get suckered but even members of the Bitcoin community fell for his scam hook, line, and sinker.

Compromising Self-Driving Vehicles

The difficult part about being a technophile and an anarchist is that the State often hijacks new technologies to further its own power. These hijackings are always done under the auspices of safety, and the groundwork is already being laid for the State to get its fingers into self-driving vehicles:

It is time to start thinking about the rules of the new road. Otherwise, we may end up with some analog to today’s chaos in cyberspace, which arose from decisions in the 1980s about how personal computers and the Internet would work.

One of the biggest issues will be the rules under which public infrastructures and public safety officers may be empowered to override how autonomous vehicles are controlled.

When should law enforcers and safety officers be empowered to override another person’s self-driving vehicle? Never. Why? Setting aside the obvious abuses such empowerment would lead to, we have the issue of security, which the article alludes to towards the end:

Last, but by no means least, is whether such override systems could possibly be made hack-proof. A system to allow authorized people to control someone else’s car is also a system with a built-in mechanism by which unauthorized people — aka hackers — can do the same.

Even if hackers are kept out, if every police officer is equipped to override AV systems, the number of authorized users is already in the hundreds of thousands — or more if override authority is extended to members of the National Guard, military police, fire/EMS units, and bus drivers.

No system can be “hacker-proof,” especially when that system has hundreds of thousands of authorized users. Each system is only as strong as its weakest user. It only takes one careless authorized user leaking their key to give the entire world a means of gaining access to everything locked by that key.

In order to implement a system in self-driving cars that would allow law enforcers and safety officers to override them, there would need to be a remote access option that allowed anybody employed by a police department, fire department, or hospital to log into the vehicle. Every vehicle would either have to be loaded with every law enforcer’s and safety officer’s credentials or, more likely, rely on a single master key. In the case of the former, it would only take one careless law enforcer or safety officer posting their credentials somewhere an unauthorized party could access them, including the compromised network of a hospital, for every self-driving car to be compromised. In the case of the latter, the only thing required to compromise every self-driving car would be the master key being leaked. Either way, the integrity of the system would depend on hundreds of thousands of people maintaining perfect security, which is an impossible goal.
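The “weakest user” math is stark. If each authorized user independently has even a tiny chance of leaking their credentials in a given year, the chance that at least one of them does so approaches certainty. The figures below are illustrative assumptions, not measured rates:

```python
def p_any_leak(n_users: int, p_per_user: float) -> float:
    """Probability that at least one of n independent users leaks their key."""
    return 1.0 - (1.0 - p_per_user) ** n_users

small = p_any_leak(1, 1e-5)        # one very careful user: effectively zero risk
large = p_any_leak(300_000, 1e-5)  # an illustrative pool of authorized responders

assert small < 0.0001
assert large > 0.94                # near-certain that somebody leaks
```

With a single shared master key, the first leak compromises every vehicle at once; with per-user credentials the blast radius is smaller but the leak itself is just as inevitable.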

If self-driving cars are set up to allow law enforcers and safety officers to override them then they will become useless due to being constantly compromised by malicious actors.

Paranoia I Appreciate

My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop usually meant giving up sleep mode, Wi-Fi, the additional function buttons most manufacturers added to their keyboards, and a slew of power management features that made the already pathetic battery life even worse. Since OS X was (and still is) Unix-based and didn’t involve the headaches of trying to get Linux to run on a laptop, the PowerBook fit my needs perfectly.

Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple, on the other hand, has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that the company doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:

Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.

“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”

Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact, some companies such as Cisco have taken measures to mitigate such threats.

Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.

This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in house. My e-mail, calendar and contact syncing, and even this website are all being hosted on servers running in my dwelling. Although part of the reason I did this was for the experience another reason was to guard against certain forms of surveillance. National Security Letters (NSL), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers are sitting in my dwelling any NSL would necessarily require me to inform myself of receiving it.

iOS 9.3 With iMessage Fix Is Out

In the ongoing security arms race, researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:

Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.

It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.

To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.

Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.

“And we kept doing that,” Green said, “until we had the key.”

A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.

With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.
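What the researchers describe is a classic guess-one-unit-at-a-time oracle attack: because the phone confirms each correct digit, the search space collapses from exponential to linear. Here is a toy model of that accept/reject behavior (it simulates the attack’s logic, not Apple’s actual protocol):

```python
import secrets

SECRET_KEY = secrets.token_hex(32)   # 64 hex digits, like the iMessage key

probes = 0

def phone_oracle(guess: str) -> bool:
    """Toy stand-in for the target phone, which accepted a guess whenever
    the digits supplied so far were correct."""
    global probes
    probes += 1
    return SECRET_KEY.startswith(guess)

def recover_key() -> str:
    known = ""
    while len(known) < 64:
        for digit in "0123456789abcdef":
            if phone_oracle(known + digit):
                known += digit       # the oracle confirmed this digit
                break
    return known

assert recover_key() == SECRET_KEY
assert probes <= 64 * 16             # ~1,000 probes instead of 16**64 brute force
```

This is why “they probed the phone thousands of times” was enough: each confirmed digit permanently shrinks the remaining search.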

There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right, so it often turns into an arms race. Third, designing secure software is hard even if you’re a large company with a lot of talented employees.

Christopher Soghoian also made a good point in the article:

Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.

The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and receiver, remain secret.

Open source isn’t a magic bullet but it does allow independent third-party verification of your code. This advantage often goes unrealized, as even very popular open source projects like OpenSSL have contained numerous notable security vulnerabilities for years without anybody noticing. But it’s unlikely something like iMessage would have been ignored so thoroughly.

The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. Since iOS, and therefore by extension iMessage, is so prominent in the public eye, it’s likely a lot of security researchers would have looked through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly entail. So open sourcing iMessage would likely have gained Apple a lot of third-party verification.

In fact, this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with Android and iOS but it’s also open source, so it’s available for third-party verification.

One Step Forward, Two Steps Back

Were I asked, I would summarize the Internet of Things as one step forward and two steps back. While integrating computers into everyday objects offers some potential, the way manufacturers are going about it is all wrong.

Consider the standard light switch. A light switch usually has two states. One state, which closes the circuit, turns the lights on while the other state, which opens the circuit, turns the lights off. It’s simple enough but has some notable limitations. First, it cannot be controlled remotely. Having a remotely controlled light switch would be useful, especially if you’re away from home and want to make it appear as though somebody is there to discourage burglars. It would also be nice to verify if you turned all your lights off when you left to reduce the electric bill. Of course remotely operated switches also introduce the potential for remotely accessible vulnerabilities.

What happens when you take the worst aspects of connected light switches, namely vulnerabilities, and don’t even offer the positives? This:

Garrett, who’s also a member of the Free Software Foundation board of directors, was in London last week attending a conference, and found that his hotel room has Android tablets instead of light switches.

“One was embedded in the wall, but the two next to the bed had convenient looking ethernet cables plugged into the wall,” he noted. So, he got ahold of a couple of ethernet adapters, set up a transparent bridge, and put his laptop between the tablet and the wall.

He discovered that the traffic to and from the tablet is going through the Modbus serial communications protocol over TCP.

“Modbus is a pretty trivial protocol, and notably has no authentication whatsoever,” he noted. “Tcpdump showed that traffic was being sent to 172.16.207.14, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close.”

He then noticed that the last three digits of the IP address he was communicating with were those of his room, and successfully tested his theory:

“It’s basically as bad as it could be – once I’d figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well.”

As far as I can tell, the only reason the hotel swapped out mechanical light switches for Android tablets was to attempt to look impressive. What they ended up with was a setup that may look impressive to the layman but is every troll’s dream come true.
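Modbus’s “no authentication whatsoever” is visible right in the wire format. A complete Modbus/TCP “write single coil” request is twelve bytes, and not one of them is a credential (the coil address below is made up for illustration):

```python
import struct

def modbus_write_coil(transaction_id: int, unit_id: int,
                      coil_addr: int, on: bool) -> bytes:
    """Build a raw Modbus/TCP 'write single coil' (function 0x05) request.
    Note what is absent: no password, no session, no signature of any kind."""
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)            # function, addr, value
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_coil(1, 1, 14, True)   # e.g. coil 14: some room's lights
assert frame.hex() == "0001000000060105000eff00"
```

Anyone who can reach the TCP port can emit that frame; that is the whole protocol, which is why a transparent bridge and pymodbus were all Garrett needed.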

I can’t wait to read a story about a 14 year-old turning off the lights to every room in a hotel.

Google Releases RCS Client. It’s Backdoored.

With the recent kerfuffle between Apple and the Federal Bureau of Investigation (FBI), the debate between secure and insecure devices is in the spotlight. Apple has been marketing itself as a company that defends users’ privacy, and this recent court battle gives merit to its claims. Other companies, including Google, have expressed support for Apple’s decision to fight the FBI’s demand. That makes this next twist in the story interesting.

Yesterday Christopher Soghoian posted a Tweet linking to a comment on a Hacker News thread discussing Google’s new Rich Communication Services (RCS) client, Jibe. What’s especially interesting about RCS is that it appears to include a backdoor, as noted in the Hacker News thread:

When using MSRPoTLS, and with the following two objectives allow compliance with legal interception procedures, the TLS authentication shall be based on self-signed certificates and the MSRP encrypted connection shall be terminated in an element of the Service Provider network providing service to that UE. Mutual authentication shall be applied as defined in [RFC4572].

It’s important to note that this doesn’t really change anything from the current Short Message Service (SMS) and cellular voice protocols, which offer no real security. By using this standard Google isn’t introducing a new security hole. However, Google also isn’t fixing a known security hole.
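In code terms the difference is small but decisive. Using Python’s ssl module purely as an illustration (this is not the actual Jibe implementation), accepting self-signed certificates amounts to switching off the checks that make TLS mean anything:

```python
import ssl

# A client that actually authenticates its peer: the certificate must chain
# to a trusted CA (or a pinned key) and match the hostname it connected to.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname is True

# What "the TLS authentication shall be based on self-signed certificates"
# boils down to: any certificate, presented by anyone, is accepted -- which
# is exactly what lets the carrier (or an interceptor) terminate the session.
lax = ssl.create_default_context()
lax.check_hostname = False           # must be disabled before dropping verify_mode
lax.verify_mode = ssl.CERT_NONE
assert lax.verify_mode == ssl.CERT_NONE
```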

When Apple created iMessage and FaceTime it made use of strong end-to-end encryption (although that doesn’t protect your messages if you back them up to iCloud). Apple’s replacement for SMS and standard cellular calls addressed a known security hole.

Were I Google, especially with the security debate going on, I would have avoided embracing RCS since it’s insecure by default. RCS may be an industry standard, since it’s managed by the same association that manages Global System for Mobile Communications (GSM), but it’s a bad standard that shouldn’t see widespread adoption.

Everything Is Better With Internet Connectivity

I straddle that fine line between an obsessive love of everything technologically advanced and a curmudgeonly attitude that results in me asking why new products ever see the light of day. The Internet of Things (IoT) trend has really put me in a bad place. There are a lot of new “smart” devices that I want to like but they’re so poorly executed that I end up hating their existence. Then there are the products I can’t fathom on any level. This is one of those:

Fisher-Price’s “Smart Toys” are a line of digital stuffed animals, like teddy bears, that are connected to the Internet in order to offer personalized learning activities. Aimed at kids aged 3 to 8, the toys actually adapt to children to figure out their favorite activities. They also use a combination of image and voice recognition to identify the child’s voice and to read “smart cards,” which kick off the various games and adventures.

According to a report released today by security researchers at Rapid7, these Smart Toys could have been compromised by hackers who wanted to take advantage of weaknesses in the underlying software. Specifically, the problem was that the platform’s web service (API) calls were not appropriately verifying the sender of messages, meaning an attacker could have sent requests that should not otherwise have been authorized.

I’m sure somebody can enlighten me on the appeal of Internet-connected stuffed animals, but I can only imagine these products being the outcome of some high-level manager telling a poor underling to “Cloud enable our toys!” In all likelihood no specialists were brought in to properly implement the Internet connectivity features, so Fisher-Price ended up releasing a prepackaged network vulnerability. Herein lies the problem with the IoT. Seemingly every company has become entirely obsessed with Internet-enabled products but few of them know enough to know that they don’t know what they’re doing. This is creating an Internet of Bad Ideas.
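The fix Rapid7 describes, verifying the sender of each API call, is not exotic. Here is a minimal sketch of request authentication with an HMAC; the secret, message format, and function names are hypothetical, not Fisher-Price’s actual API:

```python
import hashlib
import hmac

SHARED_SECRET = b"per-device key provisioned when the toy is paired"  # hypothetical

def sign_request(body: bytes) -> str:
    """Client side: tag the request with an HMAC only the device can produce."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def server_accepts(body: bytes, signature: str) -> bool:
    """Server side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_request(body), signature)

request = b'{"toy_id": 42, "action": "play_song"}'
assert server_accepts(request, sign_request(request))   # legitimate sender
assert not server_accepts(request, "64" * 32)           # forged signature rejected
```

A dozen lines of standard-library code would have closed the hole the researchers found, which is rather the point.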

There’s no reason the IoT has to be this way. Companies can bring in people with the knowledge to implement Internet connectivity correctly. But they’re not. Some will inevitably blame each company’s desire to keep overhead as low as possible but I think the biggest part of the problem may be rooted in ignorance. Most of these companies know they want to “cloud enable” their products to capitalize on the new hotness but are so ignorant about network connectivity that they don’t even know they’re ignorant.

Even An Air Gap Won’t Save You

Security is a fascinating field that is in a constant state of evolution. When new defenses are created, new attacks follow, and vice versa. One security measure some people take is to create and store their cryptography keys on a computer that isn’t attached to any network. This is known as an air gap and is a pretty solid security measure if implemented correctly (which is harder than most people realize). But even air gaps can be remotely exploited under the right circumstances:

In recent years, air-gapped computers, which are disconnected from the internet so hackers can not remotely access their contents, have become a regular target for security researchers. Now, researchers from Tel Aviv University and Technion have gone a step further than past efforts, and found a way to steal data from air-gapped machines while their equipment is in another room.

“By measuring the target’s electromagnetic emanations, the attack extracts the secret decryption key within seconds, from a target located in an adjacent room across a wall,” Daniel Genkin, Lev Pachmanov, Itamar Pipman, and Eran Tromer write in a recently published paper. The research will be presented at the upcoming RSA Conference on March 3.

It needs to be stated up front that this attack requires a tightly controlled environment, so it isn’t yet practical for common real-world exploitation. But attacks only improve over time, so it’s possible this attack will become more practical with further research. Some may decry this as the end of computer security, because that’s what people commonly do when new exploits are published, but it will simply cause countermeasures to be implemented. Air-gapped machines may be operated in Faraday cages, or computer manufacturers may improve casings to better control electromagnetic emissions.

This is just another chapter in the never ending saga of security. And it’s a damn impressive chapter no matter how you look at it.