The Fix for High Sierra’s Embarrassing Privilege Escalation Bug and the Fix for the Fix

Apple has already released a fix for its embarrassing privilege escalation bug. If you haven’t already, open the App Store, go to Updates, and install Security Update 2017-001. However, after installing it you may notice that file sharing no longer works. To fix that, perform the following steps (a sample Terminal session is shown after the list):

  1. Open the Terminal app, which is in the Utilities folder of your Applications folder.
  2. Type sudo /usr/libexec/configureLocalKDC and press Return.
  3. Enter your administrator password and press Return.
  4. Quit the Terminal app.
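
For anybody who doesn’t live in the Terminal, here’s roughly what that exchange looks like. The Password: prompt comes from sudo and nothing is echoed while you type:

    $ sudo /usr/libexec/configureLocalKDC
    Password:
    $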

In conclusion, High Sierra is still a steaming pile of shit and you should stick to Sierra if you can.

macOS High Sierra is Still Terrible

macOS High Sierra may go down in the history books as Apple’s worst release of macOS since the initial one. Moving the graphical user interface to the Metal API wasn’t a smooth transition, to say the least, but the real mess is security. First there was a bug where a user’s actual password could be displayed in the password hint field, so a malicious user only had to enter the password incorrectly enough times to trigger the hint in order to read it. Then yesterday it was revealed that the root account, which is normally disabled entirely, can be activated in High Sierra by simply typing root into the user name field in System Preferences:

The bug, discovered by developer Lemi Ergin, lets anyone log into an admin account using the username “root” with no password. This works when attempting to access an administrator’s account on an unlocked Mac, and it also provides access at the login screen of a locked Mac.

The only good news is that you can defend against this bug by enabling the root account and giving it a password.
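
If you’d rather do that from the command line than through System Preferences, one option is the dsenableroot utility, which enables the root account and sets its password in a single step (Directory Utility can do the same thing graphically). This is just a sketch of the interactive session; it fills in your current account name and then prompts for your own password followed by the new root password:

    $ dsenableroot
    username = alice
    user password:
    root password:
    verify root password:

Once you’ve installed Apple’s fix, dsenableroot -d will disable the account again if you don’t want it around.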

The security mistakes in High Sierra are incredibly amateur. Automated regression testing should have caught both the password hint mistake and this root account mistake. I can only assume that Apple’s quality assurance department took the year off because both High Sierra and iOS 11 are buggy messes that should never have shipped in the state they did.

When You’re Trying to Be Very Smart™ but End Up Looking Stupid

The announcement of the iPhone X was one of the biggest product announcements of the year. Not only is it the latest iPhone, which always captures headlines, but it includes a new facial recognition feature dubbed Face ID. With the popularity of the iPhone it’s inevitable that politicians will try to latch onto it to capture some headlines of their own. Al Franken, one of Minnesota’s congress critters, decided to do exactly that by expressing concern about the privacy implications of the Face ID feature. This may have looked like a smart political maneuver, but the senator only managed to make himself appear illiterate since Apple had already published all of the technical information about Face ID:

Apple has responded to Senator Al Franken’s concerns over the privacy implications of its Face ID feature, which is set to debut on the iPhone X next month. In his letter to Tim Cook, Franken asked about customer security, third-party access to data (including requests by law enforcement), and whether the tech could recognize a diverse set of faces.

In its response, Apple indicates that it’s already detailed the tech in a white paper and Knowledge Base article — which provides answers to “all of the questions you raise”. But, it also offers a recap of the feature regardless (a TL;DR, if you will). Apple reiterates that the chance of a random person unlocking your phone is one in a million (in comparison to one in 500,000 for Touch ID). And, it claims that after five unsuccessful scans, a passcode is required to access your iPhone.

Franken should feel fortunate that Apple even bothered entertaining his concerns. Were I Tim Cook I would have directed a member of my staff to send Franken links to the technical publications with a request to have a member of his staff read them to him and not bothered giving him a TL;DR. After all, Apple’s time is worth far more money than Franken’s since it’s actually producing products and services that people want instead of being a parasite feeding off of stolen money.

Still I admit that it was pretty funny seeing Franken make an ass of himself yet again.

APFS and FileVault

Apple released macOS High Sierra yesterday. Amongst other changes, High Sierra includes the new Apple File System (APFS), which replaces HFS+, the last of Apple’s decades-old Hierarchical File System (HFS) family. When you install High Sierra, at least if your boot drive is a Solid State Drive (SSD), the file system is supposed to be converted to APFS automatically. Although Apple’s website says that FileVault encrypted drives will be converted as well, it doesn’t give any details.

I installed High Sierra on two of my systems last night. One was a 2012 MacBook Pro and the other was a 2010 Mac Mini. Both contain Crucial SSDs. Since they’re third-party SSDs I wasn’t sure if High Sierra would automatically convert them. I’m happy to report that both were converted automatically. I’m also happy to report that FileVault didn’t throw a wrench into the conversion. I was worried that converting a FileVault encrypted drive would require copying files from one encrypted container to a new encrypted container but that wasn’t necessary.
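
If you want to verify the conversion on your own machine, Terminal will tell you. A quick sanity check looks something like this (the output shown is what I’d expect to see, not a full transcript):

    # What file system is the boot volume using now?
    $ diskutil info / | grep "File System Personality"
       File System Personality:  APFS
    # Is FileVault still enabled?
    $ fdesetup status
    FileVault is On.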

If you’re installing High Sierra on a FileVault encrypted drive, the conversion from HFS+ to APFS won’t take noticeably longer than it does on an unencrypted one.

iOS 11 Makes It More Difficult for Police to Access Your Device

One reason I prefer iOS over Android is that Apple has invested more heavily in security than Google has. Part of this comes from the fact that Apple controls both the hardware and the software, so it can implement hardware security features such as its Secure Enclave coprocessor, whereas the hardware security features available on an Android device are largely dependent on the manufacturer. However, even the best security models have holes in them.

Some of those holes are due to improperly implemented features while others are due to legalities. For example, here in the United States law enforcers have a lot of leeway in what they can do. One practice that has become more popular, especially at the border, is using devices that copy data from smartphones. This has been relatively easy to do on Apple devices once the user unlocks the screen because trusting a new connection has only required tapping a button. That will change in iOS 11:

For the mobile forensic specialist, one of the most compelling changes in iOS 11 is the new way to establish trust relationship between the iOS device and the computer. In previous versions of the system (which includes iOS 8.x through iOS 10.x), establishing trusted relationship only required confirming the “Trust this computer?” prompt on the device screen. Notably, one still had to unlock the device in order to access the prompt; however, fingerprint unlock would work perfectly for this purpose. iOS 11 modifies this behaviour by requiring an additional second step after the initial “Trust this computer?” prompt has been confirmed. During the second step, the device will ask to enter the passcode in order to complete pairing. This in turn requires forensic experts to know the passcode; Touch ID alone can no longer be used to unlock the device and perform logical acquisition.

Moreover, Apple has included a way for users to quickly disable the fingerprint sensor:

In iOS 11, Apple has added a new emergency feature designed to give users an intuitive way to call emergency by simply pressing the Power button five times in rapid succession. As it turns out, this SOS mode not only allows quickly calling an emergency number, but also disables Touch ID.

These two features appear to be aimed at limiting what law enforcers can get off of a device without its owner’s cooperation. Under the legal framework of the United States, a police officer can compel you to provide your fingerprint to unlock your device, but compelling you to provide a passcode is still murky territory. Some courts have ruled that law enforcers can compel you to provide your passcode while others have ruled that they cannot. That murky legal territory offers far better protection than the near-universal position that you can be compelled to provide your fingerprint.

Even if you’re unable to disable the fingerprint sensor in time, law enforcers will still be unable to copy the data off of your phone without your passcode.

It’s Not Your Data When It’s in The Cloud

I’ve annoyed a great many electrons writing about the dangers of using other people’s computers (i.e. “the cloud”) to store personal information. Most of the time I’ve focused on the threat of government surveillance. If your data is stored on somebody else’s computer, a subpoena is all that is needed for law enforcers to obtain it. However, law enforcers aren’t the only threat when it comes to “the cloud.” Unless you’ve encrypted your data in a way that makes it inaccessible to others before uploading it, whoever is storing it has access to it, which means that their employees could steal it:

Chinese authorities say they have uncovered a massive underground operation involving the sale of Apple users’ personal data.

Twenty-two people have been detained on suspicion of infringing individuals’ privacy and illegally obtaining their digital personal information, according to a statement Wednesday from police in southern Zhejiang province.

Of the 22 suspects, 20 were employees of an Apple “domestic direct sales company and outsourcing company”.

This story is a valuable lesson and warning. Apple has spent a great deal of time developing a reputation for guarding the privacy of its users. But most data uploaded to its iCloud service is stored in a form that Apple itself can decrypt, so while a third party may not be able to intercept it en route, at least some of Apple’s employees have access to it.

The only way to guard your data from becoming public is to either keep it exclusively on your own machines or encrypt it, before uploading it to “the cloud,” in such a way that third parties cannot access it.
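
As a rough illustration of what “encrypt it before you upload it” can look like, here’s a sketch using the openssl tool that ships with macOS. The file names are placeholders, and a purpose-built tool (or gpg) handles passphrases and keys more gracefully, but the principle is the same: the passphrase never leaves your machine, so neither does the readable data.

    # Encrypt locally; you'll be prompted for a passphrase twice.
    $ openssl enc -aes-256-cbc -salt -in photos.tar -out photos.tar.enc

    # Upload photos.tar.enc instead of photos.tar. Later, pull the
    # encrypted copy back down and decrypt it with the same passphrase.
    $ openssl enc -d -aes-256-cbc -in photos.tar.enc -out photos-restored.tar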

Paranoia I Appreciate

My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop usually meant giving up sleep mode, Wi-Fi, the additional function buttons most manufacturers added to their keyboards, and a slew of power management features, the loss of which made the already pathetic battery life even worse. Since OS X was (and still is) Unix based and didn’t involve the headaches of trying to get Linux running on a laptop, the PowerBook fit my needs perfectly.

Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple on the other hand has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that it doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:

Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.

“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”

Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact, some companies such as Cisco have taken measures to mitigate such threats.

Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.

This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in house. My e-mail, calendar and contact syncing, and even this website are all being hosted on servers running in my dwelling. Although part of the reason I did this was for the experience another reason was to guard against certain forms of surveillance. National Security Letters (NSL), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers are sitting in my dwelling any NSL would necessarily require me to inform myself of receiving it.

FBI Versus Apple Court Hearing Postponed

It appears that the Federal Bureau of Investigation (FBI) is finally following the advice of every major security expert and pursuing alternate means of acquiring the data on Farook’s iPhone, which means the agency’s crusade against Apple is temporarily postponed:

A magistrate in Riverside, CA has canceled a hearing that was scheduled for Tuesday afternoon in the Apple v FBI case, at the FBI’s request late Monday. The hearing was part of Apple’s challenge to the FBI’s demand that the company create a new version of its iOS, which would include a backdoor to allow easier access to a locked iPhone involved in the FBI’s investigation into the 2015 San Bernardino shootings.

The FBI told the court that an “outside party” demonstrated a potential method for accessing the data on the phone, and asked for time to test this method and report back. This is good news. For now, the government is backing off its demand that Apple build a tool that will compromise the security of millions, contradicts Apple’s own beliefs, and is unsafe and unconstitutional.

This by no means marks the end of Crypto War II. The FBI could very well continue its legacy of incompetence and fail to acquire the data from the iPhone through whatever means it’s pursuing now. But this buys us some time before a court rules that software developers can be treated as slave laborers whenever some judge issues a court order.

I’m going to do a bit of speculation here. My guess is that the FBI didn’t suddenly find somebody with a promising method of extracting data from the iPhone. After reading the briefs submitted by both Apple and the FBI, it was obvious to me that the FBI either had incompetent lawyers or didn’t have a case. Given that, I’m guessing the FBI decided to abandon its current strategy because it foresaw the court setting a precedent against it. It would be far better to abandon its current efforts and try again later, maybe against a company less competent than Apple, than to pursue what would almost certainly be a major defeat.

Regardless of the FBI’s reasoning, we can take a short breath and wait for the State’s next major attack against our rights.

iOS 9.3 With iMessage Fix Is Out

In the ongoing security arms race, researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:

Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.

It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.

To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.

Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.

“And we kept doing that,” Green said, “until we had the key.”

A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.

With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.

There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right, so it often turns into an arms race. Third, designing secure software is hard even if you’re a large company with a lot of talented employees.
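
It’s also worth spelling out why the digit-by-digit guessing described above works at all. Assuming the 64-digit key is hexadecimal, which the article’s “changing a digit or a letter” wording suggests, per-digit confirmation from the phone collapses an impossible brute-force search into roughly a thousand guesses:

    # Guessing the entire 64-digit hex key blind: 16^64 possibilities
    # (a 78-digit number, on the order of 10^77).
    $ echo '16^64' | bc

    # Guessing it one confirmed digit at a time: at most 64 * 16 attempts.
    $ echo '64 * 16' | bc
    1024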

Christopher Soghoian also made a good point in the article:

Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.

The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and user, remain secret.

Open source isn’t a magic bullet but it does allow independent third party verification of your code. This advantage often goes unrealized as even very popular open source projects like OpenSSL have contained numerous notable security vulnerabilities for years without anybody being the wiser. But it’s unlikely something like iMessage would have been ignored so thoroughly.

The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. And since iOS, and by extension iMessage, is so prominent in the public eye, it’s likely a lot of security researchers would have looked through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly follow. So open sourcing iMessage would likely have gained Apple a lot of third party verification.

In fact this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with Android and iOS but it’s also open source so it’s available for third party verification.

Illustrating Cryptographic Backdoors With Mechanical Backdoors

A lot of people don’t understand the concept of cryptographic backdoors. This isn’t surprising because cryptography and security are very complex fields of study. But it does lead to a great deal of misunderstanding, especially amongst those who tend to trust what government agents say.

Quite a few people have asked me why Apple doesn’t simply comply with the demands of the Federal Bureau of Investigation (FBI). They’ve fallen for the FBI’s claims that the compromised firmware would only be used on that single iPhone and that Apple would be allowed to maintain total control over it at all times. However, as Jonathan Zdziarski explained, the burden of forensic methodology would require the firmware to change hands several times:

Once the tool is ready, it must be tested and validated by a third party. In this case, it would be NIST/NIJ (which is where my own tools were validated). NIST has a mobile forensics testing and validation process by which Apple would need to provide a copy of the tool (which would have to work on all of their test devices) for NIST to verify.

[…]

If evidence from a device ever leads to a case in a court room, the defense attorney will (and should) request a copy of the tool to have independent third party verification performed, at which point the software will need to be made to work on another set of test devices. Apple will need to work with defense experts to instruct them on how to use the tool to provide predictable and consistent results.

If Apple created what the FBI is demanding, the firmware would almost certainly end up in the hands of NIST, the defense attorney, and another third party hired by the defense attorney to verify it. As Benjamin Franklin said, “Three can keep a secret, if two of them are dead.” With the firmware passing through so many hands it would almost certainly end up leaked to the public.

After pointing this out a common followup question is, “So what? How much damage could this firmware cause?” To illustrate this I will use an example from the physical world.

The Transportation Security Administration (TSA) worked with several lock manufacturers to create TSA recognized locks. These are special locks that TSA agents can bypass using master keys. To many this doesn’t sound bad. After all, the TSA tightly guards these master keys, right? Although I’m not familiar with the TSA’s internal policies for managing its master keys, I do know that the key patterns were leaked to the Internet and that 3D printer models were created shortly thereafter. And those models produce keys that work.

The keys were leaked, likely unintentionally, by a TSA agent posting a photograph of them online. With that single leak every TSA recognized lock was rendered entirely useless. Now anybody can obtain the keys to open any TSA recognized lock.

It only takes one person leaking a master key, intentionally or unintentionally, to render every lock that key opens entirely useless. A compromised version of iOS could leak in many ways. The defendant’s attorney, who may not be well versed in proper security practices, could accidentally transfer the firmware to a third party in an unsecured manner; if that transfer were being monitored, the person monitoring it would have a copy of the firmware. An employee of NIST could insert a USB drive containing the firmware into an infected computer and unknowingly hand it to a malicious actor. Somebody working for the defendant’s third party verifier could leak a copy intentionally. There are so many ways the firmware could make its way onto the Internet that it isn’t really a matter of if, but when.

Once the firmware leaked to the Internet it would be available to anybody. While Apple could design the firmware to check the phone’s identity so that it only works on the one device the FBI wants unlocked, it may be possible to spoof those identifiers and make any iPhone 5C look like that device. It’s also possible that somebody will find a way to disable signature verification on a fully updated iPhone 5C. If that happens, a modified version of the compromised firmware that skips the identifier check (and therefore carries an invalid signature) could be installed on any of those phones.

The bottom line is that the mere existence of a compromised firmware, a master key if you will, puts every iPhone 5C at risk, just as the existence of TSA master keys puts everything secured with a TSA recognized lock at risk.