Welcome to Postliterate America

In my opinion, the United States shows all the signs of a society beginning its descent into postliteracy. One of the biggest signs is the rapid decline in recreational reading:

The share of Americans who read for pleasure on a given day has fallen by more than 30 percent since 2004, according to the latest American Time Use Survey from the Bureau of Labor Statistics.

In 2004, roughly 28 percent of Americans age 15 and older read for pleasure on a given day. Last year, the figure was about 19 percent.

That steep drop means that aggregate reading time among Americans has fallen, from an average of 23 minutes per person per day in 2004 to 17 minutes per person per day in 2017.

I can’t say that I’m surprised by these results. The idea behind a postliterate society is that multimedia technology has advanced to the point where the ability to read and write is unnecessary. In our age of cheap data storage, data transmission, and devices capable of rendering high-definition sound and video, many of which fit in a pocket, we are less reliant on written information than we once were. Moreover, voice dictation is advancing rapidly. When I first tried voice dictation on a computer, I wrote it off as useless because, at the time, it was. Today my phone’s voice dictation is actually pretty decent. What’s perhaps more remarkable than the improvement in voice dictation software is that it’s not nearly as important as it once was: I can just send the audio clip itself to somebody.

Will literacy go the way of shorthand and cursive? It very well could. The technology is already at a point where literacy isn’t as important as it once was. In a few more years it will probably advance to the point where literacy is an almost entirely unnecessary skill. Once that happens, it may take only one or two generations until literacy is a skill held exclusively by a handful of individuals who have an interest in archaic knowledge.

George Orwell Wasn’t Cynical Enough

George Orwell’s Nineteen Eighty-Four either served as a dire warning or as a blueprint depending on which side of the state you occupy. The Party, the ruling body of Oceania, established a pervasive surveillance state. Helicopters flew around peeking into people’s windows, every home had a two-way television that couldn’t be turned off and allowed government agents to snoop on you, children were encouraged from a young age to rat out their parents if they did anything seditious, etc. However, as cynical as George Orwell’s vision of the future may have been, it wasn’t cynical enough:

In April, California investigators arrested Joseph James DeAngelo for some of the crimes committed by the elusive Golden State Killer (GSK), a man who is believed to have raped over 50 women and murdered at least 12 people between 1978 and 1986. Investigators tracked him down through an open-source ancestry site called GEDMatch, uploading the GSK’s DNA profile and matching it to relatives whose DNA profiles were also hosted on the website. Now, using those same techniques, a handful of other arrests have been made for unsolved cases, some going as far back as 1981.

The New York Times reports that GEDMatch has been used to track down suspects involved in a 1986 murder of a 12-year-old girl, a 1992 rape and murder of a 25-year-old schoolteacher, a 1981 murder of a Texas realtor and a double murder that took place in 1987. It was even used to identify a man who died by suicide in 2001 but had remained unnamed until now. Many of these suspects were found by CeCe Moore, a genetic genealogist working with forensic consulting firm Parabon, who has previously helped adoptees find their biological relatives. “There are so many parallels,” she told the New York Times about the process of finding a suspect versus a relative.

Genetic databases are a boon for law enforcers. While most people worry about commercial databases like Ancestry.com and 23andMe, the open source database GEDMatch, unlike the commercial products, doesn’t even require a warrant to access. What makes genetic databases even more frightening from a privacy standpoint is that you don’t have to submit your own DNA. If a family member submits theirs, that’s enough for law enforcers to identify you.

Since law enforcers are using this database to go after murderers, rapists, and other heinous individuals, many people will likely see this strategy as a positive thing. But government agencies have a tendency to expand their activities. While they may start using a new technology to identify legitimately terrible people, they quickly move on to using it against people who broke the law but didn’t actually hurt anybody. The scary part about law enforcers using tools like GEDMatch is that they will eventually use them to go after everybody.

Another Processor Vulnerability

When it comes to security, hardware has historically received far less scrutiny than software. That has changed in recent times and, not surprisingly, the previous lack of scrutiny has resulted in the discovery of a lot of major vulnerabilities. The latest relates to a feature of Intel processors referred to as Hyperthreading:

Last week, developers on OpenBSD—the open source operating system that prioritizes security—disabled hyperthreading on Intel processors. Project leader Theo de Raadt said that a research paper due to be presented at Black Hat in August prompted the change, but he would not elaborate further.

The situation has since become a little clearer. The Register reported on Friday that researchers at Vrije Universiteit Amsterdam in the Netherlands have found a new side-channel vulnerability on hyperthreaded processors that’s been dubbed TLBleed. The vulnerability means that processes that share a physical core—but which are using different logical cores—can inadvertently leak information to each other.

In a proof of concept, researchers ran a program calculating cryptographic signatures using the Curve 25519 EdDSA algorithm implemented in libgcrypt on one logical core and their attack program on the other logical core. The attack program could determine the 256-bit encryption key used to calculate the signature with a combination of two milliseconds of observation, followed by 17 seconds of machine-learning-driven guessing and a final fraction of a second of brute-force guessing.
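For the curious, it’s easy to check whether two of your logical cores actually share a physical core. The following is a minimal sketch, assuming a Linux system with the standard sysfs topology files (paths and availability vary by kernel); it groups logical CPUs by the physical core they occupy:

```python
# Sketch: group logical CPUs by physical core to spot hyperthread siblings.
# Assumes Linux with standard sysfs topology files; availability varies by kernel.
from collections import defaultdict
from pathlib import Path

siblings = defaultdict(set)
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    topology = cpu / "topology"
    if not topology.exists():
        continue
    package = (topology / "physical_package_id").read_text().strip()
    core = (topology / "core_id").read_text().strip()
    siblings[(package, core)].add(cpu.name)

for (package, core), cpus in sorted(siblings.items()):
    status = "hyperthread siblings" if len(cpus) > 1 else "no sibling"
    print(f"package {package}, core {core}: {', '.join(sorted(cpus))} ({status})")
```

Two processes scheduled onto entries in the same group are in exactly the situation TLBleed exploits; disabling hyperthreading, as OpenBSD did, collapses each group to a single logical core.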

Like the last slew of processor vulnerabilities, the software workaround for this one involves a performance hit. Unfortunately, the long-term fix for these vulnerabilities involves redesigning hardware, which could destroy an assumption on which modern software development relies: that hardware will continue to become faster.

This assumption has been at risk for a while because chip designers are running into transistor size limitations, which could finally do away with Moore’s Law. But designing secure hardware may also require surrendering a bit on the performance front. It’s possible that the next generation of processors won’t have the same raw performance as the current generation. What would this mean? Probably not much for most users. However, it could impact software developers to some extent. Many software development practices are based on the assumption that the next generation of hardware will be faster, making it unnecessary to focus on writing performant code. If the next generation of processors has the same performance as the current generation or, even worse, less, an investment in performant code could pay dividends.

Obviously this is pure speculation on my part, but it’s an interesting scenario to consider.

Avoid E-Mail for Security Communications

The Pretty Good Privacy (PGP) protocol was created to provide a means of communicating securely via e-mail. Unfortunately, it was a bandage applied to a protocol whose complexity has only increased since PGP was released. The ad hoc nature of PGP combined with the increasing complexity of e-mail itself has led to some rather unfortunate implementation failures that have left PGP users vulnerable. A newly released attack enables attackers to spoof PGP signatures:

Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone’s public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can’t be detected as malicious without doing forensic analysis that’s beyond the ability of many users.

[…]

The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that’s required to spoof a signature is to have a public key or key ID.
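The underlying mistake is a classic one: treating human-readable output as if it were a trustworthy, machine-readable verification result. Here’s a contrived sketch of that anti-pattern in Python (this is not Enigmail’s actual code, and the key ID and address are made up), showing why scanning combined log output for a success marker is spoofable:

```python
# Contrived sketch of the SigSpoof anti-pattern (not Enigmail's actual code).
def naive_verify(gpg_output):
    # BAD: trusts any occurrence of the success marker, even if it came from
    # attacker-controlled metadata that gpg echoed into its log output.
    return "[GNUPG:] GOODSIG" in gpg_output

# The attacker smuggles the marker into text that lands in the combined output,
# e.g., via an embedded filename (key ID and address here are made up).
log_line = "gpg: original file name='[GNUPG:] GOODSIG DEADBEEFDEADBEEF alice@example.com'"
print(naive_verify(log_line))  # True -- an unsigned message looks signed
```

The durable fix is to consume only gpg’s machine-readable status channel (its --status-fd output) and keep it strictly separate from log text that message contents can leak into.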

The good news is that many PGP plugins have been updated to patch this vulnerability. The bad news is that this is the second major vulnerability found in PGP in the span of about a month. It’s likely that other major vulnerabilities will be discovered in the near future since the protocol appears to be receiving a lot of attention.

PGP is suffering the same fate as most attempts to bolt security onto insecure protocols. This is why I urge people to use secure communication technology that was designed from the start to be secure and has been audited. While there are no guarantees in life, protocols designed from the ground up with security in mind tend to fare better than security bolted on after the fact. Of course, designs can be garbage, which is where an audit comes in. The reason you want to rely on a secure communication tool only after it has been audited is that an audit by an independent third party can verify that the tool is well designed and provides effective security. An audit isn’t a magic bullet (unfortunately, those don’t exist), but it allows you to be reasonably sure that the tool you’re using isn’t complete garbage.

When Your Smart Lock Isn’t Smart

My biggest gripe with so-called smart products is that they tend not to be very smart. For example, a padlock that can be unlocked with your phone isn’t a bad idea in and of itself. It would certainly be convenient, since most people carry a smartphone these days. However, if it’s designed by people who paid no attention to security, the lock quickly becomes convenient for unauthorized parties as well:

Yes. The only thing we need to unlock the lock is to know the BLE MAC address. The BLE MAC address that is broadcast by the lock.

I was so astounded by how bad the security was that I ordered another and emailed Tapplock to check the lock and app were genuine.

I scripted the attack up to scan for Tapplocks and unlock them. You can just walk up to any Tapplock and unlock it in under 2s. It requires no skill or knowledge to do this.

I wish this were one of those findings that is so rare it’s newsworthy. Unfortunately, a total lack of interest in security seems to be a defining characteristic of developers of “smart” products. While this lack of awareness isn’t unexpected from a company developing, say, a smart thermostat (after all, I wouldn’t expect somebody knowledgeable about thermostats to necessarily be a security expert as well), it’s an entirely different matter when the product being developed is itself a security product.

The problem with this attack is how trivial it is to perform. The author of the article notes that they’re porting the script they developed to unlock these “smart” locks to Android. Once the attack is available on smartphones, anybody can potentially unlock any of these locks with a literal tap of a button. That makes them even easier to bypass than those cheap Master Lock padlocks that are notorious for being insecure.
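To see why no skill is required, consider what deriving the key from the broadcast MAC address means in practice. The sketch below is hypothetical (it does not reproduce Tapplock’s exact derivation, and the offsets are illustrative), but it captures the blunder: anyone within radio range can compute the unlock credentials.

```python
# Hypothetical sketch of the blunder class, not Tapplock's exact derivation:
# unlock credentials computed from an identifier the lock broadcasts to everyone.
import hashlib

def derive_credentials(ble_mac):
    # The MAC address is public by design; every BLE scanner in range sees it.
    digest = hashlib.md5(ble_mac.replace(":", "").upper().encode()).hexdigest()
    key, serial = digest[:8], digest[16:24]  # illustrative offsets
    return key, serial

# A passive observer "authenticates" using nothing but the broadcast address.
print(derive_credentials("AA:BB:CC:DD:EE:FF"))
```

A secret derived entirely from public information isn’t a secret; the lock might as well broadcast its key.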

The End of Enforceable Prohibitions

I’m fond of pointing out to prohibitionists that the era of enforceable prohibitions is over:

In the very near future, governments will lose the ability to keep guns, drones, and other forbidden goods out of the hands of their subjects. They’ll also be rendered impotent to enforce trade and technology embargoes. Power is shifting from the state to individuals and small groups courtesy of additive manufacturing—aka 3D printing—technology.

Additive manufacturing is poised to revolutionize whole industries—destroying some jobs while creating new opportunities. That’s according to a recent report from the prestigious RAND Corporation, and there’s plenty of evidence to support the dynamic and “disruptive” view of the future that the report promises.

Throughout history power has ebbed and flowed. At times centralized authorities are able to wield their significant power to oppress the masses. At other times events weaken those centralized authorities and the average person once again finds themselves holding a great deal of power.

Technological advancements are quickly weakening the power of centralized nation-states. Encryption technology is making their surveillance apparatus less effective. Cryptocurrencies are making it difficult for nation-states to monitor and block transactions. Manufacturing technology is allowing individuals to make increasingly complex objects from the comfort of their own homes. The Internet has made freely trading information so easy that censorship is quickly becoming impossible.

We live in exciting times.

You Must Guard Your Own Privacy

People often make the mistake of believing that they can control the privacy of content they post online. It’s easy to see why they fall into this trap. Facebook and YouTube both offer privacy controls. Facebook and Twitter also provide private messaging. However, online privacy settings are only as good as the provider makes them:

Facebook disclosed a new privacy blunder on Thursday in a statement that said the site accidentally made the posts of 14 million users public even when they designated the posts to be shared with only a limited number of contacts.

The mixup was the result of a bug that automatically suggested posts be set to public, meaning the posts could be viewed by anyone, including people not logged on to Facebook. As a result, from May 18 to May 27, as many as 14 million users who intended posts to be available only to select individuals were, in fact, accessible to anyone on the Internet.

Oops.

Slip-ups like this are more common than most people probably realize. Writing software is hard. Writing complex software used by billions of people is really hard. Then, after the software is written, it must be administered. Administering complex software used by billions of people is also extremely difficult. Programmers and administrators are bound to make mistakes. When they do, the “confidential” content you posted online can quickly become publicly accessible.

Privacy is like anything else: if you want the job done well, you need to do it yourself. The reason services like Facebook can accidentally make your “private” content public is that they have complete access to your content. If you want some semblance of control over your privacy, your content must be accessible only to you. If you want that content to be available to others, you must post it in such a way that only you and they can access it.

This is the problem that public key cryptography attempts to solve. With public key cryptography, each person has a private key and a public key. Anything encrypted with the public key can only be decrypted with the private key. Needless to say, as the names imply, you can post your public key to the Internet but must guard the security of your private key. When you want to make material available to somebody else, you encrypt it with their public key so they can decrypt it with their private key. Likewise, when they want to make content available to you, they encrypt it with your public key so you can decrypt it with your private key. This setup gives you the best ability to enforce privacy controls because, assuming no party’s private key has been compromised, only specifically authorized parties have access to the content. Granted, there are still a lot of ways for this setup to fall apart, but a simple bad configuration isn’t going to suddenly make millions of people’s content publicly accessible.
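Here is a minimal sketch of that flow, using the PyNaCl library (one option among many; key distribution and sender authentication are glossed over):

```python
# Minimal sketch of public key encryption with PyNaCl (pip install pynacl).
# Key distribution and sender authentication are glossed over here.
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair and publishes only the public half.
recipient_private = PrivateKey.generate()
recipient_public = recipient_private.public_key

# Anyone holding the public key can encrypt content for the recipient...
ciphertext = SealedBox(recipient_public).encrypt(b"for your eyes only")

# ...but only the holder of the private key can decrypt it.
print(SealedBox(recipient_private).decrypt(ciphertext).decode())
```

A service hosting that ciphertext can botch its configuration and leak the blob, but not the plaintext; the decryption capability never leaves your hands.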

The Government Is Us

Worshipers of democracy continue to tell me that the government is us. I’m not sure why they try to drag me into being part of the government, but they’re very adamant. Anyways, when somebody tries to claim that the government is us, I like to point to stories like this one:

A young hacker reeling from the Philando Castile case and the acquittal of the officer who killed him broke into several state databases last year and boasted about his exploits.

“An innocent man is dead, while a guilty man is free,” the hacker, known as “Vigilance” tweeted in part last year.

Here’s the thing: if Vigilance is the government (because, after all, he’s part of “us”), he would have every right to access any government computer he so desired. It is, according to worshipers of democracy, his computer after all. But the fact that he has been arrested for accessing those computers indicates that he isn’t the government, especially in the eyes of the government.

You aren’t the government. If you disagree with me, try strolling into a National Security Agency (NSA) building. You’ll be provided a free education regarding your misunderstanding.

How Things Change

The big news in developer circles this week is that Microsoft acquired GitHub. I admit that the news didn’t fill me with happiness since I’m not a fan of the trend of everything being gobbled up by a handful of big companies. But Microsoft has been making a rather dramatic shift in recent years. The company has become far friendlier toward the open source community and has been releasing a lot of terrific developer tools. This shift has made me hopeful that Microsoft will be a good steward of GitHub. Moreover, things could have turned out far worse:

Microsoft was not alone in chasing GitHub, which it agreed to acquire for $7.5 billion on Monday. Representatives from Alphabet’s Google were also talking to the company about an acquisition in recent weeks, according to people familiar with the deal talks.

Not too long ago if you had told me that both Microsoft and Google were looking to acquire GitHub, I’d have hoped for Google. But today I’m happy that of the two companies Microsoft ended up buying GitHub.

The biggest problem I have with Google, besides its business model based on surveilling users, is its habit of abandoning products. Google Reader, Google Talk, Google Health, Google Wave, and more have all been discontinued. Some were discontinued shortly after release and/or with little notice given to users. Microsoft, on the other hand, is well known for supporting products for a long time and giving reasonable notice when it does decide to discontinue one. If Google had acquired GitHub, there’s no telling how long it would have been kept around. Since Microsoft acquired it, GitHub will probably be around for a long time.

It’s funny how quickly things can change. Google was the darling child of the technology industry, but now its star is descending. Meanwhile, Microsoft went from being the epitome of evil to steadily improving its reputation.

You Can’t Commit Suicide If You’re in a Full Body Cast

Stop me if you’ve heard this one before. Cops are called to deal with an apparently suicidal person. When they arrive on the scene, they decide to shoot or beat the shit out of the suicidal man:

Two police officers in Paterson, New Jersey, went to a local hospital March 5 on reports of a man who attempted suicide, video showed. But after the man threw an object into the hallway and insulted one of the cops, the officers grabbed the man’s wheelchair, punched him in the face and pushed him to the ground, according to a federal criminal complaint.

Another video (this one allegedly recorded by Officer Roger Then, 29, on his cellphone) caught the second stage of the assault, prosecutors said. At that point, the victim was in his hospital bed, video showed. Lying on his back, the suicidal man hurled an insult at an unidentified police officer. In response, that officer grabbed a pair of hospital gloves, put them on and “violently struck” the man two times, prosecutors said.

You can’t commit suicide if you’re in a full body cast!

While the story itself isn’t surprising, what is surprising is that Officer Then has been arrested by the Federal Bureau of Investigation. I was expecting to read that the officer was enjoying a paid vacation, since that is such a common outcome in stories like this. But apparently an officer recording himself beating a man lying in a hospital bed is stupid enough that federal law enforcers feel the need to actually do something.

Once again I’m left wondering exactly how many isolated incidents perpetrated by bad apples are needed to establish a trend.