It’s Not Your Car

I think the technology behind modern electric cars is really cool. What I don’t like, though, is that electric car manufacturers don’t seem satisfied with simply replacing gasoline engines with electric motors; they are also trying to replace the owner as the decision maker:

Hurricane Florence is approaching the East Coast of the US, and is predicted to bring with it catastrophic flooding, high winds, as well as a life-threatening storm surge and rain in North and South Carolina. As a result, both GM and Tesla have remotely activated features in their cars that could be of use in an evacuation.

Since OnStar is a subscription service, I at least understand why GM has control over whether or not certain features are available to users. But why should Tesla owners have to wait for the manufacturer to decide that they need access to the extra battery capacity before they can use it? Why can’t the car have a button that enables and disables the capacity lock?

More and more consumers are losing control over devices that are supposedly theirs. Consumers are being treated like children who are incapable of making rational decisions and must therefore be guided by the manufacturer. This doesn’t sit well with me. When I buy something, I want complete control over it. If there is extra capacity in my vehicle’s battery, I want the ability to decide whether or not it’s being utilized. Unfortunately, it appears that I’m in the minority because most consumers seem to welcome having an overlord dictate what they can and cannot do with their devices.

The Power of Public Shaming

Every major security breach is followed by calls for politicians to enact more stringent regulations. When I see people demanding additional government regulations, I like to point out that there are alternative solutions that can yield far better results (especially since regulations, being a product of government, are extremely rigid and slow to change, which makes them ill-suited to fast moving markets). One of those solutions is public shaming. It turns out that public shaming is often a viable solution to security issues:

See the theme? Crazy statements made by representatives of the companies involved. The last one from Betfair is a great example and the entire thread is worth a read. What it boiled down to was the account arguing with a journalist (pro tip: avoid being a dick to those in a position to write publicly about you!) that no, you didn’t just need a username and birth date to reset the account password. Eventually, it got to the point where Betfair advised that providing this information to someone else would be a breach of their terms. Now, keeping in mind that the username is your email address and that many among us like cake and presents and other birthday celebratory patterns, it’s reasonable to say that this was a ludicrous statement. Further, I propose that this is a perfect case where shaming is not only due, but necessary. So I wrote a blog post.

Shortly after that blog post, three things happened and the first was that it got press. The Register wrote about it. Venture Beat wrote about it. Many other discussions were held in the public forum with all concluding the same thing: this process sucked. Secondly, it got fixed. No longer was a mere email address and birthday sufficient to reset the account, you actually had to demonstrate that you controlled the email address! And finally, something else happened that convinced me of the value of shaming in this fashion:

A couple of months later, I delivered the opening keynote at OWASP’s AppSec conference in Amsterdam. After the talk, a bunch of people came up to say g’day and many other nice things. And then, after the crowd died down, a bloke came up and handed me his card – “Betfair Security”. Ah shit. But the hesitation quickly passed as he proceeded to thank me for the coverage. You see, they knew this process sucked – any reasonable person with half an idea about security did – but the internal security team alone telling management this was not cool wasn’t enough to drive change.

As I mentioned above, regulations tend to be rigid and slow to change. Public shaming, on the other hand, is often almost instantaneous. It seldom takes long for a company tweet that makes an outrageous security claim to be bombarded with criticism. Within minutes there are retweets by people mocking the statement, replies from people explaining why the claim is outrageous, and journalists writing about how outrageous the claim is. That public outrage, unlike C-SPAN, quickly reaches the public at large. Once the public becomes aware of the company’s claim and why it’s bad, the company has to start worrying about losing customers and, by extension, profits.

From Their Beloved to Their Bitter Enemy

Remember just a few weeks ago when the European Union passed the General Data Protection Regulation (GDPR) and became the beloved of Internet activists across the globe? In the wake of GDPR’s passage I saw a ton of European peasants claim that the passage of the law demonstrated that the European Union, unlike the United States government, actually represents and watches out for its people.

A rule I live by is that if you see a government do something you like, stick around for a short while longer because it’ll soon do something you really don’t like. The European Union just proved this rule. Within a few short weeks it went from the beloved of Internet activists to their bitter enemy:

The EU has voted on copyright reform (again), with members of European Parliament this time voting in favor of the extremely controversial Articles 11 and 13. The 438 to 226 vote, described as “the worst possible outcome” by some quarters, could have significant repercussions on the way we use the internet.

The Copyright Directive, first proposed in 2016, is intended to bring the issue of copyright in line with the digital age. Articles 11 and 13 have caused particular controversy, with many heralding their adoption as the death of the internet. Article 11, also known as the “link tax”, would require online platforms such as Google and Facebook to pay media companies to link to their content, while Article 13, the “upload filter”, would force them to check all content uploaded to their sites and remove any copyrighted material. How this will affect regular internet users is still subject to debate, but it could seriously limit the variety of content available online — and it could pretty much spell the end of memes.

Excuse me for a minute while I laugh at all of the suckers who claimed that the European Union represents and watches out for its people.

The Internet started off as a strongly decentralized network. Eventually it turned into the highly centralized mess that we’re dealing with now. Soon it may return to its decentralized nature as international companies find themselves having to abandon regions because they cannot comply with all of the different legal frameworks. Google and Facebook make a lot of money off of Europe but do they make enough money to justify paying link taxes? Do small content hosting sites have the spare resources to scan every file that has been uploaded for copyrighted material?
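
To get a rough sense of what an Article 13 style upload filter demands of even a small host, here is a minimal sketch of the naive version of the idea. It assumes a hypothetical set of reference hashes of known copyrighted works (the hash shown is just a placeholder); real compliance would require far fuzzier and far more expensive content matching than this.

    import hashlib

    # Hypothetical set of SHA-256 hashes of known copyrighted files.
    # In practice this set would be enormous and constantly changing.
    KNOWN_COPYRIGHTED_HASHES = {
        "0" * 64,  # placeholder entry, not a real reference hash
    }

    def is_flagged(upload_path: str) -> bool:
        """Return True if the uploaded file exactly matches a known work."""
        digest = hashlib.sha256()
        with open(upload_path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() in KNOWN_COPYRIGHTED_HASHES

    def handle_upload(upload_path: str) -> None:
        # Scan every single upload before it is accepted.
        if is_flagged(upload_path):
            print(f"Rejected {upload_path}: matches a known copyrighted work")
        else:
            print(f"Accepted {upload_path}")

Even this toy filter has to touch every byte of every upload, and exact hashing is trivially defeated by re-encoding or editing a file, so a filter that actually satisfies the law needs perceptual fingerprinting of the kind YouTube spent years and a great deal of money building for Content ID. That is exactly the resource question small hosts face.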

Moreover, legislation like this will push more Internet traffic “underground.” As long ago as the Napster lawsuit, it became obvious that people on the Internet weren’t going to comply with copyright laws. Instead, when one system of bypassing copyright laws is destroyed by the State, another is created in its place. Sharing memes online, at least for European peasants, might come to require the Tor Browser and hidden image sharing sites, but people will continue to share memes.

Uncontrolled Release of Energy

Your smartphone has a rather sizable appetite for energy. To keep it running just for one day it needs a battery that is capable of storing a rather notable amount of energy. The same is true for your laptop, tablet, smartwatch, and any other sophisticated portable electronic device. For the most part we never think about the batteries that power our portable electronics until they degrade to such a point that we find ourselves recharging them more often than we’re comfortable with. But what happens when something besides the usual wear and tear goes wrong with our batteries? What happens if a battery decides to release its stored energy all at once? This is a problem plaguing companies that specialize in recycling electronics:

MADISON, Wis. — What happens to gadgets when you’re done with them? Too often, they explode.

As we enter new-gadget buying season, spare a moment to meet the people who end up handling your old stuff. Isauro Flores-Hernandez, who takes apart used smartphones and tablets for a living, keeps thick gloves, metal tongs and a red fireproof bin by his desk here at Cascade Asset Management, an electronics scrap processor. He uses them to whisk away devices with batteries that burst into flames when he opens them for recycling.

One corner of his desk is charred from an Apple iPhone that began smoking and then exploded after he opened it in 2016. Last year, his co-worker had to slide away an exploding iPad battery and evacuate the area while it burned out.

Due to their popularity, lithium-ion batteries are receiving a lot of attention at the moment but the problem of uncontrolled energy release isn’t unique to them. Anything capable of storing energy so that it can be released in a controlled manner can suffer a failure that causes the energy to be released in an uncontrolled manner. Consider the gas tank in your vehicle. Under normal operating conditions the energy stored in your gas tank is released in a controlled manner by your engine. But a crash can cause the energy to be released in an uncontrolled manner, which results in a fire or explosion.
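
To put “a rather notable amount of energy” into rough numbers, here’s a back-of-envelope sketch. The battery and tank figures are typical assumed values, not measurements of any particular device:

    # Back-of-envelope comparison of stored energy, using typical (assumed) figures.
    WH_TO_J = 3600  # 1 watt-hour = 3600 joules

    phone_battery_wh = 3.8 * 3.0      # ~3000 mAh cell at ~3.8 V, about 11.4 Wh
    laptop_battery_wh = 60.0          # a common laptop pack rating
    gasoline_mj_per_litre = 34.2      # approximate energy density of gasoline
    tank_litres = 50.0                # an ordinary passenger car tank

    phone_j = phone_battery_wh * WH_TO_J
    laptop_j = laptop_battery_wh * WH_TO_J
    tank_j = gasoline_mj_per_litre * 1e6 * tank_litres

    print(f"Smartphone battery: ~{phone_j / 1e3:.0f} kJ")
    print(f"Laptop battery:     ~{laptop_j / 1e3:.0f} kJ")
    print(f"Full gas tank:      ~{tank_j / 1e6:.0f} MJ")
    print(f"Gas tank vs phone:  ~{tank_j / phone_j:,.0f}x more stored energy")

A full gas tank holds tens of thousands of times more energy than a phone battery, but even the roughly 40 kJ in a phone, dumped in a few seconds instead of trickled out over a day, is plenty to char a desk.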

Anything that can store a large quantity of energy should be treated with respect. If you’re repairing your smartphone or laptop, be careful around the battery. If you smell something odd coming from one of your battery-powered devices, put some distance between it and yourself (and anything that can catch fire and burn).

Don’t Trust Snoops

Software that allows family members to spy on one another is big business. But how far can you trust a company that specializes in enabling abusers to keep a constant eye on their victims? Not surprisingly, such companies can’t be trusted very much:

mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.

Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.

Oops.

I can’t say that I’m terribly surprised by this. Companies that make software aimed at allowing family members to spy on one another already have, at least in my opinion, a pretty flexible moral framework. I wouldn’t be surprised if all of the data collected by mSpy was stored in plaintext in order to make it easily accessible to other buyers.

You Are Responsible for Your Own Security

One of the advertised advantages of Apple’s iOS platform is that all software loaded onto iOS devices has to be verified by Apple. This so-called walled garden is meant to keep the bad guys out. However, anybody who studies military history quickly learns that sitting behind a wall is usually a death sentence. Eventually the enemy breaches the wall. Enemies have breached Apple’s walls before and they continue to do so:

In a blog post entitled “Location Monetization in iOS Apps,” the Guardian team detailed 24 applications from the Apple iOS App Store that pushed data to 12 different “location-data monetization firms”—companies that collect precise location data from application users for profit. The 24 identified applications were found in a random sampling of the App Store’s top free applications, so there are likely many more apps for iOS surreptitiously selling user location data. Additionally, the Guardian team confirmed that one data-mining service was connected with apps from over 100 local broadcasters owned by companies such as Sinclair, Tribune Broadcasting, Fox, and Nexstar Media.

iOS has a good permission system and users can prevent apps from accessing location information but far too many people are willing to grant access to their location information to any application that asks. If a walled garden were perfectly secure, users wouldn’t have to worry about granting unnecessary permissions because the wall guards wouldn’t allow anything malicious inside. Unfortunately, the wall guards aren’t perfect and malicious stuff does get through, which brings me to my second point.

What happens when a malicious app manages to breach Apple’s walled garden? Ideally it should be immediately removed but the universe isn’t ideal:

Adware Doctor is a top app in Apple’s Mac App Store, sitting at number five in the list of top paid apps and leading the list of top utilities apps, as of writing. It says it’s meant to prevent “malware and malicious files from infecting your Mac” and claims to be one of the best apps to do so, but unbeknownst to its users, it’s also stealing their browser history and downloading it to servers in China.

In fairness to Apple, the company did eventually remove Adware Doctor from its app store. Eventually is the keyword though. How many other malicious apps have breached Apple’s walled garden? How long do they manage to hide inside of the garden before being discovered, and how quickly do the guards remove them once they are? Apparently Apple’s guards can be a bit slow to react.

Even in a walled garden you are responsible for your own security. You need to know how to defend yourself in case a bad guy manages to get inside of the defensive walls.

The People Who Decide Legality

Anybody who has looked into the history of the politics and legalities of firearms knows that the people who write and interpret laws regarding firearms are generally clueless about the subject matter. The same is true for technology (and possibly more so). The people who write and interpret laws regarding technology are almost always completely clueless about the subject matter. But what happens when you combine firearms and technology? An entirely new level of ignorance is unlocked:

On Monday, a federal court in Washington state blocked Cody Wilson and his company Defense Distributed from putting his 3D-printed gun schematic online. The court’s order—the latest in a years-long legal tussle that has picked up this summer—largely focuses on government rulemaking procedures, but a number of times it has to consider how technology works. When it does, it manages to get the technology remarkably wrong.

Perhaps the most comical of these is when the decision considers whether letting the schematic go online will cause “irreparable harm.” Most of the files are already online, Wilson’s attorneys argued, so what’s the harm in putting them up yet again? Yet the court disagreed, saying those online copies might be hard to find—only “a cybernaut with a BitTorrent protocol” could locate them “in the dark or remote recesses of the internet.”

If you think downloading a schematic for a firearm is insane, just wait until you see what else I can do with a BitTorrent protocol! You’ll have to wait though since I’m short on BitTorrent protocols at the moment (please donate).

In addition to the use of the word cybernaut, I find it comical that the Internet Archive is considered a dark and remote recess of the Internet by this judge.

What should really stand out about this story though is that court officials who are entirely ignorant about the subject matter that they’re ruling on are allowed to make official rulings. When this judge issued their spiel about cybernauts using BitTorrent protocols to obtain schematics from the dark and remote recesses of the Internet, it had the force of law. If Defense Distributed violated this ruling, armed thugs with badges could be sent out to kidnap Cody Wilson or even kill him if he resisted their kidnapping attempt because an idiot in a magic muumuu has the power to make whatever they say an enforceable law. If that isn’t a great case against statism, I don’t know what is.

Creating New Definitions

I’ve often heard people say “words have meanings” when they believe somebody is using a word incorrectly (especially in a debate). It’s true, words do have meanings. Unfortunately, many words have multiple meanings. What makes this matter even more complicated is that words often have different meanings when used in a legal context. For example, a monopoly is generally considered an entity that operates without competition. However, according to the Fascist Communications Club (FCC) and a court that backed it, an entity that operates without competition isn’t necessarily a monopoly:

An appeals court has upheld a Federal Communications Commission ruling that broadband markets can be competitive even when there is only one Internet provider.

The real tragedy here isn’t that the FCC and a court have decided that the absence of competition is a competitive market; it’s that the ruling backs a regulatory environment that the government created.

The lack of competition in the Internet Service Provider (ISP) market isn’t a market phenomenon; it’s the result of regulations put in place by government officials to protect their favored ISPs from competition. But nobody (besides government officials and monopolists) likes monopolies, so in order to appeal to the stupid sheep that continue to vote for them, government officials have had to create a new definition of monopoly that allows them to grant monopolies without actually calling the companies that receive their grants monopolists. It’s a complicated business. You should probably just pick up the newest version of the Newspeak dictionary, learn the new definitions, and roll with them.

Why Connecting Things to the Internet Doesn’t Give Me Warm Fuzzies

The trend in seemingly every market is to take features that function perfectly well without an Internet connection and make them dependent on one. Let’s consider two old automobile features: remote door unlocking and engine starting. Most modern vehicles have the former and many now come equipped with the latter. These features are usually activated by a remote control attached to your key chain, and they have decent range (the remote for my very basic vehicle can reliably start the engine through several walls). Tesla decided that such a basic feature wasn’t good enough for its high-tech cars and instead tied those features to the Internet. Needless to say, the inevitable happened:

Tesla’s fleet network connection is currently down, which means that owners of the EV brand of cars aren’t able to sign into the mobile app. Unfortunately, this means that they can’t remote start or remote unlock their cars, and they’re also unable to monitor their car’s charging status.

In all fairness, this isn’t an issue unique to Tesla. Any product that makes features dependent on an Internet connection will run into service outages at one point or another. Your “smart” coffee maker’s service will eventually go down, which will force you to walk over and press the brew button like a goddamn barbarian instead of kicking off the brew cycle from an app as you continue lying in bed.

Where these Internet-dependent features really bite you in the ass, though, is when the service provider goes out of business, especially if the product itself cannot operate without the Internet service. There are a lot of current “smart” devices that will soon end up in a landfill not because they mechanically failed but because their service provider went bankrupt. While the features that became unavailable when Tesla’s service went down weren’t critical to the functionality of the vehicle, no longer being able to remotely unlock doors, start the engine, or check the charging status would really degrade the overall user experience of the company’s vehicles.

Cody Wilson Is the Most Uppity Slave

A federal judge may have told Defense Distributed that it couldn’t provide its already widely available 3D printer files, but the saga hasn’t ended. Since Defense Distributed can no longer provide its files for free, it will sell them on a USB drive:

AUSTIN, Texas—During what he called his first ever press conference, Defense Distributed founder Cody Wilson announced Tuesday that he would continue to comply with a federal court order forbidding him from internationally publishing CAD files of firearms. Wilson said he would also begin selling copies of his 3D-printed gun files for a “suggested price” of $10 each.

The files, crucially, will be transmitted to customers “on a DD-branded flash drive” in the United States. Wilson also mentioned looking into customer email and secure download links.

Now that the files aren’t leaving the United States, the primary argument being used to censor Defense Distributed is no longer in play.

What I find just as funny as Wilson’s unwillingness to roll over like a good little slave is how he has become the biggest thorn in the side of gun control advocates seemingly out of nowhere. For decades gun control advocates have focused all of their attention on the National Rifle Association (NRA). While the NRA has acted as the 800 pound gorilla in the room, it has also been an extremely moderate organization. The NRA never pushed anything truly radical. Then along came Cody Wilson. He advocated something truly radical: the complete abolition of the State and, by extension, gun control. He also showed the world the biggest weakness in the concept of gun control: that guns are mechanically simple devices that can be manufactured with relative ease. While gun control advocates are trying to censor him, he has already done his damage. The world knows that firearms can be easily manufactured. Moreover, the designs for some basic firearms that can be created with a 3D printer have been released to the Internet and are therefore impossible to censor.