Dangers of Closed Platforms

I advocate for open decentralized platforms like Mastodon, Matrix, and PeerTube over closed centralized platforms like Facebook, Twitter, and YouTube. While popular open platforms don’t have the reach and user base of popular closed platforms, they also lack many of the dangers.

Two recent stories illustrate some of the bigger dangers of closed platforms. The first was Meta (the new name Facebook chose in its attempt to improve its public image) announcing that it will demand a nearly 50 percent cut of all digital goods sold on its platform:

Facebook-parent Meta is planning to take a cut of up to 47.5% on the sale of digital assets on its virtual reality platform Horizon Worlds, which is an integral part of the company’s plan for creating a so-called “metaverse.”

Before Apple popularized completely locked-down platforms, software developers were able to sell their wares without cutting in platform owners. For example, if you sold software that ran on Windows, you didn’t have to hand over a percentage of your earnings to Microsoft. This was because Windows, although a closed-source platform, didn’t restrict users’ ability to install whatever software they wanted from whichever source they chose. Then Apple announced the App Store. As part of that announcement Apple noted that the App Store would be the only way (at least without jailbreaking) to install additional software on iOS devices and that Apple would claim a 30 percent cut of all software sold on the App Store.

Google announced a very similar deal for Android devices, but with a few important caveats. The first caveat was that sideloading, the act of installing software outside of the Google Play Store, would be allowed (unless a device manufacturer disallowed it). The second caveat was that third-party stores like F-Droid would be supported. The third caveat was that since Android is an open-source project, even if Google did away with the first two caveats, developers would be free to fork Android and release versions that restored the functionality.

The iOS model favors the platform owner over both third-party software developers and users. The Android model at least cuts third-party software developers and users a bit of slack by giving them alternatives to the officially supported platform owner’s app store (although Google makes an effort to ensure its Play Store is favored over sideloading and third-party stores). Meta has chosen the Apple model, which means anybody developing software for Horizon Worlds will be required to hand nearly half of their earnings to Meta. This hostility to third-party developers and users is compounded by the fact that Meta could at any point change the rules and demand an even larger cut.
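To put the difference in concrete terms, here is a quick back-of-the-envelope comparison (the sales figure is made up for illustration; integer cents are used to avoid floating-point rounding):

```python
def take_home_cents(gross_cents: int, cut_basis_points: int) -> int:
    """Return what a developer keeps, in cents, after the platform
    owner's cut. The cut is expressed in basis points (4750 = 47.5%)."""
    return gross_cents * (10_000 - cut_basis_points) // 10_000

# A hypothetical $10,000.00 in gross sales of a digital good.
gross = 1_000_000  # cents

windows_style = take_home_cents(gross, 0)       # no platform cut
app_store = take_home_cents(gross, 3_000)       # Apple's 30% cut
horizon_worlds = take_home_cents(gross, 4_750)  # Meta's announced 47.5% cut

print(windows_style, app_store, horizon_worlds)  # 1000000 700000 525000
```

On $10,000 in sales, a Horizon Worlds developer would keep $5,250, compared to $7,000 under the already widely criticized 30 percent model.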

The second story illustrating the dangers of closed centralized platforms is Elon Musk’s attempt to buy Twitter:

Elon Musk on Wednesday offered to personally acquire Twitter in an all-cash deal valued at $43 billion. Musk laid out the terms of the proposal in a letter to Twitter Chairman Bret Taylor that was reproduced in an SEC filing.

This announcement has upset a lot of Twitter users (especially those who oppose the concept of free speech since Musk publicly supports the concept). Were Twitter an open decentralized platform, Musk’s announcement would have less relevance. For example, if Twitter were a federated social media service like Mastodon, users on Twitter could simply migrate to another instance. Federation would allow them to continue interacting with Twitter’s users (unless Twitter blocked federation, of course), but from an instance not owned and controlled by Musk. But Twitter isn’t open or decentralized. Whoever owns Twitter gets to make the rules and users have no choice but to accept those rules (or migrate to a completely different platform and deal with the Herculean challenge of convincing their friends and followers to migrate with them).

I often point out that if you don’t own a service, you’re at the mercy of whoever does. As an end user you have no power on closed platforms like iOS and Twitter. With open platforms you always have the option to self-host or to find an instance run in a manner you find agreeable.

They’re Called Dumbbells for a Reason

Before I begin my rant, I want to note that the etymology of dumbbell is more interesting than “stupid barbell,” but I’m allowed a bit of artistic license on my own blog. With that out of the way, let me get into this rant.

I still don’t (and likely never will) understand the modern obsession with taking perfectly functional things and making them dysfunctional by connecting them to the Internet. Nike still holds the crowning achievement for its “smart” shoes that became bricked by a firmware update. But the quest to match or exceed Nike continues. NordicTrack is obviously gunning for the crown with its “smart” dumbbells:

There are two things that make the iSelect dumbbells “smart.” The first is that these use an electronic locking mechanism, as opposed to pins or end screws. The second is that you can change the weights using voice commands to Alexa. Though, fortunately, you don’t have to since there’s also a knob that lets you change the weights manually.

[…]

Setting up the dumbbells is easy. All you’ve got to do is download the iSelect app for iOS or Android and then follow the prompts to pair the dumbbells over Bluetooth and Wi-Fi. (The latter is for firmware updates.)

Perhaps I’m showing my age, but why in the hell would anybody want to take perfectly functional weighted chunks of metal and complicate them by adding wireless connectivity, voice commands, a phone app, and firmware updates? Changing weights on adjustable dumbbells isn’t complicated or time-consuming. And if you, like the author of the linked article, are concerned about the ruggedness of a physical retaining mechanism, why would you have any faith in a mechanism that is electronically controlled?

If you want adjustable dumbbells, there are a lot of excellent options on the market. Rogue Fitness makes dumbbell bars that accept plate weights. Powerblocks are oddly shaped, but built like tanks. There is also the Nüobell, which maintains a classic dumbbell profile. All of these options are within $100 (after the addition of weights for the Rogue bars and assuming you get the 50 lbs. version of the Nüobell) of the NordicTrack iSelect, are built significantly better, and won’t stop working because the manufacturer pushed out a botched firmware update. There are also adjustable dumbbells on Amazon that are much cheaper than any of these.

There’s no reason to make dumbbells “smart.” The feature set of the iSelect demonstrates that. The only thing the “smarts” let you do is adjust the weight of the dumbbells with Alexa voice commands (and brick the dumbbells with a bad firmware update, of course). And according to the article, the voice commands are slower than using the physical knob on the stand, so that single feature is more of a hindrance than a benefit.

As another aside, I chuckled when the article listed “No mandatory subscription” under the pros. The prevalence of tying “smarts” to subscriptions is so great that a “smart” device can earn points by simply continuing to function if you don’t pay a subscription fee. That tells you more than you might realize about “smart” devices.

Averages Apply to Criminals Too

George Carlin once said, “Think of how stupid the average person is, and realize half of them are stupider than that.” This applies to criminals as well.

If you believed the claims of politicians and law enforcers, you’d think that the invention of encryption and the tools it enables, like Tor and Bitcoin, spelled the end of law enforcement. We’re constantly told that without backdoor access to all encryption, the government is unable to thwart the schemes of terrorists, drug dealers, and child pornographers. Their claims assume that everybody using encryption is knowledgeable about it and technology in general. But real world criminals aren’t James Bond supervillains. They’re human beings, which means most of them are of average or below average intelligence.

The recent high profile child pornography site bust is a perfect example of this point:

He was taken aback by what he saw: Many of this child abuse site’s users—and, by all appearances, its administrators—had done almost nothing to obscure their cryptocurrency trails. An entire network of criminal payments, all intended to be secret, was laid bare before him.

[…]

He spotted what he was looking for almost instantly: an IP address. In fact, to Gambaryan’s surprise, every thumbnail image on the site seemed to display, within the site’s HTML, the IP address of the server where it was physically hosted: 121.185.153.64. He copied those 11 digits into his computer’s command line and ran a basic traceroute function, following its path across the internet back to the location of that server.

Incredibly, the results showed that this computer wasn’t obscured by Tor’s anonymizing network at all; Gambaryan was looking at the actual, unprotected address of a Welcome to Video server. Confirming Levin’s initial hunch, the site was hosted on a residential connection of an internet service provider in South Korea, outside of Seoul.

[…]

Janczewski knew that Torbox and Sigaint, both dark-web services themselves, wouldn’t respond to legal requests for their users’ information. But the BTC-e data included IP addresses for 10 past logins on the exchange by the same user. In nine out of 10, the IP address was obscured with a VPN or Tor. But in one single visit to BTC-e, the user had slipped up: They had left their actual home IP address exposed. “That opened the whole door,” says Janczewski.

Despite the use of several commonly cited tools that supposedly thwart law enforcement efforts, law enforcers were able to discover the location of the server hosting the site and the identities of suspected administrators using old-fashioned investigative techniques. This was possible because criminals are human beings with all the flaws that entails.

One thing this story illustrates is that it takes only a single slip up to render an otherwise effective security model irrelevant. It also illustrates that just because one is using a tool doesn’t mean they’re using it effectively. Despite what politicians and law enforcers often claim, Bitcoin makes no effort to anonymize transactions. If, for example, law enforcers know the identity of the owner of some Bitcoin and that individual knows the identity of the person buying some of that Bitcoin, it’s simple for law enforcers to identify the buyer. Popular legal crypto exchanges operating in the United States are required to follow know-your-customer (KYC) laws, which means they know the real world identity of their users. If you set up an account with one of those exchanges and buy some Bitcoin, then law enforcers can determine your identity by subpoenaing the exchange. Even if the exchange you’re using doesn’t follow know-your-customer laws, if you connect to it without obscuring your IP address even once, it’s possible for law enforcers to identify you if they can identify and put pressure on the exchange.
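The traceability comes from the fact that every Bitcoin transaction is recorded on a public ledger. The sketch below uses a made-up, drastically simplified ledger (the addresses and amounts are hypothetical, and real blockchain analysis works on transaction outputs rather than simple address-to-address transfers), but it captures the core idea: anyone can walk the chain of payments backwards.

```python
# Toy stand-in for the public blockchain: every transfer between
# addresses is visible to anyone. All addresses here are made up.
ledger = [
    {"from": "exchange_hot_wallet", "to": "addr_A", "btc": 1.0},
    {"from": "addr_A", "to": "addr_B", "btc": 0.6},
    {"from": "addr_B", "to": "site_donation_addr", "btc": 0.5},
]

def trace_back(ledger, target):
    """Walk the public ledger backwards from a target address,
    collecting every address that ever fed coins into it."""
    upstream = set()
    frontier = {target}
    while frontier:
        frontier = {tx["from"] for tx in ledger
                    if tx["to"] in frontier and tx["from"] not in upstream}
        upstream |= frontier
    return upstream

# Anyone can see the site's donation address was ultimately funded
# from the exchange's wallet.
print(trace_back(ledger, "site_donation_addr"))
```

Once the walk reaches an address belonging to a KYC-compliant exchange, a subpoena ties the whole chain to a real identity.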

No fewer than three mistakes were made by the criminals in this case. First, they falsely believed that Bitcoin anonymizes transactions. Second, they failed to obscure the real world location of the server. Third, one of the individuals involved connected to their Bitcoin exchange without a VPN once. These mistakes made their efforts to secure themselves against law enforcers useless.

When politicians and law enforcers tell you that the government requires backdoor access to encryption in order to thwart terrorists, drug dealers, and child pornographers, they’re lying. Their claims might have some validity in a world where every criminal was as brilliant as a James Bond supervillain, but we don’t live in that world. Here criminals are regular humans. They’re usually of average or below average intelligence. Even though they may know that tools to assist their criminal efforts exist, they likely don’t know how to employ them correctly.

Jury Rules Fairly in FBI Fabricated Plot to Kidnap the Michigan Governor

Back in 2020 when the news broke that a handful of militiamen had been arrested for plotting to kidnap the Michigan governor, my first assumption was that the plot was likely fabricated by undercover Federal Bureau of Investigation (FBI) agents. This is because many, if not most, of the high profile terror cases seemingly thwarted by the FBI were in fact created by the FBI in the first place.

If you delve into the details of these cases, you quickly learn that no serious plot would have ever developed had the FBI not gotten involved. Therefore, I’ve argued that these cases are entrapment and the arrested suspects should be found not guilty. Unfortunately, juries usually side with the state in these cases, which encourages the FBI to fabricate more of them. Fortunately, the jury for the Michigan kidnapping plot acted against the norm:

A US federal jury has acquitted two men accused of plotting to kidnap Michigan’s governor and failed to reach a verdict for two other defendants.

[…]

Jurors began deliberating this week after 14 days of testimony and had indicated earlier on Friday that they were deadlocked on some of the charges.

They ultimately reached no verdict against Mr Fox, who was alleged to be the group’s ringleader, and Mr Croft, both of whom were also facing an additional count each of conspiracy to use a weapon of mass destruction.

I would have rather seen a not guilty verdict, but I find a deadlock fair enough since the suspects still beat the charges.

Although I suspect this decision is a statistical anomaly, the optimist in me hopes that it’s the beginning of a trend where juries rule against the state in these kinds of cases. The FBI should not get credit for thwarting plots it creates. I will even argue that, if anything, the agency should be punished severely for doing so (but I know that will never happen).

If you’d like to learn more about the FBI’s tendency to fabricate terror plots, there is a good, albeit somewhat dated, book titled The Terror Factory by Trevor Aaronson that details this strategy up to its 2014 publication date.

Securing Financial Applications Behind Secondary Accounts

Many people run their entire lives from their mobile devices. Unfortunately, this makes mobile devices prime targets for malicious actors. Apple and Google have responded to this by continuously bolstering the security of their respective mobile operating systems (although the openness of Android means device manufacturers can and often do undo a lot of that security work). One major security improvement has been the optional use of biometrics to unlock devices. Before fingerprint and facial recognition on mobile devices, you had to type in a password (or optionally draw a pattern on Android) every time you wanted to unlock your device. This dissuaded people from setting an unlock password on their devices. Now that mobile devices can be quickly unlocked with fingerprint or facial recognition, implementing a proper unlock password on a device isn’t as inconvenient. With this increase in convenience came an increase in the number of people properly locking their devices.

Setting a proper unlock password protects the owner from the consequences of their mobile device being stolen. A thief might get the device, but if it’s properly locked (which implies all security updates are installed and the device is actively supported by the manufacturer), the thief will be blocked from accessing data on the device such as any financial applications.

Now that locked devices are more prevalent, thieves are resorting to new forms of trickery to gain access to the valuable information on devices:

Most scams that utilize payment apps involve a range of tricks to get you to send money. But some criminals are now skipping that step; they simply ask strangers to use their phones and then send the money themselves.

The victim often doesn’t realize what’s happened until hours or even days later. And by that point, there’s very little they can do about it.

If somebody asks to borrow your phone, tell them no. But asking to borrow a phone isn’t the only way thieves acquire access to unlocked devices. Thieves are also targeting people who are actively using their devices (and since those people often aren’t paying attention to their surroundings, they’re easy targets). If a thief steals an unlocked device from somebody, they can gain access to the information on the device until it is locked again.

Most financial applications offer the ability to set an application specific password, which you should do. However, Android offers another level of security. Android supports multiple user accounts. Applications and data in one user account cannot be accessed by other user accounts (an application can be installed in multiple accounts, but each installation is unique to an account). A user can add a secondary account and install their financial applications in that account. When they’re using their main account for things like making calls and instant messaging, their financial applications remain locked behind the secondary account. So long as the user isn’t actively using the secondary account, any thief who swipes the device while it’s unlocked will not even be able to see which, if any, financial applications are installed.
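Secondary accounts can be created from Settings → System → Multiple users on most devices. For the curious, the same thing can be done over adb on a device with USB debugging enabled (the account name and user id below are examples; the actual id is printed by pm create-user):

```shell
# List the user accounts that already exist on the device.
adb shell pm list users

# Create a secondary account; "Finance" is an example name.
# The command prints the new account's user id.
adb shell pm create-user Finance

# Switch to the new account (assuming the id printed above was 10),
# then install your financial applications while it is active.
adb shell am switch-user 10
```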

Financial applications aren’t the only ones that you can hide behind secondary user accounts, but they’re good candidates because unauthorized access to those applications can result in real world consequences. Furthermore, financial applications usually aren’t accessed frequently. They’re accessed when a user needs to check the status of an account or make a transaction.

Malicious Automatic Updates

The early days of the Internet demonstrated both the importance and lack of computer security. Versions of Windows before XP had no security to speak of. But even by the time Windows XP was released, you could still easily compromise your entire system by visiting a malicious site (while this is still a possibility today, it was a guarantee back then). It was during the reign of Windows XP that Microsoft started taking security more seriously. Windows XP Service Pack 2 included a number of security improvements to the operating system. However, this didn’t solve the problem of woeful computer security because even the best security improvements are worthless if nobody actually installs them.

Most users won’t manually check for software updates. Even if the system automatically checks for updates and notifies users when they’re available, those users often still won’t install those updates. This behavior led to the rise of automatic updates.

With regard to security, automatic updates are good. But like all good things, automatic updates are also abused by malicious actors. Nowhere is this more prominent than with smart appliances. Vizio recently released an update for some of its smart televisions. The update included a new “feature” that spies on what you’re watching and displays tailored ads over that content:

The Vizio TV that you bought with hard-earned cash has a new feature; Jump Ads. Vizio will first identify what is on your screen and then place interactive banner ads over live TV programs.

[…]

It is based on Vizio’s in-house technology from subsidiary company Inscape that uses automatic content recognition (ACR) to identify what is on your screen at any given moment. If the system detects a specific show on live TV it can then show ads in real-time.

Vizio isn’t unique in this behavior. Many device manufacturers use automatic updates to push out bullshit “features.” This strategy is especially insidious because the malicious behavior isn’t present when the device is purchased and, oftentimes, the buyer has no method to stop the updates from being installed. Many smart devices demand an active Internet connection before they’ll provide any functionality, even offline functionality. Some smart devices when not given Internet access will scan for open Wi-Fi networks and automatically connect to any one they find (which is a notable security problem). And as the price of machine to machine cellular access continues to drop, more manufacturers are going to cut out the local network requirement and setup their smart devices to automatically connect to any available cellular network.

This pisses me off for a number of reasons. The biggest reason is that the functionality of the device is being significantly altered after purchase. A consumer may buy a specific device for a reason that ceases to exist after an automatic update is pushed out by the manufacturer. The second biggest reason this behavior pisses me off is that it taints the idea of automatic updates in the eyes of consumers. Automatic updates are an important component in consumer computer security, but consumers will shy away from them if they are continually used to provide a negative experience. Hence this behavior is a detriment to consumer computer security.

As an aside, this behavior illustrates another important fact that I’ve ranted about numerous times: you don’t own your smart devices. When you buy a smart device, you’re paying money to grant a manufacturer the privilege to dictate how you will use that device. If the manufacturer decides that you need to view ads on the screen of your smart oven in order to use it, there is nothing you as an end consumer can do (if you’re sufficiently technical you might be able to work around it, but then you’re just paying money to suffer the headache of fighting your own device).

Once again I encourage everybody reading this to give serious consideration to the dwindling number of dumb devices. Even if a smart device offers features that are appealing to your use case, you have to remember that the manufacturer can take those features away at any time without giving you any prior notice. Moreover, they can also add features you don’t want at any time without any notice (such as spyware on your television).