Bitcoin Bad, War Bucks Good

The trick to discrediting a new idea or technology is crafting a criticism onto which supporters, or people at least open to the idea, will latch. A lot of effort has gone into discrediting cryptocurrencies, but most of those criticisms have fallen flat because they haven’t resonated with supporters or people open to the idea. However, what I will call the energy scare seems to be gaining some traction. A short while back Mozilla announced that it would stop accepting proof-of-work cryptocurrencies, ostensibly for environmental reasons. Now Wikimedia has made a similar announcement:

Wikimedia, the non-profit foundation that runs Wikipedia, has decided to stop accepting cryptocurrency donations following a three-month debate in which the environmental impact of bitcoin (BTC) was a major discussion point.

I’ve previously touched on the energy use of Bitcoin and how it compares to the US dollar. However, since the topic is being brought up again, I feel the need to make some more criticisms of the current critics of Bitcoin.

Mozilla and Wikimedia may not accept your Bitcoin, but both will happily accept your United States dollars. This is baffling because both organizations cite environmental reasons for not accepting Bitcoin, but the United States military is one of the largest polluters in the world:

Research by social scientists from Durham University and Lancaster University shows the US military is one of the largest climate polluters in history, consuming more liquid fuels and emitting more CO2e (carbon-dioxide equivalent) than most countries.

Why does this matter? Because one cannot claim to oppose Bitcoin for environmental reasons while also not opposing United States dollars for the same reasons. The United States dollar is inseparable from the United States military because the latter is necessary to maintain the value of the former:

The world relies on the U.S. dollar and U.S. treasuries, giving America unparalleled and outsized economic dominance. Nearly 90% of international currency transactions are in dollars, 60% of foreign exchange reserves are held in dollars and almost 40% of the world’s debt is issued in dollars, even though the U.S. only accounts for around 20% of global GDP. This special status that the dollar enjoys was born in the 1970s through a military pact between America and Saudi Arabia, leading the world to price oil in dollars and stockpile U.S. debt. As we emerge from the 2020 pandemic and financial crisis, American elites continue to enjoy the exorbitant privilege of issuing the ultimate monetary good and numéraire for energy and finance.

The dollar is backed by one thing: military might. Its value cannot be separated from the United States military any more than Bitcoin’s value can be separated from the energy usage of its miners. Bitcoin’s current contribution to global pollution is a tiny fraction of the United States military’s. Therefore, an organization that wants to encourage the use of more environmentally friendly currencies should dump the dollar before Bitcoin.

But the here and now isn’t the only consideration. Let’s consider the future. Bitcoin miners have been transitioning towards renewable energy for quite some time. The United States military, on the other hand, has made no such effort. While Bitcoin miners are already working to become more environmentally friendly, the Commander in Chief of the United States military is only talking about how the military needs to become more environmentally friendly at some undetermined future date.

In conclusion, the claims made and actions taken by Mozilla and Wikimedia are disingenuous at best. If either organization has real environmental concerns about the currencies it accepts, it has a funny way of demonstrating it.

Dangers of Closed Platforms

I advocate for open decentralized platforms like Mastodon, Matrix, and PeerTube over closed centralized platforms like Facebook, Twitter, and YouTube. While popular open platforms don’t have the reach and user base of popular closed platforms, they also lack many of the dangers.

Two recent stories illustrate some of the bigger dangers of closed platforms. The first was Meta (the new name Facebook chose in its attempt to improve its public image) announcing that it will demand a near 50 percent cut of all digital goods sold on its platform:

Facebook-parent Meta is planning to take a cut of up to 47.5% on the sale of digital assets on its virtual reality platform Horizon Worlds, which is an integral part of the company’s plan for creating a so-called “metaverse.”

Before Apple popularized completely locked down platforms, software developers were able to sell their wares without cutting in platform owners. For example, if you sold software that ran on Windows, you didn’t have to hand over a percentage of your earnings to Microsoft. This was because Windows, although a closed source platform, didn’t restrict users’ ability to install whatever software they wanted from whichever source they chose. Then Apple announced the App Store. As part of that announcement Apple noted that the App Store would be the only way (at least without jailbreaking) to install additional software on iOS devices and that Apple would claim a 30 percent cut of all software sold on the App Store.

Google announced a very similar deal for Android devices, but with a few important caveats. The first caveat was that side loading, the act of installing software outside of the Google Play Store, would be allowed (unless a device manufacturer disallowed it). The second caveat was that third-party stores like F-Droid would be supported. The third caveat was that since Android is an open source project, even if Google did away with the first two caveats, developers were free to fork Android and release versions that restored the functionality.

The iOS model favors the platform owner over both third-party software developers and users. The Android model at least cuts third-party software developers and users a bit of slack by giving them alternatives to the platform owner’s official app store (although Google makes an effort to ensure its Play Store is favored over side loading and third-party stores). Meta has chosen the Apple model, which means anybody developing software for Horizon Worlds will be required to hand nearly half of their earnings to Meta. This hostility to third-party developers and users is compounded by the fact that Meta could at any point change the rules and demand an even larger cut.

The second story illustrating the dangers of closed centralized platforms is Elon Musk’s attempt to buy Twitter:

Elon Musk on Wednesday offered to personally acquire Twitter in an all-cash deal valued at $43 billion. Musk laid out the terms of the proposal in a letter to Twitter Chairman Bret Taylor that was reproduced in an SEC filing.

This announcement has upset a lot of Twitter users (especially those who oppose the concept of free speech, since Musk publicly supports it). Were Twitter an open decentralized platform, Musk’s announcement would have less relevance. For example, if Twitter were a federated social media service like Mastodon, users on Twitter could simply migrate to another instance. Federation would allow them to continue interacting with Twitter’s users (unless Twitter blocked federation, of course), but from an instance not owned and controlled by Musk. But Twitter isn’t open or decentralized. Whoever owns Twitter gets to make the rules, and users have no choice but to accept those rules (or migrate to a completely different platform and deal with the Herculean challenge of convincing their friends and followers to migrate with them).

I often point out that if you don’t own a service, you’re at the mercy of whoever does. As an end user you have no power on closed platforms like iOS and Twitter. With open platforms you always have the option to self-host or to find an instance run in a manner you find agreeable.

They’re Called Dumbbells for a Reason

Before I begin my rant, I want to note that the etymology of dumbbell is more interesting than “stupid barbell,” but I’m allowed a bit of artistic license on my own blog. With that out of the way, let me get into this rant.

I still don’t (and likely never will) understand the modern obsession with taking perfectly functional things and making them dysfunctional by connecting them to the Internet. Nike still holds the crowning achievement for its “smart” shoes that were bricked by a firmware update. But the quest to match or exceed Nike continues. Nordictrack is obviously gunning for the crown with its “smart” dumbbells:

There are two things that make the iSelect dumbbells “smart.” The first is that these use an electronic locking mechanism, as opposed to pins or end screws. The second is that you can change the weights using voice commands to Alexa. Though, fortunately, you don’t have to since there’s also a knob that lets you change the weights manually.

[…]

Setting up the dumbbells is easy. All you’ve got to do is download the iSelect app for iOS or Android and then follow the prompts to pair the dumbbells over Bluetooth and Wi-Fi. (The latter is for firmware updates.)

Perhaps I’m showing my age, but why in the hell would anybody want to take perfectly functional weighted chunks of metal and complicate them by adding wireless connectivity, voice commands, a phone app, and firmware updates? Changing weights on adjustable dumbbells isn’t complicated or time consuming. And if you, like the author of the linked article, are concerned about the ruggedness of a physical retaining mechanism, why would you have any faith in a mechanism that is electronically controlled?

If you want adjustable dumbbells, there are a lot of excellent options on the market. Rogue Fitness makes dumbbell bars that accept plate weights. Powerblocks are oddly shaped, but built like tanks. There is also the Nüobell, which maintains a classic dumbbell profile. All of these options are within $100 (after the addition of weights for the Rogue bars and assuming you get the 50 lbs. version of the Nüobell) of the Nordictrack iSelect, are built significantly better, and won’t stop working because the manufacturer pushed out a botched firmware update. There are also adjustable dumbbells on Amazon that are much cheaper than any of these.

There’s no reason to make dumbbells “smart.” The feature set of the iSelect demonstrates that. The only thing the “smarts” let you do is adjust the weight of the dumbbells with Alexa voice commands (and brick the dumbbells with a bad firmware update, of course). And according to the article, the voice commands are slower than using the physical knob on the stand, so that single feature is more of a hindrance than a benefit.

As another aside, I chuckled when the article listed “No mandatory subscription” under the pros. The prevalence of tying “smarts” to subscriptions is so great that a “smart” device can earn points by simply continuing to function if you don’t pay a subscription fee. That tells you more than you might realize about “smart” devices.

Averages Apply to Criminals Too

George Carlin once said, “Think of how stupid the average person is, and realize half of them are stupider than that.” This applies to criminals as well.

If you believed the claims of politicians and law enforcers, you’d think that the invention of encryption and the tools it enables, like Tor and Bitcoin, is the end of law enforcement. We’re constantly told that without backdoor access to all encryption, the government is unable to thwart the schemes of terrorists, drug dealers, and child pornographers. Their claims assume that everybody using encryption is knowledgeable about it and technology in general. But real world criminals aren’t James Bond supervillains. They’re human beings, which means most of them are of average or below average intelligence.

The recent high profile child pornography site bust is a perfect example of this point:

He was taken aback by what he saw: Many of this child abuse site’s users—and, by all appearances, its administrators—had done almost nothing to obscure their cryptocurrency trails. An entire network of criminal payments, all intended to be secret, was laid bare before him.

[…]

He spotted what he was looking for almost instantly: an IP address. In fact, to Gambaryan’s surprise, every thumbnail image on the site seemed to display, within the site’s HTML, the IP address of the server where it was physically hosted: 121.185.153.64. He copied those 11 digits into his computer’s command line and ran a basic traceroute function, following its path across the internet back to the location of that server.

Incredibly, the results showed that this computer wasn’t obscured by Tor’s anonymizing network at all; Gambaryan was looking at the actual, unprotected address of a Welcome to Video server. Confirming Levin’s initial hunch, the site was hosted on a residential connection of an internet service provider in South Korea, outside of Seoul.

[…]

Janczewski knew that Torbox and Sigaint, both dark-web services themselves, wouldn’t respond to legal requests for their users’ information. But the BTC-e data included IP addresses for 10 past logins on the exchange by the same user. In nine out of 10, the IP address was obscured with a VPN or Tor. But in one single visit to BTC-e, the user had slipped up: They had left their actual home IP address exposed. “That opened the whole door,” says Janczewski.

Despite the use of several commonly cited tools that supposedly thwart law enforcement efforts, law enforcers were able to discover the location of the server hosting the site and the identities of suspected administrators using old-fashioned investigative techniques. This was possible because criminals are human beings with all the flaws that entails.

One thing this story illustrates is that it takes only a single slip-up to render an otherwise effective security model irrelevant. It also illustrates that just because one is using a tool doesn’t mean they’re using it effectively. Despite what politicians and law enforcers often claim, Bitcoin makes no effort to anonymize transactions. If, for example, law enforcers know the identity of the owner of some Bitcoin and that individual knows the identity of the person buying some of that Bitcoin, it’s simple for law enforcers to identify the buyer. Popular legal crypto exchanges operating in the United States are required to follow know your customer laws, which means they know the real world identity of their users. If you set up an account with one of those exchanges and buy some Bitcoin, then law enforcers can determine your identity by subpoenaing the exchange. Even if the exchange you’re using doesn’t follow know your customer laws, if you connect to it without obscuring your IP address even once, it’s possible for law enforcers to identify you if they can identify and put pressure on the exchange.
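That chain of reasoning can be sketched with a toy example (all addresses and records below are invented for illustration): because every Bitcoin transaction is public, an observer who ties any one address to a real identity can walk the transaction graph outward from that address.

```python
# Toy illustration (invented data): Bitcoin's ledger is public, so once
# any address is tied to a real identity (e.g. via an exchange's
# know-your-customer records), every address it transacted with can be
# linked back to that starting point.

# A simplified public ledger: (sender address, receiver address) pairs.
ledger = [
    ("addr_exchange", "addr_A"),
    ("addr_A", "addr_B"),
    ("addr_B", "addr_site"),
]

# Identities a subpoenaed exchange could reveal.
kyc_records = {"addr_exchange": "Known Exchange Customer"}

def reachable_from(start, ledger):
    """Follow the public transaction graph outward from one address."""
    linked, frontier = {start}, [start]
    while frontier:
        addr = frontier.pop()
        for sender, receiver in ledger:
            if sender == addr and receiver not in linked:
                linked.add(receiver)
                frontier.append(receiver)
    return linked

# Starting from the one KYC-identified address, the observer links every
# downstream address, including the site's.
print("addr_site" in reachable_from("addr_exchange", ledger))  # True
```

Real-world chain analysis is far more sophisticated (it clusters addresses, handles multi-input transactions, and so on), but the underlying principle is exactly this graph walk over public data.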

No fewer than three mistakes were made by the criminals in this case. First, they falsely believed that Bitcoin anonymizes transactions. Second, they failed to obscure the real world location of the server. Third, one of the individuals involved connected to their Bitcoin exchange without a VPN once. These mistakes made their efforts to secure themselves against law enforcers useless.

When politicians and law enforcers tell you that the government requires backdoor access to encryption in order to thwart terrorists, drug dealers, and child pornographers, they’re lying. Their claims might have some validity in a world where every criminal was as brilliant as a James Bond supervillain, but we don’t live in that world. Here criminals are regular humans. They’re usually of average or below average intelligence. Even though they may know that tools to assist their criminal efforts exist, they likely don’t know how to employ them correctly.

Securing Financial Applications Behind Secondary Accounts

Many people run their entire lives from their mobile devices. Unfortunately, this makes mobile devices prime targets for malicious actors. Apple and Google have responded to this by continuously bolstering the security of their respective mobile operating systems (although the openness of Android means device manufacturers can and often do undo a lot of that security work). One major security improvement has been the optional use of biometrics to unlock devices. Before fingerprint and facial recognition on mobile devices, you had to type in a password (or optionally draw a pattern on Android) every time you wanted to unlock your device. This dissuaded people from setting an unlock password on their devices. Now that mobile devices can be quickly unlocked with fingerprint or facial recognition, implementing a proper unlock password on a device isn’t as inconvenient. With this increase in convenience came an increase in the number of people properly locking their devices.

Setting a proper unlock password protects the owner from the consequences of their mobile device being stolen. A thief might get the device, but if it’s a properly locked (which implies all security updates are installed and the device is actively supported by the manufacturer) device, the thief will be blocked from accessing data on the device such as any financial applications.

Now that locked devices are more prevalent, thieves are resorting to new forms of trickery to gain access to the valuable information on devices:

Most scams that utilize payment apps involve a range of tricks to get you to send money. But some criminals are now skipping that step; they simply ask strangers to use their phones and then send the money themselves.

The victim often doesn’t realize what’s happened until hours or even days later. And by that point, there’s very little they can do about it.

If somebody asks to borrow your phone, tell them no. But asking to borrow a phone isn’t the only way thieves acquire access to unlocked devices. Thieves are also targeting people who are actively using their devices (and since those people often aren’t paying attention to their surroundings, they’re easy targets). If a thief steals an unlocked device from somebody, they can gain access to the information on the device until it is locked again.

Most financial applications offer the ability to set an application-specific password, which you should do. However, Android offers another level of security. Android supports multiple user accounts. Applications and data in one user account cannot be accessed by other user accounts (an application can be installed in multiple accounts, but each installation is unique to an account). A user can create a secondary account and install their financial applications in that account. When they’re using their main account for things like making calls and instant messaging, their financial accounts remain locked behind the secondary account. So long as the user isn’t actively using the secondary account, any thief who swipes the device while it’s unlocked won’t even be able to see which, if any, financial applications are installed.

Financial applications aren’t the only ones that you can hide behind secondary user accounts, but they’re good candidates because unauthorized access to those applications can result in real world consequences. Furthermore, financial applications usually aren’t accessed frequently. They’re accessed when a user needs to check the status of an account or make a transaction.

Malicious Automatic Updates

The early days of the Internet demonstrated both the importance and the lack of computer security. Versions of Windows before XP had no security to speak of. But even by the time Windows XP was released, you could still easily compromise your entire system by visiting a malicious site (while this is still a possibility today, it was a guarantee back then). It was during the reign of Windows XP that Microsoft started taking security more seriously. Windows XP Service Pack 2 included a number of security improvements to the operating system. However, this didn’t solve the problem of woeful computer security because even the best security improvements are worthless if nobody actually installs them.

Most users won’t manually check for software updates. Even if the system automatically checks for updates and notifies users when they’re available, those users often still won’t install them. This behavior led to the rise of automatic updates.

When it comes to security, automatic updates are good. But like all good things, automatic updates are also abused by malicious actors. Nowhere is this more prominent than with smart appliances. Vizio recently released an update for some of their smart televisions. The update included a new “feature” that spies on what you’re watching and displays tailored ads over that content:

The Vizio TV that you bought with hard-earned cash has a new feature; Jump Ads. Vizio will first identify what is on your screen and then place interactive banner ads over live TV programs.

[…]

It is based on Vizio’s in-house technology from subsidiary company Inscape that uses automatic content recognition (ACR) to identify what is on your screen at any given moment. If the system detects a specific show on live TV it can then show ads in real-time.

Vizio isn’t unique in this behavior. Many device manufacturers use automatic updates to push out bullshit “features.” This strategy is especially insidious because the malicious behavior isn’t present when the device is purchased and, oftentimes, the buyer has no method to stop the updates from being installed. Many smart devices demand an active Internet connection before they’ll provide any functionality, even offline functionality. Some smart devices when not given Internet access will scan for open Wi-Fi networks and automatically connect to any one they find (which is a notable security problem). And as the price of machine to machine cellular access continues to drop, more manufacturers are going to cut out the local network requirement and setup their smart devices to automatically connect to any available cellular network.

This pisses me off for a number of reasons. The biggest reason is that the functionality of the device is being significantly altered after purchase. A consumer may buy a specific device for a reason that ceases to exist after an automatic update is pushed out by the manufacturer. The second biggest reason this behavior pisses me off is that it taints the idea of automatic updates in the eyes of consumers. Automatic updates are an important component of consumer computer security, but consumers will shy away from them if they are continually used to deliver a negative experience. Hence this behavior is a detriment to consumer computer security.

As an aside, this behavior illustrates another important fact that I’ve ranted about numerous times: you don’t own your smart devices. When you buy a smart device, you’re paying money to grant a manufacturer the privilege to dictate how you will use that device. If the manufacturer decides that you need to view ads on the screen of your smart oven in order to use it, there is nothing you as an end consumer can do (if you’re sufficiently technical you might be able to work around it, but then you’re just paying money to suffer the headache of fighting your own device).

Once again I encourage everybody reading this to give serious consideration to the dwindling number of dumb devices. Even if a smart device offers features that are appealing to your use case, you have to remember that the manufacturer can take those features away at any time without giving you any prior notice. Moreover, they can also add features you don’t want at any time without any notice (such as spyware on your television).

Ode to the Dumb Car

I own three vehicles. The newest one was built in 2008. They’re all dumb vehicles. They have gauges on the dashboard and the only “screen” any of them have are primitive segmented LED displays on their radios. The clocks only know how to display hours and minutes and need to be manually set whenever daylight saving time changes (or the battery is disconnected).

To me a vehicle is a long term purchase. When I buy one, I assume that I’ll be driving it until it stops functioning. I want at least a decade and always hope for more. Because I tend to drive vehicles for a long time, I avoid vehicles that have built-in navigation, touch screens, or infotainment systems. Vehicle manufacturers are notoriously bad at software. Not only do they tend to write software poorly, they also don’t provide updates for very long. That can lead to awkward situations like your clock rolling back 1024 weeks:

The Jalopnik inbox has been lit up with a number of reports about clocks and calendars in Honda cars getting stuck at a certain time in the year 2002. The spread is impressive, impacting Honda and Acura models as old as 2004 and as new as 2012. Here’s what might be happening.

If you scroll through a Honda or Acura forum right now, chances are you’re going to run into a bunch of confused owners. When they hopped into their cars on January 1 they found the clocks on their navigation systems frozen at a certain time. And the calendar date? 2002, or 20 years ago.

[…]

Drive Accord forum user Jacalar went into the navigation system’s diagnostic menu on Sunday and discovered that the GPS date was set to May 19, 2002, or exactly 1024 weeks in the past.

Global Positioning Systems measure time from an epoch, or a specific starting point used to calculate time. The date is broadcasted including a number representing the week, coded in 10 binary digits. These digits count from 0 to 1023 then roll over on week 1024. GPS weeks first started on January 6, 1980 before first zeroing out on midnight August 21, 1999. It happened again April 6, 2019. The next happens in 2038.

Synchronizing time with GPS is an intelligent choice, but you have to understand the specification. Since the week counter for GPS rolls over every 1024 weeks, your system needs to take that into account and adjust accordingly. Honda didn’t take that into consideration, so now the clock on a bunch of their vehicles is stuck 20 years in the past. Making the matter worse is that Honda hasn’t provided a fix and, if history is any indicator, may never provide one (or at least not for vehicles past a certain manufacturing date).
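The adjustment is straightforward once you know the counter wraps. Here is a minimal sketch in Python (the `week_to_date` helper and the reference-date approach are my own illustration, not Honda’s actual firmware): a decoder can disambiguate the 10-bit week number by picking the rollover era that keeps the result at or before a trusted reference date, such as the firmware’s build date.

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
ROLLOVER = 1024  # the broadcast week counter is only 10 bits wide

def week_to_date(raw_week, reference):
    """Resolve a 10-bit GPS week number to an absolute week-start date.

    Because the counter wraps every 1024 weeks, the raw value is
    ambiguous. Pick the rollover era that yields the latest week that
    does not land after the trusted reference date.
    """
    ref_weeks = (reference - GPS_EPOCH).days // 7
    eras = max(0, (ref_weeks - raw_week) // ROLLOVER)
    return GPS_EPOCH + timedelta(weeks=raw_week + eras * ROLLOVER)

# On 2022-01-01 the satellites broadcast raw week 142 (2190 mod 1024).
now = datetime(2022, 1, 1, tzinfo=timezone.utc)

# A naive decoder hardcoded for a single rollover lands 20 years in the
# past, which is the Honda symptom:
buggy = GPS_EPOCH + timedelta(weeks=142 + ROLLOVER)
print(buggy.date())  # 2002-05-12

# Accounting for all elapsed rollovers recovers the correct week:
print(week_to_date(142, now).date())  # 2021-12-26
```

The trade-off is that the fix only works while the reference date is sane; a decoder with no trustworthy reference has no way to tell the eras apart, which is why the specification’s modular encoding has bitten so many embedded systems.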

This problem is just another on the long list of what I like to call software based obsolescence. Software based obsolescence isn’t necessarily planned obsolescence. I doubt anybody at Honda implemented a plan to cause this issue. In all likelihood the software developers were ignorant of the fact that the GPS week counter rolls over every 1024 weeks. Because they were ignorant of that behavior, they didn’t take it into consideration when they wrote the software (in fact the developers may have been using a third-party library for syncing time with GPS and that library didn’t take the rollover into consideration).

As a general rule software doesn’t age well. The more complex a piece of software is, the worse it will age (obviously exceptions to the rule exist). So software written to control a specific process in your engine may age fine, but software that handles time synchronization (a surprisingly complex task) will likely age poorly. This is why software patches exist. However, when you combine increasingly complex software with systems that cannot be updated or will not be updated after a specific period of time, that product, if it’s dependent on software, will have the same life expectancy as the software. In the case of the Honda vehicles mentioned in the story, the rest of the vehicle is able to operate properly even if the time synchronization is broken. But if a system depends on an accurate clock, then improper time synchronization will break that system.

This is why I prefer to avoid systems that are reliant on software unless I only plan to use the platform for a specific period of time or the platform is open to user modification and the software it depends on is open source.

Always On Microphones are Always On

Reader Steve T. sent me a link to a story confirming my decision to not own smart speakers. A woman going by the name my.data.not.yours on TikTok (I guess this is the new hip surveillance social media network) sent a request to Amazon for all of the data the company had on her. The result? Exactly what you would expect (I sanitized the TikTok link embedded in the source, so I’ll apologize here if it doesn’t work):

TikToker my.data.not.yours explained: “I requested all the data Amazon has on me and here’s what I found.”

She revealed that she has three Amazon smart speakers.

Two are Amazon Dot speakers and one is an Echo device.

Her home also contains smart bulbs.

She said: “When I downloaded the ZIP file these are all the folders it came with.”

The TikToker then clicked on the audio file and revealed thousands of short voice clips that she claims Amazon has collected from her smart speakers.

Smart speakers like the ones provided by Amazon have an always on microphone to listen for voice commands. The problem isn’t necessarily the always on microphone but the fact that most smart speakers don’t perform on-site audio analysis (or only perform very limited on-site analysis). Instead they record audio and send it to an off-site server for processing. Why is the audio moved off-site? Ostensibly it’s because an embedded device like a smart speaker doesn’t have the same processing power as a data center full of computers. Though I suspect that gaining access to valuable information like household conversations has more to do with the data being moved off-site than the accuracy of the audio analysis.

The next question one might ask is, why is the data being stored? This is why I suspect moving the data off-site has more to do with gaining access to valuable information. Once the audio has been analyzed and the commands to be executed transmitted back to the smart speaker, the audio recording could be deleted. my.data.not.yours discovered that the audio isn’t deleted or at least not all of the audio is deleted. But even if Amazon promised to delete all of the audio sent to its servers, there would be no way for you as an end user to verify whether the company actually followed through. Once the data leaves your network, you lose control over it.

The problem with Amazon’s smart speakers is exacerbated by their proprietary nature. While Amazon provides the source code necessary to comply with the licenses of the open source components it uses, much of the stack involved with its smart speakers is proprietary. This means you have no insight into what your Amazon smart speaker is actually doing. You have a black box and promises from Amazon that it isn’t doing any shady shit. That’s not much of a guarantee. Especially when dealing with a device that is designed to listen to everything you say.

One VPN Provider to Rule Them All

When somebody first develops an interest in privacy, the first piece of advice they usually come across is to use a virtual private network (VPN). Because their interest in privacy is newly developed, they usually have little knowledge beyond that they “need a VPN.” So they do a Google (again, their interest in privacy is new) search for VPN and find a number of review sites and providers. Being a smart consumer they read the review sites and choose a provider that consistently receives good reviews. What the poor bastard doesn’t know is that many of those review sites and providers are owned by the same company (a company, I will add, that is shady as fuck):

Kape Technologies, a former malware distributor that operates in Israel, has now acquired four different VPN services and a collection of VPN “review” websites that rank Kape’s VPN holdings at the top of their recommendations. This report examines the controversial history of Kape Technologies and its rapid expansion into the VPN industry.

If you’re not familiar with Kape Technologies, the linked report provides a good overview. If you want a TL;DR, Kape Technologies has a history of distributing malware and now owns ExpressVPN, CyberGhost, Private Internet Access, and Zenmate. Because of Kape Technologies’ history, I would advise against using one of its VPN providers. It’s not impossible for a company to turn over a new leaf, but with other options available (at least until Kape buys them all), why take chances?

If you’re a person with a newfound interest in privacy and looking for recommendations, I unfortunately don’t have any good recommendations for review sites. The handful of review sites that I used to trust have either disappeared or been bought by VPN providers (which by itself doesn’t necessarily make a review site untrustworthy, but I’m always wary of such conflicts of interest).

As far as VPN providers go, I use Mullvad and I like it. It supports WireGuard (my preferred VPN protocol), doesn’t ask for any personally identifiable information when signing up for an account, accepts anonymous forms of payment (including straight cash mailed in an envelope), and seems determined to remain independent (at least for now).
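To give a concrete sense of what a WireGuard setup involves, here is a minimal sketch of a WireGuard client configuration file. The keys, addresses, and endpoint below are placeholders I made up for illustration, not real Mullvad values; in practice Mullvad generates a file like this for you when you create an account.

```ini
# Hypothetical wg0.conf — all values below are placeholders.
[Interface]
# The client's private key (generated with `wg genkey`).
PrivateKey = <your-private-key>
# The tunnel IP address assigned to this client by the provider.
Address = 10.64.0.2/32
# Use the provider's DNS resolver to avoid DNS leaks.
DNS = 10.64.0.1

[Peer]
# The VPN server's public key.
PublicKey = <server-public-key>
# Route all IPv4 and IPv6 traffic through the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
# Placeholder server address and WireGuard's default port.
Endpoint = vpn.example.net:51820
```

On Linux, a file like this is typically brought up with `wg-quick up wg0`. The brevity of the format is part of WireGuard’s appeal compared to OpenVPN or IPsec.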

It’s a Tracking Device, Not a Smartphone

I like to refer to smartphones as voluntary tracking devices. Cellular technology provides your location to the network provider as a side effect. Smartphones can also leak your location through other means. But location isn’t the only type of information collected by smartphones. Android has a sordid reputation when it comes to data collection. Part of this is because Google’s primary business is collecting information to sell to advertisers. Another part is that handset manufacturers can bake additional data collection into their Android devices. Yet another part is that Android lacked granular application permissions until more recent versions, which allowed application developers to collect more information.

Apple, on the other hand, has enjoyed a much better reputation. Part of this is because Apple’s primary business model was selling hardware (now its primary business model is selling services). But Apple also invested a lot in securing its platform. iOS gave users granular control over what applications could access earlier than Android did. It also included a lot of privacy enhancements. However, Apple’s reputation isn’t as deserved as one might think. Research shows that iOS collects a lot of information:

“Both iOS and Google Android share data with Apple/Google on average every 4.5 [minutes],” a research paper published last week by Trinity College in Dublin says. “The ‘essential’ data collection is extensive, and likely at odds with reasonable user expectations.”

Much of this data collection takes place after the phone is first turned on, before the user logs into an Apple or Google account, and even when all optional data-sharing settings are disabled.

“Both iOS and Google Android transmit telemetry, despite the user explicitly opting out of this,” the paper adds. “However, Google collects a notably larger volume of handset data than Apple.”

I can’t say that this surprises me. Apple is a publicly traded company, which means its executives are beholden to shareholders interested almost exclusively in increasing the price of their shares. That means Apple’s executives need to constantly increase the company’s revenue. User information is incredibly valuable. Mark Zuckerberg built a multi-billion dollar company out of collecting user information. So it was unrealistic to expect Apple to leave that kind of potential revenue on the table. Even if Apple isn’t currently selling the information, it can start at any time. Moreover, if it has the information, it can be obtained by state agents via a warrant.

This brings up an obvious question: what smartphone should individuals concerned about privacy get? Unfortunately, Android and iOS are the two biggest players in the smartphone market. They are also the only two options readily available to consumers who aren’t tech savvy. GrapheneOS is an example of an Android distribution that offers better privacy than the stock versions found on most devices. But using it requires buying a supported Pixel and flashing GrapheneOS to it yourself. There are also phones that run mainline Linux such as the PinePhone and Librem 5. The problem with those devices is the state of the available software. Mainline Linux distributions designed for those phones are still in development and likely won’t meet the needs of most consumers.

Right now the market looks grim if you want a smartphone, are concerned about privacy, and aren’t tech savvy enough to flash third-party firmware to your phone.