A Glimmer of Hope for a Decentralized Internet

If you don’t own your online services, you’re at the mercy of whoever does. This rule has always been true, but it hasn’t been obvious until recently. Service providers have become increasingly tyrannical and arbitrary in the exercise of their control. More and more people are finding themselves banned from services like Facebook and YouTube. Compounding the issue, the reasons given for the bans are often absurd, and that’s assuming any reason is given at all.

This type of abusive relationship isn’t good for anybody, but it’s especially dangerous to individuals with money on the table. Imagine investing years of your life in building up a profitable business on a service like YouTube only to have Google take it away without providing so much as a reason. Some content creators on YouTube are beginning to acknowledge that risk and are taking action to gain control over their fate:

Whether he’s showing off astronomically expensive computer gaming hardware or dumpster-diving for the cheapest PC builds possible, Linus Sebastian’s videos always strike a chord, and have made him one of the most popular tech personalities on YouTube.

But Google-owned YouTube gets most episodes of Linus Tech Tips a week late.

Now, they debut on his own site called Floatplane, which attracts a much smaller crowd.

A handful of content creators are mentioned in the article. Most of them are too nice, or perhaps too timid, to state the real reason they’re seeking alternatives to YouTube: YouTube has become a liability. Google, like Facebook, Amazon, Twitter, and other large online service providers, has been hard at work destroying all of the goodwill it built up over its lifetime. There’s no way to know whether a video you upload to YouTube today will be available tomorrow. There isn’t even a guarantee that your account will be around tomorrow. If you post something that irritates the wrong person, or more accurately the wrong machine learning algorithm, it will be removed and your account may be suspended for a few days if you’re lucky or deleted altogether if you’re unlucky. And when your content and account are removed, you have little recourse. There’s nobody you can call. The most you can do is send an e-mail and hope that either a person or a machine learning algorithm sees it and takes a bit of pity on you.

I’m ecstatic that this recent uptick in censorship is happening. In my opinion, centralization of the Internet is dangerous, and large service providers like Google are proving my point. They are also forcing people to decentralize, which advances my goals. So lest anybody think I’m ungrateful, I want to close this post by giving a sincere thank you to companies like Google, Facebook, Twitter, and Amazon for being such complete bastards. Their actions are doing wonders for my cause of decentralizing the Internet.

Maybe Connecting Everything to the Internet Isn’t a Great Idea

I’ve made my feelings about the so-called Internet of Things (IoT) abundantly clear over the years. While I won’t dismiss the advantages that come with making devices Internet accessible, I’m put off by the industry’s general apathy towards security. This is especially true when critical infrastructure is connected to the Internet. Doing so can lead to stories like this:

Someone broke into the computer system of a water treatment plant in Florida and tried to poison drinking water for a Florida municipality’s roughly 15,000 residents, officials said on Monday.

The intrusion occurred on Friday evening, when an unknown person remotely accessed the computer interface used to adjust the chemicals that treat drinking water for Oldsmar, a small city that’s about 16 miles northwest of Tampa. The intruder changed the level of sodium hydroxide to 11,100 parts per million, a significant increase from the normal amount of 100 ppm, Pinellas County Sheriff Bob Gualtieri said in a Monday morning press conference.

The individuals involved with the water treatment plant have been surprisingly dismissive about this. They’ve pointed out that there was never any danger to the people of Oldsmar because treated water doesn’t hit the supply system for 24 to 36 hours and there are procedures in place that would have caught the dangerous levels of sodium hydroxide in the water before it could be released. I believe both claims. I’m certain there are a number of water quality sensors involved in verifying that treated water is safe before it is released into the supply system. However, they’re not mentioning other dangers.

Poisoning isn’t the only danger of this kind of attack. What happens when treated water can’t be released into the supply system? If an attacker poisons some of the treated water, is there an isolated surplus that can be released into the supply system instead? If not, this kind of attack can work as a denial of service against the city’s water supply. And what can be done with the poisoned water? It can’t be released into the supply system, and I doubt environmental regulations will allow it to be dumped into the ground. Even if it could be dumped into the ground, doing so would risk poisoning groundwater supplies. It’s possible that a percentage of the plant’s treatment capacity becomes unavailable for an extended period of time while the poisoned water is purified.

What’s even more concerning is that this attack wasn’t detected by an intrusion detection system. It was detected by dumb luck:

Then, around 1:30 that same day, the operator watched as someone remotely accessed the system again. The operator could see the mouse on his screen being moved to open various functions that controlled the treatment process. The unknown person then opened the function that controls the input of sodium hydroxide and increased it by 111-fold. The intrusion lasted from three to five minutes.

This indicates that the plant’s network security isn’t adequate for the task at hand. Had the operator not been at the console at the time, it’s quite possible that the attacker would have been able to poison the water. There is also a valid question about the user interface. Why does it apparently allow raising the levels of sodium hydroxide to a dangerous amount? If there are valid reasons for doing so (which there absolutely could be), why doesn’t doing so at least require some kind of supervisory approval?
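To sketch what such a guardrail could look like (this is purely illustrative; real HMI/SCADA software has its own frameworks, and the names and limits below are made up):

```python
# Hypothetical sketch: bounds checking and two-person approval for a
# chemical dosing setpoint. Real HMI/SCADA software has its own
# mechanisms; this only illustrates the idea.

NORMAL_NAOH_PPM = 100        # normal dosing level cited in the article
HARD_LIMIT_PPM = 500         # assumed plant-specific safety ceiling

def set_naoh_level(requested_ppm: float, operator: str,
                   supervisor_approval: bool = False) -> float:
    """Apply a new sodium hydroxide setpoint, refusing dangerous values."""
    if requested_ppm > HARD_LIMIT_PPM:
        raise ValueError(
            f"{requested_ppm} ppm exceeds the hard limit of {HARD_LIMIT_PPM} ppm"
        )
    # Anything well above normal requires a second person to sign off.
    if requested_ppm > 2 * NORMAL_NAOH_PPM and not supervisor_approval:
        raise PermissionError(
            "setpoints above twice the normal level require supervisor approval"
        )
    print(f"{operator} set NaOH dosing to {requested_ppm} ppm")
    return requested_ppm

set_naoh_level(110, operator="day-shift")     # fine
# set_naoh_level(11100, operator="attacker")  # raises ValueError
```

The Oldsmar change (100 ppm to 11,100 ppm) would have failed both checks, which is the point: even a compromised remote session shouldn’t be able to request an absurd value unilaterally.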

It’s not uncommon for people in these industries to cite a lack of the budget necessary to address the issues I’ve raised. But if there isn’t a sufficient budget to address important security concerns when connecting critical infrastructure to the Internet, I will argue that it shouldn’t be done at all. The risks of introducing remote access to a system aren’t insignificant, and the probability of an attack occurring is extremely high.

Whenever somebody discusses connecting a device to the Internet, I immediately ask what benefits doing so will provide. I then ask which of those benefits can be realized with a local automation system. For example, a Nest thermostat offers some convenient features, but many of those features can be realized with a local Home Assistant controller.
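To illustrate what I mean by a local automation system, here’s a minimal sketch of adjusting a thermostat through a local Home Assistant instance’s REST API (the host, token, and entity name are placeholders, not anything from a real deployment):

```python
# Minimal sketch: set a thermostat via a local Home Assistant instance.
# Everything happens on the LAN; no cloud service is involved.
import requests

HOST = "http://homeassistant.local:8123"   # assumed local address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"     # created in your HA profile

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# Set the (hypothetical) living room thermostat to 20 degrees.
resp = requests.post(
    f"{HOST}/api/services/climate/set_temperature",
    headers=headers,
    json={"entity_id": "climate.living_room", "temperature": 20},
)
resp.raise_for_status()
```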

Google Suspends Element from Its Play Store

The developers of Element, a decentralized, federated, and secure messaging client, were just informed that their application has been suspended from the Google Play Store, which means Android users cannot currently install Element unless they do it through F-Droid or sideloading. Why did Google suspend the app? At first Element’s developers weren’t given a reason, but they were eventually informed that the suspension was because of abusive content. Both the lack of transparency and the citing of abusive content have become staples of application store suspensions, and they are two of the many things that make centralized application stores like the Apple App Store and Google Play Store so frustrating for both users and developers.

The abusive content justification is bullshit because Element is no different than any other messaging application in that all content is user created. If Element is removed for showing abusive content, then by that very same justification Signal, Facebook Messenger, Instagram, and Google’s own Gmail should be removed. Furthermore, Element actually has a pretty complete set of moderation tools, so Google can’t even argue that a lack of moderation is the culprit. But none of this matters because there are no consequences for Google if it suspends an application for incorrect reasons. Agreements between developers and Google (and Apple for that matter) are one-sided. The only option for developers whose applications are suspended is to beg for clemency.

The suspension of Element is yet another example on the already extensive list that shows why centralized application stores and closed platforms are bad ideas. Without prior notice or (initially) any reason, Google made it so Android users can no longer install Element unless they jump through some hoops (fortunately, unlike with iOS, Android generally gives you some options for installing applications that aren’t in the Play Store). Google might decide to be magnanimous and change its mind. Or it might not. In any case there’s very little that Element’s developers or Android users can do about it.

Fleeing Facebook

Another election is on the horizon, which can only mean Facebook is clamping down on wrongthink in the futile hope that doing so will appease Congress enough that it won’t say mean things about the company that might hurt its stock price. This week’s clampdown appears to be more severe than previous ones. I have several friends who received temporary bans for making posts or comments that expressed apparently incorrect, albeit quite innocent, opinions. A number of them also reported that some of their friends received permanent bans for posting similar content.

In the old days of the Internet, when websites were dispersed, you usually had friends from forums, game servers, and various instant messenger clients added on other services. Because of that, losing any single account wasn’t usually a big deal. However, with the centralization that Facebook has brought, losing your Facebook account can mean losing access to a large number of your contacts.

If you are at risk of losing your Facebook account (and if you hold political views even slightly right of Karl Marx, you are), you need to start establishing your contacts on other services now. If you’re like me and have friends who predominantly lean libertarian or anarchist, you’ve probably seen a number of services recommended, such as MeWe, Parler, and Gab. The problem with these services is that they, like Facebook, are centralized. That means one of two outcomes is likely. If they’re successful, they will likely decide to capitalize by going public. Once that happens, they will slowly devolve into what Facebook has become today because their stockholders will demand it in order to maximize share prices. If they’re not successful, they’ll likely disappear in the coming years, forcing you to reestablish all of your contacts on yet another service.

I’m going to recommend two services that will allow you to nip this problem in the bud permanently. The first is a chat service called Element (which was formerly known as Riot). The second is a Twitter-esque service called Mastodon. The reason I’m recommending these two services is that they share features that are critical if you want to actually socialize freely.

The most important feature is that both services can be self-hosted. This means that in the worst case scenario, if no existing servers will accept you and your friends, you can set up your own server. If you’re running your own server, the only people you have to answer to are yourselves. However, you may want to socialize with people outside of your existing friend groups. That’s where another feature called federation comes in. Federation allows services on one server to connect with services on another server, so users on one Element or Mastodon instance can socialize with users on another instance. Federation means not having to put all of your eggs in one basket. If you and your friends sign up on different servers, no one admin can ban you all. Moreover, you can set up backup accounts that your friends can add, so if you are banned from one server, your friends already have your alternate account in their contact lists.

The reason I’m recommending two services is that Element and Mastodon offer different features geared towards different use cases. Element offers an experience similar to Internet Relay Chat (IRC) and various instant messenger protocols (such as Facebook Messenger). It works well if you and your friends want to have private conversations (you can create public chat rooms as well, if you want anybody to be able to join in the conversation). It also offers end-to-end encrypted chat rooms. End-to-end encrypted rooms cannot be surveilled by outside parties, meaning even the server administrators can’t spy on your conversations. It’s much harder for a server administrator to ban you and your friends if they’re entirely ignorant of your conversations.
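If you’re curious what talking to a self-hosted server looks like programmatically, here’s a minimal sketch using the matrix-nio Python library against the Matrix protocol that Element speaks (the homeserver, user, and room ID are placeholders; end-to-end encrypted rooms additionally require the library’s e2e support and device verification):

```python
# Minimal sketch: send a message to a room on a self-hosted Matrix
# homeserver using matrix-nio. Placeholders throughout; this targets an
# unencrypted room for brevity.
import asyncio
from nio import AsyncClient

async def main():
    client = AsyncClient("https://matrix.example.org", "@alice:example.org")
    await client.login("correct horse battery staple")
    await client.room_send(
        room_id="!ourprivateroom:example.org",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Meet at the usual place."},
    )
    await client.close()

asyncio.run(main())
```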

Mastodon offers an experience similar to Twitter (although with more privacy-oriented features). You can create public posts that can be viewed by anybody with a web browser and with which anybody with a Mastodon account can interact. This works great if you have a project that requires a public face. For example, you and your friends may work on an open source project about which you provide periodic public updates. Mastodon enables that. Users can also comment on posts, which allows your posts to act as a public forum. Since Mastodon can be self-hosted, you can also set up a private instance that isn’t federated. Thus you could create a private space for you and your friends.
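Mastodon also exposes a simple REST API, which is part of what makes it suitable as a project’s public face. A minimal sketch of posting an update (the instance URL and token are placeholders; you create an access token under your account’s development settings):

```python
# Minimal sketch: publish a public status on a Mastodon instance.
import requests

INSTANCE = "https://mastodon.example.org"   # your (or your own) instance
TOKEN = "YOUR_ACCESS_TOKEN"

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"status": "Release 1.2 of our project is out.",
          "visibility": "public"},
)
resp.raise_for_status()
print(resp.json()["url"])   # link to the new post
```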

It’s critical to establish your existing contacts on another service now so you don’t find yourself suddenly unable to communicate with them because you expressed the wrong opinion. Even if you don’t choose Element and/or Mastodon, pick a service that you and your friends can tolerate and at least sign up for accounts and add each other to your contact lists. That way if you disappear down Zuckerberg’s memory hole, you can still keep in contact with your friends.

Error Indicators of Limited Value

When I moved into this house, I decided to use UniFi gear for my entire network because I wanted to centrally manage it (I, like most people who work in the technology field, am lazy by nature). This house doesn’t have Ethernet running through the walls so I (again, being lazy) opted to rely on a mesh network for most of my networking needs. My mesh network consists of three UAP-AC-M access points.

Like most other people working in the technology field, I’ve been working from home since COVID-19 started making headlines. This means my in-person meetings have mostly been replaced by remote video conferences. My setup ran smoothly until a few weeks ago, when I started experiencing a strange issue where I’d periodically lose my video conference feeds for 10 to 30 seconds. Since I first set up my mesh network, my UniFi Controller has reported a large number (as in several hundred per 24-hour period) of DHCP Timeout errors along with a handful of WPA Authentication Timeout errors. It also reported long access point association times for my two mesh nodes (the third access point is wired to my switch). Searching Ubiquiti’s online support forum returned a lot of results from individuals experiencing these errors without any resolution. In fact, several comments made by Ubiquiti employees stated that the DHCP Timeout errors can be ignored so long as the network is performing well. I ended up ignoring the errors because at the time my network was performing well and nobody seemed to have a resolution.

I began looking into the problem again when the video conferencing issues I mentioned started to manifest. To make a long story short, I finally figured out my problem. UAP-AC-M access points use the 5GHz spectrum for mesh communications, so they all operate on the same 5GHz channel, but it’s expected that they utilize different 2.4GHz channels. My mesh nodes were set up to automatically select their 2.4GHz and 5GHz channels during boot up. I assumed this was safe because I boot them up in stages, one after the other. That should have caused them to see each other when they booted up and select different 2.4GHz channels. According to my UniFi Controller, all three non-overlapping 2.4GHz channels (1, 6, and 11 are the only channels that don’t overlap with other channels) were being utilized, so I assumed the access points were operating as I expected. After trying a few different settings, I decided to manually select the 2.4GHz channels for my access points. I put one access point on channel 1, one on channel 6, and one on channel 11.
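The actual change is made in the UniFi Controller’s radio settings, but the invariant I was trying to enforce is simple enough to express as a toy check (the access point names are made up):

```python
# Toy illustration of the manual fix: pin each access point to one of
# the three non-overlapping 2.4GHz channels and verify no two share one.
NON_OVERLAPPING = (1, 6, 11)

assignments = {
    "ap-livingroom": 1,
    "ap-office": 6,
    "ap-garage": 11,
}

assert all(ch in NON_OVERLAPPING for ch in assignments.values())
assert len(set(assignments.values())) == len(assignments), \
    "two access points share a 2.4GHz channel"
```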

Since doing that I haven’t experienced any video conferencing problems. Moreover, my DHCP Timeout errors have dropped to almost nothing (I now experience between two and four per 24-hour period), the WPA Authentication Timeout errors have remained at one or two per 24-hour period, and I no longer see any errors about access points taking longer than expected to associate.

If you’re one of the many people experiencing a massive number of DHCP Timeout errors with UniFi access points and you haven’t already manually selected non-overlapping 2.4GHz channels for your access points, give it a try. I will note that since I live in the country and there are no other visible Wi-Fi networks anywhere on my property, your experience may differ if you’re in an environment with a lot of competing Wi-Fi networks.

The Way It Should Always Have Been

I received my PinePhone last week. The model I ordered was the UBports Community Edition. My initial thoughts are that the build quality is actually very solid, but otherwise it behaves like a $150 phone. The performance isn’t great but is acceptable; the battery life, which is a known issue, is pretty terrible; and the software is in a pretty rough state (easily beta quality, maybe even late alpha quality). All of this is what was promised and what I expected, so none of it should be considered criticism. I’m actually impressed by what the manufacturer and software creators have managed to pull off so far.

However, after playing with UBports I wanted to try some other operating systems. This is where the PinePhone shines, since it doesn’t lock you into any specific operating system. The next release of the Community Edition PinePhone will come with postmarketOS, so I loaded postmarketOS onto a MicroSD card (you can also flash it to the internal eMMC chip) and booted it on the phone. postmarketOS has a utility that builds an image for you. That utility also allows you to customize a number of things, including using full-disk encryption (which I haven’t played with yet since it’s experimental) and choosing your user interface. I chose Phosh for the user interface because I wanted to see what the Librem team has been working on. My experience with postmarketOS was similar to UBports: performance was sluggish but acceptable, and the software is still in a rough state. However, postmarketOS makes it easy to install regular Linux desktop and command line applications, so I installed and tried a few applications that I use regularly on the desktop. Unfortunately, most of the available graphical software doesn’t yet support screen scaling, so applications are too big for the PinePhone’s screen. With that said, progress is being made in that direction, and once more applications support screen scaling there should be a decent number of apps available.

Being able to boot a different operating system on my phone is the way it should always have been. On my desktop and laptop computers I have always been able to choose what operating system to run, but my mobile devices have always been locked down. Some Android devices do allow you to unlock the boot loader and install a different Android image, but doing so often isn’t officially supported by the manufacturer (so it’s often a pain in the ass). It’s nice to finally see a mobile phone that is designed for tinkerers and people who want to actually own their hardware.

Mullvad VPN

Periodically I’m asked to recommend a good Virtual Private Network (VPN) provider. I admit that I don’t spend a ton of time researching VPN providers because my primary use case for VPNs is accessing my local network and securing my communications when traveling, so most of the time I use my own VPN server. When I want to guard my network traffic against my Internet Service Provider (ISP), I use Tor. With that said, I do try to keep at least one known decent VPN provider in my back pocket to recommend to friends.

In the past I have usually recommended Private Internet Access because it’s ubiquitous, affordable, and its claim that it doesn’t keep logs has been proven in court. However, Private Internet Access is based in the United States, which means it can be subjected to National Security Letters (NSLs). Moreover, Private Internet Access was recently acquired by Kape Technologies. Kape Technologies has a troubling past, and you can never guarantee that a company will maintain the same policies after it has been purchased, so I’ve been looking at some alternative recommendations.

Of the handful with which I experimented, I ended up liking Mullvad VPN the most. In fact I ended up really liking it (for me finding a decent VPN provider is usually an exercise in finding the least terrible option).

Mullvad is headquartered in Sweden, which means it’s not subject to NSLs or other draconian United States laws (it is subject to Swedish laws, but I’m outside of that jurisdiction). But even if it were subjected to some kind of surveillance law, Mullvad goes to great lengths to enable you to be anonymous, which greatly hinders its ability to surveil you. To start with, your account is just a pseudorandomly generated number. You don’t need to provide any identifiable information, not even an e-mail address. When you want to log in or pay for your account, you simply enter your number. The nice thing about this is that the number is also easily disposed of. Since you can generate a new account by simply clicking a link, you can throw away your account whenever you want. You can even generate accounts via its onion service (this link will only work if you’re using the Tor Browser).
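As an illustration of why this design is so disposable-friendly, generating an account identifier like this takes one line with a cryptographically secure random number generator (this is not Mullvad’s actual scheme, which runs server-side; it just shows the idea):

```python
# Illustration only: a throwaway numeric account identifier generated
# with a CSPRNG, in the spirit of Mullvad's account numbers.
import secrets

def new_account_number(digits: int = 16) -> str:
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

print(new_account_number())   # e.g. '4829301758264917'
```

An identifier like this carries no identity at all, which is exactly what makes it safe to abandon.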

Mullvad’s pricing is €5 (roughly $5.50 when I last paid) per month. Paying per month allows you to change accounts every month if you want. Payments can be made using more traditional services such as credit cards and PayPal, but you can also use more anonymous payment options such as Bitcoin and Bitcoin Cash (I would like to see the option of using Monero since it has anonymity built-in).

The thing that initially motivated me to test Mullvad was the fact that it uses WireGuard. WireGuard is our new VPN overlord. If you’re new to WireGuard or less technically inclined, you can download and use Mullvad’s app. If you’re familiar with WireGuard or willing to learn about it, you can use Mullvad’s configuration file generator to generate WireGuard configuration files for your system (this is how I used it). Mullvad also supports OpenVPN, but I didn’t test it because it’s 2020 and WireGuard is our new VPN overlord.
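For the curious, the generator’s output is just a small INI-style WireGuard config. A sketch of writing one out (every value below is a placeholder; use the keys, addresses, and endpoint Mullvad’s generator gives you):

```python
# Sketch of writing a WireGuard client config of the kind Mullvad's
# generator produces. All values are placeholders.
WG_CONF = """\
[Interface]
PrivateKey = <your-private-key>
Address = 10.0.0.2/32
DNS = <dns-server-from-generator>

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <server-hostname>:51820
"""

# Writing to /etc/wireguard requires root.
with open("/etc/wireguard/mullvad.conf", "w") as f:
    f.write(WG_CONF)
# Then bring the tunnel up with wg-quick: `wg-quick up mullvad`.
```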

Like most decent VPN providers, Mullvad has a page to check whether your Mullvad connection is set up correctly. It performs the usual tasks of reporting whether you’re connecting through a Mullvad server and whether your Domain Name System (DNS) requests are leaking. It also attempts to check whether your browser is leaking information through WebRTC. You can also test your torrent client in case you want to download Linux distros (because that’s the only thing anybody downloads via BitTorrent) more securely.
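The check page is also backed by a JSON endpoint, so you can script the test. A minimal sketch (the field names are what the endpoint returned at the time of writing):

```python
# Quick scripted version of the connection check, using the JSON
# endpoint behind Mullvad's check page (am.i.mullvad.net).
import requests

info = requests.get("https://am.i.mullvad.net/json").json()
print("exit IP:", info.get("ip"))
print("using Mullvad:", info.get("mullvad_exit_ip"))
```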

I didn’t come across anything egregious with Mullvad, but don’t take my recommendation too seriously (this is the caveat I give to everybody who asks me to recommend a VPN provider). My VPN use case isn’t centered around maintaining anonymity and I didn’t perform thorough testing in that regard. Instead I tested it based on my use case, which is mostly protecting my connection from local actors when traveling. As with anything, you should test the service yourself.

The Users and the Used

I’m happy that computer technology (for the purpose of this post, I mean any device with a computer in it, not just traditional desktops and laptops) has become ubiquitous. An individual who wants a computer no longer has to buy a kit and solder it together. Instead they can go to the store and pick up a device that is fully functional out of the box. This has led to a revolution in individual capabilities. Those of us who utilize computers can access a global communication network from almost anywhere using a device that fits in a pocket. We can crank out printed documents faster than at any other time in human history. We can collect data from any number of sources and use it to perform analysis that was impractical before ubiquitous access to computers. In summary, life is good.

However, the universe is an imperfect place, and few things are without their downsides. The downside to the computer revolution is that there are, broadly speaking, different classes of users. They are often divided into technical and non-technical users, but I prefer to refer to them as the users and the used. My categorization isn’t based so much on technical ability (although there is a strong correlation) as on whether one is using their technology or being used through it.

Before I continue, I want to note that this categorization, like all attempts to categorize unique individuals, isn’t black and white. Most people will fall into the gray area in between. The main question is whether they fall more towards the user category or the used category.

It’s probably easiest to explain the used category first. The computing technology market is overflowing with cheap devices and free services. You can get a smartphone for little or nothing from some carriers, an Internet connected doorbell for a pittance, and an e-mail account with practically unlimited storage for free. On the surface these look like amazing deals, but they come with a hidden cost. The manufacturers of those devices and the providers of those services, being predominantly for-profit companies, make their money in most cases by collecting your personal information and selling it to advertisers and government agencies (both of which are annoying, but the latter can be deadly). While you may think you’re using the technology, you’re actually being used through it by the manufacturers and providers.

A user is the opposite. Instead of using technology that uses them, they use technology that they dominate. For example, Windows 10 was a free upgrade for users of previous versions of Windows. Not surprisingly, Windows 10 also collects a lot of personal information. Instead of using Windows 10, users of that operating system are being used through it. The opposite end of the spectrum is something like Linux From Scratch, where a user builds their own Linux distro from the ground up so they know every component that makes up their operating system. As I stated earlier, most people fall into the gray area between the extremes. I predominantly run Fedora Linux on my systems. As far as I’m aware there is no included spyware, and the developers aren’t otherwise making money by exploiting my use of the operating system. So it’s my system: I’m using it, not being used through it.

Another example that illustrates the user versus used categories is online services. I sometimes think everybody on the planet has a Gmail account. Its popularity doesn’t surprise me. Gmail is a very good e-mail service. However, Gmail is primarily a mechanism for Google to collect information to sell to advertisers. People who use Gmail are really being used through it by Google. The opposite end of the spectrum (which is where I fall in this case) is self-hosting an e-mail server. I have a physical server in my house that runs an e-mail server that I set up and continue to maintain. I am using it rather than being used through it.

I noted earlier in this article that there is a strong correlation between technical people and users, as well as between non-technical people and those being used. It isn’t a one-to-one correlation though. I know people with little technical savvy who utilize products and services that aren’t using them. Oftentimes they have a technical friend who assists them (I’m often that friend), but not always. I would actually argue that the stronger correlation is between the users and those who are curious about technology. I know quite a few people with little technical savvy who are curious about technology. Their curiosity leads them to learn, and they oftentimes become technically savvy in time. But even before they do, they often make use of technology rather than be used through it. They may buy a laptop to put Linux on it without having the slightest clue at first how to do it. They may set up a personal web server poorly, watch it get exploited, and then try again using what they learned from their mistakes. They may decide to use Signal instead of WhatsApp not because they understand the technical differences between the two but because they are curious about the “secure communications app” that their technical friends are always discussing.

Neither category is objectively better. Both involve trade-offs. I generally encourage people to move themselves more towards the user category though, because it offers individuals more power over the tools they use, and I’m a strong advocate for individual power. If you follow an even slightly radical philosophy, I strongly suggest that you move towards the user category. The information collected from those being used often finds its way into the hands of government agents, and they are more than happy to make use of it to suppress dissidents.

Upgrading My Network

The network at my previous dwelling evolved over several years, which made it a hodgepodge of different gear. Before I moved out, its final form was a Ubiquiti EdgeMax router, a Ubiquiti EdgeSwitch, and an Apple AirPort Extreme (I got a good deal on it, but it was never something I recommended to people). When I bought my new house, I decided to upgrade my network to Ubiquiti UniFi gear. For those who are unaware, UniFi gear fits into that niche between consumer and enterprise networking gear (it’s often touted as enterprise gear, but I have my doubts that it would work as well as more traditional enterprise gear on a massive network spanning multiple locations) often referred to as prosumer or SOHO (Small Office/Home Office).

Because I live out in the boonies, my Internet connection is pretty lackluster, so I opted for a Security Gateway 3P for my router (it’s generally agreed that the hardware is too slow to keep up with the demands of many modern Internet connections, but I don’t have to worry about that). If I had built a new house, I’d have put Ethernet drops in every room, but I bought a preexisting house with no Ethernet drops, which meant Wi-Fi was going to be my primary form of network connectivity. I still needed Ethernet connections for my servers though, so I opted for a 24-port switch as my backbone and UAP-AC-M access points for Wi-Fi. The UAP-AC-M access points provide mesh networking, which is nice in a house without Ethernet drops because you can extend your Wi-Fi network by connecting new access points to already installed access points. Moreover, they’re rated for outdoor use, so I can use them to extend my Wi-Fi network across my property.

A UniFi network is really a software defined network, which means there is a central controller that you enter your configuration into, and it pushes the required settings out to the appropriate devices. Ubiquiti provides the Cloud Key as a hardware controller, but I already have virtual machine hosts aplenty, so I decided to set up a UniFi Controller in a virtual machine.

Previously I was resistant to the idea of needing a dedicated controller for my network. However, after experiencing software defined networking, I don’t think I could ever go back. Making a single change in one location and having that change propagate out to my entire network is a huge time saver. For example, I decided that I wanted to set up a guest Wi-Fi network. Without a central controller this would have required me to log into the web interface of each access point and enter the new guest network configuration. With a software defined network I merely add the new guest network configuration to my UniFi Controller and it pushes that configuration to each of my access points. If I want to change the Wi-Fi Protected Access (WPA) password for one of my wireless networks, I can change it in the UniFi Controller and each access point will receive the update.

The UniFi Controller also provides a lot of valuable information. I initially set up my wireless network with two access points, but the statistics in the UniFi Controller indicated that my wireless coverage wasn’t great in the bedroom, was barely available on my three season porch, and was entirely unavailable out by my fire pit. I purchased a third access point, rearranged the other two, and now have excellent Wi-Fi coverage everywhere I want it. While I could have gathered the same information on a network without a centralized controller by logging into each access point individually, it would have been a pain in the ass. The UniFi Controller also allows you to upload the floor plan of your home, and it will show you the expected Wi-Fi coverage based on where you place your access points. I haven’t used that feature yet (I need to create the floor plan in a format that the controller can use), but I plan on playing with it in the future.

Overall the investment in more expensive UniFi gear has been worth it to me. However, most people probably don’t need to spend so much money on their home network. I know many people are able to do everything they want using nothing more than the all-in-one modem/switch/Wi-Fi access point provided by their Internet Service Provider (admittedly I don’t trust such devices and always place them outside of my network’s firewall). But if you need to set up a network that is more complex than the average home network, UniFi gear is something to consider.

The Importance of Open Platforms

Late last week I pre-ordered the UBports Community Edition PinePhone. It’s not ready for prime time yet: neither of the cameras works, the battery life from what I’ve read is around four to five hours, and there are few applications available at the moment. So why did I pre-order it? Because UBports has been improving rapidly, my iPhone is the last closed platform I run regularly (I keep one macOS machine running mostly so I can back up my iPhone to it), and open platforms may soon be our only option for secure communications:

Signal is warning that an anti-encryption bill circulating in Congress could force the private messaging app to pull out of the US market.

Since the start of the coronavirus pandemic, the free app, which offers end-to-end encryption, has seen a surge in traffic. But on Wednesday, the nonprofit behind the app published a blog post, raising the alarm around the EARN IT Act. “At a time when more people than ever are benefiting from these (encryption) protections, the EARN IT bill proposed by the Senate Judiciary Committee threatens to put them at risk,” Signal developer Joshua Lund wrote in the post.

I used Signal as an example for this post, but in the future, when (it’s not a matter of if, it’s a matter of when) the government legally mandates cryptographic back doors in consumer products (you know the law will have an exception for products sold to the government), every secure communication application and platform will either have to stop being made available in the United States or will have to insert a back door that gives government agents, and anybody else who can crack the back door, complete access to our data.

On an open platform such as Linux this isn’t the end of the world. I can source both my operating system and my applications from anywhere. If secure communication applications are made illegal in the United States, I have the option of downloading and using an application made in a freer jurisdiction or, better yet, developed anonymously (it’s much harder to enforce these laws if the government can’t identify and locate the developers). Closed platforms such as iOS and Android (although Android to a lesser extent, since it still allows sideloading of applications and you can download an image built off of the Android Open Source Project) require you to download software from their walled garden app stores. If Signal is no longer legally available in the United States, people running iOS and Android will no longer be able to use Signal because those apps will no longer be available in the respective United States app stores.

As the governments of the world continue to take our so-called civil rights behind a shed and unceremoniously put a bullet in their heads, closed platforms will continue to become more of a liability. Open platforms, on the other hand, can be developed by anybody anywhere. They can even be developed anonymously (Bitcoin is probably the most successful example of a project whose initial developer remains anonymous), which makes it difficult for governments to put pressure on the developers to comply with laws.

If you want to ensure your ability to communicate securely in the future and you haven’t already transitioned to open platforms, you should either begin your transition or at least begin to plan it. Not all of the pieces are ready yet. Smartphones remain one area where open platforms are lagging behind, but there is a roadmap available, so you can at least begin planning a move towards an open smartphone (and at $150 the PinePhone is a pretty low-risk platform to try).