Apple Adds Big Brother to iOS

There are two dominant smartphone operating systems: Google’s Android and Apple’s iOS. Google’s business model depends on surveilling users. Apple has exploited this fact by making privacy a major selling point in its marketing material. When it comes to privacy, iOS is significantly better than Android… at least it was. Today it was revealed that Apple plans to add a feature to iOS that surveils users:

Child exploitation is a serious problem, and Apple isn’t the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

[…]

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what until this development is one of the preeminent encrypted messengers.

I’ve been pleasantly surprised by the amount of outrage I’ve seen online about this feature. I expected most people to praise this feature out of fear of being labeled a defender of child pornography if they criticized it. But even comments on Apple fanboy sites seem to be predominantly against this nonsense.

This move once again demonstrates the dangers of proprietary platforms. If, for example, a Linux distro decided to include a feature like this, users would have a number of options. They could migrate to another distro. They could rip the feature out. They could create a fork of the distro that didn’t include the spyware. This is because Linux is an open system and users maintain complete control over it.

Unfortunately, there aren’t a lot of options when it comes to open smartphones. The options that do exist aren’t readily accessible to non-technical users. Distributions based on the Android Open Source Project (versions of Android without Google’s proprietary bits), such as LineageOS and GrapheneOS, don’t come preinstalled on devices; users have to flash them to supported devices. Smartphones developed to run mainline Linux, like the PinePhone and Librem 5, still lack stable software. Most people are stuck with spyware-infested smartphones. Exacerbating this issue is the fact that smartphones, unlike traditional x86-based computers, are themselves closed platforms (which is not to say x86-based platforms are entirely open, but they are generally much more open than embedded ARM devices), so developing open source operating systems for them is much harder.

Server Migration Complete

When I first started self-hosting my blog, I was using a 2010 Mac Mini running Mac OS X 10.6. When Apple released 10.7, it did away with the server edition and instead replaced it with an app that didn’t work. That forced me to migrate my blog to Linux. I used Ubuntu Server LTS and it worked very well. However, I didn’t utilize any automation so whenever I wanted to do maintenance on or rebuild my server, I had to do so manually. This meant that maintenance didn’t get done when life got too busy. Over the last year I’ve been slowly migrating my manually built infrastructure to fully automated builds using Ansible. While I have a number of grievances with Ansible (YAML is an awful automation language and whoever decided to use it should be crucified), it is the least awful automation system that I’ve tested.

As with any major project I started with the easy things. First I automated building my DHCP and DNS servers. Then I moved on to automating the building of my VPN, NAS, and so on. I finally got around to writing an Ansible playbook to build this blog.

Previously I followed conventional wisdom that servers should be run on long-term support distributions. But I started to question whether that wisdom was appropriate for what I do. Whenever I had to upgrade the server running this blog from one Ubuntu LTS to another, the upgrade itself went well. But I always ended up having to manually fix a number of things that broke due to the significance of the changes that occurred between the two LTS releases. Those breakages often weren’t trivial to fix. They would eat up a lot of my time. So I started to experiment with more bleeding edge distros. I settled on Fedora Server since I already run Fedora on my laptop and have become familiar with it. Major version upgrades haven’t resulted in major breakages. When something does break, it usually takes a minute or two for me to fix.

So this blog is now running on Fedora Server 34. And I can rebuild it by issuing a single command.
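For the curious, that single command is just an ansible-playbook invocation. Here’s a minimal sketch; the inventory and playbook file names (hosts.ini, blog.yml) are illustrative rather than my actual ones:

# Rebuild the blog server from the playbook (file names are hypothetical)
ansible-playbook -i hosts.ini blog.yml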

I’m guessing there will still be a few issues that need to be resolved. I changed quite a bit on the back end so I’m expecting a few breakages here and there and I’m sure I’ll have to make a few performance tweaks (not that you’ll likely notice issues regarding performance since my Internet connection sucks). But the site largely appears functional.

My Review of the Sennheiser HD 450BT

My rule of thumb for adopting new technologies is that the technology must provide a net gain to my quality of life. I haven’t jumped onto the Internet of Things bandwagon in part because the added headaches outweigh the benefits. Being able to change the color my lights output would be mildly useful to me, but having to worry about the security issues involved with an Internet connected device, the possibility of not being able to configure my lights if the Internet goes down, etc. greatly outweigh the benefit.

This brings me to Bluetooth headphones. Ever since Apple had the “courage” to remove the standard headphone jack from the iPhone, Bluetooth headphones have seen a rapid increase in adoption (at least as far as I can tell). I stuck with wired headphones because my use case made Bluetooth headphones a bigger headache than the benefits warranted. Apple’s “courage” did benefit me in one major way though: Bluetooth headphones improved rapidly and have finally reached a point where they offer more benefits to me than headaches.

I settled on buying a pair of Sennheiser HD 450BT for reasons I’ll get into in a bit. This review isn’t going to delve too deeply into the usual considerations for Bluetooth headphones such as sound quality, noise cancelling effectiveness, etc. More qualified individuals have already expounded on those features in great detail. Instead this review is going to be based heavily on my use case, which has a few oddball specifics. So before I begin, I’m going to explain my use case.

My Use Case

During the day I primarily use two computers. The first is my ThinkPad P52s running Fedora Linux; the second is my iPhone SE (2020). I do most of my work on the ThinkPad and listen to music and podcasts on my iPhone. Even though most of my audio output comes from my iPhone, I periodically need to hear the audio on my ThinkPad. This need to jump between two devices is what has kept me using wired headphones. It’s easy to unplug a headphone jack from my iPhone (which relies on a Lightning to headphone jack adapter because of Apple’s “courage”) and plug it into my ThinkPad and vice versa. Disconnecting a pair of Bluetooth headphones from my iPhone and connecting them to my ThinkPad is a much bigger pain in the ass that involves going a couple of layers deep into Bluetooth settings on both devices.

So my use case requires the ability to easily switch between two devices and compatibility with both Linux and iOS.

Not My Use Case

It’s also worth noting what my use case isn’t. Many Bluetooth headphones offer some kind of active noise cancellation. I don’t like active noise cancellation because I prefer to maintain some audio awareness of my environment, so I always turn it off if it’s present. I also don’t commute on public transit, don’t wear headphones when out and about (due to my preference for maintaining audio awareness), and work primarily from a desk. When I do travel, I always take my laptop bag, which is big and already packed with gear. A pair of headphones isn’t much extra to carry when considered along with all of the other gear I carry. If portability is one of your primary criteria, I’m the worst person to ask.

Selection Criteria

I have several preferences when it comes to headphones in general. Closed studio style over-ear headphones are my favorite. In-ear ear buds are also acceptable to me so long as they don’t rely on a component that rests around my neck. Wired ear buds with equal length wires (I really hate the style where the wires going to the ear buds are different lengths) and so-called true wireless are both good in my book. I dislike on-ear headphones because I get a headache from wearing them for too long and open studio style never appealed to me because, even though I want to maintain audio awareness, I like having some amount of isolation as well.

I also have several preferences when it comes to Bluetooth headphones specifically. One of my favorite things about wired headphones is that they don’t rely on an internal battery that needs to be recharged periodically. For Bluetooth headphones I’d prefer having a battery life measured in days rather than in hours. Knowing that Bluetooth headphones do need to be recharged, I’d prefer a USB-C charging port (but will consider all standardized connectors other than micro-USB) since that is becoming the powering standard for a wide range of devices.

While I avoid video conferences and talking on the phone as much as possible, built-in microphones for those occasions when I can’t avoid either are a definite plus. So long as the microphones are good enough that the person(s) with whom I’m conversing can understand me, they’re acceptable to me.

Audio playback controls are a must. I hate having to turn on my phone screen to pause music or skip a song. This preference is so strong that my favorite pair of headphones, my Sennheiser HD 280 PROs, see very little actual use anymore. They sound great and they’re very comfortable, but they lack audio playback controls. Instead I usually use my ear buds, which do have built-in audio playback controls.

Because of the number of shoddy products on the market, I gravitate towards products made by companies with which I have had positive experiences. The downside to this strategy is that a lot of great options released by new companies fall off of the radar. The upside is that I get burned far less often by shoddy products. For similar reasons I tend to shy away from newly released products even when they’re manufactured by companies I trust. When I was young, I was willing to be the guinea pig for new products. Now that I’m older and have less free time, I prefer to let other people be the guinea pigs.

Since Bluetooth headphones, unlike traditional headphones, are necessarily a disposable product due to both their built-in battery (which wears out and usually isn’t replaceable) and continuously aging technology (for example, you usually can’t add new Bluetooth features to old headphones), I didn’t want to spend a fortune on a pair. I capped my budget at $150.

Based on my preferences I narrowed down my options to a handful of products. My three favorite options were the Sony WH-CH710N, Sony WH-XB900N, and Sennheiser HD 450BT. I eliminated the Sony WH-XB900N because of its focus on bass, which isn’t my thing, and opted for the Sennheiser HD 450BT over the Sony WH-CH710N because the former supports more high quality Bluetooth codecs.

My Review

That was a lot of preamble for a review, but I believe a review is far more useful if you understand both the use case of the reviewer and their preferences.

As I noted above, I’m not going to delve too deeply into the usual considerations for headphone reviews like sound quality and the effectiveness of the active noise cancellation. Far more qualified individuals have already written extensively on those topics. Suffice it to say, these headphones sound good to my ears. I haven’t tested the active noise cancellation to any extent so I won’t say anything about its effectiveness.

The three most appealing features of Bluetooth headphones for me are that Bluetooth is built into most modern laptops and smartphones (I hate dongles), there are no wires to get tangled, and you’re not tethered to the audio source. My office is in the basement of my house. If I go upstairs, I can get to the furthest edge of my kitchen, a distance of approximately 60 feet with several walls and a floor in between, before the HD 450BT loses its connection to my laptop. I will also note that I live in the country so there is very little electromagnetic interference in my house on the wavelengths used by Bluetooth other than my Wi-Fi network and one or two other Bluetooth devices I use such as my Apple Watch. I’m not sure whether the range I’m experiencing is considered good for a pair of Bluetooth 5.0 headphones, but I’m more than happy with it.

My biggest gripe with Bluetooth headphones was solved by the introduction of multipoint connectivity, which allows a single pair of Bluetooth headphones to simultaneously connect to two or more source devices. Unfortunately, multipoint support is a bit of a mess. I’m happy to report that the HD 450BT’s multipoint support when simultaneously connected to my laptop and phone has fulfilled my needs. As I noted above in my use case, I periodically need to switch my audio source between those two devices. What I don’t need to do is get audio output from both devices at the same time. When connected to my laptop and phone, the multipoint support provides output from one of the two devices at a time. If I’m playing music on my phone, I don’t get audio from my laptop and vice versa. To switch between the two devices I only need to pause the audio on one device, wait a second or two, and start playing audio on the other device.

I have experienced a couple of multipoint hiccups. The first is that sometimes when a notification is created on the device not currently playing audio, it’ll cause the playing audio to pause for a second or two (the notification sound may or may not play through the headphones). The second is that after pausing the audio on one device and attempting to restart it using the built-in audio playback controls, the command sometimes goes to the other device (so if I pause the music on my phone and press the headphone’s play button to restart it, that play command may go to my laptop instead). These hiccups manifest infrequently enough that they haven’t motivated me to return to hard-wired headphones.

Another quirk that I’ve experienced is that when somebody calls my phone, the microphones activate before I answer the call and route the sound to the speakers. If somebody calls when I’m typing, I can suddenly hear my mechanical keyboard very clearly. I’d prefer the microphones not activate unless I answer the call, and maybe this is a bug that will be fixed in a future firmware update.

Speaking of firmware updates, one gripe I do have with these headphones is that firmware upgrades can only be applied using the Sennheiser Smart Control app. This gripe applies to most Bluetooth headphones so it shouldn’t be seen as a criticism specific to the HD 450BT, but a criticism of Bluetooth headphones in general. I want to apply firmware updates using fwupd on Linux. But if Sennheiser is going to relegate me to using its app to apply firmware updates, it would be nice if the app wasn’t so bloody slow. The firmware update I recently applied took at least half an hour, which seems like a ridiculous amount of time to apply a firmware update to a pair of headphones. This is easily my least favorite thing about these headphones and the only saving grace is that firmware updates seem few and far between.

Sennheiser advertises 30 hours of battery life for the HD 450BT. That advertised battery life is with active noise cancellation enabled. As I stated above, I don’t like active noise cancellation and always turn it off. When active noise cancellation is disabled, the battery life increases significantly. I last charged my headphones on Friday afternoon and have used them heavily since then, including through two working days. While I do turn them off at night, I’d estimate they’ve been running between 30 and 40 hours (not always playing audio, I do pause my music when I have to concentrate on something). As I write this Tuesday afternoon, the Sennheiser app on my iPhone shows the battery charge is still at 90%. When I press the volume up and down buttons simultaneously, the headphones report more than 12 hours of playtime remains (which I believe is the maximum the headphones will report). Needless to say, I’m very happy with the battery life of these.

Pressing the volume up and down buttons simultaneously to get the battery life probably seems a bit unintuitive, and one of the more common criticisms I’ve read about these headphones is the unintuitive layout of the built-in controls. All of the controls are located on the bottom of the right speaker. From front to back there is the power button that doubles as the active noise cancellation activation and deactivation button, the volume down and up buttons, a three position audio playback control switch, and a button for activating a phone’s voice assistant (such as Siri on the iPhone). I actually like the button layout and for the most part really like the audio playback switch. Pressing down on the switch will play or pause your audio, pressing the switch forward goes back a song, and pressing the switch backwards goes to the next song. The only annoyance for me is that pressing down to pause or play music can be finicky. If the switch isn’t perfectly centered when you press down, the control doesn’t activate. Since the switch is easily moved slightly forward or backward when pressing down on it, it’s pretty easy to press the button without your audio playing or pausing.

The last thing I want to cover is comfort. A common criticism of these headphones is that they’re uncomfortable when worn for a long time. Most reviews attribute this to the small holes in the ear cups. My Sennheiser HD 280 PRO headphones have large holes in the ear cups so my ears have plenty of room. The HD 450BT has narrow holes in the ear cups. The holes are slightly wider than my thumbs, which is just barely enough room for my ears. If I don’t position the headphones with some care, the ear cups will press down on parts of my ears. I did find some aftermarket ear cups that are supposed to be more comfortable and may invest in a pair at some point, but the stock ones are decently comfortable, although not nearly as comfortable as the ear cups on the HD 280 PRO. Compared to the HD 280 PRO, which has a wide headband with a replaceable thick pad wrapped around the top, the headband on the HD 450BT isn’t nearly as comfortable. It’s narrow and the only padding is a thin integrated strip of rubber on the inside that provides no discernible cushioning. I do like the clamping force of these headphones. It’s strong, but not too strong. To me the clamping force feels lower than on the HD 280 PRO, but it’s not so low that I’m worried about them falling off of my head. Overall, I find the HD 450BT to be adequately comfortable when worn for hours, but a couple of steps below the HD 280 PRO.

Summary

I paid $99 for them and at that price I’m happy with my purchase. The multipoint feature fits my use case, the battery life is great (with the caveat that I disable the active noise cancellation), there are built-in audio playback controls, and the headphones are adequately comfortable. I’m not impressed with the Smart Control app, especially with the speed at which it updates firmware, but that’s an unhappiness I would likely have with any pair of Bluetooth headphones. If you’re looking for a pair of Bluetooth headphones in the $100 ballpark, I recommend considering them.

The Way It Should Always Have Been

I received my PinePhone last week. The model I ordered was the UBPorts Community Edition. My initial thoughts on the phone are that the build quality is actually very solid, but otherwise it behaves like a $150 phone. The performance isn’t great, but acceptable; the battery life, which is a known issue, is pretty terrible; and the software is in a pretty rough state (easily beta quality, maybe even late alpha quality). All of these were what was promised and what I expected so none of this should be considered criticism. I’m actually impressed by what the manufacturers and software creators managed to pull off so far.

However, after playing with UBPorts I wanted to try some other operating systems. This is where the PinePhone shines since it doesn’t lock you into any specific operating system. The next release of the Community Edition of the PinePhone will come with postmarketOS, so I loaded postmarketOS onto a MicroSD card (you can also flash it to the internal eMMC chip) and booted it on the phone. postmarketOS has a utility that builds an image for you. That utility also allows you to customize a number of things, including using full-disk encryption (which I haven’t played with yet since it’s experimental) and choosing your user interface. I chose Phosh for the user interface because I wanted to see what the Librem team has been working on. My experience with postmarketOS was similar to UBPorts. Performance was sluggish but acceptable, and the software is still in a rough state. However, postmarketOS makes it easy to install regular Linux desktop and command line applications so I installed and tried a few applications that I use regularly on the desktop. Unfortunately, most of the available graphical software doesn’t yet support screen scaling so applications are too big for the PinePhone’s screen. With that said, progress is being made in that direction and once more applications support screen scaling there should be a decent number of apps available.
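For reference, the image-building utility mentioned above is pmbootstrap. This is a rough sketch only; the exact flags vary between pmbootstrap versions and the device path is an assumption, so consult the postmarketOS documentation for your setup:

# Interactive setup: choose the device (pine64-pinephone), the UI (phosh), full-disk encryption, etc.
pmbootstrap init
# Build the image and write it to a MicroSD card (device path is an example)
pmbootstrap install --sdcard=/dev/mmcblk0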

Being able to boot up a different operating system on my phone is the way it should always have been. On my desktop and laptop computers I have always been able to choose what operating system to run, but my mobile devices have always been locked down. Some Android devices do allow you to unlock the boot loader and install a different Android image, but doing so often isn’t officially supported by the manufacturer (so it’s often a pain in the ass). It’s nice to finally see a mobile phone that is designed for tinkerers and people who want to actually own their hardware.

Getting a Static IP Address with a Cheap VPS and WireGuard

I prefer to host my own infrastructure. This website is on a server in my home, as are my e-mail, Nextcloud, and Gitlab servers. I also encourage those interested to do the same. However, many Internet Service Providers (ISP) are reluctant to issue static IP addresses to residential customers and often block commonly used ports (especially port 25, which is needed if you want to self-host an e-mail server). While Dynamic Domain Name System (DDNS) mitigates the static IP address problem, it’s not perfect. Moreover, it won’t open ports that your ISP is blocking.

Fortunately, there’s another option. Many Virtual Private Server (VPS) providers happily issue static IP addresses for VPS instances. They also tend to be much less apt to block ports (although port 25 is commonly blocked and often requires submitting a support ticket and jumping through hoops to get unblocked). Wouldn’t it be convenient to use a VPS as a front end for your self-hosted network?

One of the things I have played around with a lot recently is WireGuard. WireGuard is a new Virtual Private Network (VPN) technology that is both easy to set up and performant. In this guide I will explain how to use WireGuard to bridge your self-hosted network with a VPS (in my case I’m using this setup with one of my servers through a $5 per month DigitalOcean droplet) so that you can access your servers from the VPS’s static IP address.

This guide will not explain how to install WireGuard or the specifics of WireGuard since that information is better provided by WireGuard’s official website. What this guide will explain is how to create a WireGuard configuration file that can be used via the wg-quick command to forward traffic received via the VPS’s static IP address to individual servers on your self-hosted network.

As a quick aside, the reason I opted to use a configuration file and the wg-quick command instead of setting up a WireGuard interface and iptables on the Linux system directly is because one of my goals is portability. It’s a trivial matter to spin up a new VPS, install WireGuard, upload your configuration file, and run wg-quick. This way if you lose access to your VPS, you can get everything up and running again by spinning up a new VPS and updating your DNS records.

Once your VPS is up and running and WireGuard is installed, go to the /etc directory and, if it doesn’t already exist, create a wireguard directory. This is the directory where your configuration file will be stored.

You will need a private and public key, which can both be generated via the wg command. First issue the umask 077 command. This will provide read and write access only to the user account creating the files (which is likely root). Now issue the wg genkey | tee privatekey | wg pubkey > publickey command. The first part of this command generates your private key and pipes it into the tee command, which outputs the key into a file named privatekey and pipes it into the wg pubkey command, which derives a public key from the private key and writes that public key to a file named publickey.
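Put together, the key generation steps on the VPS look like this:

cd /etc/wireguard
umask 077                                           # new files readable and writable only by the current user
wg genkey | tee privatekey | wg pubkey > publickey  # write the private key and its derived public key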

At this point you should have two files in /etc/wireguard, privatekey and publickey. Now it’s time to create your initial configuration file. Open your text editor of choice and save the file as wg0.conf (the file name doesn’t have to be wg0, but the extension has to be .conf). You’ll start by adding the following lines:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Interface] indicates that the lines that follow are related to the creation of a WireGuard interface.

Address indicates the IP address that will be assigned to the WireGuard interface. Note that you can assign multiple IP addresses to a WireGuard interface so if you also wanted to give it an IPv6 address you could add the line Address = fd00:cafe:dead:babe::1/64. If you’d rather have a single line containing all of the assigned IP addresses you can use a comma separated list such as Address = 172.16.1.1/16, fd00:cafe:dead:babe::1/64 (personally I find adding a single item per line more readable so I will stick to that convention for this guide). Also note that the subnet of the WireGuard interface’s IP address should differ from the one you use on your home network. In this example I’m using a 172.16.x.x address because 192.168.x.x and 10.x.x.x addresses are the ones I most commonly encounter on home and corporate networks.

SaveConfig = false will prevent WireGuard from automatically saving additional information to the wg0.conf file. I prefer this because otherwise WireGuard has a habit of generating a new fe80:: IPv6 address and saving it to wg0.conf every time the interface is brought up with the wg-quick command. Moreover, if SaveConfig is set to true any comments you add to the wg0.conf file will be erased when the wg-quick command is called (I tend to heavily comment my files so this is a big issue to me).

ListenPort indicates what port you want the WireGuard interface to be accessible on. The default listen port for WireGuard is 51820. However, I actually have my VPS’s WireGuard interface set up to forward traffic received on port 51820 to another WireGuard server on my home network so I’m using a non-default port in this example.

PrivateKey indicates the interface’s private key. Paste the private key stored in the privatekey file here (note that you have to enter the private key itself, not the path to the file containing the private key).

Now you need to configure WireGuard on one of your self-hosted servers. First, generate a private and public key pair on your self-hosted server the same way you generated them on your VPS instance. Then create wg0.conf and enter the following information:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

There are two changes to note. The first change is the IP address. The IP address of the WireGuard interface on your self-hosted server should be different from (but in the same subnet as) the IP address of your VPS’s WireGuard interface. The second change should be obvious. You will enter the private key you generated on your self-hosted server.

Now both your VPS and your self-hosted servers should have WireGuard configuration files. The next step is to tell them about each other. This is done using the [Peer] directive in the configuration file. Let’s open the wg0.conf file on your VPS and add the [Peer] information for your self-hosted WireGuard interface. The file should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

PublicKey indicates the public key for your self-hosted server. This should be the key stored in the publickey file on your self-hosted server.

AllowedIPs serves two purposes. The first purpose is to inform the VPS what IP addresses can be used as source addresses when communicating with the peer. The line AllowedIPs = 172.16.1.2/32 indicates that the address 172.16.1.2 is the only source IP address allowed to encrypt traffic that can be decrypted with the provided public key. The second purpose is to act as a routing table. Traffic destined for 172.16.1.2 will be encrypted with the provided public key so it can be decrypted by the WireGuard interface on your self-hosted server.

Depending on your goals, you may wish to use less strict values for AllowedIPs. However, I prefer to keep them strict when setting up a WireGuard connection to bridge between a VPS and my self-hosted network because I actually configure multiple peers and forward different traffic to different peers.

Now edit the wg0.conf file on your self-hosted WireGuard server so it looks like this:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.1/32
Endpoint = :65000
PersistentKeepalive = 25

The PublicKey directive should obviously have the public key generated on the VPS and the AllowedIPs directive should have the IP address of the VPS’s WireGuard interface. However, there are two new entries.

Endpoint tells the WireGuard interface the address to which it should send traffic. This should be set to the static IP address assigned to the VPS. Why wasn’t the Endpoint directive needed in the VPS’s configuration file? Because WireGuard has a neat feature that enables roaming. When a WireGuard interface receives a packet, it notes the source address and uses that address when sending replies. This means that should the IP address of your self-hosted network change (which can happen periodically on networks not assigned a static IP address), the VPS’s WireGuard interface will note the new source address and use it as the new destination address.

PersistentKeepalive indicates how frequently the WireGuard interface should send keep alive messages to its peer. WireGuard is a pretty quiet protocol by default. It abstains from sending unnecessary traffic. While this makes for a more efficient protocol, it causes issues with peers behind a Network Address Translation (NAT) device. When a peer behind a NAT device connects to an external server, the NAT device keeps track of the connection. If no traffic is observed on the connection for a while, the connection is timed out and the NAT device forgets it. Setting PersistentKeepalive ensures traffic is periodically sent (once every 25 seconds in the case of the above configuration) across the connection and thus will not be forgotten by a NAT device middle man.

With these configurations in place, we can bring up both WireGuard connections. On the VPS issue the command systemctl start wg-quick@wg0 then issue the same command on your self-hosted server. From your VPS you should be able to ping 172.16.1.2 and from your self-hosted server you should be able to ping 172.16.1.1. If you can’t ping either peer, there’s likely an error in one of your configuration files or a firewall hindering the communications.
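For reference, the commands on each side look like this:

# On the VPS
systemctl start wg-quick@wg0
ping 172.16.1.2    # should reach the self-hosted server

# On the self-hosted server
systemctl start wg-quick@wg0
ping 172.16.1.1    # should reach the VPS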

If both peers can ping each other, congratulations, you’ve successfully set up a WireGuard connection and are done with the self-hosted side of your configuration. There remains one last thing to do on your VPS though. At the moment traffic received on the VPS’s static IP address isn’t forwarded through the WireGuard interface and thus will never reach your self-hosted server.

To correct this, you first need to enable packet forwarding on your VPS. On most Linux distributions this is done by adding the lines net.ipv4.ip_forward=1 and, if you want the capability to forward IPv6 packets, net.ipv6.conf.all.forwarding=1 to the sysctl.conf file then issuing the sysctl --system command. Once packet forwarding is enabled, you can start adding packet forwarding capabilities to your WireGuard configuration file.
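As a concrete example, the sysctl changes described above look like this:

# /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1    # only needed if you want to forward IPv6 packets

# apply the settings without rebooting
sysctl --system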

wg-quick recognizes a few useful directives. Two of them that we will use here are PostUp and PostDown. PostUp executes commands immediately after your WireGuard interface is brought up. PostDown executes commands immediately after your WireGuard interface has been torn down. These are convenient moments to either add port forwarding information or remove it.

To start, enable packet forwarding between the VPS’s public network interface and the WireGuard interface by adding the following lines under the [Interface] section of the VPS’s wg0.conf file (as with the Address directive, the PostUp and PostDown directives can appear in the file multiple times so you can split up a lengthy list of commands onto separate lines):


PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These two lines enable packet forwarding from interface eth0 to wg0 and from wg0 to eth0 for established and related connections. Once a connection has been established, all packets for that connection will be properly forwarded by WireGuard to the appropriate peer. Do note that the interface name of your VPS’s network connection may not be eth0. To find the name use the ip address list command. The interface that has your VPS’s assigned static IP address is the one you want to put in place of eth0. %i will be replaced by the name of your WireGuard interface when wg-quick is run.
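For example, to list the interfaces and their assigned addresses:

ip address list      # the interface holding the VPS's static IP address is the one to use in place of eth0
ip -brief address    # same information in a condensed, one-line-per-interface form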

If the WireGuard interface isn’t up, there’s no reason to keep forwarding enabled. In my configuration files I prefer to undo all of the changes I made with the PostUp directives when the interface is torn down with PostDown directives. To do this add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These commands look almost identical except they use the -D flag (delete) instead of the -A flag (append).

Now we need to tell the VPS to change the source address of all forwarded packets to the IP address of the WireGuard interface, which can be done by adding the following line:


PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

If the source address isn’t changed on forwarded packets, the responses to those packets won’t be properly forwarded back through the WireGuard interface (and thus will never reach the client that connected to your server). To undo this when the WireGuard interface goes down, add the following line:


PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

Now that forwarding has been enabled we can start adding commands to forward packets to their appropriate destinations. For this example I’m going to assume your self-hosted server is running a Hypertext Transfer Protocol (HTTP) server. HTTP servers listen on two ports. Port 80 is used for unsecured traffic and port 443 is used for secured traffic. To forward traffic on ports 80 and 443 from your VPS to your self-hosted server add the following lines:


PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

The first command will forward all packets received on port 80 of eth0 through the WireGuard interface. The second command changes the destination IP address to the self-hosted WireGuard peer, which ensures the packet is properly routed to the self-hosted server. Lines three and four do the same for traffic on port 443. To undo these changes when the WireGuard interface goes down, add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

At this point the configuration file on your VPS should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

After the changes are made you’ll need to restart your WireGuard interface on your VPS. Do this by issuing the systemctl restart wg-quick@wg0 command. You may also need to restart the WireGuard interface on your self-hosted server (I’ve had mixed luck with this and usually restart both to be sure). Now you should be able to access your self-hosted HTTP server from your VPS’s static IP address.

I prefer not having to manually start my WireGuard interfaces every time I restart a server. You can make your WireGuard interfaces come online automatically when your system starts by issuing the systemctl enable wg-quick@wg0 command on both your VPS and your self-hosted server.

Linux on a 2010 Mac Mini Part Two

Last week I mentioned my adventure of installing Linux on a 2010 Mac Mini. Although Ubuntu 18.10 did install and worked for a few days, an update left the system unusable. After an update towards the end of last week the system would only boot to a black screen. From what I gathered online, I wasn’t the only person who ran into this problem. Anyway, I ended up digging into the matter further.

I once again tried installing Fedora. When I tried to install Fedora 29, I was unable to stop it from booting to a black screen so I decided to try Fedora 28. Using basic graphics mode I was able to get Fedora 28 to boot to the live environment and from there install Fedora on the Mac Mini. After installation I was able to get my Fedora installation to boot. However, when I tried to install the Nvidia driver from RPM Fusion, the system would only boot to a black screen afterwards. I tried installing the Nvidia driver via the negativo17 repository but didn’t expect it to work since the driver distributed from that repository is based on version 418 and the last driver to support the Mac Mini’s GeForce 320M was version 340. Things went as expected. I then tried installing the Nvidia driver manually using a patched version of the 340 driver from here. Unfortunately, that driver doesn’t work with the 4.20 kernel so that was a no go as well.

The reason I hadn’t tried to install the Nvidia driver manually before was because I didn’t want to deal with supporting the setup in the future. As I was trying to install it using the previously linked instructions, I felt justified in that reluctance because the guide isn’t nearly as straightforward as installing the driver from a repository. It became a moot point since manual installation didn’t work, but it did make me think about the fact that any solution I settled upon would need to be maintained, which led me to the idea of using Ubuntu 18.04 LTS. The LTS versions of Ubuntu are supported by Canonical for five years so if I could get 18.04 installed, the setup would have a decent chance of working for five years.

After passing the kernel the “nouveau.modeset=0” argument, just as I had to do with 18.10, I was able to boot into a live environment and install 18.04 to the hard drive. Likewise, I had to use the “nouveau.modeset=0” argument to boot into the installation. Once I was booted into the installation I was able to use “sudo apt install nvidia-340” to install the 340 version of the Nvidia driver. After rebooting everything worked properly. I’m hoping that future updates will be less likely to break this setup since the LTS releases of Ubuntu tend to be more stable than non-LTS versions.

So, yeah, if you want to get a currently supported Linux distro running on a 2010 Mac Mini, take a look at Ubuntu 18.04. It might be your best bet (if it continues to run properly for the next month or so, I’ll say it is your best bet).

Linux on a 2010 Mac Mini

I prefer repurposing old computers to throwing them away. A while ago I acquired a 2010 Mac Mini for $100. It has worked well. I even managed to install macOS Mojave on it using this patcher. However, I wanted to try installing Linux on it.

I first tried installing my go-to distro, Fedora (version 29 to be specific). Unfortunately, I immediately ran into problems. The Mac Mini has an Nvidia card that doesn’t play nicely with the nouveau driver in the kernel so I couldn’t bring up a graphical environment (I just got a black screen with a blinking cursor in the upper left corner). I tried booting the Fedora live distro with the “nouveau.modeset=0” parameter but to no avail.

So I decided to try Ubuntu (18.10). Ubuntu also initially failed to boot but it at least gave me an error message (related to the nouveau driver). When I booted it with the “nouveau.modeset=0” parameter I was able to get to the graphical interface and install Ubuntu. After installation I once again booted with the “nouveau.modeset=0” parameter and installed Nvidia’s proprietary driver. After that, the system booted into Ubuntu without any trouble (installing the Nvidia driver also enabled audio output through HDMI).

If you’re having trouble installing Linux on a 2010 Mac Mini, try Ubuntu and try passing the “nouveau.modeset=0” parameter when booting and you may have better luck.
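If you want the parameter applied on every boot until the proprietary driver is installed, one option is to make it permanent in GRUB. This is a sketch assuming Ubuntu’s stock GRUB configuration:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"

# regenerate the GRUB configuration
sudo update-grub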

Some Thoughts After Moving from macOS to Linux

It has been two weeks and change since I moved from my MacBook Pro to a ThinkPad P52s running Linux. Now that I have some real use time under my belt I thought it would be appropriate to give some of my thoughts.

The first thing I’d like to note is that I have no regrets moving to Linux. My normal routine is to use my laptop at work and whenever I’m away from home, and use another computer at home (because I’m too lazy to pull my laptop out of my laptop bag every night). The computer I was using at home was a 2010 Mac Mini. I replaced it with my old MacBook Pro when I got my ThinkPad. I realized the other day that I haven’t once booted up my MacBook Pro since I got my ThinkPad. Instead I have been pulling my ThinkPad out of its bag and using it when I get home. At no point have I felt that I need macOS to get something done. That’s the best testament to the transition that I can give. That’s not to say Linux can do everything that macOS can. I’m merely fortunate in that the tools I need are either available on Linux or have a viable alternative.

I’m still impressed with the ThinkPad’s keyboard. One of my biggest gripes about the new MacBooks is the ultra slim keyboards. I am admittedly a bit of a barbarian when it comes to typing. I don’t so much type as bombard my keyboard from orbit. Because of this I like keys with a decent amount of resistance and depth. The keyboard on my 2012 MacBook Pro was good but I’m finding the keyboard on this ThinkPad to be a step up. The keys offer enough resistance that I’m not accidentally pressing them (a problem I have with keyboards offering little resistance) and enough depth to feel comfortable.

With that said the trackpad is still garbage when compared to the trackpad on any MacBook. My external trackball has enough buttons where I can replicate the gestures I actually used on the MacBook though and I still like the TrackPoint enough to use it when I don’t have an external mouse connected.

Linux has proven to be a solid choice on this ThinkPad as well. I bought it with Linux in mind, which means I didn’t get features that weren’t supported in Linux such as the fingerprint reader or the infrared camera for facial recognition (which is technically supported in Linux but tends to show up as the first camera so apps default to it rather than the 720p webcam). My only gripe is the Nvidia graphics card. The P52s includes both an integrated Intel graphics card and an Nvidia Quadro P500 discrete graphics card, which isn’t supported by the open source Nouveau driver. In order to make it work properly, you need to install Nvidia’s proprietary drivers. Once that’s installed, everything works… except secure boot. In order to make the P52s boot after installing the Nvidia driver, you need to go into the BIOS and disable secure boot. I really wish there were a laptop with a discrete AMD graphics card that fit my needs on the market.

One thing I’ve learned from my move from macOS to Linux is just how well macOS handled external monitors. My P52s has a 4k display but all of the external monitors I work with are 1080p. Having different resolution screens was never a problem with macOS. On Linux it can lead to some rather funky scaling issues. If I leave the built-in monitor’s resolution at 4k, any app that opens on that display looks friggin’ huge when moved to an external 1080p display. This is because Linux scales up apps on 4k displays by a factor of two by default. Unfortunately, scaling isn’t done per monitor by default so when the app is moved to the 1080p display, it’s still scaled by two. Fortunately, a 4k display is exactly twice the resolution of a 1080p display so changing the built-in monitor’s resolution to 1080p when using an external display is an easy fix that doesn’t necessitate everything on the built-in display looking blurry.
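In an X11 session this can also be done from the command line with xrandr. A sketch only; the output name eDP-1 is an assumption, so check xrandr’s listing for the actual names on your machine:

xrandr                                    # list outputs and their available modes
xrandr --output eDP-1 --mode 1920x1080    # drop the built-in panel to 1080p while an external display is connected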

I’ve been using Gnome for my graphical environment. KDE seems to be the generally accepted “best” desktop environment amongst much of the Linux community these days. While I do like KDE in general, I find that application interfaces are inconsistent whereas Gnome applications tend to have fairly consistent interfaces. I like consistency. I also like that Gnome applications tend to avoid burying features in menus. The choice of desktop environment is entirely subjective but so far my experience using Gnome has been positive (although knowing that I have a ship to which I can jump if that changes is reassuring).

As far as applications go, I used Firefox and Visual Studio Code on macOS and they’re both available on Linux so I didn’t have to make a change in either case. I was using Mail.app on macOS so I had to find a replacement e-mail client. I settled on Geary. My experience with Geary has been mostly positive although I really hate that there is no way, at least that I’ve found, to quickly mark all e-mails as read. I used iCal on macOS for calendaring and Gnome’s Calendar application has been a viable replacement for it. My luck at finding a replacement for my macOS task manager, 2Do, on Linux hasn’t been as good. I’m primarily using Gnome’s ToDo application but it lacks a feature that is very important to me: repeating tasks. I use my task manager to remind me to pay bills. When I mark a bill as paid, I want my task manager to automatically create a task for next month. 2Do does this beautifully. I haven’t found a Linux task manager that can do this though (and in all fairness, Apple’s Reminders.app doesn’t do this well either). I was using Reeder on macOS to read my RSS feeds. On Linux I’m using FeedReader. Both work with Feedbin and both crash at about the same rate. I probably shouldn’t qualify that as a win but at least it isn’t a loss.

The biggest change for me has probably been moving from VMWare Fusion to Virtual Machine Manager, which utilizes libvirt (and thus KVM and QEMU). Virtualizing Linux with libvirt is straightforward. Virtualizing Windows 10 wasn’t straightforward until I found the SPICE Windows guest tools. Once I installed that guest tool package, the niceties that I came to love about VMWare Fusion, such as shared pasteboards and automatically changing the resolution of the guest machine when the virtual machine window is resized, worked. libvirt also makes it dead simple to set a virtual machine to automatically start when the system boots.
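For example, marking a guest to start with the host is a single command (the domain name win10 is just an illustration):

virsh autostart win10              # start the guest automatically when the host boots
virsh autostart --disable win10    # revert to manual startup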

One major win for Linux over macOS is software installation. Installing software from the Mac App Store is dead simple, but installing software from other sources isn’t as nice of an experience. Applications installed from other sources have to include their own update mechanism. Most have taken the road of including their own embedded update capabilities. While these work, they can usually only run when the application is running, so if you haven’t used the application for some time, the first thing you end up having to do is update it. Lots of packages still don’t include automatic update capabilities so you have to manually check for new releases. Oftentimes these applications are available via MacPorts or Homebrew. On the Linux side of things almost every software package is available via a distro’s package manager, which means installation and updates are handled automatically. I prefer this over the hodgepodge of update mechanisms available on macOS.

So in closing I’m happy with this switch, especially since I didn’t have to drop over $3,000 on a laptop to get what I wanted.

Jumping Ship

I’ve been running Apple computers for more than a decade now. While I really like macOS, anybody who knows me knows that I’ve been less than enthusiastic with the direction Apple has taken on the hardware front. My biggest gripe with Apple hardware is that it can no longer be serviced. My 2012 MacBook Pro is probably one of the easiest laptops that I’ve ever worked on. The entire back pops off and all of the frequently replaced parts are readily accessible. Part of the reason that I have been able to run that computer since 2012 is because I’ve been able to repair or upgrade components when necessary.

I usually run my laptops between four or five years. I’ve been running that MacBook Pro for six years. I was ready to upgrade last year but Apple had no laptops that appealed to me so I decided to wait a year to see if the situation would improve. When Apple announced its 2018 MacBook Pro line, it had everything I hated. All of the components, including the RAM and SSD, are soldered to the main board. Since the MacBook Pro line can no longer be upgraded, I’d have to order the hardware that I’d want to use for the next four or five years, which would cost about $3,200. Worse yet, when something broke (all components will fail eventually), I’d have to pay Apple an exorbitant fee to fix it. And if that weren’t bad enough, the 2018 MacBook Pro still has that god awful slim keyboard. While Apple has attempted to improve the reliability of that keyboard by including a rubber membrane under the keys, typing on it is, at least in my opinion, a subpar experience.

I also have some concerns about Apple’s future plans. One of my biggest worries is the rumors of Apple transitioning its Macs to ARM processors. ARM processors are nice but I rely on virtualized x86 environments in my day to day work. If Apple transitioned to ARM processors, I wouldn’t be able to utilize my x86 virtual environments (virtualization turns into emulation when the guest and host architectures differ and emulation always involves a performance hit and usually a lot of glitches), which means I wouldn’t be able to do my work. I’m also a bit nervous about the rumors that Apple is planning to make app notarization mandatory in a future macOS release. Much of the software I rely on isn’t signed and probably never will be. Additionally, building and testing iOS software is a pain in the ass because even test builds need to be signed before they’ll work on an iOS device (anybody who has run into code signing problems with Xcode will tell you that resolving those problems is often a huge pain in the ass) and I don’t want to bring that “experience” to my other development work. While I would never jump ship over rumors, when there are already reasons I want to jump ship, rumors act as additional low level incentives.

Since Apple didn’t have an upgrade that appealed to me and I’m not entirely comfortable with the rumors about the direction the company may be going, I decided to look elsewhere. I’ve been running Linux in some capacity for longer than I’ve been running Apple computers. Part of my motivation for adopting macOS in the first place was because I wanted a UNIX system on my laptop (Linux on laptops back then was a dumpster fire). So when I decided to jump ship, Linux became the obvious choice, which meant I was looking at laptops with solid Linux support. I also wanted a laptop that was serviceable. I found several solid options and narrowed it down to a Lenovo ThinkPad P52s because it was certified by both Red Hat and Ubuntu, sanely priced, and serviceable (in fact Lenovo publishes material that explains how to service it).

Every platform involves trade-offs. With the exception of Apple’s trackpad, every trackpad that I’ve used has been disappointing. The ThinkPad trackpad is no different in this regard. However, the ThinkPad line includes a TrackPoint, which I’ve always preferred as a mobile mouse solution to trackpads (I still miss Apple’s trackpad gestures though). There also isn’t a decent to do application on Linux (I use 2Do on both iOS and macOS and nothing on Linux is comparable) and setting up Linux isn’t anywhere near as streamlined as setting up a Mac (which involves almost no setup). With that said, I usually use an external trackball so the quality of the trackpad isn’t a big deal. My to do information syncs with my Nextcloud server so I can use its web interface when on my laptop (and continue to use 2Do on my iPhone). And since I chose a certified laptop, setting up Linux wasn’t too difficult (the hardest part was setting up nVidia’s craptastic Linux driver).

The upside to the transition, besides gaining serviceability, is first and foremost the cost. The ThinkPad P52s is a pretty cost effective laptop and I found a 20 percent off coupon code, which knocked the already reasonable price down further. Since neither the RAM nor the SSD in the P52s are soldered to the main board, I was able to save money by buying both separately and installing them when the computer arrived (which is exactly what I did with all of my Macs). In addition to the hardware being cheaper, I was also able to save money on virtualization software. I use virtualization software every day and on macOS the only decent solution for me was VMWare Fusion (Parallels has better Windows support than Fusion but no serious Linux support, which I also require). Fedora, the Linux distribution I settled on (I run CentOS on my servers so I opted for the closest thing that includes more cutting edge software), comes with libvirt installed. After spending a short while familiarizing myself with the differences between VMWare and libvirt, I can say that I’m satisfied with libvirt. It’s better in some regards, worse in others, and pretty much the same otherwise (as far as the user experience goes; underneath it’s far different).

I also gained a few things on the hardware side. The P52s has two USB-C and two USB-A (all USB 3) ports. My MacBook Pro only had two USB-A ports and the new MacBook Pros only have USB-C ports. All of my USB devices use USB-A so I’d need a bunch of dongles if I didn’t have USB-A ports (not a deal breaker but annoying nonetheless). In addition to being a very good mobile keyboard, the P52s keyboard also has a 10-digit keypad, which no Mac laptop currently has. Like USB-A ports, the lack of a 10 digit keypad isn’t a deal breaker in my world but its inclusion is always welcomed. If that weren’t enough, the keyboard also includes honest to god function keys instead of a TouchBar (as somebody who uses Vim a lot, the lack of a physical escape key is annoying).

My transition was relatively painless because I keep all of my data on my own servers. I didn’t have to spend hours trying to figure out how to pull data off of iCloud so I could use it on Linux. All I had to do was log into my Nextcloud instance and all of my calendar, contact, and to do information was synced to the laptop. The same was true of my e-mail. In anticipation of my move I also changed password managers from 1Password to a self-hosted instance of Bitwarden (1Password is overall a better experience but it lacks a native Linux app so I’d have been stuck with moving to a subscription plan to utilize a browser plugin that would deliver the same experience as Bitwarden). Keeping your data off of proprietary platforms makes moving between platforms easier. Likewise, keeping your data in open standards makes moving easier. I primarily rely on text files instead of word processor files (I use Markdown or LaTeX for most formatting) and most of my other data is stored in standardized formats (PNG or JPEG for images, ePub or PDF for documents, etc.).

Although I won’t give a final verdict until I’ve used this setup for a few months, my initial impressions of moving from macOS to Linux are positive. The transition has been relatively painless and I’ve remained just as productive as I was on macOS.