A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Linux’ tag

Getting a Static IP Address with a Cheap VPS and WireGuard

I prefer to host my own infrastructure. This website is hosted on a server in my home, as are my e-mail, Nextcloud, and Gitlab servers. I also encourage those interested to do the same. However, many Internet Service Providers (ISP) are reluctant to issue static IP addresses to residential customers and often block commonly used ports (especially port 25, which is needed if you want to self-host an e-mail server). While Dynamic Domain Name System (DDNS) mitigates the static IP address problem, it’s not perfect. Moreover, it won’t open ports that your ISP is blocking.

Fortunately, there’s another option. Many Virtual Private Server (VPS) providers happily issue static IP addresses for VPS instances. They also tend to be much less apt to block ports (although port 25 is commonly blocked and often requires submitting a support ticket and jumping through hoops to get unblocked). Wouldn’t it be convenient to use a VPS as a front end for your self-hosted network?

One of the things I’ve been playing around with a lot recently is WireGuard. WireGuard is a new Virtual Private Network (VPN) technology that is both easy to set up and performant. In this guide I will explain how to use WireGuard to bridge your self-hosted network with a VPS (in my case I’m using this setup with one of my servers through a $5 per month DigitalOcean droplet) so that you can access your servers from the VPS’s static IP address.

This guide will not explain how to install WireGuard or the specifics of WireGuard since that information is better provided by WireGuard’s official website. What this guide will explain is how to create a WireGuard configuration file that can be used via the wg-quick command to forward traffic received via the VPS’s static IP address to individual servers on your self-hosted network.

As a quick aside, the reason I opted to use a configuration file and the wg-quick command instead of setting up a WireGuard interface and iptables on the Linux system directly is because one of my goals is portability. It’s a trivial matter to spin up a new VPS, install WireGuard, upload your configuration file, and run wg-quick. This way if you lose access to your VPS, you can get everything up and running again by spinning up a new VPS and updating your DNS records.

Once your VPS is up and running and WireGuard is installed, go to the /etc directory and, if it doesn’t already exist, create a wireguard directory. This is the directory where your configuration file will be stored.

You will need a private and public key, both of which can be generated via the wg command. First issue the umask 077 command. This will provide read and write access only to the user account creating the files (which is likely root). Now issue the wg genkey | tee privatekey | wg pubkey > publickey command. The first part of this command generates your private key and pipes it into the tee command, which writes the key into a file named privatekey and pipes it into the wg pubkey command, which derives a public key from the private key and writes that public key to a file named publickey.
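Put together, the key generation steps look like this (run from /etc/wireguard as root):

umask 077
wg genkey | tee privatekey | wg pubkey > publickey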

At this point you should have two files in /etc/wireguard, privatekey and publickey. Now it’s time to create your initial configuration file. Open your text editor of choice and save the file as wg0.conf (the file name doesn’t have to be wg0, but the extension has to be .conf). You’ll start by adding the following lines:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Interface] indicates that the lines that follow are related to the creation of a WireGuard interface.

Address indicates the IP address that will be assigned to the WireGuard interface. Note that you can assign multiple IP addresses to a WireGuard interface so if you also wanted to give it an IPv6 address you could add the line Address = fd00:cafe:dead:babe::1/64. If you’d rather have a single line containing all of the assigned IP addresses you can use a comma separated list such as Address = 172.16.1.1/16, fd00:cafe:dead:babe::1/64 (personally I find adding a single item per line more readable so I will stick to that convention for this guide). Also note that the subnet of the WireGuard interface’s IP address should differ from the one you use on your home network. In this example I’m using a 172.16.x.x address because 192.168.x.x and 10.x.x.x addresses are the ones I most commonly encounter on home and corporate networks.

SaveConfig = false will prevent WireGuard from automatically saving additional information to the wg0.conf file. I prefer this because otherwise WireGuard has a habit of generating a new fe80:: IPv6 address and saving it to wg0.conf every time the interface is brought up with the wg-quick command. Moreover, if SaveConfig is set to true any comments you add to the wg0.conf file will be erased when the wg-quick command is called (I tend to heavily comment my files so this is a big issue to me).

ListenPort indicates what port you want the WireGuard interface to be accessible on. The default listen port for WireGuard is 51820. However, I actually have my VPS’s WireGuard interface set up to forward traffic received on port 51820 to another WireGuard server on my home network, so I’m using a non-default port in this example.

PrivateKey indicates the interface’s private key. Paste the private key stored in the privatekey file here (note that you have to enter the private key itself, not the path to the file containing the private key).

Now you need to configure WireGuard on one of your self-hosted servers. First, generate a private and public key pair on your self-hosted server the same way you generated them on your VPS instance. Then create wg0.conf and enter the following information:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

There are two changes to note. The first change is the IP address. The IP address of the WireGuard interface on your self-hosted server should be different from the IP address of your VPS’s WireGuard interface, but in the same subnet. The second change should be obvious: you will enter the private key you generated on your self-hosted server.

Now both your VPS and your self-hosted servers should have WireGuard configuration files. The next step is to tell them about each other. This is done using the [Peer] directive in the configuration file. Let’s open the wg0.conf file on your VPS and add the [Peer] information for your self-hosted WireGuard interface. The file should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

PublicKey indicates the public key for your self-hosted server. This should be the key stored in the publickey file on your self-hosted server.

AllowedIPs serves two purposes. The first purpose is to inform the VPS what IP addresses can be used as source addresses when communicating with the peer. The line AllowedIPs = 172.16.1.2/32 indicates that the address 172.16.1.2 is the only source IP address allowed in traffic that can be decrypted with the provided public key. The second purpose is to act as a routing table. Traffic destined for 172.16.1.2 will be encrypted with the provided public key so it can be decrypted by the WireGuard interface on your self-hosted server.

Depending on your goals, you may wish to use less strict values for AllowedIPs. However, I prefer to keep them strict when setting up a WireGuard connection to bridge between a VPS and my self-hosted network because I actually configure multiple peers and forward different traffic to different peers.

Now edit the wg0.conf file on your self-hosted WireGuard server so it looks like this:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.1/32
Endpoint = :65000
PersistentKeepalive = 25

The PublicKey directive should obviously have the public key generated on the VPS and the AllowedIPs directive should have the IP address of the VPS’s WireGuard interface. However, there are two new entries.

Endpoint tells the WireGuard interface the IP address to which it should send traffic. This should be set to the static IP address assigned to the VPS. Why wasn’t the Endpoint directive needed in the VPS’s configuration file? Because WireGuard has a neat feature that enables roaming. When a WireGuard interface receives a packet, it notes the source address and uses that address when sending replies. This means that should the IP address of your self-hosted network change (which can happen periodically on networks not assigned a static IP address), the VPS’s WireGuard interface will note the new source address and use it as the new destination address.

PersistentKeepalive indicates how frequently the WireGuard interface should send keepalive messages to its peer. WireGuard is a pretty quiet protocol by default. It abstains from sending unnecessary traffic. While this makes for a more efficient protocol, it causes issues with peers behind a Network Address Translation (NAT) device. When a peer behind a NAT device connects to an external server, the NAT device keeps track of the connection. If no traffic is observed on the connection for a while, the connection times out and the NAT device forgets it. Setting PersistentKeepalive ensures traffic is periodically sent across the connection (once every 25 seconds in the case of the above configuration) and thus the connection will not be forgotten by a NAT device in the middle.

With these configurations in place, we can bring up both WireGuard connections. On the VPS issue the command systemctl start wg-quick@wg0 then issue the same command on your self-hosted server. From your VPS you should be able to ping 172.16.1.2 and from your self-hosted server you should be able to ping 172.16.1.1. If you can’t ping either peer, there’s likely an error in one of your configuration files or a firewall hindering the communications.
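For reference, the bring-up and connectivity test look like this (the wg-quick@wg0 unit name assumes your configuration file is named wg0.conf):

systemctl start wg-quick@wg0   # run on both the VPS and the self-hosted server
ping 172.16.1.2                # from the VPS
ping 172.16.1.1                # from the self-hosted server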

If both peers can ping each other, congratulations, you’ve successfully setup a WireGuard connection and are done with the self-hosted side of your configuration. There remains one last thing to do on your VPS though. At the moment traffic received on the VPS’s static IP address isn’t forwarded through the WireGuard interface and thus will never reach your self-hosted server.

To correct this, you first need to enable packet forwarding on your VPS. On most Linux distributions this is done by adding the lines net.ipv4.ip_forward=1 and, if you want the capability to forward IPv6 packets, net.ipv6.conf.all.forwarding=1 to the sysctl.conf file then issuing the sysctl --system command. Once packet forwarding is enabled, you can start adding packet forwarding capabilities to your WireGuard configuration file.
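As a sketch, the forwarding settings go in /etc/sysctl.conf (or a file under /etc/sysctl.d/, depending on your distribution):

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

After saving the file, run sysctl --system to apply the settings.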

wg-quick recognizes a few useful directives. Two of them that we will use here are PostUp and PostDown. PostUp executes commands immediately after your WireGuard interface is brought up. PostDown executes commands immediately after your WireGuard interface has been torn down. These are convenient moments to either add port forwarding information or remove it.

To start with enable port forwarding by adding the following lines under the [Interface] section of the VPS’s wg0.conf file (as with the Address directive, the PostUp and PostDown directives can appear in the file multiple times so you can split up a lengthy list of commands onto separate lines):


PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These two lines enable packet forwarding from interface eth0 to wg0 and from wg0 to eth0 for established and related connections. Once a connection has been established, all packets for that connection will be properly forwarded by WireGuard to the appropriate peer. Do note that the interface name of your VPS’s network connection may not be eth0. To find the name, use the ip address list command. The interface that has your VPS’s assigned static IP address is the one you want to put in place of eth0. %i will be replaced by the name of your WireGuard interface when wg-quick is run.

If the WireGuard interface isn’t up, there’s no reason to keep forwarding enabled. In my configuration files I prefer to undo all of the changes I made with the PostUp directives when the interface is torn down with PostDown directives. To do this add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These commands look almost identical except they use the -D flag (delete) instead of the -A flag (append).

Now we need to tell the VPS to change the source address of all forwarded packets to the IP address of the WireGuard interface, which can be done by adding the following line:


PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

If the source address isn’t changed on forwarded packets, the responses to those packets won’t be properly forwarded back through the WireGuard interface (and thus will never reach the client that connected to your server). To undo this when the WireGuard interface goes down, add the following line:


PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

Now that forwarding has been enabled we can start adding commands to forward packets to their appropriate destinations. For this example I’m going to assume your self-hosted server is running a Hypertext Transfer Protocol (HTTP) server. HTTP servers listen on two ports. Port 80 is used for unsecured traffic and port 443 is used for secured traffic. To forward traffic on ports 80 and 443 from your VPS to your self-hosted server add the following lines:


PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

The first command will forward all packets received on port 80 of eth0 through the WireGuard interface. The second command changes the destination IP address to the self-hosted WireGuard peer, which will ensure the packet is properly routed to the self-hosted server. Lines three and four do the same for traffic on port 443. To undo these changes when the WireGuard interface goes down, add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

At this point the configuration file on your VPS should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

After the changes are made you’ll need to restart your WireGuard interface on your VPS. Do this by issuing the systemctl restart wg-quick@wg0 command. You may also need to restart the WireGuard interface on your self-hosted server (I’ve had mixed luck with this and usually restart both to be sure). Now you should be able to access your self-hosted HTTP server from your VPS’s static IP address.

I prefer not having to manually start my WireGuard interfaces every time I restart a server. You can make your WireGuard interfaces come online automatically when your system starts by issuing the systemctl enable wg-quick@wg0 command on both your VPS and your self-hosted server.
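The restart and enable steps, for reference:

systemctl restart wg-quick@wg0   # apply the updated configuration (VPS, and the self-hosted server if needed)
systemctl enable wg-quick@wg0    # bring the interface up automatically at boot (both machines)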

Written by Christopher Burg

January 2nd, 2020 at 5:24 pm

Posted in Self-Hosting

Linux on a 2010 Mac Mini Part Two

Last week I mentioned my adventure of installing Linux on a 2010 Mac Mini. Although Ubuntu 18.10 did install and worked for a few days, an update left the system unusable. After an update towards the end of last week, the system would only boot to a black screen. From what I gathered online, I wasn’t the only person who ran into this problem. Anyway, I ended up digging into the matter further.

I once again tried installing Fedora. When I tried to install Fedora 29, I was unable to stop it from booting to a black screen so I decided to try Fedora 28. Using basic graphics mode I was able to get Fedora 28 to boot to the live environment and from there install Fedora on the Mac Mini. After installation I was able to get my Fedora installation to boot. However, when I tried to install the Nvidia driver from RPM Fusion, the system would only boot to a black screen afterwards. I tried installing the Nvidia driver via the negativo17 repository but didn’t expect it to work since the driver distributed from that repository is based on version 418 and the last driver to support the Mac Mini’s GeForce 320M was version 340. Things went as expected. I then tried installing the Nvidia driver manually using a patched version of the 340 driver from here. Unfortunately, that driver doesn’t work with the 4.20 kernel so that was a no go as well.

The reason I hadn’t tried to install the Nvidia driver manually before was that I didn’t want to deal with supporting the setup in the future. As I was trying to install it using the previously linked instructions, I felt justified because the guide isn’t nearly as straightforward as installing the driver from a repository. It became a moot point since manual installation didn’t work, but it did make me think about the fact that any solution I settled upon would need to be maintained, which led me to the idea of using Ubuntu 18.04 LTS. The LTS versions of Ubuntu are supported by Canonical for five years, so if I could get 18.04 installed, the setup would have a decent chance of working for five years.

After passing the kernel the “nouveau.modeset=0” argument, just as I had to do with 18.10, I was able to boot into a live environment and install 18.04 to the hard drive. Likewise, I had to use the “nouveau.modeset=0” argument to boot into the installation. Once I was booted into the installation, I was able to use “sudo apt install nvidia-340” to install the 340 version of the Nvidia driver. After rebooting, everything worked properly. I’m hoping that future updates will be less likely to break this setup since the LTS releases of Ubuntu tend to be more stable than non-LTS releases.
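In short, the working combination on my Mac Mini was Ubuntu 18.04 plus the packaged 340 driver. A minimal sketch of the steps (the exact GRUB menu entry will vary):

# at the GRUB menu, edit the boot entry and append to the line beginning with "linux":
nouveau.modeset=0
# after booting into the installed system:
sudo apt install nvidia-340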

So, yeah, if you want to get a currently supported Linux distro running on a 2010 Mac Mini, take a look at Ubuntu 18.04. It might be your best bet (if it continues to run properly for the next month or so, I’ll say it is your best bet).

Written by Christopher Burg

March 4th, 2019 at 10:00 am

Posted in Technology

Linux on a 2010 Mac Mini

I prefer repurposing old computers to throwing them away. A while ago I acquired a 2010 Mac Mini for $100. It has worked well. I even managed to install macOS Mojave on it using this patcher. However, I wanted to try installing Linux on it.

I first tried installing my go-to distro, Fedora (version 29 to be specific). Unfortunately, I immediately ran into problems. The Mac Mini has an Nvidia card that doesn’t play nicely with the nouveau driver in the kernel so I couldn’t bring up a graphical environment (I just got a black screen with a blinking cursor in the upper left corner). I tried booting the Fedora live distro with the “nouveau.modeset=0” parameter but to no avail.

So I decided to try Ubuntu (18.10). Ubuntu also initially failed to boot, but it at least gave me an error message (related to the nouveau driver). When I booted it with the “nouveau.modeset=0” parameter, I was able to get to the graphical interface and install Ubuntu. After installation I once again booted with the “nouveau.modeset=0” parameter and installed Nvidia’s proprietary driver. After that, the system booted into Ubuntu without any trouble (installing the Nvidia driver also enabled audio output through HDMI).
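If you want to make the parameter permanent rather than typing it at every boot, one common approach (assuming a standard Ubuntu GRUB setup) is to add it to /etc/default/grub:

# /etc/default/grub: append nouveau.modeset=0 to the existing value, for example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"
# regenerate the GRUB configuration so the change takes effect:
sudo update-grub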

If you’re having trouble installing Linux on a 2010 Mac Mini, try Ubuntu and try passing the “nouveau.modeset=0” parameter when booting and you may have better luck.

Written by Christopher Burg

February 27th, 2019 at 10:00 am

Posted in Technology

Some Thoughts After Moving from macOS to Linux

It has been two weeks and change since I moved from my MacBook Pro to a ThinkPad P52s running Linux. Now that I have some real use time under my belt I thought it would be appropriate to give some of my thoughts.

The first thing I’d like to note is that I have no regrets about moving to Linux. My normal routine is to use my laptop at work and whenever I’m away from home, and to use another computer at home (because I’m too lazy to pull my laptop out of my laptop bag every night). The computer I was using at home was a 2010 Mac Mini. I replaced it with my old MacBook Pro when I got my ThinkPad. I realized the other day that I haven’t once booted up my MacBook Pro since I got my ThinkPad. Instead I have been pulling my ThinkPad out of its bag and using it when I get home. At no point have I felt that I need macOS to get something done. That’s the best testament to the transition that I can give. That’s not to say Linux can do everything that macOS can. I’m merely fortunate in that the tools I need are either available on Linux or have a viable alternative.

I’m still impressed with the ThinkPad’s keyboard. One of my biggest gripes about the new MacBooks is the ultra slim keyboards. I am admittedly a bit of a barbarian when it comes to typing. I don’t so much type as bombard my keyboard from orbit. Because of this I like keys with a decent amount of resistance and depth. The keyboard on my 2012 MacBook Pro was good but I’m finding the keyboard on this ThinkPad to be a step up. The keys offer enough resistance that I’m not accidentally pressing them (a problem I have with keyboards offering little resistance) and enough depth to feel comfortable.

With that said, the trackpad is still garbage when compared to the trackpad on any MacBook. My external trackball has enough buttons that I can replicate the gestures I actually used on the MacBook, though, and I still like the TrackPoint enough to use it when I don’t have an external mouse connected.

Linux has proven to be a solid choice on this ThinkPad as well. I bought it with Linux in mind, which means I didn’t get features that aren’t supported in Linux such as the fingerprint reader or the infrared camera for facial recognition (which is technically supported in Linux but tends to show up as the first camera, so apps default to it rather than the 720p webcam). My only gripe is the Nvidia graphics card. The P52s includes both an integrated Intel graphics card and an Nvidia Quadro P500 discrete graphics card, which isn’t supported by the open source Nouveau driver. In order to make it work properly, you need to install Nvidia’s proprietary driver. Once that’s installed, everything works… except secure boot. In order to make the P52s boot after installing the Nvidia driver, you need to go into the BIOS and disable secure boot. I really wish there were a laptop with a discrete AMD graphics card that fit my needs on the market.

One thing I’ve learned from my move from macOS to Linux is just how well macOS handled external monitors. My P52s has a 4k display but all of the external monitors I work with are 1080p. Having screens with different resolutions was never a problem with macOS. On Linux it can lead to some rather funky scaling issues. If I leave the built-in monitor’s resolution at 4k, any app that opens on that display looks friggin’ huge when moved to an external 1080p display. This is because Linux scales up apps on 4k displays by a factor of two by default. Unfortunately, scaling isn’t done per monitor by default, so when the app is moved to the 1080p display, it’s still scaled by two. Fortunately, a 4k display is exactly twice the resolution of a 1080p display, so changing the built-in monitor’s resolution to 1080p when using an external display is an easy fix that doesn’t necessitate everything on the built-in display looking blurry.
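For what it’s worth, the resolution change can be scripted with xrandr on an X11 session. This is only a sketch; the output names (eDP-1 for the built-in panel, HDMI-1 for the external display) are assumptions that will differ from machine to machine:

# list connected outputs and their supported modes
xrandr
# drop the built-in 4k panel to 1080p while an external 1080p display is attached
xrandr --output eDP-1 --mode 1920x1080 --output HDMI-1 --mode 1920x1080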

I’ve been using Gnome for my graphical environment. KDE seems to be the generally accepted “best” desktop environment amongst much of the Linux community these days. While I do like KDE in general, I find that application interfaces are inconsistent whereas Gnome applications tend to have fairly consistent interfaces. I like consistency. I also like that Gnome applications tend to avoid burying features in menus. The choice of desktop environment is entirely subjective but so far my experience using Gnome has been positive (although knowing that I have a ship to which I can jump if that changes is reassuring).

As far as applications go, I used Firefox and Visual Studio Code on macOS and they’re both available on Linux, so I didn’t have to make a change in either case. I was using Mail.app on macOS, so I had to find a replacement e-mail client. I settled on Geary. My experience with Geary has been mostly positive, although I really hate that there is no way, at least that I’ve found, to quickly mark all e-mails as read. I used iCal on macOS for calendaring and Gnome’s Calendar application has been a viable replacement for it. My search for a replacement for my macOS task manager, 2Do, hasn’t gone as well. I’m primarily using Gnome’s ToDo application but it lacks a feature that is very important to me: repeating tasks. I use my task manager to remind me to pay bills. When I mark a bill as paid, I want my task manager to automatically create a task for next month. 2Do does this beautifully. I haven’t found a Linux task manager that can do this though (and in all fairness, Apple’s Reminder.app doesn’t do this well either). I was using Reeder on macOS to read my RSS feeds. On Linux I’m using FeedReader. Both work with Feedbin and both crash at about the same rate. I probably shouldn’t qualify that as a win but at least it isn’t a loss.

The biggest change for me has probably been moving from VMWare Fusion to Virtual Machine Manager, which utilizes libvirt (and thus KVM and QEMU). Virtualizing Linux with libvirt is straightforward. Virtualizing Windows 10 wasn’t straightforward until I found the SPICE Windows guest tools. Once I installed that guest tool package, the niceties that I came to love about VMWare Fusion, such as shared pasteboards and automatically changing the resolution of the guest machine when the virtual machine window is resized, worked. libvirt also makes it dead simple to set a virtual machine to automatically start when the system boots.
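Setting a guest to start at boot, for example, is a single virsh command (the domain name win10 here is just a placeholder for whatever your virtual machine is called):

# mark the guest for automatic startup when libvirtd starts
virsh autostart win10
# and to undo it later
virsh autostart --disable win10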

One major win for Linux over macOS is software installation. Installing software from the Mac App Store is dead simple, but installing software from other sources isn’t as nice of an experience. Applications installed from other sources have to include their own update mechanism. Most have taken the road of including their own embedded update capabilities. While these work, they can usually only run when the application is running, so if you haven’t used the application for some time, the first thing you end up having to do is update it. Lots of packages still don’t include automatic update capabilities, so you have to manually check for new releases. Oftentimes these applications are available via MacPorts or Homebrew. On the Linux side of things almost every software package is available via the distro’s package manager, which means installation and updates are handled automatically. I prefer this over the hodgepodge of update mechanisms available on macOS.

So in closing I’m happy with this switch, especially since I didn’t have to drop over $3,000 on a laptop to get what I wanted.

Written by Christopher Burg

November 21st, 2018 at 11:00 am

Posted in Side Notes

Jumping Ship

I’ve been running Apple computers for more than a decade now. While I really like macOS, anybody who knows me knows that I’ve been less than enthusiastic with the direction Apple has taken on the hardware front. My biggest gripe with Apple hardware is that it can no longer be serviced. My 2012 MacBook Pro is probably one of the easiest laptops that I’ve ever worked on. The entire back pops off and all of the frequently replaced parts are readily accessible. Part of the reason that I have been able to run that computer since 2012 is because I’ve been able to repair or upgrade components when necessary.

I usually run my laptops for four or five years. I’ve been running that MacBook Pro for six years. I was ready to upgrade last year but Apple had no laptops that appealed to me, so I decided to wait a year to see if the situation would improve. When Apple announced its 2018 MacBook Pro line, it had everything I hated. All of the components, including the RAM and SSD, are soldered to the main board. Since the MacBook Pro line can no longer be upgraded, I’d have to order the hardware that I’d want to use for the next four or five years up front, which would cost about $3,200. Worse yet, when something broke (all components fail eventually), I’d have to pay Apple an exorbitant fee to fix it. And if that weren’t bad enough, the 2018 MacBook Pro still has that god awful slim keyboard. While Apple has attempted to improve the reliability of that keyboard by including a rubber membrane under the keys, typing on it is, at least in my opinion, a subpar experience.

I also have some concerns about Apple’s future plans. One of my biggest worries is the rumor that Apple is transitioning its Macs to ARM processors. ARM processors are nice, but I rely on virtualized x86 environments in my day to day work. If Apple transitioned to ARM processors, I wouldn’t be able to utilize my x86 virtual environments (virtualization turns into emulation when the guest and host architectures differ, and emulation always involves a performance hit and usually a lot of glitches), which means I wouldn’t be able to do my work. I’m also a bit nervous about the rumors that Apple is planning to make app notarization mandatory in a future macOS release. Much of the software I rely on isn’t signed and probably never will be. Additionally, building and testing iOS software is a pain in the ass because even test builds need to be signed before they’ll work on an iOS device (anybody who has run into code signing problems with Xcode will tell you that resolving those problems is often a huge pain in the ass) and I don’t want to bring that “experience” to my other development work. While I would never jump ship over rumors, when there are already reasons I want to jump ship, rumors act as additional low level incentives.

Since Apple didn’t have an upgrade that appealed to me and I’m not entirely comfortable with the rumors about the direction the company may be going, I decided to look elsewhere. I’ve been running Linux in some capacity for longer than I’ve been running Apple computers. Part of my motivation for adopting macOS in the first place was that I wanted a UNIX system on my laptop (Linux on laptops back then was a dumpster fire). So when I decided to jump ship, Linux became the obvious choice, which meant I was looking at laptops with solid Linux support. I also wanted a laptop that was serviceable. I found several solid options and narrowed it down to a Lenovo ThinkPad P52s because it was certified by both Red Hat and Ubuntu, sanely priced, and serviceable (in fact Lenovo publishes material that explains how to service it).

Every platform involves trade-offs. With the exception of Apple’s trackpads, every trackpad that I’ve used has been disappointing. The ThinkPad trackpad is no different in this regard. However, the ThinkPad line includes a TrackPoint, which I’ve always preferred as a mobile mouse solution to trackpads (I still miss Apple’s trackpad gestures though). There also isn’t a decent to do application on Linux (I use 2Do on both iOS and macOS and nothing on Linux is comparable), and setting up Linux isn’t anywhere near as streamlined as setting up a Mac (which involves almost no setup). With that said, I usually use an external trackball, so the quality of the trackpad isn’t a big deal. My to do information syncs with my Nextcloud server, so I can use its web interface when on my laptop (and continue to use 2Do on my iPhone). And since I chose a certified laptop, setting up Linux wasn’t too difficult (the hardest part was setting up Nvidia’s craptastic Linux driver).

The upside to the transition, besides gaining serviceability, is first and foremost the cost. The ThinkPad P52s is a pretty cost effective laptop and I found a 20 percent off coupon code, which knocked the already reasonable price down further. Since neither the RAM nor the SSD in the P52s is soldered to the main board, I was able to save money by buying both separately and installing them when the computer arrived (which is exactly what I did with all of my Macs). In addition to the hardware being cheaper, I was also able to save money on virtualization software. I use virtualization software every day and on macOS the only decent solution for me was VMWare Fusion (Parallels has better Windows support than Fusion but no serious Linux support, which I also require). Fedora, the Linux distribution I settled on (I run CentOS on my servers, so I opted for the closest thing that includes more cutting edge software), comes with libvirt installed. After spending a short while familiarizing myself with the differences between VMWare and libvirt, I can say that I’m satisfied with libvirt. It’s better in some regards, worse in others, and pretty much the same otherwise (as far as the user experience goes; underneath it’s far different).

I also gained a few things on the hardware side. The P52s has two USB-C and two USB-A ports (all USB 3). My MacBook Pro only had two USB-A ports and the new MacBook Pros only have USB-C ports. All of my USB devices use USB-A, so I’d need a bunch of dongles if I didn’t have USB-A ports (not a deal breaker but annoying nonetheless). In addition to being a very good mobile keyboard, the P52s keyboard also has a 10-digit keypad, which no Mac laptop currently has. Like USB-A ports, the lack of a 10-digit keypad isn’t a deal breaker in my world but its inclusion is always welcome. If that weren’t enough, the keyboard also includes honest to god function keys instead of a TouchBar (as somebody who uses Vim a lot, I find the lack of a physical escape key annoying).

My transition was relatively painless because I keep all of my data on my own servers. I didn’t have to spend hours trying to figure out how to pull data off of iCloud so I could use it on Linux. All I had to do was log into my Nextcloud instance and all of my calendar, contact, and to do information was synced to the laptop. The same was true of my e-mail. In anticipation of my move I also changed password managers from 1Password to a self-hosted instance of Bitwarden (1Password is overall a better experience but it lacks a native Linux app, so I’d have been stuck with moving to a subscription plan to utilize a browser plugin that would deliver the same experience as Bitwarden). Keeping your data off of proprietary platforms makes moving between platforms easier. Likewise, keeping your data in open standards makes moving easier. I primarily rely on text files instead of word processor files (I use Markdown or LaTeX for most formatting) and most of my other data is stored in standardized formats (PNG or JPEG for images, ePub or PDF for documents, etc.).

Although I won’t give a final verdict until I’ve used this setup for a few months, my initial impressions of moving from macOS to Linux are positive. The transition has been relatively painless and I’ve remained just as productive as I was on macOS.

Written by Christopher Burg

October 30th, 2018 at 11:00 am

Posted in Side Notes
