Bypassing Online Censorship

This post reiterates a theme this blog has had for a long time: if you don’t own your publishing platform, you’re at the mercy of whoever does. I’m bringing this topic up again for two reasons. The first is to respond to the number of messages friends keep sending me about individuals or groups they follow, all of whom express opinions not in line with the party in power, being removed from the likes of Twitter and Facebook. The second is to give some historical context about the nature of avoiding censorship.

Whenever somebody alerts me that an anarchist, libertarian, Austrian economist, or any other individual outside of the mainstream gets banned from Twitter or Facebook, I roll my eyes. Of course they were removed. Twitter, Facebook, Reddit, Instagram, etc. are all services that depend on having a large user base. Any online service that depends on having a large user base is going to cater to the mainstream. Moreover, the mainstream attitude is very much in favor of censorship. In order to cater to the mainstream, these services will remove anybody who expresses ideals outside of the mainstream.

Censorship isn’t a new phenomenon. I will actually argue that it’s the norm rather than the exception. The concept of free speech as we understand it is the product of Enlightenment thinking. And while the Enlightenment was popular throughout Europe in the 17th and 18th centuries, it wasn’t as popular throughout the rest of the world and its popularity has waned significantly in Europe. But even Enlightenment thinkers often supported censorship of ideas they found especially distasteful.

Just as censorship isn’t a new phenomenon, neither is bypassing censorship. Anarchists are frequent targets of censorship. Not surprisingly, many governments overtly censored anarchists, but even private publishers were often unwilling to publish and distribute material written by anarchists. As a result, zines became a popular way for anarchists to publish and distribute their writings. Under the Soviet Union, any literature deemed counterrevolutionary (in other words, any literature that showed the communist leadership as anything other than saints) was typically censored. The heavy-handed censorship of the Soviet Union gave rise to Samizdat.

Both zines and Samizdat material were self-published works. The author or one of their associates would create copies using whatever means were available, usually photocopiers or hidden printing presses. Those copies were then distributed by hand, often circulating from person to person. Zines and Samizdat material were typically crude because they were created with no budget and without the benefit of sophisticated printing equipment. Neither usually circulated far. A handful of copies would usually be traded amongst a handful of like-minded individuals.

Today’s world has analogs to zines and Samizdat. Self-hosted services such as Mastodon and Element allow like-minded individuals to communicate with each other via services that they can control. Peer-to-peer services such as RetroShare allow each individual to completely control their own node. It’s also possible to self-host a website. This blog is hosted on a server in my basement. There are also old-school methods such as private e-mail lists that allow anybody with an e-mail client to connect to an e-mail server hosted by a like-minded individual.

The most common criticism of these services is that not everybody is on them. While true, this is a feature, not a bug, for anybody interested in distributing ideas outside of the mainstream. Do you think your grandparents are going to enjoy or be convinced by your radical posts on Facebook? If you do, you’re a fool. The only result of posting your non-mainstream ideas to centralized services used by the masses is their removal because eventually Karen is going to see them, she is going to be offended by them, and she is going to report them. Shortly after she reports them, they will be removed because the service needs her (or more specifically the masses who think like her) more than it needs you.

Dangers of Closed Platforms

I advocate for open decentralized platforms like Mastodon, Matrix, and PeerTube over closed centralized platforms like Facebook, Twitter, and YouTube. While popular open platforms don’t have the reach and user base of popular closed platforms, they also lack many of the dangers.

Two recent stories illustrate some of the bigger dangers of closed platforms. The first was Meta (the new name Facebook chose in its attempt to improve its public image) announcing that it will demand a near 50 percent cut of all digital goods sold on its platform:

Facebook-parent Meta is planning to take a cut of up to 47.5% on the sale of digital assets on its virtual reality platform Horizon Worlds, which is an integral part of the company’s plan for creating a so-called “metaverse.”

Before Apple popularized completely locked down platforms, software developers were able to sell their wares without cutting in platform owners. For example, if you sold software that ran on Windows, you didn’t have to hand over a percentage of your earnings to Microsoft. This was because Windows, although a closed source platform, didn’t restrict users’ ability to install whatever software they wanted from whichever source they chose. Then Apple announced the App Store. As part of that announcement Apple noted that the App Store would be the only way (at least without jailbreaking) to install additional software on iOS devices and that Apple would claim a 30 percent cut of all software sold on the App Store.

Google announced a very similar deal for Android devices, but with a few important caveats. The first caveat was that side loading, the act of installing software outside of the Google Play Store, would be allowed (unless a device manufacturer disallowed it). The second caveat was that third-party stores like F-Droid would be supported. The third caveat was that since Android is an open source project, even if Google did away with the first two caveats, developers would be free to fork Android and release versions that restored the functionality.

The iOS model favors the platform owner over both third-party software developers and users. The Android model at least cuts third-party software developers and users a bit of slack by giving them alternatives to the platform owner’s officially supported app store (although Google makes an effort to ensure its Play Store is favored over side loading and third-party stores). Meta has chosen the Apple model, which means anybody developing software for Horizon Worlds will be required to hand nearly half of their earnings to Meta. This hostility to third-party developers and users is compounded by the fact that Meta could at any point change the rules and demand an even larger cut.

The second story illustrating the dangers of closed centralized platforms is Elon Musk’s attempt to buy Twitter:

Elon Musk on Wednesday offered to personally acquire Twitter in an all-cash deal valued at $43 billion. Musk laid out the terms of the proposal in a letter to Twitter Chairman Bret Taylor that was reproduced in an SEC filing.

This announcement has upset a lot of Twitter users (especially those who oppose the concept of free speech, since Musk publicly supports it). Were Twitter an open decentralized platform, Musk’s announcement would have less relevance. For example, if Twitter were a federated social media service like Mastodon, users on Twitter could simply migrate to another instance. Federation would allow them to continue interacting with Twitter’s users (unless Twitter blocked federation, of course), but from an instance not owned and controlled by Musk. But Twitter isn’t open or decentralized. Whoever owns Twitter gets to make the rules, and users have no choice but to accept those rules (or migrate to a completely different platform and deal with the Herculean challenge of convincing their friends and followers to migrate with them).

I often point out that if you don’t own a service, you’re at the mercy of whoever does. As an end user you have no power on closed platforms like iOS and Twitter. With open platforms you always have the option to self-host or to find an instance run in a manner you find agreeable.

Server Migration Complete

When I first started self-hosting my blog, I was using a 2010 Mac mini running Mac OS X 10.6. When Apple released 10.7, it did away with the server edition and replaced it with an app that didn’t work. That forced me to migrate my blog to Linux. I used Ubuntu Server LTS and it worked very well. However, I didn’t utilize any automation, so whenever I wanted to do maintenance on or rebuild my server, I had to do so manually. This meant that maintenance didn’t get done when life got too busy. Over the last year I’ve been slowly migrating my manually built infrastructure to fully automated builds using Ansible. While I have a number of grievances with Ansible (YAML is an awful automation language and whoever decided to use it should be crucified), it is the least awful automation system that I’ve tested.

As with any major project I started with the easy things. First I automated building my DHCP and DNS servers. Then I moved on to automating the building of my VPN, NAS, and so on. I finally got around to writing an Ansible playbook to build this blog.

Previously I followed conventional wisdom that servers should be run on long-term support distributions. But I started to question whether that wisdom was appropriate for what I do. Whenever I had to upgrade the server running this blog from one Ubuntu LTS to another, the upgrade itself went well. But I always ended up having to manually fix a number of things that broke due to the significance of the changes that occurred between the two LTS releases. Those breakages often weren’t trivial to fix. They would eat up a lot of my time. So I started to experiment with more bleeding edge distros. I settled on Fedora Server since I already run Fedora on my laptop and have become familiar with it. Major version upgrades haven’t resulted in major breakages. When something does break, it usually takes a minute or two for me to fix.

So this blog is now running on Fedora Server 34. And I can rebuild it by issuing a single command.

I’m guessing there will still be a few issues that need to be resolved. I changed quite a bit on the back end so I’m expecting a few breakages here and there and I’m sure I’ll have to make a few performance tweaks (not that you’ll likely notice issues regarding performance since my Internet connection sucks). But the site largely appears functional.

Silence!

The 2020 presidential election turned out exactly as I and anybody else who has witnessed two children fighting over a toy expected. The only thing missing was Biden giving Trump a wedgie and calling him a poophead after his victory was certified.

What has been far more interesting to me is the response by our technology overlords. It seems that online service providers are participating in a competition to see who can best signal their hatred of Trump and the Republican Party. MSN is acting as the high score record keeper and listing every online service that has banned Trump or anything related to the Republican Party. Some of the entries were predictable. For example, Facebook and Twitter both banned Trump and Reddit announced that it banned /r/DonaldTrump.

Some of the entries are a bit more interesting (although still not surprising). Apple and Google both banned Parler (basically a shittier Facebook marketed at Republicans) from their respective app stores. Then Amazon, not wanting to be shown up, announced it had banned Parler from using its AWS services (which Parler stupidly chose as its hosting provider). For over a decade I’ve been telling anybody who will listen about the dangers of relying on tightly controlled platforms and other people’s infrastructure (often referred to as “the cloud”). These announcements by Apple, Google, and Amazon are why.

If you use an iOS device, you are stuck playing by Apple’s rules. If Apple says you’re no longer allowed to install an app to access an online service, then you’re no longer allowed to install an app that accesses that online service. The same is true, although to a lesser extent (for now), with Android. Although Android is open source, Google exercises control over the platform through access to its proprietary apps. If a device manufacturer wants to include Gmail, Google Maps, and other proprietary Google apps on their Android devices, they need to agree to Google’s terms of service. The saving grace with Android is that its open source nature allows unrestricted images such as LineageOS to be released, but they generally only work on a small list of available Android devices and installing them is sometimes challenging. I’ve specifically mentioned iOS and Android, but the same is true for any proprietary platform including Windows and macOS. If Microsoft and Apple want to prohibit an app from running on Windows and macOS, they have a number of options available to them, including adding the app to their operating systems’ built-in anti-malware tools. The bottom line is that if you’re running a proprietary platform, you don’t own your system.

Anybody who has been reading this blog for any length of time knows that I self-host my online services. This blog for instance is running on a computer in my basement. Self-hosting comes with a lot of downsides, but one significant upside is that the only person who can erase my online presence is me. If you’re relying on a third-party service provider such as Amazon, Digital Ocean, GoDaddy, etc., your online service is entirely at their mercy. Parler wasn’t the first service to learn this lesson the hard way and certainly won’t be the last.

I’ve had to think about these things for most of my life because my philosophical views have almost always been outside the list of acceptable ideas. I developed absolutist views on gun rights, free speech, and the concept of an accused individual being innocent until proven guilty beyond a reasonable doubt in school and continue to maintain those views today. Opposing all forms of gun and speech restrictions doesn’t make you popular in K-12 and especially doesn’t make you popular in college. Being the person who wants a thorough investigation to determine guilt during a witch hunt generally only results in you being called a witch too. However, that list of acceptable ideas has continuously shortened throughout my life. Absolutist or near absolutist views on free speech were common when I was young. They became less common when I was in college, but the general principle of free speech was still espoused by the majority. Today it seems more common to find people who actually believe words can be dangerous and demand rigid controls on speech. It also seems that any political views slightly right of Leninism have been removed from the list.

If you hold views that are outside of the list of acceptable ideas or are in danger of being removed from the list, you need to think about censorship avoidance. If you haven’t already started a plan to migrate away from proprietary platforms, now is a good time to start. Likewise, if you administer any online services and haven’t already developed a plan to migrate to self-hosted infrastructure, now is actually at least a year too late, but still a better time to start than tomorrow. Our technology overlords have made it abundantly clear that they will not allow wrongthink to be produced or hosted on their platforms.

Avoiding Censorship Online

Facebook, Twitter, Reddit, and most other mainstream social media platforms have pledged to increase the speech they censor. This has led many people, especially those most likely to be censored, to seek greener pastures. They usually tell anybody who will listen to flock to alternate social media platforms such as MeWe, Minds, and Parler. Of course this is an exercise in trading one centrally controlled platform for another. This means users are still at the mercy of the individuals who control the services. Parler has already walked back its commitment to absolute free speech and other alternate platforms will likely do the same.

So is the concept of free speech online hopeless? Not at all. However, you have to take a page from radicals throughout history. If you look at a lot of radicals, they generally owned and operated their own newspapers, magazines, journals, and periodicals. Benjamin Franklin bought a newspaper, Benjamin Tucker printed his own periodical, egoists printed their own journal, and Peter Kropotkin published his own journal. By owning and operating their own print media they were able to say whatever they wanted whenever they wanted.

Today’s Internet has become centralized, corporatized, and sanitized, but that wasn’t always the case. It also doesn’t have to be the case. Anybody can run a server. This blog is hosted on a server sitting in my basement. In fact I self-host most of my online services. This gives me absolute control over my platforms. I can say whatever I want whenever I want.

If you want to express yourself freely, you need to take a page from radicals of yesteryear and own and operate your own platform. Fortunately, it’s easier today than ever before. There are a lot of self-hosted platforms available. For example, if you want something akin to Twitter, there’s Mastodon. If you want something akin to Facebook, there’s Friendica and diaspora*. If you want chatroom functionality, there’s Matrix (which also supports end-to-end encryption so you can speak freely on other people’s servers). In fact there are a ton of self-hosted platforms that cover almost anything you could need. What’s even better is that many of the self-hosted social media platforms can be federated, which means every person in a group could run their own instance and interconnect them.

To quote Max Stirner, “Whoever will be free must make himself free. Freedom is no fairy gift to fall into a man’s lap. What is freedom? To have the will to be responsible for one’s self.”

Advertising Self-Hosted Services

The ceaseless lockdowns that many states are experiencing have led to the inevitable pushback. Protests have already taken place in a number of states and more protests are being planned. Unfortunately, many of these protests are being organized on Facebook, and Facebook has decided to remove them.

It probably doesn’t surprise anybody that I have friends interested in or participating in the protests in Minnesota. When I saw them posting on social media saying that the latest protest event had been removed, I saw a number of people recommend other centralized social media sites such as MeWe and Minds. I have a tradition when I see such recommendations: I point out that jumping from one centralized social media site to another simply kicks the can down the road because the new site could decide to implement restrictions at any point, and that the only long-term solution is using self-hosted services to advertise events. The usual rebuttal I receive is a variation of “we have to post the event where the people are” (falsely implying that many people use MeWe or Minds). Apparently there is a lot of misunderstanding about using self-hosted services to organize events.

When you use a self-hosted service, you don’t have to isolate it from everything else. You can advertise your self-hosted service on Facebook, Twitter, and other centralized social media sites. The point of a self-hosted service is to be authoritative and under your sole control. When you share a link to your self-hosted service, you note that the website you’re hosting is the place to go for official information. If Facebook removes your post, it doesn’t matter because the people who have already seen it will know where to go for updates to your event and because Facebook cannot remove your website. The official information still exists and can be shared with interested parties.

Getting a Static IP Address with a Cheap VPS and WireGuard

I prefer to host my own infrastructure. This website is on a server in my home, as are my e-mail, Nextcloud, and GitLab servers. I also encourage those interested to do the same. However, many Internet Service Providers (ISPs) are reluctant to issue static IP addresses to residential customers and often block commonly used ports (especially port 25, which is needed if you want to self-host an e-mail server). While Dynamic Domain Name System (DDNS) services mitigate the static IP address problem, they’re not perfect. Moreover, they won’t open ports that your ISP is blocking.

Fortunately, there’s another option. Many Virtual Private Server (VPS) providers happily issue static IP addresses for VPS instances. They also tend to be much less apt to block ports (although port 25 is commonly blocked and often requires submitting a support ticket and jumping through hoops to get unblocked). Wouldn’t it be convenient to use a VPS as a front end for your self-hosted network?

One of the things I have played around with a lot recently is WireGuard. WireGuard is a new Virtual Private Network (VPN) technology that is both easy to set up and performant. In this guide I will explain how to use WireGuard to bridge your self-hosted network with a VPS (in my case I’m using this setup with one of my servers through a $5 per month DigitalOcean droplet) so that you can access your servers from the VPS’s static IP address.

This guide will not explain how to install WireGuard or the specifics of WireGuard since that information is better provided by WireGuard’s official website. What this guide will explain is how to create a WireGuard configuration file that can be used via the wg-quick command to forward traffic received via the VPS’s static IP address to individual servers on your self-hosted network.

As a quick aside, the reason I opted to use a configuration file and the wg-quick command instead of setting up a WireGuard interface and iptables on the Linux system directly is because one of my goals is portability. It’s a trivial matter to spin up a new VPS, install WireGuard, upload your configuration file, and run wg-quick. This way if you lose access to your VPS, you can get everything up and running again by spinning up a new VPS and updating your DNS records.
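As a rough sketch of what that recovery looks like in practice (the host name and paths here are placeholders, not values from my setup), the whole process is only a few commands:

# On the new VPS, install WireGuard (package name varies by distribution)
dnf install wireguard-tools    # or: apt install wireguard

# From your local machine, upload your saved configuration file
scp wg0.conf root@new-vps:/etc/wireguard/wg0.conf

# Back on the VPS, bring the tunnel up
wg-quick up wg0

After that, all that remains is pointing your DNS records at the new VPS’s static IP address.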

Once your VPS is up and running and WireGuard is installed, go to the /etc directory and, if it doesn’t already exist, create a wireguard directory. This is the directory where your configuration file will be stored.

You will need a private and public key, both of which can be generated via the wg command. First issue the umask 077 command. This will provide read and write access only to the user account creating the files (which is likely root). Now issue the wg genkey | tee privatekey | wg pubkey > publickey command. The first part of this command generates your private key and pipes it into the tee command, which writes the key to a file named privatekey and pipes it into the wg pubkey command, which derives a public key from the private key and writes that public key to a file named publickey.
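Collected into one place, the key generation steps just described look like this:

cd /etc/wireguard
umask 077                                            # new files readable and writable only by this user
wg genkey | tee privatekey | wg pubkey > publickey   # generate the key pair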

At this point you should have two files in /etc/wireguard, privatekey and publickey. Now it’s time to create your initial configuration file. Open your text editor of choice and save the file as wg0.conf (the file name doesn’t have to be wg0, but the extension has to be .conf). You’ll start by adding the following lines:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Interface] indicates that the lines that follow are related to the creation of a WireGuard interface.

Address indicates the IP address that will be assigned to the WireGuard interface. Note that you can assign multiple IP addresses to a WireGuard interface so if you also wanted to give it an IPv6 address you could add the line Address = fd00:cafe:dead:babe::1/64. If you’d rather have a single line containing all of the assigned IP addresses you can use a comma separated list such as Address = 172.16.1.1/16, fd00:cafe:dead:babe::1/64 (personally I find adding a single item per line more readable so I will stick to that convention for this guide). Also note that the subnet of the WireGuard interface’s IP address should differ from the one you use on your home network. In this example I’m using a 172.16.x.x address because 192.168.x.x and 10.x.x.x addresses are the ones I most commonly encounter on home and corporate networks.

SaveConfig = false will prevent WireGuard from automatically saving additional information to the wg0.conf file. I prefer this because otherwise WireGuard has a habit of generating a new fe80:: IPv6 address and saving it to wg0.conf every time the interface is brought up with the wg-quick command. Moreover, if SaveConfig is set to true any comments you add to the wg0.conf file will be erased when the wg-quick command is called (I tend to heavily comment my files so this is a big issue to me).

ListenPort indicates what port you want the WireGuard interface to be accessible on. The default listen port for WireGuard is 51820. However, I actually have my VPS’s WireGuard interface set up to forward traffic received on port 51820 to another WireGuard server on my home network, so I’m using a non-default port in this example.

PrivateKey indicates the interface’s private key. Paste the private key stored in the privatekey file here (note that you have to enter the private key itself, not the path to the file containing the private key).

Now you need to configure WireGuard on one of your self-hosted servers. First, generate a private and public key pair on your self-hosted server the same way you generated them on your VPS instance. Then create wg0.conf and enter the following information:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

There are two changes to note. The first change is the IP address. The IP address of the WireGuard interface on your self-hosted server should be different from (but in the same subnet as) the IP address of your VPS’s WireGuard interface. The second change should be obvious. You will enter the private key you generated on your self-hosted server.

Now both your VPS and your self-hosted servers should have WireGuard configuration files. The next step is to tell them about each other. This is done using the [Peer] directive in the configuration file. Let’s open the wg0.conf file on your VPS and add the [Peer] information for your self-hosted WireGuard interface. The file should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

PublicKey indicates the public key for your self-hosted server. This should be the key stored in the publickey file on your self-hosted server.

AllowedIPs serves two purposes. The first purpose is to inform the VPS what IP addresses can be used as source addresses when communicating with the peer. The line AllowedIPs = 172.16.1.2/32 indicates that the address 172.16.1.2 is the only source IP address accepted on traffic from the peer with the provided public key. The second purpose is to act as a routing table. Traffic destined for 172.16.1.2 will be encrypted for the peer with the provided public key so it can be decrypted by the WireGuard interface on your self-hosted server.

Depending on your goals, you may wish to use less strict values for AllowedIPs. However, I prefer to keep them strict when setting up a WireGuard connection to bridge between a VPS and my self-hosted network because I actually configure multiple peers and forward different traffic to different peers.
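As a sketch of what that multi-peer arrangement can look like (the second peer and its address are hypothetical, not part of this guide’s setup), the VPS’s configuration file simply gains additional [Peer] sections, each with a /32 AllowedIPs entry so traffic routes only to the intended server:

[Peer]
# Self-hosted web server
PublicKey =
AllowedIPs = 172.16.1.2/32

[Peer]
# A second self-hosted server, e.g. for e-mail
PublicKey =
AllowedIPs = 172.16.1.3/32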

Now edit the wg0.conf file on your self-hosted WireGuard server so it looks like this:


[Interface]
Address = 172.16.1.2/16
SaveConfig = false
PrivateKey =

[Peer]
PublicKey =
AllowedIPs = 172.16.1.1/32
Endpoint = <VPS static IP>:65000
PersistentKeepalive = 25

The PublicKey directive should obviously have the public key generated on the VPS and the AllowedIPs directive should have the IP address of the VPS’s WireGuard interface. However, there are two new entries.

Endpoint tells the WireGuard interface the address to which it should send traffic. This should be set to the static IP address assigned to the VPS. Why wasn’t the Endpoint directive needed in the VPS’s configuration file? Because WireGuard has a neat feature that enables roaming. When a WireGuard interface receives a packet, it notes the source address and uses that address when sending replies. This means that should the IP address of your self-hosted network change (which can happen periodically on networks not assigned a static IP address), the VPS WireGuard interface will note the new source address and use it as the new destination address.

PersistentKeepalive indicates how frequently the WireGuard interface should send keep-alive messages to its peer. WireGuard is a pretty quiet protocol by default. It abstains from sending unnecessary traffic. While this makes for a more efficient protocol, it causes issues with peers behind a Network Address Translation (NAT) device. When a peer behind a NAT device connects to an external server, the NAT device keeps track of the connection. If no traffic is observed on the connection for a while, the connection is timed out and the NAT device forgets it. Setting PersistentKeepalive ensures traffic is periodically sent (once every 25 seconds in the case of the above configuration) across the connection, so the connection will not be forgotten by a NAT device in the middle.

With these configurations in place, we can bring up both WireGuard connections. On the VPS issue the command systemctl start wg-quick@wg0 then issue the same command on your self-hosted server. From your VPS you should be able to ping 172.16.1.2 and from your self-hosted server you should be able to ping 172.16.1.1. If you can’t ping either peer, there’s likely an error in one of your configuration files or a firewall hindering the communications.
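If you want the tunnel to survive reboots, systemd can handle that as well. This assumes your distribution ships the wg-quick systemd unit (most packaged versions of WireGuard do):

systemctl enable --now wg-quick@wg0   # start the tunnel now and on every boot
wg show                               # verify the interface is up and shows a recent handshake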

If both peers can ping each other, congratulations, you’ve successfully set up a WireGuard connection and are done with the self-hosted side of your configuration. There remains one last thing to do on your VPS though. At the moment traffic received on the VPS’s static IP address isn’t forwarded through the WireGuard interface and thus will never reach your self-hosted server.

To correct this, you first need to enable packet forwarding on your VPS. On most Linux distributions this is done by adding the lines net.ipv4.ip_forward=1 and, if you want the capability to forward IPv6 packets, net.ipv6.conf.all.forwarding=1 to the sysctl.conf file then issuing the sysctl --system command. Once packet forwarding is enabled, you can start adding packet forwarding capabilities to your WireGuard configuration file.
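On distributions using systemd it’s a bit tidier to drop these settings into their own file under /etc/sysctl.d/ rather than editing sysctl.conf directly (the file name here is arbitrary); the settings are the same ones described above:

# /etc/sysctl.d/99-wireguard.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1

Running sysctl --system afterwards applies the settings without a reboot.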

wg-quick recognizes a few useful directives. Two of them that we will use here are PostUp and PostDown. PostUp executes commands immediately after your WireGuard interface is brought up. PostDown executes commands immediately after your WireGuard interface has been torn down. These are convenient moments to either add port forwarding information or remove it.

To start, enable packet forwarding between interfaces by adding the following lines under the [Interface] section of the VPS’s wg0.conf file (as with the Address directive, the PostUp and PostDown directives can appear in the file multiple times so you can split up a lengthy list of commands onto separate lines):


PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These two lines enable packet forwarding from interface eth0 to wg0 and from wg0 to eth0 for established and related connections. Once a connection has been established, all packets for that connection will be properly forwarded by WireGuard to the appropriate peer. Do note that the interface name of your VPS’s network connection may not be eth0. To find the name, use the ip address list command. The interface that has your VPS’s assigned static IP address is the one you want to put in place of eth0. %i will be replaced by the name of your WireGuard interface when wg-quick is run.

If the WireGuard interface isn’t up, there’s no reason to keep forwarding enabled. In my configuration files I prefer to undo all of the changes I made with the PostUp directives when the interface is torn down with PostDown directives. To do this add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

These commands look almost identical except they use the -D flag (delete) instead of the -A flag (append).

Now we need to tell the VPS to change the source address of all forwarded packets to the IP address of the WireGuard interface, which can be done by adding the following line:


PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

If the source address isn’t changed on forwarded packets, the responses to those packets won’t be properly forwarded back through the WireGuard interface (and thus will never reach the client that connected to your server). To undo this when the WireGuard interface goes down, add the following line:


PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

Now that forwarding has been enabled we can start adding commands to forward packets to their appropriate destinations. For this example I’m going to assume your self-hosted server is running a Hypertext Transfer Protocol (HTTP) server. HTTP servers typically listen on two ports: port 80 for unsecured traffic and port 443 for secured (HTTPS) traffic. To forward traffic on ports 80 and 443 from your VPS to your self-hosted server add the following lines:


PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

The first command allows new connections arriving on port 80 of eth0 to be forwarded through the WireGuard interface. The second command changes the destination IP address to that of the self-hosted WireGuard peer (172.16.1.2 in this example), which will ensure the packet is properly routed to the self-hosted server. Lines three and four do the same for traffic on port 443. To undo these changes when the WireGuard interface goes down, add the following lines:


PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

At this point the configuration file on your VPS should look like this:


[Interface]
Address = 172.16.1.1/16
SaveConfig = false
ListenPort = 65000
PrivateKey =

PostUp = iptables -A FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostUp = iptables -A FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostUp = iptables -t nat -A POSTROUTING -o %i -j MASQUERADE;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;
PostDown = iptables -D FORWARD -i %i -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT;

PostDown = iptables -t nat -D POSTROUTING -o %i -j MASQUERADE;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.16.1.2;

PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT;
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.2;

[Peer]
PublicKey =
AllowedIPs = 172.16.1.2/32

After the changes are made you’ll need to restart your WireGuard interface on your VPS. Do this by issuing the systemctl restart wg-quick@wg0 command. You may also need to restart the WireGuard interface on your self-hosted server (I’ve had mixed luck with this and usually restart both to be sure). Now you should be able to access your self-hosted HTTP server from your VPS’s static IP address.
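One quick way to check that forwarding works end to end is to request your site’s response headers through the VPS’s public address. The IP address below is a documentation placeholder; substitute your VPS’s actual static IP:

```shell
# Fetch only the HTTP response headers through the VPS's public IP.
# Any response from your web server (200, 301, etc.) confirms packets are
# traversing the WireGuard tunnel to the self-hosted server and back.
curl -I http://203.0.113.10/
```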

I prefer not having to manually start my WireGuard interfaces every time I restart a server. You can make your WireGuard interfaces come online automatically when your system starts by issuing the systemctl enable wg-quick@wg0 command on both your VPS and your self-hosted server.
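If you want to register the unit for boot and start it in one step, systemd’s enable command accepts a --now flag that does both:

```shell
# Run on both the VPS and the self-hosted server.
# --now starts the interface immediately in addition to enabling it at boot.
sudo systemctl enable --now wg-quick@wg0
```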