Port striping v3

Table of contents

  1. Introduction/TLDR
  2. Port striping v2
  3. Port striping v3
  4. Changing exit IPs
  5. What's next?

Introduction/TLDR

We changed the IPv4 setup on all the nodes to a single entry IP → multiple exit IPs (same setup as IPv6).

The migration to port striping v3 explained below was finished on the 18th of April, 2025. All of the node hostnames now resolve to the updated entry IPs.

Port striping v2

There have been a lot of gradual changes to our port striping v2 setup since we first implemented it in 2018. We had to change the haproxy configs when we added support for SSH tunneling, then again for HTTPS tunneling, and once more for obfs4 tunneling. To multiplex something with no identifiable pattern, like obfs4, we went with the logic "if it's not detectable, assume it's obfs4 and send it to obfs4proxy". That worked for obfs4 traffic, but resources (CPU/RAM) would be wasted while waiting for the connection to time out if it wasn't obfs4 traffic (i.e., white noise from scanners, worms, crawlers, etc.). We couldn't lower that timeout without affecting legitimate obfs4 traffic, so instead we changed haproxy's logic to check one last time whether the traffic obfs4proxy forwarded/de-obfuscated is actually OpenVPN traffic, and if not, drop it immediately so no further resources are wasted.

There was also another issue with obfs4proxy taking up resources because of stray BitTorrent traffic hitting the VPN client's exit IP (which in our old setup also acted as an entry IP). When a user deletes/stops a torrent in their BitTorrent client, peers in the swarm sometimes keep sending traffic, and when those packets end up at our haproxy, they go through the whole "can't detect it, assume it's obfs4" logic and get sent to obfs4proxy. Separating entry/exit IPs solves that problem.

Another issue we ran into was with SSH tunneling. Anyone who runs a public SSH server (even on a non-standard port) has most likely seen scanners/bots in their logs attempting to brute force accounts. Attackers will use something like masscan to scan the whole internet to find SSH servers, then a separate program will try to brute force an account on each server, hoping to find one with a guessable password. Doing that against our SSH tunneling servers isn't very smart because the configs used are public knowledge, and if they bothered reading that blog post they'd see that the only user allowed is 'sshtunnel', its password is also 'sshtunnel', and it's restricted so that it can only be used to tunnel to our servers. These brute-forcing bots aren't going to crack any of our passwords, but they still waste resources while attempting the attack. It also doesn't help that because of port striping, all our IPs appear to have SSH open on all ports (1-29999), which means a higher chance that one of these bots will hit it. We can't block them with OpenSSH's PerSourcePenalties feature because all clients appear to come from 127.0.0.1 (they go to haproxy first, then to OpenSSH). If PerSourcePenalties were triggered by any of these bots, it would prevent legitimate users from using SSH tunneling on that server. Instead, we implemented something similar to PerSourcePenalties using haproxy's stick tables, explained below. Aside from reducing wasted resources, another benefit is that historically, most of OpenSSH's critical vulnerabilities have required a large number of connections to exploit (CVE-2001-0144, CVE-2006-5051, CVE-2024-6387, CVE-2025-26466, etc.). By preventing that many connections, this provides some protection against potential unknown OpenSSH vulnerabilities.

Port striping v3

Port Shadowing

One of the main reasons for the following changes is to address the "Port Shadowing" vulnerabilities described in https://petsymposium.org/popets/2024/popets-2024-0070.pdf. The short version: if two VPN clients share the same public exit IP, one client can potentially exploit NAT or connection tracking behaviors to carry out attacks against the other. In most scenarios, the attacker needs to know or guess the victim's destination IP, but in a few scenarios, it's not required.

The only way to completely protect clients from these types of attacks is to give each individual user a unique public IP. Ignoring scalability issues, doing that would introduce potential correlation attacks if only one person is using that IP. A better solution is to isolate each OpenVPN/WireGuard instance on the server to a specific port range when performing the SNAT that gives clients their public IP. That, along with frequently removing stale connection tracking entries, helps reduce the attack surface in these (and probably other) NAT/conntrack based vulnerabilities.

In our old setup, SNAT would be done using iptables rules that look something like:

-A POSTROUTING -s 10.10.0.0/16 -j SNAT --to-source 212.83.166.61
-A POSTROUTING -s 10.66.0.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.1.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.2.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.3.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.4.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.5.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.6.0/24 -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.7.0/24 -j SNAT --to-source 212.129.2.28

with the first rule covering all WireGuard clients, and the rest of the rules covering each individual OpenVPN instance—8 per exit IP (secp521r1, RSA, Ed25519, and Ed448, each running over both UDP and TCP).


In the new setup, the OpenVPN SNAT rules would look like:

-A POSTROUTING -s 10.66.0.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:10000-12845
-A POSTROUTING -s 10.66.0.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:10000-12845
-A POSTROUTING -s 10.66.0.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.1.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:12846-15691
-A POSTROUTING -s 10.66.1.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:12846-15691
-A POSTROUTING -s 10.66.1.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.2.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:15692-18537
-A POSTROUTING -s 10.66.2.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:15692-18537
-A POSTROUTING -s 10.66.2.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.3.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:18538-21383
-A POSTROUTING -s 10.66.3.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:18538-21383
-A POSTROUTING -s 10.66.3.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.4.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:21384-24229
-A POSTROUTING -s 10.66.4.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:21384-24229
-A POSTROUTING -s 10.66.4.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.5.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:24230-27075
-A POSTROUTING -s 10.66.5.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:24230-27075
-A POSTROUTING -s 10.66.5.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.6.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:27076-29921
-A POSTROUTING -s 10.66.6.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:27076-29921
-A POSTROUTING -s 10.66.6.0/24 -p icmp -j SNAT --to-source 212.129.2.28
-A POSTROUTING -s 10.66.7.0/24 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.129.2.28:29922-32767
-A POSTROUTING -s 10.66.7.0/24 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.129.2.28:29922-32767
-A POSTROUTING -s 10.66.7.0/24 -p icmp -j SNAT --to-source 212.129.2.28

with each of those /24 subnets still tied to individual OpenVPN instances, and none of the SNAT port ranges overlapping. WireGuard now does a different double SNAT thing with namespaces, explained in the next section, but it also restricts WireGuard instances to specific SNAT port ranges that are separate from the OpenVPN ranges.

The point of restricting each instance's SNAT to a specific port range is that now, if someone wanted to perform one of the Port Shadowing attacks on another client, they would have to be connected to the exact same OpenVPN/WireGuard instance as the target. In the old setup, some Port Shadowing attacks would have worked across any of those 8 instances (or WireGuard), so long as the attacker was using the same exit IP as the target.

Neither OpenVPN nor WireGuard will route arbitrary L4 protocols like GRE, ESP, AH, SCTP, or IGMP (at least, not by default or by design). But just in case a malicious client somehow figures out a way to route them, and just in case there are unknown kernel-level or NAT tracking L4 handling vulnerabilities, we explicitly restrict SNAT to UDP, TCP, and ICMP.

As for removing stale connection tracking entries, a script is executed when OpenVPN users disconnect that simply does (after input validation): conntrack -D --src $ip (using the 10.66.x.x IP, not their real one). This means an attacker has to be connected at the same time as the target. That script was added to the old setup around October of 2024, so OpenVPN users on nodes that haven't been migrated yet still get some Port Shadowing protection.
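
For illustration, here's a minimal sketch of what such a disconnect hook could look like (an assumption of the general shape, not our production script; OpenVPN exports the client's tunnel IP to --client-disconnect scripts as the ifconfig_pool_remote_ip environment variable):

#!/bin/sh
# Hypothetical --client-disconnect hook (sketch only)
ip="$ifconfig_pool_remote_ip"   # the client's tunnel IP, set by OpenVPN
# Input validation: only act on addresses inside this instance's 10.66.x.x range
case "$ip" in
    10.66.*.*) conntrack -D --src "$ip" > /dev/null 2>&1 ;;
    *) exit 0 ;;
esac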

WireGuard has no concept of sessions going up or down, so we couldn't do the exact same thing there. Instead, we run a script every 20 minutes that, among other things, does:

# Remove conntrack entries for WireGuard peers whose last handshake was >= 20 minutes ago
current_time=$(date +%s)
threshold=$((20 * 60)) # 20 minutes
# Loop through each network namespace
ip netns | awk '{print $1}' | while read -r ns; do
    # Get peers in the namespace, skip ones with no handshake
    ip netns exec "$ns" wg show wg0 dump | awk '{print $1","$4","$5}' | grep -vE ",0$|off" | \
    while IFS= read -r line; do
        # Extract the last field (the handshake timestamp)
        IFS=, read -ra fields <<< "$line"
        handshake_time="${fields[-1]}"

        # Calculate time difference
        time_diff=$((current_time - handshake_time))

        # Delete conntrack entries if the last handshake is older than the threshold
        if [ "$time_diff" -ge "$threshold" ]; then
            ipv6="${fields[-2]%/128}" # The peer's /128 in fd00:10:10::/64
            ipv4="${fields[-3]%/32}" # The peer's /32 in 10.10.0.0/16
            echo "Removing conntrack entries for $ipv4 and $ipv6 in $ns"
            ip netns exec "$ns" conntrack -D --src "$ipv4" > /dev/null 2>&1
            ip netns exec "$ns" conntrack -D --src "$ipv6" > /dev/null 2>&1
        fi
    done
done

As the comment at the top of the script says, it removes connection tracking entries for WireGuard users who haven't sent a handshake in 20 or more minutes. This shouldn't interrupt long-lived TCP sessions a client might have open, since even if those sessions are sitting idle, WireGuard will still send out handshakes as long as the client is connected. If the client stops sending handshakes, they're offline or disconnected, and when they return, new conntrack entries will be created.

The attacks listed in the Port Shadowing paper were already partially mitigated in the old setup, but with the SNAT port range restrictions, the removal of stale conntrack entries, and the other types of isolation described in the next section, they're now much better mitigated.

Most of these Port Shadowing attacks require the attacker to know what site or service the victim is connecting to, and when. That's not easy unless the victim's traffic was being monitored before it hit the VPN, or they had predictable habits (same site, same time, same exit IP). Still, best to minimize even those edge-case attacks, just in case.


Instance Isolation

In the old setup, there was a single WireGuard instance running in the default network namespace, which means it shared the same networking stack as OpenVPN. In the new setup, multiple WireGuard instances live in separate network namespaces, each one tied to a specific exit IP. This makes Port Shadowing attacks more difficult, but it also offers other benefits:

1. Inter-protocol isolation (OpenVPN ↔ WireGuard)

  • In the old setup, both OpenVPN and WireGuard shared the same namespace and kernel network stack. While isolation was enforced with iptables, a vulnerability in one could have exposed routes, interfaces, or traffic used by the other.
  • Now, WireGuard lives in its own stack — separate interfaces, separate routes, separate conntrack. Even if OpenVPN had an RCE or leaked decrypted traffic somehow, WireGuard users in their isolated namespaces wouldn't be affected, and vice-versa.

2. Kills lateral recon — even indirect

  • Even if client-to-client traffic is blocked, in a shared namespace an attacker might still infer information about other users (e.g., timing, IPs, port behaviors) using passive or side-channel recon.
  • Namespaces shut that down completely. Peers can’t see each other’s interfaces, routes, or traffic patterns — there’s no lateral movement surface, even for recon-only attackers.

3. Cross-protocol fingerprinting is broken

  • When everything shares the same stack, advanced attackers might use quirks in one protocol (like TCP stack behavior) to infer things about another (e.g. correlating WireGuard clients with OpenVPN traffic).
  • With per-protocol, per-IP namespace separation, each environment behaves independently. No shared packet path = no shared fingerprint.

The new WireGuard SNAT rules are similar to OpenVPN's, except that a second SNAT is done for the namespace's internal IP. Inside the first WireGuard namespace is the rule:

-A POSTROUTING -s 10.10.0.0/16 -j SNAT --to-source 192.168.0.2

Then in the default namespace, the SNAT rules for this instance would be:

-A POSTROUTING -s 192.168.0.2/32 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.83.166.61:32768-43689
-A POSTROUTING -s 192.168.0.2/32 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.83.166.61:32768-43689
-A POSTROUTING -s 192.168.0.2/32 -p icmp -j SNAT --to-source 212.83.166.61

If the next WireGuard namespace covered the exit IP 212.83.166.62, then it would use the rules:

-A POSTROUTING -s 10.10.0.0/16 -j SNAT --to-source 192.168.0.3
# then in the default namespace
-A POSTROUTING -s 192.168.0.3/32 -p udp -m udp --dport 1:65535 -j SNAT --to-source 212.83.166.62:32768-43689
-A POSTROUTING -s 192.168.0.3/32 -p tcp -m tcp --dport 1:65535 -j SNAT --to-source 212.83.166.62:32768-43689
-A POSTROUTING -s 192.168.0.3/32 -p icmp -j SNAT --to-source 212.83.166.62

Ports 32768-43689 are used again here because they don't overlap with the previous SNAT rules, since this is for a different exit IP.

Our current Moldova server has 6 exit IPs (3 IPv4 + 3 IPv6), which means 3 namespaces.
The WireGuard packet flow looks like:

[Image: WireGuard namespaces packet flow]

When packets leave the namespaces, the path is reversed, and once they reach the Main NIC again on their way to the internet, the SNAT port range restriction is applied to maintain isolation between the namespaces.


More IPs for WireGuard users

All WireGuard users are assigned a random IP in 10.10.0.0/16 (and fd00:10:10::/64 for IPv6). The old setup used a single WireGuard interface, which means 10.10.0.0/16 was routed to that interface, and SNAT could only assign a single exit IP to all clients. Only one exit IP was possible because that subnet can only be routed to one interface; a second WireGuard interface tied to another exit IP couldn't also receive a route for 10.10.0.0/16.

You can however route 10.10.0.0/16 to more than one interface if you're using namespaces and veth. The namespace's separate networking stack doesn't interfere with the routes in another namespace, so more than one interface can have a route to 10.10.0.0/16 if that route lives in an isolated namespace (as you can see in the above packet flow image).
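
For illustration, here's a minimal sketch of how one namespace-isolated WireGuard instance could be wired up with veth (the namespace/interface names and the wg0.conf path are hypothetical, and the SNAT/firewall rules shown earlier are omitted):

# Create the isolated stack and a veth pair bridging it to the default namespace
ip netns add wgns0
ip link add veth0 type veth peer name veth0-ns
ip link set veth0-ns netns wgns0
ip addr add 192.168.0.1/24 dev veth0   # default-namespace side (real per-namespace addressing would differ)
ip link set veth0 up
ip netns exec wgns0 ip addr add 192.168.0.2/24 dev veth0-ns
ip netns exec wgns0 ip link set veth0-ns up
ip netns exec wgns0 ip link set lo up
# Bring up WireGuard inside the namespace; its 10.10.0.0/16 route lives only
# in this namespace, so it can't clash with the same route in other namespaces
ip netns exec wgns0 ip link add wg0 type wireguard
ip netns exec wgns0 wg setconf wg0 /etc/wireguard/wg0.conf
ip netns exec wgns0 ip addr add 10.10.0.1/16 dev wg0
ip netns exec wgns0 ip link set wg0 up
ip netns exec wgns0 ip route add default via 192.168.0.1 dev veth0-ns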

This means WireGuard users now get evenly distributed access to all the exit IPs available on each server. It should also help with CAPTCHAs: WireGuard is more popular than OpenVPN on our network these days, so the old single exit IPs are the most likely to be blacklisted, which is also why they were picked to be the new entry IPs.


Separate entry/exit IPs

Separating entry IPs (the IP clients connect to) from exit IPs (the IP the internet sees a client as having) solves more problems than just the one listed in the Port striping v2 section above:

1. No more open port fingerprinting on exit IPs

  • In the old setup, exit IPs were also entry IPs — meaning they had to appear open on all ports 1–29999. That stood out to proxy/VPN detection sites, which flag IPs with excessive open ports, or ports commonly used by proxies.
  • Now that exit IPs no longer accept inbound connections, they can block all ports (excluding forwarded ones and RELATED/ESTABLISHED connections), making them look less like VPN IPs.

2. Makes correlation attacks more difficult

  • Only the entry IP is seen by your ISP or anyone passively watching your outbound connection. The exit IP (what websites see) is unrelated and isolated.
  • This disconnect helps break timing- or pattern-based correlation between user traffic entering and leaving the network.

3. Semi-private exit IPs are harder to enumerate

  • Entry IPs must be public so clients can connect, but exit IPs don't need to be.
  • Since exit IPs never receive new inbound connections, they’re not exposed directly, which makes them harder for databases like ipinfo.io to harvest and tag as VPN IPs.

4. Fixes the old OpenVPN "Recursive routing detected, drop tun packet" issue

  • In the old setup, entry and exit IPs could be the same. That meant some software (usually a BitTorrent client) might try to connect to their own exit IP over the VPN tunnel. That creates a routing loop: the packet is sent into the tunnel, then routed out to the same IP it came from, causing OpenVPN to detect recursion and drop the packet.
  • Now that entry and exit IPs are different, that kind of recursive route can't happen, eliminating this issue entirely.

5. Easier to rotate/retire exit IPs without affecting client configs

  • Since clients always connect to the same entry IPs, we can silently add IPs to or remove them from the backend exit pool as needed (e.g., if an exit IP gets null routed due to abuse) without any client-side disruption.


Our current Moldova server has 6 exit IPs (3 IPv4 + 3 IPv6), so 6 iptables u32 rules are used to balance traffic across the 3 WireGuard namespaces.
I'll break down the rules and add comments to make them a little easier to read:

# The IPv4 entry is 176.123.4.232, and we match all UDP ports 1-29999
-A PREROUTING -d 176.123.4.232/32 -p udp -m udp --dport 1:29999 \
    # This detects IPv4 WireGuard handshakes
    -m u32 --u32 "0x0>>0x16&0x3c@0x8=0x1000000,0x2000000,0x3000000,0x4000000" \
    # Balancing should only apply to new connections
    -m conntrack --ctstate NEW \
    # 1/3 (~33%) chance of getting sent to the first namespace
    -m statistic --mode random --probability 0.33333300008 \
    # And finally, send to the internal IP of the first namespace
    -j DNAT --to-destination 192.168.0.2:12912
# Do the same for the IPv6 entry 2001:678:6d4:5023::f
-A PREROUTING -d 2001:678:6d4:5023::f/128 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x2d&0xff=0x1:0x4" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability 0.33333300008 \
    -j DNAT --to-destination [fd00:100::2]:12912

# For the second namespace, 50% of the traffic not matched by the first rule gets sent to this one
-A PREROUTING -d 176.123.4.232/32 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x0>>0x16&0x3c@0x8=0x1000000,0x2000000,0x3000000,0x4000000" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability 0.50000000000 \
    -j DNAT --to-destination 192.168.0.3:12912
-A PREROUTING -d 2001:678:6d4:5023::f/128 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x2d&0xff=0x1:0x4" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability 0.50000000000 \
    -j DNAT --to-destination [fd00:100::3]:12912

# For the third namespace, 100% of whatever remains gets sent to this one,
# so each namespace ends up with roughly 1/3 of new connections overall
-A PREROUTING -d 176.123.4.232/32 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x0>>0x16&0x3c@0x8=0x1000000,0x2000000,0x3000000,0x4000000" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability 1.00000000000 \
    -j DNAT --to-destination 192.168.0.4:12912
-A PREROUTING -d 2001:678:6d4:5023::f/128 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x2d&0xff=0x1:0x4" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability 1.00000000000 \
    -j DNAT --to-destination [fd00:100::4]:12912


That covers WireGuard. As for OpenVPN, similar u32 rules are used, but without separate namespaces; those aren't necessary because each individual instance is already restricted to a specific tun interface and /24 subnet (or /64 for IPv6).

For UDP OpenVPN, the u32 rules look like this, with $entry4 set to the IPv4 entry address, $probability following the same cascading logic as the rules above, and $exit_ip iterating over all the exit IPs in a loop:

# OpenVPN ECC/tls-crypt, ports 1-5060 and 5063-29999
-A PREROUTING -d $entry4/32 -p udp -m multiport --dports 1:5060,5063:29999 \
    -m u32 --u32 "0x0&0xff=0x52&&0x19&0xff=0x38" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability $probability \
    -j DNAT --to-destination $exit_ip:5060
# OpenVPN ECC/tls-crypt-v2, ports 1-5060 and 5063-29999
-A PREROUTING -d $entry4/32 -p udp -m multiport --dports 1:5060,5063:29999 \
    -m u32 --u32 "0x0&0xff=0x7d&&0x19&0xff=0x50" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability $probability \
    -j DNAT --to-destination $exit_ip:5060
# OpenVPN Ed25519, port 5061
-A PREROUTING -d $entry4/32 -p udp -m udp --dport 5061 \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability $probability \
    -j DNAT --to-destination $exit_ip:5061
# OpenVPN Ed448, port 5062
-A PREROUTING -d $entry4/32 -p udp -m udp --dport 5062 \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability $probability \
    -j DNAT --to-destination $exit_ip:5062
# OpenVPN RSA/tls-auth, ports 1-29999
-A PREROUTING -d $entry4/32 -p udp -m udp --dport 1:29999 \
    -m u32 --u32 "0x0&0xff=0x72&&0x19&0xff=0x38" \
    -m conntrack --ctstate NEW \
    -m statistic --mode random --probability $probability \
    -j DNAT --to-destination $exit_ip:1194

This allows clients to connect using the ECC or RSA configs on any port from 1-29999, excluding 5061 and 5062, which are exclusively used by the Ed25519 and Ed448 configs.
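
As a side note, the cascading probabilities can be generated with a simple loop. Here's an illustrative sketch (the exit IPs and variable names are placeholders, not our deployment script): rule i out of n matches 1/(n-i+1) of the traffic that earlier rules didn't claim, so each exit ends up with roughly 1/n of new connections:

#!/bin/bash
# Hypothetical generator for the cascading DNAT probabilities (sketch only)
exits=(203.0.113.10 203.0.113.11 203.0.113.12)   # placeholder exit IPs
n=${#exits[@]}
for i in $(seq 1 "$n"); do
    exit_ip="${exits[$((i - 1))]}"
    # 1st rule: 1/3, 2nd rule: 1/2 of what's left, last rule: 1/1 of the rest
    probability=$(awk -v i="$i" -v n="$n" 'BEGIN { printf "%.11f", 1 / (n - i + 1) }')
    # iptables internally rounds the value to a 32-bit fraction, which is why
    # saved rules show values like 0.33333300008
    echo "... -m statistic --mode random --probability $probability -j DNAT --to-destination $exit_ip:1194"
done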

As for TCP OpenVPN, a single iptables rule is used to send almost everything to haproxy:

-A PREROUTING -d 176.123.4.232/32 -p tcp -m tcp --dport 1:29999 -m multiport ! --dports 5061,5062 -m conntrack --ctstate NEW -j DNAT --to-destination :443

Haproxy listens on the entry IP on ports 443, 5061, and 5062. We exclude 5061 and 5062 from the rule because haproxy listens directly on those two ports, and we can use them to determine whether or not this is an Ed25519 or Ed448 OpenVPN session.

There's also another TCP related rule:

-A INPUT -d $exit_ip/32 ! -i lo -p tcp -m multiport --dports 1:29999 -j REJECT --reject-with icmp-port-unreachable

This prevents port scans against the exit IPs (excluding port forwards). It rejects everything on ports 1-29999, excluding the lo interface because that's what haproxy uses to connect to the TCP OpenVPN ports. A similar UDP rule isn't really needed because UDP probes against the exit IP would fail without a valid OpenVPN --tls-crypt, --tls-crypt-v2, or --tls-auth key, and OpenVPN is the only thing listening on the exit IPs.

The haproxy config looks like this, again using the Moldova server's IPv4 entry:

global
pidfile /var/haproxy/176.123.4.232.pid
user haproxy
group haproxy
# Mostly used for reloading haproxy without terminating OpenVPN sessions
stats socket /var/haproxy/176.123.4.232.sock mode 600 level admin
# Peer identifier for this instance
localpeer 176.123.4.232
# Performance tuning, varies depending on the server's CPU
nbthread 3                             # Use 3 worker threads
tune.bufsize 65536                     # Buffer size for connection processing
tune.maxrewrite 1024                   # Max buffer space reserved for header rewriting
tune.rcvbuf.client 128k                # Receive buffer size for client connections
tune.rcvbuf.server 128k                # Receive buffer size for server connections
tune.sndbuf.client 128k                # Send buffer size for client connections
tune.sndbuf.server 128k                # Send buffer size for server connections

# Default settings for all other sections
defaults
timeout connect 15000   # Max time to connect to backend server
timeout client 23000    # Max client inactivity time
timeout server 23000    # Max backend server inactivity time

# Peer definitions for sharing tables with the IPv6 instance
peers mypeers
peer 176.123.4.232 127.0.0.1:2000
peer 2001:678:6d4:5023::f 127.0.0.1:2001
# Table for SSH protection
table ssh_protect type ip size 1k expire 10m store conn_rate(20s),gpc0

# Main frontend handling all incoming connections
frontend portstripev3
mode tcp
# Bind to ports 443, 5061, and 5062 with TCP Fast Open (TFO) enabled
bind 176.123.4.232:443 tfo
bind 176.123.4.232:5061 tfo
bind 176.123.4.232:5062 tfo
# Access Control Lists (ACLs) for traffic classification
acl is_localhost src 127.0.0.1         # Match connections from localhost
acl is_ed25519 dst_port 5061           # Match ED25519 OpenVPN port
acl is_ed448 dst_port 5062             # Match ED448 OpenVPN port
acl is_ssh payload(0,7) -m bin 5353482d322e30  # Match SSH protocol header (SSH-2.0)
acl is_ssh_tunnel src -f /etc/haproxy/server_ips.lst  # Match other servers
acl ssh_flood sc0_conn_rate gt 10      # Detect SSH flood (more than 10 connections/20s)
acl banned_ip sc0_get_gpc0 gt 0        # Check if IP is banned
acl ovpn_flood sc1_conn_rate gt 20     # Detect OpenVPN flood
acl ovpn_dos sc1_conn_cur gt 6        # Detect OpenVPN DoS (more than 6 concurrent connections)
# SSH protection - track connection rate and ban abusive IPs
tcp-request connection track-sc0 src table mypeers/ssh_protect
tcp-request connection reject if { sc0_get_gpc0 gt 5 }  # Ban after 5 violations
tcp-request content sc-inc-gpc0(0) if ssh_flood        # Increment violation counter
# Content inspection rules
acl is_http payload(0,4) -m bin 47455420  # Match HTTP GET requests
tcp-request inspect-delay 3s              # Wait up to 3s for protocol detection
tcp-request content accept if is_ssh or is_http  # Early accept for known protocols
tcp-request content accept if { req.ssl_hello_type 1 }  # Accept SSL/TLS handshakes
# Protection rules
tcp-request content sc-inc-gpc0(0) if ssh_flood  # Count SSH flood attempts
tcp-request content reject if banned_ip          # Reject banned IPs
tcp-request content reject if ssh_flood          # Reject during SSH flood
tcp-request content reject if ovpn_flood || ovpn_dos  # Reject OpenVPN abuse
# Routing rules - determine which backend to use based on protocol detection
use_backend openvpn_ecc if !{ req.ssl_hello_type 1 } { req.len 56 } !is_ed25519 !is_ed448
use_backend openvpn_rsa if !{ req.ssl_hello_type 1 } { req.len 88 } !is_ed25519 !is_ed448
use_backend openvpn_ecc if !{ req.ssl_hello_type 1 } { req.len 355 } !is_ed25519 !is_ed448
use_backend openvpn_ed25519 if !{ req.ssl_hello_type 1 } is_ed25519
use_backend openvpn_ed448 if !{ req.ssl_hello_type 1 } is_ed448
use_backend ssh_tunnel if is_ssh
# SNI-based routing for HTTPS traffic
acl sni_cstormis req.ssl_sni -m reg -i 176.123.4.232
acl sni_cstormis req.ssl_sni -m reg -i moldova.cstorm.is
acl sni_cstormnet req.ssl_sni -m reg -i moldova.cstorm.net
acl sni_cryptostormpw req.ssl_sni -m reg -i moldova.cryptostorm.pw
use_backend https_cstormis if sni_cstormis
use_backend https_cstormnet if sni_cstormnet
use_backend https_cryptostormpw if sni_cryptostormpw
# Allow any other SNI to be used for HTTPS tunneling via stunnel
acl other_https req.ssl_sni -m reg -i ^.*$
use_backend https_stunnel if other_https  # All other HTTPS traffic to stunnel
# Additional OpenVPN packet length checks, for obfs4
acl valid_openvpn_len_56 req.len 56
acl valid_openvpn_len_88 req.len 88
acl valid_openvpn_len_355 req.len 355
# Drop packet if it's not OpenVPN, for when obfs4proxy returns the deobfuscated traffic
use_backend drop if is_localhost !valid_openvpn_len_56 !valid_openvpn_len_88 !valid_openvpn_len_355  # Drop invalid localhost traffic
# HTTP host-based routing
use_backend http_cstormis if is_http { req.hdr(Host) -m reg -i 176.123.4.232 }
use_backend http_cstormis if is_http { req.hdr(Host) -m reg -i moldova\.cstorm\.is }
use_backend http_cstormnet if is_http { req.hdr(Host) -m reg -i moldova\.cstorm\.net }
use_backend http_cryptostormpw if is_http { req.hdr(Host) -m reg -i moldova\.cryptostorm\.pw }
default_backend obfs4  # Default to obfs4 backend for unmatched traffic

# HTTP backends for static sites
backend http_cstormis
mode tcp
# Apache VirtualHost for http://moldova.cstorm.is/
server http_cstormis_server 127.0.0.1:8002

backend http_cstormnet
mode tcp
# Apache VirtualHost for http://moldova.cstorm.net/
server http_cstormnet_server 127.0.0.1:8004

backend http_cryptostormpw
mode tcp
# Apache VirtualHost for http://moldova.cryptostorm.pw/
server http_cryptostormpw_server 127.0.0.1:8006

# HTTPS backends for static sites
backend https_cstormis
mode tcp
# Apache VirtualHost for https://moldova.cstorm.is/
server https_cstormis_server 127.0.0.1:8001

backend https_cstormnet
mode tcp
# Apache VirtualHost for https://moldova.cstorm.net/
server https_cstormnet_server 127.0.0.1:8003

backend https_cryptostormpw
mode tcp
# Apache VirtualHost for https://moldova.cryptostorm.pw/
server https_cryptostormpw_server 127.0.0.1:8005

# Generic HTTPS backend (stunnel, for HTTPS obfuscation)
backend https_stunnel
mode tcp
option tcp-smart-connect  # Optimize TCP connections
server stunnel-localhost 127.0.0.1:8000

# SSH tunnel backend
backend ssh_tunnel
mode tcp
option tcp-smart-connect
timeout server 2h  # Long timeout for SSH sessions
server sshtunnel-localhost 127.0.0.1:2222

# OpenVPN backend for the ECC instances
backend openvpn_ecc
mode tcp
option tcp-smart-connect
option tcpka       # Enable TCP keep-alive
option splice-auto  # Use kernel splicing when possible
timeout tunnel 24h  # Long timeout for VPN tunnels
balance leastconn   # Load balance to least busy server
server openvpn0-localhost 176.123.4.233:5060 source 176.123.4.232 maxconn 5000
server openvpn1-localhost 176.123.4.235:5060 source 176.123.4.232 maxconn 5000
server openvpn2-localhost 176.123.4.236:5060 source 176.123.4.232 maxconn 5000

# OpenVPN backend for RSA instances
backend openvpn_rsa
mode tcp
option tcp-smart-connect
option tcpka
option splice-auto
timeout tunnel 24h
balance leastconn
server openvpn0-localhost 176.123.4.233:1194 source 176.123.4.232 maxconn 5000
server openvpn1-localhost 176.123.4.235:1194 source 176.123.4.232 maxconn 5000
server openvpn2-localhost 176.123.4.236:1194 source 176.123.4.232 maxconn 5000

# OpenVPN backend for the Ed25519 instances
backend openvpn_ed25519
mode tcp
option tcp-smart-connect
option tcpka
option splice-auto
timeout tunnel 24h
balance leastconn
server openvpn0-localhost 176.123.4.233:5061 source 176.123.4.232 maxconn 5000
server openvpn1-localhost 176.123.4.235:5061 source 176.123.4.232 maxconn 5000
server openvpn2-localhost 176.123.4.236:5061 source 176.123.4.232 maxconn 5000

# OpenVPN backend for the Ed448 instances
backend openvpn_ed448
mode tcp
option tcp-smart-connect
option tcpka
option splice-auto
timeout tunnel 24h
balance leastconn
server openvpn0-localhost 176.123.4.233:5062 source 176.123.4.232 maxconn 5000
server openvpn1-localhost 176.123.4.235:5062 source 176.123.4.232 maxconn 5000
server openvpn2-localhost 176.123.4.236:5062 source 176.123.4.232 maxconn 5000

# Obfs4 backend
backend obfs4
mode tcp
timeout tunnel 6h  # Shorter timeout for obfs4
server obfs4-localhost 127.0.0.1:6060 maxconn 3000

# Silent drop backend for unwanted traffic (non-OpenVPN traffic from obfs4proxy)
backend drop
mode tcp
timeout connect 1ms  # Immediate timeout
timeout server 1ms
server drop 0.0.0.0:1  # Invalid server to force drop


This config handles all the TCP-based services we offer — OpenVPN (the 4 config types), SSH tunneling, HTTPS tunneling, a few HTTP/S sites, and obfs4 — and lets them all share the same public ports. It also includes some CPU/performance tweaks, plus protections against brute force, DoS, or exploit attempts, especially for SSH and OpenVPN.

As I was writing this blog post, OpenVPN 2.6.14 was released to address CVE-2025-2704, which https://community.openvpn.net/openvpn/wiki/CVE-2025-2704 describes as:

[An] error condition [that] results in an ASSERT statement being triggered, stopping the server process that has tls-crypt-v2 enabled. 

This CVE bug affects OpenVPN servers from 2.6.1 until 2.6.13 (inclusive) set up with --tls-crypt-v2. On reception of a particular combination of incoming packets, some authorized and some malformed, client state in the server gets corrupted and a self-check is triggered that exits the server with an ASSERT message.

The combination of the above haproxy inspection rules and u32-based rules adds a layer of protocol-specific filtering, making it much harder to exploit handshake-based vulnerabilities like this CVE-2025-2704 without a valid session key and correct timing. But it doesn't completely protect against this particular vulnerability, so we updated OpenVPN to 2.6.14 on all the servers.


Misc Hardening

A bunch of smaller changes were made as part of this new setup that either improve hardening, boost performance, or just clean up older, messier setups. Here’s what changed:


1. Blacklisted kernel modules

  • To reduce the attack surface, more unnecessary kernel modules are now blacklisted at boot. We were already blacklisting some of the obvious ones like Bluetooth, CD/DVD drives, FireWire, Thunderbolt (and PCIe hotplug), serial ports, and WiFi, but this update also covers other hardware drivers and debugging modules that aren't necessary, plus video/keyboard/sound related modules (since these are headless servers). Most of these weren't in use anyway, but blacklisting them means they won't even be loaded; see the example blacklist below.
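
A hedged example of what such a blacklist file could look like (the module names here are illustrative, not our exact list):

# /etc/modprobe.d/hardening.conf (example)
blacklist bluetooth
blacklist btusb
blacklist firewire-core
blacklist thunderbolt
blacklist cdrom
blacklist sr_mod
blacklist pcspkr
blacklist snd
# "install <module> /bin/false" also blocks manual loading, not just autoloading
install bluetooth /bin/false
install firewire-core /bin/false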

2. tmpfs for /var/log

  • Even though logging is disabled on everything, /var/log is now a RAM-only tmpfs filesystem, which wipes itself on shutdown. This reduces the risk of any sensitive data being saved to disk, either accidentally or if a compromise occurs. This was actually implemented a year or two ago, so the old nodes are already doing this.
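
A tmpfs mount like this is a single fstab line; the size and mode below are illustrative assumptions, not necessarily what we use:

tmpfs /var/log tmpfs defaults,noatime,nosuid,nodev,noexec,mode=0755,size=128m 0 0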

3. Swap disabled

  • All servers have swap turned off, preventing potentially sensitive data from being paged out to disk. If something crashes because of no swap, that’s better than risking data persistence.
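
A minimal sketch of how that's done (the sed pattern is illustrative):

swapoff -a                              # turn off all active swap immediately
sed -i '/\sswap\s/s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot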

4. sysctl.conf changes

  • The /etc/sysctl.conf file has been overhauled to apply more precise kernel-level restrictions:
# ================================================
# Kernel & Hardware-level Hardening
# ================================================
kernel.deny_new_usb = 1  # Disable hotplugging USB devices — mitigates physical attack vector on compromised servers
kernel.core_uses_pid = 1 # Include PID in core filenames — useful for debugging if core dumps are ever enabled
kernel.sysrq = 0         # Disable magic SysRq key — prevents attackers from triggering low-level kernel commands

# ================================================
# File & IPC Resource Limits
# ================================================
fs.file-max = 360000       # Allow a large number of open files — necessary for thousands of concurrent VPN related connections
kernel.msgmax = 65536      # Max size of a single message in IPC (message queues)
kernel.msgmnb = 65536      # Default max size of a message queue
kernel.shmall = 268435456  # Shared memory limit in pages (used by some daemons, e.g. PowerDNS, Snort)
kernel.shmmax = 268435456  # Max size (in bytes) of a shared memory segment

# ================================================
# Network Stack: Throughput & Resilience
# ================================================
net.core.netdev_max_backlog = 500000  # Queue length for incoming packets — raised to handle bursts; set to 1M+ on high-end servers
net.core.rmem_max = 134217728         # Max socket read buffer (128MB)
net.core.wmem_max = 134217728         # Max socket write buffer (128MB)

# ================================================
# IP Spoofing & Redirection Protections
# ================================================
net.ipv4.conf.all.accept_redirects = 0      # Ignore ICMP redirects — prevents MITM attacks
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 1        # Still allow sending redirects internally if needed (e.g., policy routing)
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.accept_source_route = 0   # Drop packets with source routing — a legacy attack vector
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_source_route = 0

# ================================================
# Routing & IP Behavior
# ================================================
net.ipv4.ip_forward = 1                     # Enable IPv4 forwarding (VPN traffic)
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv4.ip_nonlocal_bind = 1               # Allow services to bind to IPs that aren't yet assigned — useful for failover setups
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_local_port_range = 10000 65535  # Ephemeral port range; starts at 10000 so it can't collide with daemons, which all listen below 10000
net.ipv4.route.flush = 1                    # Forces a route flush after changes

# ================================================
# Anti-Fingerprinting & Privacy
# ================================================
net.ipv4.tcp_timestamps = 0         # Disable TCP timestamps — reduces fingerprinting surface
net.ipv4.tcp_no_metrics_save = 1    # Don't cache TCP metrics per destination — prevents leaking behavioral patterns
net.ipv6.conf.all.use_tempaddr = 0  # Avoid using temporary IPv6 addresses — avoids privacy pitfalls in VPN context

# ================================================
# TCP Performance & Tuning
# ================================================
net.ipv4.tcp_congestion_control = bbr         # Use BBR congestion control — ideal for high-throughput, low-latency VPN traffic
net.ipv4.tcp_rmem = 4096 262144 134217728     # TCP read buffer: min/default/max
net.ipv4.tcp_wmem = 4096 262144 134217728     # TCP write buffer: min/default/max
net.ipv4.tcp_mtu_probing = 1                  # Enable MTU probing — avoids PMTU blackhole issues common with VPNs
net.ipv4.tcp_fastopen = 3                     # Enable TCP Fast Open for both client and server — improves connect latency
net.ipv4.tcp_tw_reuse = 1                     # Reuse TIME_WAIT sockets — reduces port exhaustion under high conn churn
net.ipv4.tcp_fin_timeout = 10                 # Faster close on sockets in FIN_WAIT2 — reduces stale conn load
net.ipv4.tcp_max_syn_backlog = 65535          # Increase SYN backlog queue — defends against SYN floods, supports scale
net.ipv4.tcp_syncookies = 0                   # Left off unless an actual SYN flood DoS happens
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_synack_retries = 4
net.ipv4.tcp_challenge_ack_limit = 1073741823 # Effectively disables challenge ACK throttling — avoids timing leaks (per CVE-2016-5696)
net.ipv4.tcp_low_latency = 1                  # Optimize for latency-sensitive workloads
net.ipv4.tcp_window_scaling = 1               # Enable TCP window scaling — required for high-bandwidth TCP flows
net.ipv4.tcp_sack = 1                         # Selective ACKs — better recovery from packet loss
net.ipv4.tcp_dsack = 1                        # Duplicate SACKs — helps tune performance in lossy networks
net.ipv4.tcp_fack = 1                         # Forward ACKs — improves congestion handling in lossy conditions
net.ipv4.tcp_moderate_rcvbuf = 0              # Disable receive buffer auto-tuning so the static tcp_rmem values above apply

# ================================================
# UDP Performance
# ================================================
net.ipv4.udp_mem = 65536 173800 419430     # UDP buffer memory thresholds
net.ipv4.udp_rmem_min = 262144             # UDP minimum receive buffer
net.ipv4.udp_wmem_min = 262144             # UDP minimum write buffer

# ================================================
# Misc Network Settings
# ================================================
net.ipv4.neigh.default.proxy_qlen = 96         # ARP proxy queue length
net.ipv4.neigh.default.unres_qlen = 6          # Queue length for unresolved ARP entries
net.ipv4.icmp_echo_ignore_broadcasts = 1       # Ignore broadcast pings — basic smurf protection
net.ipv4.icmp_ignore_bogus_error_responses = 1 # Drop bogus ICMP errors
net.ipv4.icmp_echo_ignore_all = 0              # Respond to ping — for debugging and reachability
net.ipv4.conf.all.log_martians = 0             # Don't log packets with impossible source addresses (disabled for performance/noise reasons)
net.ipv4.conf.default.log_martians = 0
net.ipv6.conf.all.accept_ra = 2                # Accept IPv6 router advertisements even if forwarding is enabled (needed on some servers, gets disabled elsewhere for tun+/wg+)

# ================================================
# Conntrack / Firewall
# ================================================
net.netfilter.nf_conntrack_max = 10000000    # Raise conntrack limit — VPNs create a lot of stateful connections
net.netfilter.nf_conntrack_buckets = 524288  # Hash table size for conntrack — typically 1/8 of nf_conntrack_max

# ================================================
# VM / Memory Settings
# ================================================
vm.min_free_kbytes = 65536    # Minimum free memory — helps avoid low-memory situations under load
vm.mmap_min_addr = 4096       # Prevent low-memory mmap — basic userspace memory protection



5. Tighter leak prevention in iptables

  • In addition to the NAT rules already shown earlier, the non-NAT (-t filter) iptables rules now drop traffic that looks suspicious, malformed, or shouldn't exist under normal VPN operation. These include things like:
    • packets with reserved/private IP ranges on the wrong interfaces
    • rare/obscure ICMP types
    • attempts to leak traffic outside the tunnel

The old rules only blocked client-to-client traffic, potential LAN-related leaks, and NetBIOS traffic:

# Block NetBIOS/MSRPC (can be used to deanonymize)
-A FORWARD -p tcp -m tcp --dport 445 -j DROP
-A FORWARD -p udp -m udp --dport 445 -j DROP
-A FORWARD -p tcp -m tcp --dport 139 -j DROP
-A FORWARD -p udp -m udp --dport 139 -j DROP
-A FORWARD -p tcp -m tcp --dport 135 -j DROP
-A FORWARD -p udp -m udp --dport 135 -j DROP
# Block client-to-client traffic and LAN traffic
-A FORWARD -s 10.0.0.0/8 -d 10.0.0.0/8 -j DROP
# Block other RFC1918 LAN subnets
-A FORWARD -s 10.0.0.0/8 -d 172.16.0.0/12 -j DROP
-A FORWARD -s 10.0.0.0/8 -d 192.168.0.0/16 -j DROP
-A FORWARD -s 10.0.0.0/8 -d 169.254.0.0/16 -j DROP
-A FORWARD -s 10.0.0.0/8 -d 100.64.0.0/10 -j DROP
# Send egress VPN traffic to snort
-A FORWARD -s 10.0.0.0/8 -j NFQUEUE --queue-num 0

The new rules are more precise to minimize even edge-case leaks:

# Block egress ICMP redirects
-A FORWARD -i tun+ -s 10.0.0.0/8 -p icmp --icmp-type 5 -j DROP
# Block NetBIOS/SMB/MSRPC
-A FORWARD -p tcp -m tcp -m multiport --dports 135,139,445 -j DROP
-A FORWARD -p udp -m udp -m multiport --dports 135,139,445 -j DROP
-A FORWARD -p tcp -m tcp -m multiport --sports 135,139,445 -j DROP
-A FORWARD -p udp -m udp -m multiport --sports 135,139,445 -j DROP
# Block leaked/spoofed LAN traffic (loopback, link-local, private LANs, CG-NAT)
# in all directions (ingress+egress by src/dst)
-A FORWARD -i tun+ -s 127.0.0.0/8 -j DROP    # Block loopback
-A FORWARD -i tun+ -d 127.0.0.0/8 -j DROP
-A FORWARD -o tun+ -s 127.0.0.0/8 -j DROP
-A FORWARD -o tun+ -d 127.0.0.0/8 -j DROP
-A FORWARD -i tun+ -s 169.254.0.0/16 -j DROP # Block link-local IPv4
-A FORWARD -i tun+ -d 169.254.0.0/16 -j DROP
-A FORWARD -o tun+ -s 169.254.0.0/16 -j DROP
-A FORWARD -o tun+ -d 169.254.0.0/16 -j DROP
-A FORWARD -i tun+ -s 192.168.0.0/16 -j DROP # Block RFC1918
-A FORWARD -i tun+ -d 192.168.0.0/16 -j DROP
-A FORWARD -o tun+ -s 192.168.0.0/16 -j DROP
-A FORWARD -o tun+ -d 192.168.0.0/16 -j DROP
-A FORWARD -i tun+ -s 172.16.0.0/12 -j DROP  # Block RFC1918
-A FORWARD -i tun+ -d 172.16.0.0/12 -j DROP
-A FORWARD -o tun+ -s 172.16.0.0/12 -j DROP
-A FORWARD -o tun+ -d 172.16.0.0/12 -j DROP
-A FORWARD -i tun+ -s 100.64.0.0/10 -j DROP  # Block CG-NAT range
-A FORWARD -i tun+ -d 100.64.0.0/10 -j DROP
-A FORWARD -o tun+ -s 100.64.0.0/10 -j DROP
-A FORWARD -o tun+ -d 100.64.0.0/10 -j DROP
# Block client-to-client traffic
-A FORWARD -i tun+ -s 10.66.0.0/16 -d 10.66.0.0/16 -j DROP
-A FORWARD -o tun+ -s 10.66.0.0/16 -d 10.66.0.0/16 -j DROP
-A FORWARD -i tun+ -s 10.67.0.0/16 -d 10.67.0.0/16 -j DROP
-A FORWARD -o tun+ -s 10.67.0.0/16 -d 10.67.0.0/16 -j DROP
-A FORWARD -i tun+ -s 10.70.0.0/16 -d 10.70.0.0/16 -j DROP
-A FORWARD -o tun+ -s 10.70.0.0/16 -d 10.70.0.0/16 -j DROP
# Send egress VPN traffic to snort
# (iptables continues past this rule, unless the packet matches a snort rule)
-A FORWARD -i tun+ -s 10.0.0.0/8 -j NFQUEUE --queue-num 0
# Allow traffic from VPN subnets to anywhere else (i.e., the internet), only if it's non-10.0.0.0/8
-A FORWARD -i tun+ -s 10.66.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -i tun+ -d 10.66.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -s 10.66.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -d 10.66.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -i tun+ -s 10.67.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -i tun+ -d 10.67.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -s 10.67.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -d 10.67.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -i tun+ -s 10.70.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -i tun+ -d 10.70.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -s 10.70.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT
-A FORWARD -o tun+ -d 10.70.0.0/16 ! -s 10.0.0.0/8 -j ACCEPT
# Finally, drop everything not already ACCEPTed
-A FORWARD -i tun+ -j DROP
-A FORWARD -o tun+ -j DROP

The rules used by the WireGuard namespaces are virtually the same, except the wg0 interface is used instead of tun+.


6. ovpn-dco module enabled

  • OpenVPN now uses the ovpn-dco kernel module for data channel offload. This pushes crypto work into the kernel where it’s faster and more efficient, especially on systems with lots of OpenVPN traffic. This also removes some of the overhead involved with large tun queues. See https://blog.openvpn.net/openvpn-data-channel-offload/ for more info on OpenVPN's DCO.

7. PowerDNS recursor upgrades (with DNSSEC validation)

  • Both the ad-blocking DNS servers (10.31.33.7 / 2001:db8::7) and the unfiltered DNS servers (10.31.33.8 / 2001:db8::8) now have dnssec=validate enabled. We tried enabling it years ago, but back then a lot of popular sites had bad DNSSEC implementations that would have failed validation. That's less of an issue now, so we're enabling it again.
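
In the Recursor's old-style config that's a one-line setting (the path varies by distro and Recursor version; shown here as an assumption of the typical layout):

# /etc/powerdns/recursor.conf
dnssec=validate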

8. AIDE for integrity checking (replacing Tripwire)

  • Tripwire was too much of a hassle to use and maintain. Now we use AIDE for file integrity checking. The AIDE database is generated after a node is freshly provisioned, and saved securely off-site. If a node ever reboots unexpectedly, we assume compromise and use a clean AIDE binary + the database to compare hashes and check for tampering.
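
The workflow looks roughly like this (paths are illustrative; the real database and clean binary live off-site/off-box):

aide --init --config=/etc/aide.conf              # build the baseline DB on a freshly provisioned node
mv /var/lib/aide/aide.db.new /mnt/offsite/       # then store the DB securely off-site
# After an unexpected reboot, using a known-clean AIDE binary + the off-site DB:
/mnt/clean/aide --check --config=/etc/aide.conf  # compare hashes and check for tampering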

9. Snort redesign

  • Snort used to be a single process receiving all VPN traffic. Now:
    • OpenVPN traffic goes through one Snort instance
    • Each WireGuard namespace has its own Snort process

This removes Snort as a single point of failure. If a namespace is getting hammered (DoS, botnet traffic, etc.), that spike in Snort's processing won't affect the performance in other namespaces.
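
For example, each inline instance can read from its own NFQUEUE (a sketch; the queue numbers, namespace name, and config path are assumptions):

# Snort for OpenVPN traffic in the default namespace (NFQUEUE 0, per the FORWARD rule shown earlier)
snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf -D
# One Snort per WireGuard namespace, each reading its own namespace-local queue
ip netns exec wgns0 snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf -D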

Also, unlike the old setup where Snort only dropped packets that matched a rule, the new setup blocks all traffic from a client for a short time if they trigger any abuse rule. This should help keep abuse complaints (and CAPTCHAs) down.

The Snort ruleset we use has always been very minimal to reduce false positives, so if some lazy malicious traffic is getting caught by such a simple ruleset, it usually means the user is also sending a lot of other malicious traffic that we don't have rules for. That traffic causes abuse complaints to come in, which gets our IPs blacklisted, which leads to more CAPTCHAs. Blocking all of the malicious user's traffic for a short time greatly reduces the abuse leaving the server.


10. IPv6 DNSCrypt

  • Our public DNSCrypt resolvers and relays now support both IPv4 and IPv6. As each node is migrated to the new setup, its existing DNSCrypt IPv4 endpoint will be joined by an IPv6 address. This brings full dual-stack support to our DNSCrypt relays/resolvers — useful for clients on IPv6-only networks or just looking to avoid CGNAT pathologies on IPv4.

  • The new IPv6 addresses will automatically show up in the resolver and relays lists as the nodes are migrated.

Changing exit IPs

For users who might want to change their exit IP without reconnecting, or who need their traffic to look like it’s coming from a residential ISP, or who want their exit IP to be a Tor one, there's a new changemyip endpoint for all of that.

  • Available at http://10.31.33.7/changemyip (for IPv4)
    and http://[2001:db8::7]/changemyip (for IPv6)

  • On the IPv4 changemyip page, you’ll also see additional options:

    • Residential (US) and Residential (UK): Routes your TCP traffic through a third-party residential IP located in either the US or UK. These proxies are initiated from the server side — the proxy provider never sees your real IP or even your VPN IP. If this feature is popular enough, we may add other regions in the future.

    • Tor: Routes all TCP IPv4 and IPv6 traffic through the local Tor daemon (the same instance used by our transparent .onion setup). This might be useful for those who need the additional layer of Tor between the VPN server and a website/service they're connecting to, without having to run Tor Browser themselves.

Residential proxies are IPv4-only. When enabled, all IPv6 traffic is automatically blocked to prevent any leaks.

The IPv6 changemyip page has fewer options (no residential proxy), but still supports switching exits and the Tor option.

If the IPv6 changemyip page shows an error that says that page is only available on 10.31.33.7, then you're on a legacy node that hasn't been migrated yet.

Like with port forwarding, changemyip entries are automatically cleared when you're no longer active:

  • OpenVPN users: entries are wiped immediately upon disconnect.

  • WireGuard users: entries are removed when you disconnect, but if you reconnect within 20 minutes and happen to land in the same namespace as before, your settings will still be active. If you're assigned a different namespace, you'll need to revisit /changemyip to reapply them.

What's next?

We're keeping an eye on OpenSSL 3.5, scheduled to go stable on April 8th, 2025, which introduces support for post-quantum cryptographic algorithms like ML-KEM, ML-DSA, and SLH-DSA. OpenVPN's certificate validation path has historically inherited new algorithm support directly from OpenSSL (that's how we were able to add secp521r1 CA certificates to the RSA instances years before OpenVPN officially supported them). If that holds for PQC, you'll likely see new multi-algorithm instances running with PQC-enabled trust chains in the near future. And if those certs can be reliably detected via u32/haproxy matching, they could get added to the port striping model.
