Server-side multihop

an image of a potato, because I couldn't think of an image for server-side multihop

Table of contents

  1. Introduction
  2. Instructions
  3. Technical details

Introduction

We added a new server-side multihop feature that's much easier to use and faster than the client-side multihop described at https://cryptostorm.is/multihop (OpenVPN) and https://cryptostorm.is/blog/multihopping-wireguard (WireGuard).

Client-side multihop is useful because it works with any provider, so you don't have to place all of your trust in a single one. But it's also slow, because you're creating two (or more) VPN tunnels on your end, and your routing setup grows more complex with every hop (unless you isolate each tunnel in namespaces or virtual machines). It's also very difficult to set up on non-PC devices.

Server-side multihop means the VPN server creates the second tunnel for you and forwards your traffic through it, so from your end (client-side), only a single VPN tunnel is needed. The servers are also in data centers that generally have much faster speeds than most residential ISPs, so it's faster than client-side multihopping. It also works on any device that has a web browser (and supports WireGuard or OpenVPN).

Instructions

Simply connect to the VPN as you normally would, then go to http://10.31.33.7/multihop in your web browser.

mh screenshot 1

Select the new server for the second hop, then click the "Enable multihop" button. The internet should instantly see your new IPv4 and IPv6 addresses in the region of whichever server you selected.

To turn it off or switch servers, click the "Disable multihop" button:

mh screenshot 2

And your session will revert to your original exit IP(s). As the yellow text says, please avoid high-traffic applications while multihop is enabled, since it doubles the bandwidth the servers use for your VPN session(s).

Also, port forwarding only applies to the first hop, not the second. And as with port forwarding, server-side multihop entries are removed when you disconnect from the VPN (immediately for OpenVPN, after 20 minutes for WireGuard).

Finally, /changemyip must be set to the default exit IP before you can use this multihop, and vice versa: multihop must be disabled before you can use /changemyip. That's because using both simultaneously can lead to weird combinations, like IPv6 being multihopped while IPv4 goes through Tor, which is difficult to revert correctly.

Technical details

Once server-side multihop is enabled, the entry server creates a per-destination WireGuard path to the chosen second-hop server, rewrites the client’s source addresses onto an internal overlay, and policy-routes only that rewritten traffic into the second hop. The second server then NATs that overlay traffic out through one of its normal exit IPs. Coordination between the two servers happens over a private inter-server control plane carried inside a separate WireGuard mesh. The backend calls can therefore use plain HTTP, since the transport is already encrypted and authenticated by WireGuard, and neither VPN clients nor the wider internet can reach that control-plane network.

One small note: every IP involved in this process is an internal address. These include the client’s internal VPN address, randomized internal session addresses used between servers, and the internal addresses used for the server control plane. The user’s real IP address is never part of this process and never appears in the multihop routing path.

The first step is creating the encrypted transport path between the entry server and the selected second-hop server. The function below is the actual logic that ensures the interface exists and configures the peer. Each possible destination gets its own WireGuard interface. The interface name and listening port are derived from a numeric identifier assigned to each server, which keeps everything deterministic across the entire network and avoids port collisions.

ensure_remote_if(){
  local remote_slug="$1" remote_pub="$2" remote_ip="$3" remote_oct="$4"
  local ifn port pk

  # Interface name and listen port are derived from the remote server's
  # numeric identifier, keeping them deterministic and collision-free.
  ifn="wg0mh-r${remote_oct}"
  port="$(( PER_REMOTE_PORT_BASE + remote_oct ))"
  pk="$(read_private_key)"

  # Create the per-destination interface only if it doesn't already exist.
  if ! ip link show "$ifn" >/dev/null 2>&1; then
    ip link add dev "$ifn" type wireguard
    ip link set dev "$ifn" up
  fi

  # (Re)apply the peer configuration; `wg set` is idempotent.
  wg set "$ifn" \
    listen-port "$port" \
    private-key <(echo "$pk") \
    peer "$remote_pub" \
    endpoint "${remote_ip}:${BASE_PORT}" \
    persistent-keepalive 25 \
    allowed-ips "0.0.0.0/0,::/0"
}
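The read_private_key helper used above isn't shown in the post. A minimal sketch of what it could look like, assuming the key lives in a file (the path and the WG_KEY_FILE variable are hypothetical, not the production layout):

```shell
# Hypothetical helper; the key-file path is an assumption.
WG_KEY_FILE="${WG_KEY_FILE:-/etc/wireguard/mh-private.key}"

read_private_key(){
  # Refuse to continue with a missing or empty key rather than handing
  # `wg set` an empty private key.
  if [ ! -s "$WG_KEY_FILE" ]; then
    echo "read_private_key: missing or empty ${WG_KEY_FILE}" >&2
    return 1
  fi
  cat "$WG_KEY_FILE"
}
```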

Once that encrypted path exists, the entry server assigns the session a temporary internal address. These addresses exist only between the two servers and are never visible (or reachable) to the user. They allow traffic from one specific client to be redirected without affecting anyone else connected to the same server.

alloc_session_v4(){
  local oct="$1"
  local host ip attempt

  # Pick a random free host address in this server pair's 10.1.<oct>.0/24
  # overlay range, retrying until one isn't already used by another session.
  for attempt in $(seq 1 400); do
    host="$(shuf -i 2-254 -n1)"
    ip="10.1.${oct}.${host}"

    if ! grep -Rqs "^SESSION_V4=${ip}$" "$STATE_DIR"; then
      echo "$ip"
      return 0
    fi
  done
  return 1  # pool exhausted; caller should abort the multihop request
}
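The policy-routing step further down also handles an IPv6 session address, so there is presumably a matching v6 allocator. A hedged sketch along the same lines as the v4 version, assuming a ULA-style internal overlay prefix (the fd00:1: prefix, the address format, and the function name are all assumptions):

```shell
# IPv6 counterpart to alloc_session_v4 (sketch; prefix is an assumption).
alloc_session_v6(){
  local oct="$1"
  local host ip attempt

  # Pick a random free host in this pair's assumed fd00:1:<oct>::/64 range.
  for attempt in $(seq 1 400); do
    host="$(shuf -i 2-65534 -n1)"
    ip="$(printf 'fd00:1:%x::%x' "$oct" "$host")"

    if ! grep -Rqs "^SESSION_V6=${ip}$" "$STATE_DIR"; then
      echo "$ip"
      return 0
    fi
  done
  return 1  # pool exhausted; caller should abort the multihop request
}
```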

The client’s traffic is then rewritten so that packets appear to originate from this temporary session address. Only the selected client’s packets are rewritten. Other users connected to the same entry server continue using the normal routing path.

add_snat_v4(){
  local ns="$1" src="$2" to="$3"

  # The "ovpn" pseudo-namespace means the root namespace; otherwise the
  # rule is installed inside the client's network namespace. Matching on
  # the client's /32 source means only this one session is rewritten.
  if [[ "$ns" == "ovpn" ]]; then
    iptables -t nat -I POSTROUTING 1 \
      -s "${src}/32" \
      -m comment --comment "mhop:${src}" \
      -j SNAT --to-source "${to}"
  else
    ip netns exec "$ns" iptables -t nat -I POSTROUTING 1 \
      -s "${src}/32" \
      -m comment --comment "mhop:${src}" \
      -j SNAT --to-source "${to}"
  fi
}

With the client now rewritten onto the internal overlay, policy routing ensures that only that client’s traffic is sent through the second-hop tunnel. A dedicated routing table is created and rules are added that match only the temporary session address.

policy_add(){
  # table, session_v4, session_v6, remote_endpoint_v4, mh_if
  local table="$1" s4="$2" s6="$3" remote_ep="$4" ifn="$5"
  local dev via
  read -r dev via < <(route_get_v4 "$remote_ep")

  # Prevent recursion: force traffic to the WG endpoint itself via main route.
  if [[ -n "${via:-}" ]]; then
    ip route replace table "$table" "${remote_ep}/32" via "$via" dev "$dev" >/dev/null 2>&1 || true
  else
    ip route replace table "$table" "${remote_ep}/32" dev "$dev" >/dev/null 2>&1 || true
  fi

  ip rule add pref "$MH_PRIO_V4" from "${s4}/32" table "$table" 2>/dev/null || true
  ip route replace table "$table" default dev "$ifn" >/dev/null 2>&1 || true

  if [[ -n "$s6" ]]; then
    ip -6 rule add pref "$MH_PRIO_V6" from "${s6}/128" table "$table" 2>/dev/null || true
    ip -6 route replace table "$table" default dev "$ifn" >/dev/null 2>&1 || true
  fi
}
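The route_get_v4 helper referenced above isn't shown in the post. A plausible sketch: ask the kernel how it currently reaches the remote endpoint, then pull the "dev" and "via" fields out of the answer. The split into a parsing function is illustrative, not the production code:

```shell
# Extract the "dev" and "via" fields from one line of `ip route get`
# output, e.g. "198.51.100.9 via 192.0.2.1 dev eth0 src 192.0.2.10".
parse_route_line(){
  local line="$1" dev via
  dev="$(printf '%s\n' "$line" | sed -n 's/.* dev \([^ ]*\).*/\1/p')"
  via="$(printf '%s\n' "$line" | sed -n 's/.* via \([^ ]*\).*/\1/p')"
  # via is empty for on-link routes; callers read it as an optional field.
  printf '%s %s\n' "$dev" "$via"
}

route_get_v4(){
  parse_route_line "$(ip -4 route get "$1" 2>/dev/null | head -n1)"
}
```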

After the entry server finishes preparing its side, it tells the destination server to activate the second half of the path. This request is sent to a backend endpoint over the internal server mesh. The request itself is plain HTTP, but the transport is inside WireGuard so it is already encrypted and authenticated. Since VPN clients cannot route to this internal network, the backend is unreachable from the outside world.

control_url_for_octet(){
  local oct="$1"
  echo "http://${CONTROL_NET_PREFIX}.${oct}/mh-backend"
}

resp="$(curl -fsS --max-time 8 \
  --data-urlencode "op=start" \
  --data-urlencode "a_slug=${this_slug}" \
  --data-urlencode "session_ip=${session_v4}" \
  --data-urlencode "session_ip6=${session_v6}" \
  "$url")"

When the second server receives the request, it installs NAT rules that translate the internal session address into one of its normal exit IPs. From that point onward the traffic path is: client → entry server → encrypted inter-server tunnel → exit server → internet.

backend_add_snat_v4(){
  local session_ip="$1"
  local exit exit_ip pr
  exit="$(pick_exit_v4)"
  exit_ip="${exit%:*}"
  pr="${exit#*:}"

  # UDP
  iptables -w -t nat -I POSTROUTING 1 -s "${session_ip}/32" -p udp -m udp --dport 1:65535 \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}:${pr}"

  # TCP
  iptables -w -t nat -I POSTROUTING 1 -s "${session_ip}/32" -p tcp -m tcp --dport 1:65535 \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}:${pr}"

  # ICMP
  iptables -w -t nat -I POSTROUTING 1 -s "${session_ip}/32" -p icmp \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}"

  echo "$exit_ip:$pr"
}

Source port range restrictions apply here too (the 'pr' variable in the code above), so the Port Shadowing protections described at https://cryptostorm.is/blog/port-striping-v3#I-I aren't affected by this server-side multihop.
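Since multihop entries are removed when the session ends, the exit server presumably tears these rules down again. A hedged sketch of what that could look like: each delete (-D) repeats the exact arguments of the corresponding insert (-I) in backend_add_snat_v4. The function name and the idea of passing the chosen exit back in are assumptions:

```shell
# Hypothetical teardown mirroring backend_add_snat_v4 above.
backend_del_snat_v4(){
  local session_ip="$1" exit_ip="$2" pr="$3"

  # UDP
  iptables -w -t nat -D POSTROUTING -s "${session_ip}/32" -p udp -m udp --dport 1:65535 \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}:${pr}"

  # TCP
  iptables -w -t nat -D POSTROUTING -s "${session_ip}/32" -p tcp -m tcp --dport 1:65535 \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}:${pr}"

  # ICMP
  iptables -w -t nat -D POSTROUTING -s "${session_ip}/32" -p icmp \
    -m comment --comment "mhop:${session_ip}" \
    -j SNAT --to-source "${exit_ip}"
}
```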

Earlier versions of this system used HTTPS over the public internet for the backend coordination between servers. That design was still reasonably secure: the HTTPS used post-quantum ciphers, and the backend endpoint name was intentionally difficult to guess. But it still meant exposing an endpoint to the public internet, and even though the backend only accepted POST requests and validated all inputs, a determined attacker could still attempt to discover the endpoint and flood it with requests.

Rather than maintain IP-based ACLs or additional authentication layers between dozens of servers running identical scripts, the system now uses a dedicated WireGuard control-plane mesh. Each server has a lightweight interface that connects to every other server. Even with forty peers this consumes negligible resources, while completely removing the need to expose backend endpoints to the public internet.
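A minimal sketch of how such a full mesh could be assembled (the wg0ctl interface name, the peers-file format, and port 51900 are all assumptions): one control-plane interface per server, with every other server added as a peer whose allowed-ips is just that server's single internal control-plane address.

```shell
# Hypothetical mesh setup; names, ports, and file format are assumptions.
build_control_mesh(){
  local peers_file="$1" ifn="wg0ctl" pub ep oct

  # Create the control-plane interface if it doesn't already exist.
  ip link show "$ifn" >/dev/null 2>&1 || {
    ip link add dev "$ifn" type wireguard
    ip link set dev "$ifn" up
  }

  # peers_file lines: "<public-key> <public-ip> <octet>"
  while read -r pub ep oct; do
    wg set "$ifn" peer "$pub" \
      endpoint "${ep}:51900" \
      persistent-keepalive 25 \
      allowed-ips "${CONTROL_NET_PREFIX}.${oct}/32"
  done < "$peers_file"
}
```

Restricting each peer's allowed-ips to a single /32 is what keeps the mesh a pure control plane: no user traffic can be routed into it even by misconfiguration.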

Some other parts of the infrastructure still use internal HTTPS endpoints with custom certificate authorities and strict IP-based ACLs. Those endpoints are already limited to specific internal systems and use post-quantum TLS, which is sufficient for their purposes, though they could eventually be migrated onto this mesh as well.

WireGuard itself is not post-quantum secure yet, but that is acceptable here because the control-plane traffic does not contain any sensitive information. The requests only coordinate session state between servers and never include user identities or real client IP addresses. Even in a hypothetical future where WireGuard’s cryptography could be broken, the traffic crossing this mesh wouldn't reveal anything useful about users.
