The solution has been found; see the “Solution” section for the full write-up and config files.
Initial Question
What I’m looking to do is route WAN traffic from my personal wireguard server through a gluetun container, so that I can connect a client to my personal wireguard server and have my traffic still go through the gluetun VPN as follows:
client <–> wireguard container <–> gluetun container <–> WAN
I’ve managed to set both the wireguard and gluetun containers up in a docker-compose file and made sure they both work independently (I can connect a client to the wireguard container, and the gluetun container successfully connects to my paid VPN for WAN access). However, I cannot route traffic from the wireguard container through the gluetun container.
Since I’ve managed to set both up independently, I don’t believe there is an issue with the docker-compose file I used for setup. What I believe to be the issue is either the routing rules in my wireguard container or the firewall rules on the gluetun container.
I tried following this linuxserver.io guide to get the following wg0.conf template for my wireguard container:
[Interface]
Address = ${INTERFACE}.1
ListenPort = 51820
PrivateKey = $(cat /config/server/privatekey-server)
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE
# Adds fwmark 51820 to any packet traveling through interface wg0
PostUp = wg set wg0 fwmark 51820
# If a packet is not marked with fwmark 51820 (not coming through the wg connection) it will be routed to the table "51820".
# PostUp = ip -4 rule add not fwmark 51820 table 51820
# Creates a table ("51820") which routes all traffic through the gluetun container
PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.100
# If the traffic is destined for the subnet 192.168.1.0/24 (internal) send it through the default gateway.
PostUp = ip -4 route add 192.168.1.0/24 via 172.22.0.1
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth+ -j MASQUERADE
Along with the default firewall rules of the gluetun container
Chain INPUT (policy DROP 13 packets, 1062 bytes)
pkts bytes target prot opt in out source destination
15170 1115K ACCEPT 0 -- lo * 0.0.0.0/0 0.0.0.0/0
14403 12M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
1 60 ACCEPT 0 -- eth0 * 0.0.0.0/0 172.22.0.0/24
Chain FORWARD (policy DROP 4880 packets, 396K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 360 packets, 25560 bytes)
pkts bytes target prot opt in out source destination
15170 1115K ACCEPT 0 -- * lo 0.0.0.0/0 0.0.0.0/0
12716 1320K ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT 0 -- * eth0 172.22.0.100 172.22.0.0/24
1 176 ACCEPT 17 -- * eth0 0.0.0.0/0 68.235.48.107 udp dpt:1637
1349 81068 ACCEPT 0 -- * tun0 0.0.0.0/0 0.0.0.0/0
When I run the wireguard container with this configuration I can successfully connect my client; however, I cannot connect to any website or ping any IP.
During my debugging process I ran tcpdump on the docker network that both containers are in, which showed me that my client is successfully sending packets to the wireguard container, but that no packets were sent from my wireguard container to the gluetun container. The closest I got to this was the following line:
17:27:38.871259 IP 10.13.13.1.domain > 10.13.13.2.41280: 42269 ServFail- 0/0/0 (28)
Which I believe is telling me that the wireguard server is trying, and failing, to send packets back to the client.
I also checked the firewall rules of the gluetun container and got the following results:
Chain INPUT (policy DROP 13 packets, 1062 bytes)
pkts bytes target prot opt in out source destination
18732 1376K ACCEPT 0 -- lo * 0.0.0.0/0 0.0.0.0/0
16056 12M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
1 60 ACCEPT 0 -- eth0 * 0.0.0.0/0 172.22.0.0/24
Chain FORWARD (policy DROP 5386 packets, 458K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 360 packets, 25560 bytes)
pkts bytes target prot opt in out source destination
18732 1376K ACCEPT 0 -- * lo 0.0.0.0/0 0.0.0.0/0
14929 1527K ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT 0 -- * eth0 172.22.0.100 172.22.0.0/24
1 176 ACCEPT 17 -- * eth0 0.0.0.0/0 68.235.48.107 udp dpt:1637
1660 99728 ACCEPT 0 -- * tun0 0.0.0.0/0 0.0.0.0/0
This shows that the firewall on the gluetun container is dropping all FORWARD traffic, which (as I understand it) is exactly the sort of traffic I’m trying to route. What is odd is that I don’t see any of those packets in the tcpdump of the docker network.
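As an aside, a tiny shell helper (my own sketch, not part of gluetun) can pull the chain policy out of that iptables -vL output so a DROP is easy to spot while debugging:

```shell
# Sketch: extract the FORWARD chain policy from `iptables -vL` output,
# so a DROP policy is easy to spot when debugging.
forward_policy() {
    sed -n 's/^Chain FORWARD (policy \([A-Z]*\).*/\1/p'
}

# Usage sketch (container name assumed):
# docker exec gluetun iptables -vL -t filter | forward_policy
```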
Has anyone successfully set this up or have any indication on what I should try next? At this point any ideas would be helpful, whether that be more debugging steps or recommendations for routing/firewall rules.
While there have been similar posts on this topic (Here and Here) the responses on both did not really help me.
Solution
Docker Compose Setup
My final working setup uses the following docker-compose file:
networks:
  default:
    ipam:
      config:
        - subnet: 172.22.0.0/24

services:
  gluetun_vpn:
    image: qmcgaw/gluetun:latest
    container_name: gluetun_vpn
    cap_add:
      - NET_ADMIN # Required
    environment:
      - VPN_TYPE=wireguard # I tested this with a wireguard setup
      # Setup Gluetun depending on your provider.
    volumes:
      - {docker config path}/gluetun_vpn/conf:/gluetun
      - {docker config path}/gluetun_vpn/firewall:/iptables
    sysctls:
      # Disables ipv6
      - net.ipv6.conf.all.disable_ipv6=1
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 172.22.0.100

  wireguard_server:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wg_server
    cap_add:
      - NET_ADMIN
    environment:
      - TZ=America/Detroit
      - PEERS=1
      - SERVERPORT=3697 # Optional
      - PEERDNS=172.22.0.100 # Set this to the Docker network IP of the gluetun container to use your VPN's DNS resolver
    ports:
      - 3697:51820/udp # Optional
    volumes:
      - {docker config path}/wg_server/conf:/config
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      default:
        ipv4_address: 172.22.0.2
    restart: unless-stopped
Once you get both docker containers working you still need to edit some configuration files.
Wireguard Server Setup
After the wireguard container setup, you need to edit {docker config path}/wg_server/conf/templates/server.conf to the following:
[Interface]
Address = ${INTERFACE}.1
ListenPort = 51820
PrivateKey = $(cat /config/server/privatekey-server)
# Default from the wg container
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE
## Add this section
# Adds fwmark 51820 to any packet traveling through interface wg0
PostUp = wg set wg0 fwmark 51820
# If a packet is not marked with fwmark 51820 (not coming through the wg connection) it will be routed to the table "51820".
PostUp = ip -4 rule add not fwmark 51820 table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
# Creates a table ("51820") which routes all traffic through the vpn container
PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.100 table 51820
# If the traffic is destined for the subnet 192.168.1.0/24 (internal) send it through the default gateway.
PostUp = ip -4 route add 192.168.1.0/24 via 172.22.0.1
# Default from the wg container
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth+ -j MASQUERADE
The above config is a slightly modified version of the setup from this linuxserver.io tutorial.
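Only a few addresses in those added PostUp lines are site-specific (the gluetun container IP, the docker network gateway, and your LAN subnet). Purely as a convenience sketch of my own (the helper name is made up, not from the tutorial), the lines can be regenerated for your addresses:

```shell
# Sketch: print the policy-routing PostUp lines for your own addresses.
# Arguments: gluetun container IP, docker network gateway, home LAN subnet.
# The function name is hypothetical; it just regenerates the lines above.
wg_routing_lines() {
    vpn_ip="$1"; gw="$2"; lan="$3"; table=51820
    echo "PostUp = wg set wg0 fwmark $table"
    echo "PostUp = ip -4 rule add not fwmark $table table $table"
    echo "PostUp = ip -4 rule add table main suppress_prefixlength 0"
    echo "PostUp = ip -4 route add 0.0.0.0/0 via $vpn_ip table $table"
    echo "PostUp = ip -4 route add $lan via $gw"
}

# wg_routing_lines 172.22.0.100 172.22.0.1 192.168.1.0/24
```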
Gluetun Setup
If you’ve set up your gluetun container properly, the only thing you have to do is create {docker config path}/gluetun_vpn/firewall/post-rules.txt containing the following:
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
iptables -t filter -A FORWARD -d 172.22.0.2 -j ACCEPT
iptables -t filter -A FORWARD -s 172.22.0.2 -j ACCEPT
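If you prefer to script the file creation, a heredoc works in one shot; the directory below is a stand-in assumption for your actual {docker config path}/gluetun_vpn/firewall mount:

```shell
# Sketch: create post-rules.txt in one go. The directory is a stand-in;
# point it at your actual {docker config path}/gluetun_vpn/firewall mount.
dir="./gluetun_vpn/firewall"
mkdir -p "$dir"
cat > "$dir/post-rules.txt" <<'EOF'
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
iptables -t filter -A FORWARD -d 172.22.0.2 -j ACCEPT
iptables -t filter -A FORWARD -s 172.22.0.2 -j ACCEPT
EOF
```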
These commands should be run automatically once you restart the gluetun container. You can test the setup by running iptables-legacy -vL -t filter from within the gluetun container. Your output should look like:
Chain INPUT (policy DROP 7 packets, 444 bytes)
pkts bytes target prot opt in out source destination
27512 2021K ACCEPT all -- lo any anywhere anywhere
43257 24M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
291 28191 ACCEPT all -- eth0 any anywhere 172.22.0.0/24
# These are the important rules
Chain FORWARD (policy DROP 12276 packets, 2476K bytes)
pkts bytes target prot opt in out source destination
17202 8839K ACCEPT all -- any any anywhere 172.22.0.2
26704 5270K ACCEPT all -- any any 172.22.0.2 anywhere
Chain OUTPUT (policy DROP 42 packets, 2982 bytes)
pkts bytes target prot opt in out source destination
27512 2021K ACCEPT all -- any lo anywhere anywhere
53625 9796K ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 c6d5846467f3 172.22.0.0/24
1 176 ACCEPT udp -- any eth0 anywhere 64.42.179.50 udp dpt:1637
2463 148K ACCEPT all -- any tun0 anywhere anywhere
And iptables-legacy -vL -t nat, which should look like:
Chain PREROUTING (policy ACCEPT 18779 packets, 2957K bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 291 packets, 28191 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 7212 packets, 460K bytes)
pkts bytes target prot opt in out source destination
# This is the important rule
Chain POSTROUTING (policy ACCEPT 4718 packets, 310K bytes)
pkts bytes target prot opt in out source destination
13677 916K MASQUERADE all -- any tun+ anywhere anywhere
The commands in post-rules.txt are a more precise version of @CumBroth@discuss.tchncs.de’s solution in the comments.
Gluetun likely doesn’t have the proper firewall rules in place to enable this sort of traffic routing, simply because it’s made for another use case (using the container’s network stack directly with network_mode: "service:gluetun").
Try to first get this setup working with two vanilla Wireguard containers (instead of Wireguard + gluetun). If it does, you’ll know that your Wireguard “server” container is properly set up. Then replace the second container that’s acting as a VPN client with gluetun and run tcpdump again. You likely need to add a postrouting masquerade rule on the NAT table.
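For the tcpdump step, the trick is finding the right host-side interface: Docker names a bridge network’s interface “br-” plus the first 12 characters of the network ID. This helper is a sketch of mine, and the network name and container IPs in the usage line are assumptions taken from my compose file:

```shell
# Hypothetical helper (not a real docker subcommand): derive the host-side
# bridge interface name for a compose network. Docker names these
# "br-" + the first 12 characters of the network ID.
bridge_for() {
    printf 'br-%.12s\n' "$(docker network inspect -f '{{.Id}}' "$1")"
}

# Usage sketch (network name and container IPs assumed from my setup):
# sudo tcpdump -ni "$(bridge_for wghomenet)" host 172.22.0.5 and host 172.22.0.101
```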
Here’s my own working setup for reference.
Wireguard “server” container:
[Interface]
Address = <address>
ListenPort = 51820
PrivateKey = <privateKey>
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = wg set wg0 fwmark 51820
PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.101 table 51820
PostUp = ip -4 rule add not fwmark 51820 table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
PostUp = ip route add 192.168.16.0/24 via 172.22.0.1
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del 192.168.16.0/24 via 172.22.0.1
#peer configurations (clients) go here
and the Wireguard VPN client that I route traffic through:
# Based on my VPN provider's configuration + additional firewall rules to route traffic correctly
[Interface]
PrivateKey = <key>
Address = <address>
DNS = 192.168.16.81 # local Adguard
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE #Route traffic coming in from outside the container (host/other container)
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
[Peer]
PublicKey = <key>
AllowedIPs = 0.0.0.0/0
Endpoint = <endpoint_IP>:51820
Note the NAT MASQUERADE rule.
Thank you for the reply! I’ve been busy the last couple of days so I just got around to looking back at this.
I tested out your advice and set up a wireguard container with the MASQUERADE NAT rule, and it worked! However, when I tried it again with the gluetun container I’m still running into issues, but there is progress!
With my setup before, when I connected my client to the wireguard network I would get a “no network” error. Now when I try to access the internet the connection times out. Still not ideal, but at least it’s a different error than before!
With the MASQUERADE NAT rule in place, running tcpdump on the docker network shows that at least the two containers are talking to each other:
17:04:29.927415 IP 172.22.0.2 > 172.22.0.100: ICMP echo request, id 4, seq 9823, length 64
17:04:29.927466 IP 172.22.0.100 > 172.22.0.2: ICMP echo reply, id 4, seq 9823, length 64
but I still cannot get any internet access through the wireguard tunnel.
When exploring around the gluetun config I confirmed that the MASQUERADE rule was actually set:
Chain PREROUTING (policy ACCEPT 2933 packets, 316K bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 839 packets, 86643 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 12235 packets, 741K bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 11408 packets, 687K bytes)
pkts bytes target prot opt in out source destination
2921 284K MASQUERADE 0 -- * eth+ 0.0.0.0/0 0.0.0.0/0
I think that the issue may be that the default firewall rules of the gluetun container block all traffic besides the VPN traffic in the main filter table:
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
2236 164K ACCEPT 0 -- lo * 0.0.0.0/0 0.0.0.0/0
11914 12M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
87 15792 ACCEPT 0 -- eth0 * 0.0.0.0/0 172.22.0.0/24
Chain FORWARD (policy DROP 381 packets, 22780 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 76 packets, 5396 bytes)
pkts bytes target prot opt in out source destination
2236 164K ACCEPT 0 -- * lo 0.0.0.0/0 0.0.0.0/0
8152 872K ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT 0 -- * eth0 172.22.0.100 172.22.0.0/24
1 176 ACCEPT 17 -- * eth0 0.0.0.0/0 213.152.187.229 udp dpt:1637
212 12843 ACCEPT 0 -- * tun0 0.0.0.0/0 0.0.0.0/0
I tried adding simple iptables rules such as iptables -A FORWARD -i tun+ -j ACCEPT (and the same with eth+ as the interface), but with no luck.
If you think you can help, I’ll be down to try out other solutions; or if you need more information, I can post it when I have time. If you don’t think this will be an easy fix, I can revert back to the wireguard-wireguard container setup since that worked. I tried to get this setup working so I could leverage the gluetun kill-switch/restart.
I think you already have a kill-switch (of sorts) in place with the two Wireguard container setup, since your clients lose internet access (except to the local network, since there’s a separate route for that on the Wireguard “server” container) if any of the following happens:
- “Client” container is spun down
- The Wireguard interface inside the “client” container is spun down (you can try this out by execing wg-quick down wg0 inside the container)
- Or even if the interface is up but the VPN connection is down (try changing the endpoint IP to a random one instead of the correct one provided by your VPN service provider)
I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself got restarted/updated.
But anyway, I went ahead and messed around on a VPS with the Wireguard+Gluetun approach and I got it working. I am using the latest versions of the linuxserver.io Wireguard container and Gluetun at the time of writing. There are two things missing in the Gluetun firewall configuration you posted:
- A MASQUERADE rule on the tunnel, meaning the tun0 interface.
- Gluetun is configured to drop all FORWARD packets (filter table) by default. You’ll have to change that chain policy to ACCEPT.

Again, I’m not a networking expert, so I’m not sure whether or not this compromises the kill-switch in any way, at least in any way that’s relevant to the desired setup/behavior. You could potentially set a more restrictive rule to only allow traffic coming in from <wireguard_container_IP>, but I’ll leave that up to you. You’ll also need to figure out the best way to persist the rules through container restarts.
First, here’s the docker compose setup I used:
networks:
  wghomenet:
    name: wghomenet
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
    volumes:
      - ./config:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=<your stuff here>
      - VPN_TYPE=wireguard
      # - WIREGUARD_PRIVATE_KEY=<your stuff here>
      # - WIREGUARD_PRESHARED_KEY=<your stuff here>
      # - WIREGUARD_ADDRESSES=<your stuff here>
      # - SERVER_COUNTRIES=<your stuff here>
      # Timezone for accurate log times
      - TZ=<your stuff here>
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wghomenet:
        ipv4_address: 172.22.0.101

  wireguard-server:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard-server
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=<your stuff here>
      - INTERNAL_SUBNET=10.13.13.0
      - PEERS=chromebook
    volumes:
      - ./config/wg-server:/config
      - /lib/modules:/lib/modules #optional
    restart: always
    ports:
      - 51820:51820/udp
    networks:
      wghomenet:
        ipv4_address: 172.22.0.5
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
You already have your “server” container properly configured. Now for Gluetun:
I exec into the container: docker exec -it gluetun sh.
Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE.
And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT.
Note on the last command: in my case I used iptables-legacy because all the rules were already defined there (iptables gives you a warning if that’s the case), but your container’s version may vary. I saw different behavior on the testing container I spun up on the VPS compared to the one I have running on my homelab.
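If you’d rather not exec in interactively, the same two changes can be driven from the host. This function is only a sketch (the container name gluetun is an assumption), and with no argument it just prints the commands instead of running them:

```shell
# Sketch: apply (or preview) the two Gluetun changes.
# Pass "docker exec gluetun" to actually run them inside the container;
# with no argument the commands are only printed for review.
gluetun_forward_fix() {
    run="${1:-echo}"
    $run iptables-legacy -t nat -A POSTROUTING -o tun+ -j MASQUERADE
    $run iptables-legacy -t filter -P FORWARD ACCEPT
}

gluetun_forward_fix                      # preview only
# gluetun_forward_fix "docker exec gluetun"
```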
Good luck, and let me know if you run into any issues!
EDIT: The rules look like this afterwards:
Output of iptables-legacy -vL -t filter:
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- lo any anywhere anywhere
16698 14M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
1 40 ACCEPT all -- eth0 any anywhere 172.22.0.0/24
# note the ACCEPT policy here
Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- any lo anywhere anywhere
13394 1518K ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 dac4b9c06987 172.22.0.0/24
1 176 ACCEPT udp -- any eth0 anywhere connected-by.global-layer.com udp dpt:1637
916 55072 ACCEPT all -- any tun0 anywhere anywhere
And the output of iptables -vL -t nat:
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- any any anywhere 127.0.0.11
# note the MASQUERADE rule here
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- any any anywhere 127.0.0.11
312 18936 MASQUERADE all -- any tun+ anywhere anywhere
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- any any anywhere 127.0.0.11 tcp dpt:domain to:127.0.0.11:39905
0 0 DNAT udp -- any any anywhere 127.0.0.11 udp dpt:domain to:127.0.0.11:56734
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- any any 127.0.0.11 anywhere tcp spt:39905 to::53
0 0 SNAT udp -- any any 127.0.0.11 anywhere udp spt:56734 to::53
I tried out your solution and it worked! I thought it was an iptables/firewall issue on the gluetun end, but I didn’t know which table the packets were going through.
There’s also a way to set persistent iptables rules via gluetun; the docs are here.
Thank you for your help! I’ll clean up my configs, and post my working configs and setup process!