I have an OpenWrt router at home which also acts as my home server. It's running a bunch of services in Docker (Jellyfin, Nextcloud, etc.).
I have set up an SSH tunnel between my OpenWrt router and my VPS and can access Jellyfin successfully.
I understand that I need to set up a reverse proxy to access multiple services and to have HTTPS.
But I'm confused about whether I should set up this reverse proxy on the VPS or on the router itself. Is nginx the easiest option? Should I add subdomains in Cloudflare for every service?
Please don't recommend VPNs, since they are all blocked where I live (WireGuard, Tailscale, OpenVPN, etc.). I'm limited to SSH tunneling only.
Thanks
I should also add something that lots of beginners miss.
The reverse proxy does not care what the domains you define in it actually resolve to. It receives the domain name as an HTTP header (Host), which is completely at the whim of the client. As long as that domain name matches one of the domains defined in the proxy, it's all good.
You can successfully connect to a proxy with a domain name defined in the domain owner’s DNS, or you can make up your own DNS that says whatever you want, or you can define any domain->IP association you want in your hosts file, or you can simply use curl or wget to connect directly to the proxy IP and lie about the domain in the HTTP headers without having it resolve in any DNS.
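For example (a hypothetical test, with 203.0.113.10 standing in for the proxy's IP and a made-up domain):

  curl -H 'Host: jellyfin.example.com' http://203.0.113.10/
  curl --resolve jellyfin.example.com:443:203.0.113.10 https://jellyfin.example.com/

The first request lies about the domain over plain HTTP; the second forces the name to resolve to the proxy IP so the same trick works over HTTPS. Neither one consults any public DNS.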
This means that yes, the proxy will happily serve your "private" *.local.example.com services to someone connecting from outside your LAN. All they have to do is figure out (or guess) your subdomain names. If you really want those services restricted to the LAN, you need to add IP restrictions in the proxy: default deny from all, plus an explicit exception for your LAN subnet.
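In raw nginx terms that restriction is just a couple of lines (a sketch, assuming your LAN is 192.168.8.0/24 and made-up cert paths; NPM exposes the same thing as an Access List):

  server {
      listen 443 ssl;
      server_name private.local.example.com;
      ssl_certificate     /etc/ssl/example/fullchain.pem;
      ssl_certificate_key /etc/ssl/example/privkey.pem;
      allow 192.168.8.0/24;   # explicit LAN exception
      deny  all;              # default deny for everyone else
      location / {
          proxy_pass http://127.0.0.1:8096;
      }
  }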
DNS is not security, it’s a public service that maps domains to IPs.
TLS is only security in the sense that it protects the connection from eavesdropping en route; it doesn't restrict access.
Thanks, I understand the theory behind this, but I can't get it to work.
I have a jellyfin.mydomain.com subdomain pointing at my VPS IP. On my home server I have Nginx Proxy Manager listening on 192.168.8.1:8998 (HTTP) and 8999 (HTTPS). From my home server I forward port 80 on the VPS to local port 8998 like this:
ssh -R 80:127.0.0.1:8998 root@vps-ip
Then in NPM I define a proxy host that sends any traffic for jellyfin.mydomain.com to localhost:8096 (Jellyfin).
But I can’t access jellyfin remotely.
Check all the steps individually then:
- check that the domain resolves to the VPS IP from the location where you're testing
- set up the tunnel to bypass the proxy (connect it directly to jellyfin)
- check that jellyfin works directly
- check the proxy directly, with curl pointed at the proxy and the "Host" header set to the domain (commands below)
- check that the VPS firewall didn’t block port 80
- normally you wouldn't be able to bind port 80 as a regular ssh user, but since you're logging in as root that part should work; do check that sshd's GatewayPorts option is enabled on the VPS, otherwise the remote forward binds only to the VPS's loopback interface and won't be reachable from outside
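Concretely, something like this (hypothetical commands; substitute your real domain and VPS address):

  dig +short jellyfin.mydomain.com     # should print the VPS IP
  curl -v http://127.0.0.1:8096/       # on the home server: Jellyfin itself
  curl -v -H 'Host: jellyfin.mydomain.com' http://127.0.0.1:8998/   # on the home server: NPM
  curl -v -H 'Host: jellyfin.mydomain.com' http://vps-ip/           # from outside: tunnel + NPM

Whichever step is the first to fail tells you where to look.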
If you are new I recommend Caddy v2.
It is by far the easiest.
Hold off on nginx until you're more experienced. (And even then, use linuxserver/swag instead of plain nginx.)
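For reference, a complete Caddyfile entry is just this (a minimal sketch, assuming Jellyfin on localhost:8096 and a made-up domain):

  jellyfin.example.com {
      reverse_proxy localhost:8096
  }

Caddy obtains and renews the TLS certificate for that name automatically.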
As someone who has used Caddy for years, I can't completely agree.
Caddy has some downsides (Nextcloud needs special setup, for example) and not everyone is comfortable writing a Caddyfile (or Caddy's underlying JSON config).
For someone new I would recommend Nginx Proxy Manager. It's easy to install with Docker and self-explanatory through the GUI.
I actually think NPM is more confusing. First, there are practically always ready-made configs for Caddy v2, often directly in the project's repo; a lot of devs use Caddy themselves. Second, NPM exposes a lot of additional options, which can confuse newcomers. With Caddy, all those extra options are invisible: you just write "reverse_proxy jellyfin" and that's it.
Completely agree. I haven't used NPM since I started self-hosting a few years ago; I was never able to get it to work right. I ended up using apache2, as it was pretty well documented everywhere. I moved to Caddy v1 when I found it, because the config is so easy to write and understand, and moved to v2 when it was released with no issues. Their forum is incredibly helpful if you run into anything. At this point it's a "relatively" mature platform, and most projects I've set up have an example config (usually just one or two lines, because that's all you need).
I know this isn't what you asked, but I would move any hosted services other than DNS off the router to a separate device.
There are pros and cons to keeping the proxy on the VPS or at home.
If you keep it at home you will have end-to-end encryption from the browser to your home server. Downside: you will not get the IP of the remote client, just the IP of the router, so you won't be able to do IP blocking or diagnostics.
By putting the proxy on the VPS and terminating HTTPS there, you can attach the remote client IP to requests (for example in an X-Forwarded-For header), but you have to keep the TLS certificate and private key on the VPS, so in theory someone who compromises the VPS could mess with your traffic.
A third option is to run a minimal passthrough proxy on the VPS that adds the remote IP to the HTTPS connections without decrypting them. To do this you need a proxy at both ends (home and VPS), and both must have the PROXY protocol enabled.
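On the VPS side that passthrough can be as small as this (a sketch using nginx's stream module, assuming the ssh tunnel ends on local port 8443):

  stream {
      server {
          listen 443;
          proxy_pass 127.0.0.1:8443;   # the tunnel down to the home proxy
          proxy_protocol on;           # prepend the real client IP
      }
  }

The proxy at home then accepts it with "listen 443 ssl proxy_protocol;" and can log or filter on the real client address.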
I would suggest doing just the proxy at home to start with, because it's simpler. If you want a GUI, use NPM (Nginx Proxy Manager); it's super easy. If you prefer a proxy where you write the config by hand, use Caddy.
After you have it working at home you can consider adding the one on the VPS and enabling the PROXY protocol. I'm not 100% sure Caddy supports it, though, so look into it; you may have to use nginx in both places if it doesn't.
You do not need to add subdomains in DNS, not unless you want to. You just need one domain with an A/AAAA record pointing at the VPS public IP; then you can cover all the subdomains with a wildcard CNAME pointing at the base domain. So A/AAAA example.com -> IP, and CNAME *.example.com -> example.com. Or you can put the A record in another domain and point the CNAME at that.
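So the whole DNS setup can be just two records (illustrative zone entries, using example.com and a documentation IP as stand-ins):

  example.com.     IN  A      203.0.113.10    ; the VPS public IP
  *.example.com.   IN  CNAME  example.com.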
When requesting TLS certificates it's the same thing: you don't ask for an explicit certificate for each subdomain, you just ask for one wildcard certificate for *.example.com. Aside from the obvious benefit of not having to add and remove certificates every time you add or remove subdomains, there's the less obvious benefit that bots don't learn your subdomain names (certificate applications are public records, via the Certificate Transparency logs).
The subdomains do not need to resolve in DNS for this to work: with the DNS-01 challenge, certbot verifies that you own the domain by using a DNS API key to create a temporary TXT record under example.com. As long as that works, it won't care what's actually defined in there.
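With certbot and a Cloudflare API token, for example, the request looks something like this (a hypothetical invocation, assuming the certbot-dns-cloudflare plugin is installed and the ini file holds the token):

  certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
    -d example.com -d '*.example.com'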
Thanks for the detailed reply. But I’m still confused. Do I need a separate ssh tunnel for every single service I run on my local server?
No, that's the magic of the reverse proxy: you can transport all HTTP services through just one port. It will route them to the correct service on your server based on the domain (which is passed in the HTTP Host header).
It won’t work for non-HTTP services, for those you’ll have to make a separate ssh tunnel per port.
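So, reusing the command from earlier, one forward covers every HTTP service, and only non-HTTP things need their own port (the 2222/22 pair is just a hypothetical example):

  ssh -R 80:127.0.0.1:8998 root@vps-ip    # one tunnel, all HTTP services via the proxy
  ssh -R 2222:127.0.0.1:22 root@vps-ip    # a non-HTTP service still needs its own forward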
The reverse proxy is going to have a config that says "for hostname foo.example.com I should forward traffic to this host:port".
If you set up the rproxy at home then ssh just needs to forward all port 443 traffic to the rproxy. It doesn't care about hostnames. The rproxy will then get a request with the hostname in the data and forward it to the appropriate target on behalf of the requester.
If you set up the rproxy on the VPS then yes, you would need to forward different ports to each backend target. This is because the rproxy would need to direct traffic to each target individually, and if your target is "localhost" (because that's where the ssh endpoint is) then you would differentiate each backend by port.
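For example (hypothetical ports; multiple -R forwards can share one ssh session):

  ssh -R 8096:127.0.0.1:8096 -R 8080:127.0.0.1:8080 root@vps-ip

The VPS rproxy would then map jellyfin.example.com to localhost:8096, nextcloud.example.com to localhost:8080, and so on.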