I understand that people enter the world of self-hosting for various reasons. I am dipping my toes into this ocean to try to get away from privacy-offending centralised services such as Google, Cloudflare, AWS, etc.
As I spend more time here, I realise that it is practically impossible, especially for a newcomer, to set up any usable self-hosted web service without relying on these corporate behemoths.
I wanted to have my own little static website and run Immich alongside it, but I find that without Cloudflare, Google, and AWS, I run the risk of getting DDoSed or hacked. Also, since the physical server will be hosted at my home (to avoid AWS), there is a serious risk of infecting all devices at home as well (I'm currently reading about VLANs to avoid this).
Am I correct in thinking that avoiding these corporations is impossible (and should I make peace with that), or are there ways to circumvent these giants and still have a good experience self-hosting and using web services, even as a newcomer (all without draining my pockets too much)?
Edit: I was working from a lot of misconceptions and still have a lot to learn. Thank you all for your answers.
This is nonsense. A small static website is not going to be hacked or DDoSed. You can run it off a cheap ARM single-board computer on your desk, no problem at all.
What?
I’ve spun up a web server and within a day had so many hits on the router (thousands per minute) that performance tanked.
Yea, no, any exposed service will get hammered. Frankly I’m surprised that machine I set up didn’t get hacked.
Don’t leave SSH on port 22 open as there are a lot of crawlers for that, otherwise I really can’t say I share your experience, and I have been self-hosting for years.
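If you do have to leave SSH reachable from outside, key-only auth takes most of the sting out of those crawlers. A minimal sshd_config sketch (standard OpenSSH options, adjust to taste):

```
# /etc/ssh/sshd_config — keys only, no root logins
PasswordAuthentication no
PermitRootLogin no
```

With passwords off, the bots can hammer the port all day and get nowhere.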
Am I missing something? Why would anyone leave SSH open outside the internal network?
All of my services have SSH disabled unless I need to do something, and then I only do it locally, and disable as soon as I’m done.
Note that I don’t have a VPS anywhere.
I’ve been self-hosting a bunch of stuff for over a decade now, and have not had that issue.
Except for a Matrix server with open registration for a community, which people outside that community started to use.
I can’t say I’ve seen anything like that on the webservers I’ve exposed to the internet. But I suppose it could vary based on the IP you get, if it’s already a target for something.
Frankly I’m surprised that machine I set up didn’t get hacked.
How could it if all you had was a basic webserver running?
One aspect is how interesting you are as a target. What would a possible attacker gain by getting access to your services or hosts?
The danger of getting hacked is there, but you are not Microsoft, Amazon, or PayPal. Expect login attempts and port scans from actors who map out the internet. But I doubt anyone would spend much effort breaking into your hosts unless you make it easy (easy as in scripted automatic exploits and login attempts with known default passwords).
DDoS protection isn’t something a tiny self-hosted instance needs (at least in my experience).
Firewall your hosts, maybe use a reverse proxy, and only expose the necessary services. Use secure passwords (a different one for each service), and add fail2ban or the like if you’re paranoid. Maybe look into MFA. Use a DMZ (yes, VLANs could be involved here). Keep your software updated so that exploits don’t work. Have backups for when something breaks or gets broken.
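For the fail2ban part, a minimal jail is only a few lines; this sketch assumes a Debian-style box using the stock sshd jail:

```
# /etc/fail2ban/jail.local — ban an IP for an hour after 5 failures
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```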
In my experience the biggest danger to my services is my laziness. It takes steady low-level effort to keep the instances updated and running. (Yes, there are automated update mechanisms, unattended-upgrades for example, but there are also backwards-compatibility-breaking changes in the software that require manual intervention from me.)
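On Debian/Ubuntu, for example, the low-effort baseline is just:

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```

That keeps security patches flowing; it’s the breaking changes in the apps themselves that still need a human.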
+1 for the main risk to my service reliability being me getting distracted by some other shiny thing and getting behind on maintenance.
…maybe use a reverse proxy…
+1 post.
I would definitely suggest a reverse proxy. Caddy should make it trivial in this use case.
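For OP’s case, something like this would be close to the whole config (hostnames made up, and I’m assuming Immich’s default port of 2283); Caddy fetches and renews the TLS certs on its own:

```
# Caddyfile — static site plus Immich behind one entry point
example.com {
    root * /srv/www
    file_server
}
photos.example.com {
    reverse_proxy 127.0.0.1:2283
}
```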
cheers,
I have a dozen services running on a myriad of ports. My reverse proxy setup lets me map hostnames to those services and expose only 80/443 to the web, and an attacker now needs to know a hostname instead of just finding an exposed port. IPS signatures can help identify blind hostname scans, and the proxy can be configured to permit only designated sources. Reverse proxies are also commonly used for SSL offloading, which permits clear-text observation of traffic between the proxy and the backing host. There are plenty of other use cases for them too; don’t think of it as some one-trick on/off access-gateway tool.
It may not add security in and of itself, but it certainly adds the ability to layer on a little extra. Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles. Use a firewall to expose only certain ports and destinations to your origin hosts. Install a single wildcard cert and easily cover any subdomains you set up. There are even nginx configuration files out there that will block URLs based on regex matches for suspicious strings. All of this (and probably a lot more I’m missing) adds some level of layered security.
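To give a flavour of the regex blocking, something like this drops the obvious probes on a static site (the pattern list is illustrative, not exhaustive):

```
# nginx — a static site has no business serving these
location ~* \.(php|asp|aspx|jsp)$ {
    return 444;  # nginx-specific: close the connection without responding
}
```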
A reverse proxy is used to expose services that don’t run on exposed hosts. It does not add security but it keeps you from adding attack vectors.
They usually provide load balancing too, also not a security feature.
Edit: in other words, what he’s saying is true, and equivalent to “RAID isn’t backup”.
Drink less paranoia smoothie…
I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.
Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.
DDoS and hacking are like taxes: you should be so lucky as to have to worry about them, because that means you’re wildly successful. Worry about getting there first because that’s the hard part.
You don’t have to be successful to get hit by bots scanning for known vulnerabilities in common software (e.g. WordPress), but OP won’t have to worry about that if they keep everything up to date. However, keeping things up to date is just as necessary when renting a VPS from said centralised services.
Well, he specified a static website, which rules out WP, but yes. If your host accepts posts (in the generic sense, not necessarily the HTTP verb POST), that raises tons of other questions, which frankly were already well addressed when I made my post.
Use any old computer you have lying around as a server. Use Tailscale to connect to it, and don’t open any ports in your home firewall. Congrats, you’re self-hosting and your risk is minimal.
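Roughly, assuming Tailscale is already installed on both ends (names here are made up):

```
# on the server
sudo tailscale up

# from any other device on your tailnet
ssh you@homeserver   # MagicDNS hostname; no ports open on the router
```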
Exactly what I do, and it works like a dream. I had a VPS and nginx to proxy my domain to it, but got rid of them because I really had no use for them; the Tailscale method worked so well.
I’ve been thinking of trying this (or using Caddy instead of nginx) so I could get Nextcloud running on an internal server but still have an external entry point (spousal approval). But after setting up the subdomain, starting Caddy, and watching how many times that subdomain got scanned from various IPs all over the world, I figured that’s not a good plan. And I’m a nobody and don’t promote my domain anywhere.