- Nextcloud + OnlyOffice
- *arr media management suite (Lidarr, Sonarr, etc.)
- Gitea
- Vaultwarden
- PiHole
- Jellyfin
- Wiki-js
- Lemmy
- Prometheus/Grafana/Loki
Currently everything is containerised, running in a Debian VM on a Rocky Linux QEMU/KVM hypervisor. Initially I was using Rocky + Podman, but inevitably hit something I wanted to run that just straight up needed Docker and was too much effort to get working otherwise. 🤷
Hardware is a circa-2012 gaming machine with a few ZFS raids for all of my Linux ISOs. It lives an extremely tortured existence and longs for the sweet release of death.
Toying with the idea of migrating it all to an on-prem virtualised Kubernetes cluster, using Helm charts to manage the stacks and NFS mounts for persistent storage, because I hate myself (and to upskill, I guess).
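If I do go down that road, the NFS side could look roughly like this; a minimal sketch with a made-up server IP, export path, and sizes:

```yaml
# Hypothetical NFS-backed PersistentVolume + claim for one service's config dir.
# Server IP, export path, and capacity are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-config
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10          # the ZFS box exporting over NFS
    path: /tank/appdata/jellyfin
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # bind to the static PV above, not a dynamic class
  volumeName: jellyfin-config
  resources:
    requests:
      storage: 5Gi
```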
What about you?
- The Lounge (IRC Client)
- Blocky (local DNS server with ad-blocking)
- Tailscale (VPN mesh between clients and other servers)
- Cloudflare-Tunnel (to access some local services directly from the internet via my own domain)
- traefik (reverse proxy + TLS for all my services; rough label sketch at the end of this comment)
- Authelia (auth server for services that don’t have their own authentication)
- borgmatic (borg backup automation for container data, pushed to borgbase.com)
- paperless-ngx (document management system)
- Plex (media server)
- Tautulli (stats and tracking for Plex)
- mosquitto (MQTT server)
- zigbee2mqtt (service to manage my Zigbee devices)
- Homebridge (service to get z2m devices into Homekit)
- Homeassistant (home automation)
- Prometheus (collect stats from several services above)
- telegraf (more stats collection + server metrics collection)
- Grafana (for some dashboards that I didn’t want to create in HA)
- miniflux (RSS reader)
- Linkding (bookmark manager)
- Atuin (shell history sync server)
- uptime-kuma (monitor some external servers + my local internet connection by pinging healthchecks.io)
- redis (for paperless and some own projects)
- postgres (for miniflux, atuin and some own projects)
Everything is running in containers on an Unraid server:
- 24 TB usable (16 TB parity drive)
- 1 TB NVMe cache drive
- Intel i3-12100T
With disks at idle/spun down, it consumes roughly 25W.
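For anyone curious what the traefik wiring looks like: each service just gets a handful of compose labels and traefik picks it up. A rough sketch, using linkding as the example; the domain, network, and resolver names are placeholders for my real config:

```yaml
# Sketch of one service wired into traefik via compose labels.
# Hostname, network name, and certresolver name are placeholders.
services:
  linkding:
    image: sissbruecker/linkding:latest
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.linkding.rule=Host(`links.example.com`)
      - traefik.http.routers.linkding.entrypoints=websecure
      - traefik.http.routers.linkding.tls.certresolver=letsencrypt
      # forward-auth via Authelia; assumes the middleware is defined
      # elsewhere (e.g. on the Authelia container's own labels)
      - traefik.http.routers.linkding.middlewares=authelia@docker
      # linkding listens on 9090 inside the container
      - traefik.http.services.linkding.loadbalancer.server.port=9090

networks:
  proxy:
    external: true
```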
I have a very similar setup, minus the IoT and metric-related services. I'm managing the services with Docker Compose on unRAID.
What's the reasoning behind using Docker Compose on unRAID instead of the built-in Docker implementation?
Personally I use it for a couple of services that would be awkward to run separately (e.g. deemix + lidarr; rough sketch below). I'm also planning on moving all of my services with databases over to Compose. I do lose a couple of other QoL features, but I still prefer being able to start/stop all related containers at once instead of manually having to stop each one.
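The grouped stack is basically this, stripped down. A sketch only: the image names, ports, and paths are from memory/placeholders, so check upstream before copying:

```yaml
# Lidarr + deemix in one compose stack, so `docker compose up/down`
# manages both together. Host paths are unRAID-style placeholders.
services:
  lidarr:
    image: lscr.io/linuxserver/lidarr:latest
    volumes:
      - /mnt/user/appdata/lidarr:/config
      - /mnt/user/music:/music
      - /mnt/user/downloads:/downloads   # shared with deemix below
    ports:
      - 8686:8686
    restart: unless-stopped

  deemix:
    image: registry.gitlab.com/bockiii/deemix-docker:latest
    volumes:
      - /mnt/user/appdata/deemix:/config
      - /mnt/user/downloads:/downloads
    ports:
      - 6595:6595
    restart: unless-stopped
```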
For a couple of reasons:
- Store and version configs in git. I realize unRAID provides flash drive backup (also using git), but this lets me spin up my setup on another machine that may not be running unRAID. It helped recently when I switched away from Proxmox.
- Group services with their dependencies (e.g. postgres, redis, etc.). This also helps isolate service groups from each other, avoiding port conflicts on common DB ports, for example (there's a sketch at the end of this comment). The downside is you may end up with more than one database, redis instance, etc.
Note: there is an unRAID Docker Compose plugin, so you still get easy-access buttons to start, stop, view logs, and edit services.
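To make the grouping point concrete, here's a minimal sketch of one self-contained stack, using miniflux as the example: the app plus its own postgres, with the database never publishing 5432 to the host. Names and credentials are placeholders:

```yaml
# One self-contained stack: miniflux plus its own postgres.
# The db is only reachable inside this compose network by service
# name, so another stack can run its own postgres with no 5432
# port conflict on the host.
services:
  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - 8080:8080
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=miniflux
    volumes:
      - ./postgres:/var/lib/postgresql/data
    restart: unless-stopped
```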