After getting fed up with TrueNAS (it borked itself for the third time and I would have had to set it up AGAIN), I decided to learn Ansible and write a playbook to set up my homeserver that way.
I wanted to share this playbook in case someone finds it useful for their own setup, and maybe someone has some tips on things I could improve.
This server will not be exposed to the public internet. If I want to access a service on it from outside my home network, I have WireGuard set up on my router so I can connect to my home network from anywhere.
Keep in mind that I’m relatively new to sysadmin stuff etc., so please don’t be too harsh 😅
If I read this correctly, Immich is set up entirely through Ansible, no docker compose. That’s fine, however if Immich changes something drastically in their setup topology, it’ll be more work for you to implement those changes. For services that use docker compose, you could use Ansible to deploy a compose file into a dir, say /opt/immich-docker, along with its requisite .env and other files, and then set it up to run via systemd. Then when you need to update it, it’s almost copy-paste from the upstream compose file into your Ansible repo.
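A rough sketch of what that could look like (the paths, file names and unit name are just placeholders I made up, and it assumes docker + docker compose are already installed):

```yaml
# Minimal sketch: docker-compose.yml and .env are assumed to sit next to the playbook/role
- name: Create the Immich compose directory
  ansible.builtin.file:
    path: /opt/immich-docker
    state: directory
    mode: "0755"

- name: Copy the compose file and its .env
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/opt/immich-docker/{{ item }}"
    mode: "0644"
  loop:
    - docker-compose.yml
    - .env

- name: Install a systemd unit that wraps docker compose
  ansible.builtin.copy:
    dest: /etc/systemd/system/immich.service
    mode: "0644"
    content: |
      [Unit]
      Description=Immich (docker compose)
      Requires=docker.service
      After=docker.service

      [Service]
      Type=oneshot
      RemainAfterExit=true
      WorkingDirectory=/opt/immich-docker
      ExecStart=/usr/bin/docker compose up -d --remove-orphans
      ExecStop=/usr/bin/docker compose down

      [Install]
      WantedBy=multi-user.target

- name: Enable and start the Immich unit
  ansible.builtin.systemd:
    name: immich.service
    enabled: true
    state: started
    daemon_reload: true
```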
Heck, you could do a pre-stage play where you delegate an ansible.builtin.get_url to localhost to download the compose file before doing the rest.
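Something like this pre-stage task could do it (the URL is an assumption about where the upstream compose file lives, so double-check it against the project’s docs, and consider pinning a tag/release instead of latest):

```yaml
# Sketch: fetch the upstream compose file onto the control node before the rest of the play
- name: Download the upstream compose file to the control node
  ansible.builtin.get_url:
    url: https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml  # assumed URL
    dest: "{{ playbook_dir }}/files/docker-compose.yml"
    mode: "0644"
  delegate_to: localhost
  run_once: true
```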
I wouldn’t do that, because I’d inevitably be picking up breaking changes without knowing it, and I’d have to fix them after the fact. Unless you’re pulling from a tag, I guess. Still, storing the compose file alongside the playbook feels more robust; you’re less likely to get surprises. I’m also working under the assumption that you want to write idempotent code so nothing breaks when you rerun it, which lets you run it on a schedule to make sure your config doesn’t drift too much.
Nice work!
I’m unsure, but I see secret.yml in there. Is that sensitive? If it is, you might want to rotate those secrets ASAP.
Nice, well done. I wish I could find the same for Debian.
It should be pretty easy to adapt for Debian. As far as I can see, the only thing you’d need to change is swapping the dnf module for the apt module.
If you want to make your playbooks/roles more universal, there’s a generic package module which will figure out what package manager to use based on the detected OS.
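For example (the package names here are just illustrative):

```yaml
# ansible.builtin.package delegates to dnf, apt, etc. based on the detected OS
- name: Install some basic tools
  ansible.builtin.package:
    name:
      - git
      - htop
    state: present
```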
Or, if that doesn’t fit your needs, you can add conditions to tasks (or blocks of tasks), like
when: ansible_os_family == "Debian"
and use that for tasks specific to a given Linux distro/family.
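For example, a Debian-only task could look like this (the packages variable is just a placeholder):

```yaml
# Only runs on Debian/Ubuntu hosts; a dnf equivalent could be guarded with
# when: ansible_os_family == "RedHat"
- name: Install packages (Debian family)
  ansible.builtin.apt:
    name: "{{ packages }}"
    state: present
    update_cache: true
  when: ansible_os_family == "Debian"
```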
Ansible will detect a lot of info about each host and make it available as facts. See for example https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html
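A quick way to peek at a few of them (assuming fact gathering is enabled, which it is by default):

```yaml
- name: Show a few gathered facts
  ansible.builtin.debug:
    msg: "{{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }} ({{ ansible_facts['os_family'] }})"
```

Running the setup module ad hoc (ansible somehost -m setup) dumps everything Ansible knows about a machine.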
I’m curious: how is using Ansible to deploy docker containers easier than just using docker compose?
Ansible makes sense for setting up the OS the way it needs to be (file systems, folder structure, etc.), but why make every container through ansible instead of just making a docker compose and maybe having ansible deploy that?
Even easier is probably to just run something like Portainer and run the compose file through there.
just making a docker compose and maybe having ansible deploy that?
That’s what I do. Why Ansible? Because it makes it easier to deploy the same service on different servers with slightly different configurations, for example when migrating from one server to another (see the sketch below). It also helps with having something I can easily back up (e.g. a git repo) that can rebuild my server(s) if needed.
That being said I’m still setting everything up with ansible.
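A rough sketch of the per-server-config idea, with made-up file and variable names: defaults live in group_vars/, per-host overrides (say a different upload path or port) live in host_vars/<hostname>.yml, and a template renders them into the service’s .env:

```yaml
# Render a per-host .env from a Jinja2 template; immich.env.j2 and
# /opt/immich-docker are hypothetical names for this example.
- name: Deploy the per-host .env for Immich
  ansible.builtin.template:
    src: immich.env.j2
    dest: /opt/immich-docker/.env
    mode: "0640"
```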