I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home, even though I have a decent understanding of how it works: although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

69 points

dude, I'm kinda you. I just jumped into Docker over the summer… feel stupid for not doing it sooner. There is just so much pre-created content, tutorials, you name it. It's very mature.

I spent a weekend containerizing all my home services… totally worth it, and easy as pi[hole] in a container!

25 points

Well, that wasn’t a huge investment :-) I’m in…

I understand I've got LOTS to learn. I think I'll start by using Docker to install something new that I've been looking at, and get comfortable with it on something my users (family…) are not yet relying on.

26 points

Forget docker run; docker compose up -d is the command you need on a server. And get familiar with a UI, since it makes your life much easier at the beginning: Portainer or Yacht in the browser, lazydocker in the terminal.
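To make that concrete, here is a minimal sketch of a compose file; the service, ports and paths are just illustrative (borrowing the Pi-hole example above):

# docker-compose.yml (illustrative sketch)
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/udp"
      - "8080:80"                  # web UI on host port 8080
    volumes:
      - ./pihole/etc:/etc/pihole   # config persisted outside the container
    restart: unless-stopped

Run docker compose up -d in the same directory and the service comes up; edit the file and run it again, and compose reconciles the changes.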

22 points

I would suggest docker compose before a UI to someone who likes to work via the command line.

Many popular Docker repositories also give their docker run instructions in compose format, so the learning curve is much less steep than it used to be for picking up docker and docker compose commands.
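For example, here is a typical docker run one-liner of the kind a project README might give, and the compose service it maps to (image, ports and paths are hypothetical):

docker run -d --name web -p 8080:80 -v /srv/web/data:/usr/share/nginx/html nginx:alpine

# the same thing as a compose service:
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - /srv/web/data:/usr/share/nginx/html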

5 points

Second this. Portainer + docker compose is so good that I now go out of my way to composerize everything so I don't have to run docker containers from the CLI.
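If you don't want to do that docker-run-to-compose translation by hand, the composerize tool can do it; a sketch of its CLI usage (it also exists as a web page):

npx composerize docker run -d --name web -p 8080:80 nginx:alpine
# prints the equivalent docker-compose.yml to stdout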

3 points
# docker compose up -d
no configuration file provided: not found
1 point

Dockge is amazing for people who see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like Portainer does. If you decide you don't like Dockge, you just go back to the CLI and run docker compose up -d --force-recreate.

4 points

If you are interested in a web interface for management, check out Portainer.

8 points

As a guy who was you before this summer:

Can you explain why you think it is better now, after you have 'contained' all your services? What advantages are there that I can't seem to figure out?

Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com

23 points

No more dependency hell from one package needing libsomething.so 5.3.1 while another service absolutely can only run with libsomething.so 4.2.0.

That, and knowing that when I remove a container, it's not leaving a bunch of cruft behind.

13 points

You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
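A sketch of what that migration looks like in practice; the hostname and paths are invented, and it assumes your compose file uses bind mounts under ./data:

# on the old box: stop cleanly, then archive the compose file and data
docker compose down
tar czf backup.tar.gz docker-compose.yml data/

# on the new box: fetch the archive and start everything back up
scp old-box:backup.tar.gz . && tar xzf backup.tar.gz
docker compose up -d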

5 points

Modularity, compartmentalization, reliability, predictability.

One piece of software needs MySQL 5, another needs MariaDB 7. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7, not 11.8, which is what everything in your package manager uses. A fifth service's install was only tested on the latest Ubuntu, and now you need to figure out which rpm gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that exists on the desktop version of the distro, but not the server version. And then you've got that weird program that insists on admin access to the database so it can create its own user. Since I don't trust it with that, I let it have its own database server running in Docker, and good riddance.

And so on and so forth… with Docker, not only is all this specified in excruciating detail, it's also the exact same setup on every install.

You don't have it failing on Arch because the maintainer of a library there decided to inline a patch that supposedly doesn't change anything, but somehow causes the program to segfault.

I can develop a service on Windows, test it, deploy it to my Kubernetes cluster, and I don't even have to worry about which machine to deploy it on; it just runs it on a machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my macOS friend wants to try it out, no problem. I can just give him a command, and it's running on his laptop. No worries about the right runtime or setting up the environment or libraries and all that.

If you’re an old Linux admin… This is what utopia looks like.

Edit: And recreating a container is almost like reinstalling the OS and the program. Since the image is static, removing and recreating the container wipes all filesystem cruft too and starts up a pristine new copy (of course except the specific files and folders you have chosen to persist between recreations).
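In compose terms, that recreate-versus-persist distinction looks like this (current Docker Compose flags):

docker compose down       # removes the containers; named volumes survive
docker compose up -d      # pristine new containers from the static image
docker compose down -v    # only this variant also deletes the named volumes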

2 points

It sounds very nice and clean to work with!

If I'm lucky enough to get the Raspberry Pi 5 at Christmas, I will try to set it up with Docker for all my services!

Thanks for the explanation.

37 points

It just makes things easier and cleaner. When you remove a container, you know there are no leftovers except the mounted volumes. I like it.

16 points

It’s also way easier if you need to migrate to another machine for any reason.

4 points

I use LXC for all the reasons most people use Docker: it's easy to spin up a new service, there are no leftovers when I remove a service, and everything stays separate. What I really like about LXC, though, is that you can treat containers like VMs: you start one up, attach, and install all your software as if it were a real machine. No extra tech to learn.
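For comparison, that VM-like workflow sketched with the LXD front end to LXC (the image and container name are just examples):

lxc launch ubuntu:22.04 myservice    # create and start a system container
lxc exec myservice -- bash           # attach and install software as on a real machine
lxc delete --force myservice         # tear it down with no leftovers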

1 point

Not completely true: you probably have to prune some images or volumes.
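The pruning in question; note that docker system prune does not touch volumes unless explicitly asked:

docker image prune      # dangling images only; add -a for all unused images
docker volume prune     # unused local volumes
docker system prune     # stopped containers, unused networks, dangling images, build cache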

28 points
Deleted by creator
1 point

For sure! Most seem to be reviewed at the random-git-repo level instead of being seriously tested and hardened. I really wish we had more of a source for reliable audits of containers and flatpaks: just someone trusted, or a collective, running trivy, clair, sonarqube, etc., posting the results publicly, and tools like podman/K3s/etc. shipping sane defaults for checking those results against containers on pull.
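As a taste of what such an audit looks like today, Trivy can scan any published image directly; the image name here is arbitrary:

trivy image nginx:latest    # lists known CVEs in the image's packages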

22 points

I would absolutely look into it. Many years ago, when Docker emerged, I did not understand it and called it "hipster shit". But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working, and they had no idea how to fix them.

Years passed and containers stayed, so I started to have a closer look at them and tried to understand them: what you can do with them and what you cannot. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.

Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The issues I had earlier, when everything ran directly on the box in Linux, were things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on a specific library where, after updating it, one app works and the other doesn't anymore because it would need an update too. Running apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.

Another big plus is the backups. I back up every docker-compose file plus the data for each container with Kopia. Since barely anything is installed directly in Linux, I can spin up a VM, restore my backups with Kopia, and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages, to get all my services up and running again when I have a hardware failure.

I really started to love Docker, especially in my Homelab.

Oh, and you would think everything being containerized means heavy resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.
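A sketch of that Kopia backup loop on the CLI, assuming a repository is already connected and each service keeps its compose file and bind-mounted data under one directory (paths invented):

kopia snapshot create /srv/docker/jellyfin    # compose file + data in one snapshot
kopia snapshot list /srv/docker/jellyfin      # verify what is backed up

On the restore side, kopia snapshot restore puts the directory back and docker compose up -d does the rest.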

1 point

The backup and easy setup on other servers is not necessarily super useful for a homelab, but it's a huge selling point at the enterprise level. You can make a VM template of your host with Docker set up in it, with your compose definitions but no actual data. Then spin up as many of those as you want, and they'll just download what they need to run the images. Copying VMs with all the images in them takes much longer.

And regarding the memory footprint, you can get that even lower using Podman, because it's daemonless. It is a little more work to set things up to auto-start, though, because you have to wire containers into systemd yourself. Still a great option, and it also works on Windows and can parse compose configs too. Just running Docker Desktop on Windows takes up about 1.5 GB of memory for me. But I still prefer Docker there because it has some convenient features.
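That systemd wiring is mostly one command; a sketch using Podman's unit generator (container name invented; newer Podman versions recommend Quadlet instead, but the idea is the same):

podman generate systemd --new --name myservice \
  > ~/.config/systemd/user/container-myservice.service
systemctl --user enable --now container-myservice.service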

1 point

It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?

2 points

You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network and share services. For external access, for example, I run only one instance of Traefik. Or one coturn for both Nextcloud and Synapse.
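Sharing one Traefik (or any service) across many stacks usually means one external network that every app joins; a compose sketch with invented names:

docker network create proxy    # created once, shared by all stacks

# then, in each app's docker-compose.yml:
services:
  app:
    image: nginx:alpine        # placeholder image
    networks:
      - proxy
networks:
  proxy:
    external: true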

20 points

Another old-school sysadmin who "retired" in the early 2010s.

Yes, use docker-compose. It’s utterly worth it.

I was intensely irritated at first that all of my old troubleshooting tools were harder to use, and I generally didn't trust it for ages, but after 5 years I wouldn't be without it.

5 points

I'm a little younger but in the same boat. There is some friction in having filesystems, ports and processes "hidden" from the host programs you typically rely on. But I need them so much less now that all my services are in Docker with exactly matching dependencies, instead of rolling my eyes about running two PostgreSQL servers in different versions or juggling Python/Node/Ruby versions with asdf.
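The two-PostgreSQL-versions case really is that simple in Docker; the passwords and host ports below are placeholders:

docker run -d --name pg13 -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres:13
docker run -d --name pg16 -e POSTGRES_PASSWORD=secret -p 5434:5432 postgres:16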

2 points

Yeah, so worth it! The first time I moved a service to a new box and realised all I had to do was copy the compose file and run docker-compose up -d… I was sold.

Now I'm moving everything to Docker Swarm, which is a new adventure. :-)
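For what it's worth, the Swarm version of the workflow stays close to what compose users already know (stack name invented):

docker swarm init                                   # once, on the manager node
docker stack deploy -c docker-compose.yml mystack   # deploy the compose file as a stack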
