Programs with custom services, virtual environments, config files in different locations, programs creating data in different locations…

I know a lot of stuff runs in Docker these days, but how does a sysadmin remember what they have done on their system? Is it all about documenting and keeping your docs updated? Is there any other way?

(E.g. to install calibre-web I had to create a Python venv; the venv is owned by root in /opt, but the service that starts calibre-web in /etc/systemd/system needs the User=<user> specifier because calibre-web wants to write in a user's home directory, while the database folder needs to be owned by www-data because I want to read/write it from Nextcloud… So calibre-web is installed as a custom root(?) program, running in a virtual env, it can access a folder owned by someone else, but it still has to be executed as yet another user to store its data there…)
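Roughly, the unit ends up looking something like this (the paths, user/group names and entry point here are approximations, not my exact files):

```ini
# /etc/systemd/system/calibre-web.service (sketch only; adjust paths,
# users and the entry point to match the actual install)
[Unit]
Description=Calibre-Web
After=network.target

[Service]
Type=simple
# Run as the unprivileged user whose home directory calibre-web writes to,
# with www-data as the group so Nextcloud can share the library folder.
User=calibre
Group=www-data
WorkingDirectory=/opt/calibre-web
# Use the Python interpreter from the venv under /opt.
ExecStart=/opt/calibre-web/venv/bin/python /opt/calibre-web/cps.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```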

Leaving aside my current confusion about whether all of this is right in terms of security, syntax and ownership, there is no fucking way I will remember all of this a week from now… So… what do you do, if you do anything at all? Do you use flowcharts? Plain text documents? Both?

Essentially, how do you keep track?

2 points

Declarative configuration fixes this problem. You don’t really have to write down how to set something up, because the configuration is the description.

I use NixOS, so in my case all the stuff you described would be defined as Nix code in a separate Calibre module. I can enable and disable that module at will with a single option in my main config file.

I really recommend looking into immutable, declarative systems. I think NixOS is the most complete solution, but there are some others too. I have no experience with them, though.
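As a rough idea of what that looks like, enabling the module from the main config is about this much. Option names other than `enable` are from memory, so check the NixOS options search for the exact interface:

```nix
# A minimal sketch; nixpkgs ships a calibre-web module, but verify the
# exact option names before copying this.
{ config, pkgs, ... }:

{
  services.calibre-web = {
    enable = true;          # one switch adds or removes the whole service
    listen.port = 8083;
    user = "calibre";
    group = "www-data";     # e.g. so Nextcloud can reach the library
  };
}
```

Rebuild with `sudo nixos-rebuild switch`; set `enable = false` (or delete the block) and rebuild to tear it all back out.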

1 point

Code forges are great for management tasks. Host an internal Forgejo and create repos for your servers and services. Use issues to keep track of initial setup, config changes and upgrades. Keep a long-running issue for when you just want to record a little change but are too lazy to open a full issue for it. You can also store config in the git repo, and write docs as wiki pages for the more stable or important aspects of your systems.

1 point

I keep a documentation page in my wiki for every thing I set up - how I did it, what I ran into, how I fixed it, and where everything is. Reason being, when it comes time to upgrade or I have to install it again someplace else, I remember how I did it. Basically, every completed step gets copy-and-pasted into a page along with notes about it.

As for watching the file system, I have AIDE on all of my boxen (configured to run daily, but not configured to copy the new AIDE database over the old one automatically). That way, I can look at the output of an AIDE run and see what new files were created where (which would correspond to when I installed the new thing).
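The daily run itself is nothing fancy, basically a cron entry along these lines (paths and mail handling differ between distros, so treat this as a sketch; Debian and Red Hat ship their own wrappers):

```
# /etc/cron.d/aide (sketch)
# Check against the existing baseline every night and mail the report.
# The baseline is never replaced automatically: after reviewing a change,
# run `aide --init` and copy aide.db.new over aide.db by hand.
0 4 * * * root /usr/sbin/aide --check 2>&1 | mail -s "AIDE $(hostname)" root
```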

4 points

“Infrastructure as code” is what the strategy is typically called. You use one of the many tools for orchestrating configuration of hosts (Ansible, OpenTofu, Puppet, Saltstack, Chef, etc.). These allow you to provide configuration files and code for setting up your hosts in a central place. This place is typically a Git repo, allowing you to keep track of when which change was made.

Depending on the tool you use, you trigger applying the configuration on your dev PC, or there’s a hosted CI/CD server which automatically rolls out the changes when a new commit is pushed.
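To make that concrete, a role for something like the calibre-web example from the post might contain tasks along these lines. This is a sketch, not a tested role; the names, paths and the template file are made up:

```yaml
# roles/calibre-web/tasks/main.yml (sketch)
- name: Create the service user and put it in www-data
  ansible.builtin.user:
    name: calibre
    groups: www-data
    append: true

- name: Deploy the systemd unit from a template kept in the repo
  ansible.builtin.template:
    src: calibre-web.service.j2
    dest: /etc/systemd/system/calibre-web.service
    owner: root
    mode: "0644"
  notify: Restart calibre-web    # handler assumed to exist in the role

- name: Enable and start the service
  ansible.builtin.systemd:
    name: calibre-web
    enabled: true
    state: started
    daemon_reload: true
```

Every change to the role goes through a commit, so the Git history doubles as the record of what was changed and when.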

11 points

Don’t make a mess, and do the changes you need with Ansible, effectively making its code your documentation.

