Hello everyone, my company (our department has around 150+ developers, machine learning engineers, and researchers) is currently considering switching from Windows to GNU+Linux for company devices (i.e. the machines we use in our daily work), and we are currently in the phase of collecting requirements. I’m not in charge of the process or involved in the decision phase, but as an enthusiast I’m curious about it. We handle data and other sensitive resources, so the environment should remain managed by the IT department (what’s possible to install, VPNs, firewalls, updates and similar). What do companies generally use in this kind of scenario? I’m assuming they generally go with either Canonical or Red Hat, but are there alternatives? Are there ways to do something that works across distributions by using Flatpak or the Nix package manager? What are your experiences?
openSUSE can also be considered.
YaST has so many awesome functions.
Btw, if enterprise support is needed, SLE Desktop (SUSE Linux Enterprise) is probably a better fit than openSUSE.
Putting on my enterprise hat, most companies are going to want some form of paid support. Canonical and SUSE are probably the two serious players in that space now that Red Hat has developed some form of paranoia (though even from an enterprise perspective, I really don’t trust Canonical). If it were up to me I’d push for SUSE. I think Nix is super great and fantastic, but I’m a DevOps engineer, not a support desk engineer, so I view deployments and support differently than someone who has to answer questions about how to open email.
In terms of ease of management and deployment, NixOS might be an interesting option. It can be completely configured through a single file, so deployment and updates become straightforward to manage in a centralised fashion.
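To make that concrete, here is a minimal sketch of what that single file could look like. This is just an illustration, not anyone’s actual setup: the package choices, policy options, and the `alice` user are all made up.

```nix
# Hypothetical /etc/nixos/configuration.nix for a managed workstation (a sketch).
{ config, pkgs, ... }:

{
  # Everything users get is declared here; a rebuild applies it as one atomic switch.
  environment.systemPackages = with pkgs; [
    git
    firefox
    vscode   # swap this for geany and rebuild to change editors fleet-wide
  ];
  nixpkgs.config.allowUnfree = true;  # needed because vscode is unfree

  # Declarative policy examples: unattended upgrades, firewall on.
  system.autoUpgrade.enable = true;
  networking.firewall.enable = true;

  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];  # sudo access; drop this group on locked-down machines
  };
}
```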
I was thinking about that myself, but is there a way to remotely update configuration.nix and rebuild if the requirements change? For example, if some dev wants to use Geany instead of VSCode and the admin is like “Yeah, why not”, how would that be implemented?
Sure. Pick any orchestration solution you like; Ansible, for example. You’d just change the file that is rolled out for that machine, either by editing some central, per-machine file or its Ansible file, then tell Ansible to update the file remotely and run nixos-rebuild switch on that machine. A few seconds later the tool is installed. If you replaced vscode with geany, vscode would be uninstalled, too.
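As a rough sketch of such a play (the `nixos_workstations` host group and the `configs/<hostname>/` layout are assumptions, not a known setup):

```yaml
# Hypothetical Ansible play: push the per-machine configuration.nix and rebuild.
- hosts: nixos_workstations
  become: true
  tasks:
    - name: Deploy this machine's NixOS configuration
      ansible.builtin.copy:
        src: "configs/{{ inventory_hostname }}/configuration.nix"
        dest: /etc/nixos/configuration.nix
        owner: root
        group: root
        mode: "0644"

    - name: Apply it (installs newly declared packages, removes dropped ones)
      ansible.builtin.command: nixos-rebuild switch
```

With that, swapping vscode for geany is a one-line change in the file that gets copied, plus a run of the play; the old editor disappears on the next switch.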
I would consider a git repo of a few standard configurations and switch users onto whichever config includes what they need, or possibly maintain individual configs per user. Your orchestration would need to reference the git repo, so that when you need to add software XYZ to everyone’s machine you don’t have to re-run all of the individual playbooks and deal with the hassle of remembering who needed which playbook run.
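A sketch of how such a repo could be structured: each machine’s file just imports a shared baseline plus a role file, so “add XYZ for everyone” is an edit to one file. The file names, role split, and hostname below are hypothetical.

```nix
# Hypothetical per-machine file in the shared git repo.
{ ... }:

{
  imports = [
    ./base.nix       # baseline everyone gets (editor, VPN client, policies)
    ./roles/ml.nix   # extras for the ML/research role
  ];

  networking.hostName = "ws-042";
}
```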
There are numerous ways to approach this.
Canonical:
- Cheap finance-wise
- Low upfront cost skill-wise
- Medium ongoing cost skill-wise
- Occasionally breaks without being touched
Red Hat:
- Medium cost finance-wise
- Low upfront cost skill-wise
- Medium ongoing cost skill-wise
- Red Hat is not what it used to be. Has QA been sacked?
And, of course, my favourite 😁
Gentoo:
- Cheap finance-wise
- High upfront cost skill-wise
- Medium ongoing cost skill-wise
- Only breaks when multiple warnings are ignored
In my experience, though, you’ll probably end up on Ubuntu. Because everyone knows it, right?
Yep, Ubuntu was mentioned as an example in a few meetings and I think they will end up doing that. And it’s fine: give me literally anything other than Windows and I will be happy. However, I’m a spoiled kid, so I also don’t really want Ubuntu.
The disappointing thing about Ubuntu is that the Ubuntu in everyone’s minds is very different from the Ubuntu that’s actually getting installed. Snap is atrocious on the desktop. Random inconsistencies across a fleet of a few hundred identical desktops. A dodgy campaign to onboard everyone onto Ubuntu Pro (I don’t mind them charging for a service, but the way they do it is disgusting). Incredibly inflexible if you want more than just the barebones desktop.
Every day there’s something annoying popping up.
> so the environment should remain managed by the IT department

For the domain you are describing (developers, machine learning people, researchers, on Linux), having personal machines maintained by “IT” is not a typical setup. An ML engineer is more IT-savvy than your average IT department, especially when they use Linux for work.
An alternative to a centrally managed solution can be a reasonable set of rules that allows more freedom for the individual. For example, you could provide every new joiner with a pre-configured laptop, but let them take over ‘root’ privileges if they want. It’s not so hard to keep a Linux machine secure if the user doesn’t intentionally screw it up. You don’t need IT to run updates; they happen automatically. You don’t need a personal firewall; there are no open ports by default anyway.
IT may want to install an agent that helps detect security breaches (or misconduct), or remote support tooling. Of course, you may require a VPN. You may require that sensitive data does not leave the server; you should. A personal device is an attack vector and Linux has security holes all the time, but remote exploits are nowhere near as common as in a Windows, Outlook, and AD world.
The upside of allowing users to tinker with their machines is that IT only needs to provide a reasonable starting point, not a perfect solution for everyone. If you expect support from IT, you will want to standardize the distribution and maybe some applications or tools, but you don’t need everyone in the company to be on the same version of Solitaire.