Hey again! I’ve made progress on my NAS project and decided to go for a DIY NAS. I can’t wait for the parts to arrive!
Now I’m struggling a bit to choose an OS. I’m starting with 2×10 TB HDDs + a 1 TB NVMe SSD. I plan to use one HDD for parity and to add more disks later.
I plan to use this server purely as a NAS because I will be getting a second more powerful server some time next year. But in the meantime, this NAS is a big upgrade over my rpi 4, so I will run some containers or VMs.
I don’t want to go with TrueNAS because I don’t want to use ZFS (my RAM is limited, and I’m not sure I can add drives of different sizes). I’ve read that btrfs is the second-best option for a NAS, so I may use that.
Unraid seemed like the perfect fit. But the more I read about it, the more I wonder if I shouldn’t switch to Proxmox.
What I like about Unraid is the ability to add a disk without worrying about its size. I don’t care much about the applications Unraid provides, and since docker-compose is not fully supported, I’m afraid I won’t be able to do things I could have done easily with a docker-compose.yml. I also like that it’s easy to share a folder. What I don’t like about Unraid is the cache system and the mover. I understand why the system works this way, but I’m not a fan.
I’ve asked myself if I needed instant parity for all my data and if I should put everything in the array.
The thing is, for some of my data I don’t care about parity. For instance, I’m fine with only backing up my application data and having parity for the backup. For my TV shows I don’t care about parity or backup, while I want both for my photos.
After some more research, I found mergerfs and SnapRAID. They feel more flexible and fix the cache/mover issue from Unraid, although I’m not sure SnapRAID can run with only 2 disks.
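For reference, SnapRAID does run with only two disks (one data, one parity), and mergerfs only becomes useful once there are two or more data disks to pool. A minimal sketch, with all paths and disk names being illustrative examples:

```shell
# /etc/snapraid.conf for a minimal 1 data + 1 parity setup (example paths):
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid.content
#   content /mnt/data1/snapraid.content
#   data d1 /mnt/data1
#
# mergerfs /etc/fstab entry, useful later once there are several data disks:
#   /mnt/data*  /mnt/pool  fuse.mergerfs  cache.files=off,category.create=mfs,fsname=pool  0 0

# SnapRAID parity is computed on demand, not instantly; run a periodic sync:
snapraid sync          # update parity to match the current data
snapraid scrub -p 5    # verify 5% of the array against parity
```

The trade-off versus a real-time array: files changed since the last `snapraid sync` are unprotected until the next sync, which is exactly why it suits data that changes rarely (media) better than application data.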
If I go with Proxmox, I think I would use OpenMediaVault to set up shares.
Is anyone using something like this? What are your recommendations?
Thanks!
Unraid is the absolute goat 🐐, been in production for years at my house and I’m debating deploying a second dedicated machine. 11/10 recommend.
Unraid “supports” docker compose: you can install and use it, but it won’t integrate with how Unraid handles docker containers.
All Unraid does is make Docker more accessible for the average user; in the end, the container template just constructs a docker run command.
So you could use Portainer to manage stacks through a web UI, or install compose and SSH into the Unraid server every time.
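For example (the service, paths, and flash-drive location are illustrative assumptions), a stack kept somewhere persistent and driven over SSH with stock docker compose:

```shell
# Write a compose file to persistent storage, then bring the stack up.
# Service name, image, and paths below are just examples.
mkdir -p /boot/config/compose/media
cd /boot/config/compose/media
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - /mnt/user/media:/media:ro
    restart: unless-stopped
EOF
docker compose up -d   # runs outside Unraid's template system
```

Containers started this way won’t get Unraid’s template/icon integration in the web UI, which is the trade-off mentioned above.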
https://xigmanas.com/xnaswp/download/
For a pure NAS this is my go-to. It serves drives, supports multiple file systems, and has a few extras like a basic web server and rsync built into a nice embedded system. The OS can run on a USB stick and manage the data drives separately.
On the ZFS front, a common misconception is that it eats a ton of RAM. What it actually does is use idle RAM for the ARC, which caches the most frequently and/or most recently used data to avoid pulling it from disk. That RAM gets dumped and made available to the system on demand if for whatever reason the OS needs it. Idle RAM is wasted RAM, so it’s a nice thing to have available.
Indeed, ZFS uses a percentage of RAM for cache, and that amount is configurable. ZFS has an easier CLI; I’d recommend it for a NAS. And allow me to say that I’m not sure the comparison is really between TrueNAS SCALE and Proxmox. This thread reminds me of the usual distro wars where people don’t know about desktop environments (KDE, GNOME, Xfce, etc.): you can use ZFS in Proxmox ;O
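As a concrete sketch of capping that cache on Linux (the 2 GiB value is just an illustration, tune it for your box):

```shell
# Persistently cap the ZFS ARC at 2 GiB (value is in bytes):
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # Debian/Ubuntu: rebuild the initramfs so it applies at boot

# Or change it at runtime without a reboot:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```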
Indeed, it wasn’t clear to me that this is how it works. That seems better than the cache/mover system from Unraid.
Another reason I didn’t consider ZFS is that I thought it only works with disks of the same capacity. As I will be adding disks over time, I think I would be wasting disk space.
The disk sizes also don’t have to match. Creating a drive array with ZFS is a two-phase thing:
First, you create a series of ‘vdevs’, which can be single disks or mirrored pairs.
Then you combine the vdevs into a ‘zpool’ regardless of their sizes, and it all becomes one big pool. It acts somewhere between RAID and disk spanning: it reads and writes to all vdevs, but once any given vdev is full it just stops going there. I currently have vdevs sized 12, 8, 6, and three 4 TB for a total of 38 TB of space, minus formatting loss.
That’s an example of how I have it laid out; it would be ideal to have them all the same size to balance it better, but it’s not required.
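The two phases above can be sketched like this (device names and sizes are made-up examples):

```shell
# Phase 1+2 in one command: two mirrored vdevs of different sizes
# combined into a single pool, e.g. a 2x12 TB mirror plus a 2x4 TB mirror:
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Grow the pool later by adding another vdev of any size:
zpool add tank mirror /dev/sde /dev/sdf

zpool list -v tank   # shows per-vdev capacity and usage
```

Note the catch for the disk-adding plan: you grow a pool by adding whole vdevs; you can’t remove a disk from a raidz vdev or shrink one after the fact.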
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
Fewer Letters | More Letters
---|---
LTS | Long Term Support software version
LXC | Linux Containers
NAS | Network-Attached Storage
RAID | Redundant Array of Independent Disks for mass storage
SSD | Solid State Drive mass storage
SSH | Secure Shell for remote terminal access
ZFS | Solaris/Linux filesystem focusing on data integrity
[Thread #510 for this sub, first seen 13th Feb 2024, 18:35]
Use Debian; there’s no need for those bloated systems. Use BTRFS as the filesystem, then set up LXD/Incus to run containers and VMs, and you’ll get a very Proxmox-like experience without the bloat or the nagging to buy licenses.
Some people also like Cockpit which comes with a nice UI, has basic virtual machine management features and has a Samba plugin to manage users and shares.
I’m going to disagree with this. I’ve set up everything on one Debian server before, and it became unwieldy to keep in check when you’re trying new things, because you can end up with all kinds of dependencies and leftover files from shit that you didn’t like.
I’m sure this can be avoided with forethought and more so if you’re experienced with Debian, but I’m going to assume that OP is not some guru and is also interested in trying new things, and that’s why he’s asked this question.
Proxmox is perfectly fine. For many years I had an OMV VM for my file server and another server for my containers. If you don’t like what you’ve done it is much easier to just remove one VM doing one thing and switch to some other solution.
end up with all kinds of dependencies and leftover files from shit that you didn’t like.
I’ve been using Debian for 10 years and never had this problem. Apt keeps everything very neat and tidy.
Are you downloading random .deb packages off the internet and installing them manually?
I’m going to disagree with this. I’ve set up everything on one Debian server before, and it became unwieldy to keep in check when you’re trying new things, because you can end up with all kinds of dependencies and leftover files from shit that you didn’t like.
Run your new things inside Docker or LXD/Incus and destroy the containers/VMs when they’re no longer required. I don’t get your comment.
Proxmox is perfectly fine. For many years I had an OMV VM for my file server and another server for my containers. If you don’t like what you’ve done it is much easier to just remove one VM doing one thing and switch to some other solution.
And you can use LXD/Incus for that as described. LXD replaces Proxmox; the difference is that it isn’t an entire OS with its own quirks but a simple thing you install on Debian. It lets you create, move, and remove VMs and containers, and it also has a web UI for those interested. The irony here is that if you’re using containers in your Proxmox setup, you’re already using LXC containers, a technology effectively created by the same people who made LXD.
But as I said, even if you don’t want LXD/Incus you can also use Cockpit, it also provides a WebUI you can use to create and manage your VMs.
Look, I never said you were wrong, man. Clearly you probably have a lot more experience than I do, which is why I said what I said: I personally believe Proxmox is way easier for a casual like me. That’s all.
Edit: Also, though it doesn’t really matter, I don’t use LXC.
I read a 2021 Ars Technica article on BTRFS and it was extensively critical, to put it mildly. Have things changed since then? I’m down with LXD or Incus but I don’t know if that’s the file system for me.
People like to complain a lot about things that aren’t particularly true. The default file system in Fedora and SUSE is BTRFS, and I think that speaks volumes; others (including me) have been using BTRFS in production for serious stuff for years now. Even Synology is all in on BTRFS: https://www.synology.com/en-global/dsm/Btrfs
There are also advantages to using LXD/Incus with BTRFS, mostly taking advantage of the subvolume and snapshot features so VMs and containers run faster (each machine gets a subvolume that works like a classic partition and is better/faster than a plain folder of files). Snapshots are also a good feature, as you can use them to back up and roll back your base system or the containers/VMs as well.
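A quick sketch of what subvolumes and snapshots look like on the command line (`/mnt/data` and the subvolume names are example assumptions):

```shell
# Create a subvolume for a container's root:
btrfs subvolume create /mnt/data/containers/web

# Take a read-only, copy-on-write snapshot before an upgrade (near-instant):
btrfs subvolume snapshot -r /mnt/data/containers/web \
  /mnt/data/snapshots/web-$(date +%F)

# Roll back by making a writable copy of a snapshot:
btrfs subvolume snapshot /mnt/data/snapshots/web-2024-02-13 \
  /mnt/data/containers/web-restored

btrfs subvolume list /mnt/data   # enumerate subvolumes and snapshots
```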
Don’t I need mergerfs and snapraid with BTRFS?
Also, it’s not clear what LXD/Incus replaces. Is it Proxmox, or Proxmox + OMV?
Don’t I need mergerfs and snapraid with BTRFS?
No. It’s just an FS like any other… actually, it’s a proper filesystem, unlike ext4. Why would you need those tools? https://wiki.tnonline.net/w/Btrfs/Profiles
Also, it’s not clear what LXD/Incus replaces. Is it Proxmox, or Proxmox + OMV?
It replaces Proxmox and can run both containers and VMs, but it isn’t an entire OS, it’s just something you can install on a clean Debian system (from the Debian repository) and enjoy it.
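A sketch of what that install looks like (assuming Debian 12, where the package sits in bookworm-backports; instance names are examples):

```shell
# Install Incus and do a minimal default initialization:
apt install -t bookworm-backports incus
incus admin init --minimal

# A system container and a full VM, side by side:
incus launch images:debian/12 ct1
incus launch images:debian/12 vm1 --vm

incus list   # shows both instances and their addresses
```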
Thanks, I’ve seen that Incus has an online demo, which is nice; I’ll give it a try.
For BTRFS, if I understand correctly, I can get a similar result to mergerfs if I use the SINGLE profile. But as RAID5/6 is unstable, it seems I would still need SnapRAID, or am I missing something?
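For what it’s worth, this is what a multi-device BTRFS filesystem with SINGLE data looks like (device names are examples):

```shell
# One BTRFS filesystem spanning two devices: data stored SINGLE
# (no redundancy, no parity), metadata mirrored across both devices:
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
btrfs filesystem usage /mnt/pool   # shows per-device allocation
```

One caveat worth noting: unlike mergerfs, this is a single filesystem spread across disks rather than independent per-disk filesystems, and SnapRAID expects one filesystem per data disk. So the common pairing is one BTRFS filesystem per drive, pooled with mergerfs, with SnapRAID parity on top.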