I figured most of you could relate to this.
I was updating my Proxmox servers from 7.4 to 8. First one went without problems. That second one though… yeah, not so much… I THINK it’s GRUB but I’m not sure yet.
Now my Nextcloud, NAS, main reverse proxy, and half my DNS are down, and there’s no time to fix it before work. Lovely 🤕 Well, I now know what I’ll be doing when I get home.
Out of morbid curiosity, what are some of y’all’s self-hosting horror stories?
Had a mini heart attack when Cryptpad told me my username or password was wrong, even though I use a password manager. There are important documents in there and there is no “forgot password” function.
Turns out I had saved the wrong password for Cryptpad in Vaultwarden. Thanks to Vaultwarden’s password history, I found the right one and got logged in again 😅
Used to have a Dell R710 in a rack in the garage. The rack doesn’t have a door, but it was cheap and fits in the space like a glove.
One day I was down there with the wife and kids sorting some stuff out at one end of the garage. Look over and see that the little one had pulled all the disks out of the server.
Managed to recover all my VMs that were running ext4 with a quick fsck. My main data storage VM that was using btrfs just locked me out, with no way to mount it even read-only. Since then I won’t touch btrfs with a barge pole.
I’ve been carrying an OMV VM since Proxmox 5. During one of the major version updates, usrmerge made a mess and forced me to reinstall the boot disk and re-hook everything up; not ideal, but it works. Updated again recently, and my disks started falling into read-only mode. Tried the usual: rebooting into single user mode, fsck’ing the volume, remounting, etc., and “hey look, it came back online!” only for it to go back into read-only mode again. Since it was a virtual disk on a RAID6 array, and nothing else was breaking, it really boggled my mind. It kept doing that despite still having a couple TB of free space available… or at least so I thought.
Turns out:
I had allocated a 19TB virtual disk out of my 24TB of available space. The qcow file is thin-provisioned, so even though ls showed it as 19TB on disk, it only consumed as much space as the VM had actually written. Usage grew to 16TB, the qcow file tried to write more data, and 16TB is the ext4 file size limit on my system. Oops.
I ended up ordering 3 more drives, expanding to 8x8TB on RAID6 with ~48TB of workable space, copied the data out into separate volumes with none of them exceeding 15TB in size, then finally deleted the old “19TB” volume. Now I have over 25TB of space to grow into, and a newfound appreciation for the 16TB limit :)
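In case it helps anyone spot the same trap before it bites: here’s a rough Python sketch (the image path is made up, point it at your own qcow2/raw file) that compares the apparent size ls reports with the space actually allocated on the host, and flags when it’s creeping up on the ~16TiB per-file ceiling of 4KiB-block ext4. qemu-img info will show a similar virtual size vs. disk size split on the host.

```python
#!/usr/bin/env python3
# Rough sketch: compare a disk image's apparent size (what ls shows)
# with the space actually allocated on the host filesystem (what du shows).
import os

IMAGE = "/var/lib/vz/images/100/vm-100-disk-0.qcow2"  # hypothetical path, change to your image
EXT4_FILE_LIMIT = 16 * 1024**4  # ~16 TiB per-file limit on 4 KiB-block ext4

st = os.stat(IMAGE)
apparent = st.st_size           # logical size, what `ls -l` reports
allocated = st.st_blocks * 512  # blocks actually allocated, what `du` reports

print(f"apparent : {apparent / 1024**4:.2f} TiB")
print(f"allocated: {allocated / 1024**4:.2f} TiB")

# Crude heuristic: warn once the actually-allocated data nears the ext4 ceiling,
# or if the image was provisioned larger than a single file can ever grow.
if apparent > EXT4_FILE_LIMIT or allocated > 0.9 * EXT4_FILE_LIMIT:
    print("warning: this image is at or near the ext4 per-file size limit")
```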
Ugh, this happened to me during a minor release. For whatever reason I had to lug the PC into my office, connect keyboard and mouse, boot it up, and press a key. Then it would boot normally again. I get jealous of those of you with servers that have those remote KVM capabilities.
My issue wasn’t quite that easy, but it wasn’t as headache-inducing as I had feared. Turns out, the last time I rejiggered my services I had failed to delete a now-unused fstab entry. One pound sign, a saved file, and a reboot later, and everything was back up and running correctly. I lucked out! Now time to move my Nextcloud backups off that machine!
Oh man, I empathize with you. Sometimes your self-hosted services go down at really bad times and you just don’t have time to fix it in the moment. Then the fact that it’s broken starts nagging at you throughout the rest of the day. Hope you get your stuff back up without too much fuss.
My current horror story is that my QNAP TS-453 Pro NAS that was hosting my Jellyfin and Nextcloud shut off on its own several weeks back and then refused to boot up. Turns out there’s a known manufacturing defect in the Intel J1900 chip the NAS uses that causes clock drift, so every TS-451 and TS-453 NAS that was ever sold is basically a ticking time bomb, and it was my time to get bit. QNAP never issued a recall even though they knew about the issue, and they are refusing to help customers affected by it. Now I am hoping that I can use the resistor fix in that forum post to briefly revive my NAS so that I can back up all the data onto a DIY NAS that I am still ordering parts for. Picked up some good deals, but man, DIY is still expensive. Hopefully it’s worth it, as I never want to use turnkey solutions again after this experience.
The fact that QNAP knew about this and didn’t warn their customers would cause me to boycott them for life. This isn’t just a gaming PC. This is a NAS. Some people’s entire lives are on there.
There are lots of reasons to avoid QNAP but that’s rough.
So glad I went DIY with Ryzen and Unraid
That’s why I am doing a DIY NAS now. I don’t think I’ll ever buy another QNAP after this experience. Is your DIY build a mini-ITX, by the way? I’ve been having a hell of a time figuring out whether I can get PCIe bifurcation for my NVMe SSDs while using a 5600G CPU.
What are your thoughts on Unraid, btw? I’ve been looking into TrueNAS Scale.
Not OP but Unraid is fantastic. I know ZFS expansion is coming at some point but being able to slap in another drive and add it to the pool and have parity “just work” is worth the money. Plus it makes Docker containers much easier to manage (Not like Portainer is that hard, but it’s nice to have configs already set to go).