I’ve been running a headless Ubuntu server for about 10 years. At first it was just a file/print server, so I bought a super-low-power motherboard/processor to cut down on the energy bill. It’s a passively cooled Intel Celeron J3455 “maxed out” with 16GB of RAM.

Since then it’s ballooned into a Plex/Shinobi/Photoprism/Samba/Frigate/MQTT/Matrix/Piwigo monster. It has six drives in RAID6 and a 7th for system storage (three of the drives are connected through a PCI card). I’m planning on moving my server closet, and I’ll be moving everything into a rack-mount style case. While I’m at it, I figured I could upgrade the hardware as well, so I’m curious what I should look for.

I’ve built a number of gaming PCs in the past, but I’ve never looked at server hardware. What features should I look for? Also, is there anything specific (besides a general-purpose video card) that I can buy to speed up video encoding? It’d be nice to be able to transcode video in real time with Plex.

7 points

For video encoding you’ve really got two clear options: either an 8th-gen or newer consumer Intel chip with integrated graphics for QuickSync support, or toss a GPU in there. You can also rely on raw CPU cycles for video transcoding, but that’s wildly energy-inefficient in comparison.
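
For a rough idea of what the QuickSync path looks like in practice, here’s a sketch of a hardware transcode with ffmpeg’s QSV support. The filenames and bitrate are placeholders, and it assumes an ffmpeg build with QSV enabled and access to /dev/dri:

```shell
# Hypothetical example: H.264 -> H.264 transcode on the iGPU via QuickSync.
# Both decode (-hwaccel qsv) and encode (h264_qsv) stay on the fixed-function
# hardware, so CPU load stays low even on a weak chip.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v h264_qsv -b:v 4M \
       -c:a copy output.mkv
```

Plex and Jellyfin drive the same engine through their own settings; this is just the low-level equivalent for testing that the hardware path works at all.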

I’ve heard good things about how anything AM4 compares to X99-era Intel on both raw performance and performance per watt, but I have no personal anecdata to share.

Personally, I’m currently eyeing a gaming-computer refresh as the opportunity to refresh my primary server with the old components from the gaming computer. But I’m also starting with literal e-waste I scrounged for free, so pretty much anything is a big upgrade.

4 points

So my current processor has QuickSync. Are there generations of QuickSync? Would a newer implementation be faster? There’s not a lot of data out there; it seems like QS support is listed as either yes or no.

2 points

QS is generational; newer versions are much better in quality than older ones and have somewhat more throughput too.

Important note: Arc GPUs all have the same QS engine right now (an A770 equals an A310 here), so even an Arc A310 will decimate any CPU’s QS and will be much faster than any Nvidia hardware encoder too. (The QS encoder in the A310 is slightly handicapped by lower VRAM bandwidth and size, but it’s negligible.)

1 point

I’m not entirely certain. QuickSync is an Intel GPU feature and is generally just listed as Yes/No on ark.intel.com, so I’m inclined to suspect it doesn’t change significantly from one generation to another. Most GPUs can only transcode a limited number of video streams at a time, so if you’re exceeding that number, I believe it falls back to brute-forcing it on the processor, which will be anemic on an older Celeron. Have you verified that Plex is actually using QuickSync to transcode? If it’s been hitting the processor this whole time, that would easily explain it.
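
If you want to verify from the shell whether the iGPU is actually being used, something along these lines works on most Intel systems (intel_gpu_top comes from the intel-gpu-tools package; device paths can differ):

```shell
# Check that a render node for the iGPU exists (needed for VAAPI/QSV):
ls -l /dev/dri/

# Watch live engine utilization while a transcode is running.
# The "Video" row is the fixed-function encode/decode engine; if it sits
# at zero during a Plex transcode, you're burning CPU instead.
sudo intel_gpu_top
```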

1 point

Not sure what Plex is using, but Shinobi and Photoprism do.

Plex usually runs at native resolution, but it can only just barely keep up if it has to downscale or bake in subtitles in real time. I’ll have to check the settings to see what it’s using.

Edit: Ah, looks like you need to pay for Plex Pass to enable Quick Sync.

1 point

Newer generations have decoders/encoders for more codecs. 8th-gen Intel Core CPUs have good HEVC support, while you need the more recent generations for good AV1 support.

4 points

If you’re on a budget, check out X99 socket Xeons. You can pick up mobos and chips for super cheap: 10+ core hyper-threaded Xeons with solid clocks, plus a motherboard, for 120–180 bucks total. They support 64 GB of RAM, more if you have a proper server board.

For transcoding, depending on the codec, dedicated GPU is best.

I’m not sure about Plex, but I know that on Jellyfin the new Intel Arc GPUs are really great for encoding. They’re not too expensive at the lower end either, and there are low-profile options for smaller rack cases.

4 points

Thanks for the tips!

To clarify, by “X99,” do you mean LGA 2011-3? That’s the socket Wikipedia associates with the hardware.

And as for Arc, it looks like they’re a great option for video encoding. I’m actually using QuickSync already on my Celeron processor, which has helped. From what I understand, QuickSync is basically the same engine on all of the Arc cards, so I can just go with the cheapest card if I don’t plan to use the other features much? Looks like an A380 can be had for $100 or so.

2 points

Sorry for the slow reply. Yes, I mixed the chipset up with the socket lol.

The A380 is the same one I’ve been looking at for my own home media setup; it should be plenty of encoding power for your use case.

Good luck!

3 points

Great advice from everyone here. For the transcoding side of things you want an 8th-gen or newer Intel chip to handle QuickSync with a good level of quality. I’ve been using a 10th-gen i5 for a couple of years now and it’s been great. It regularly handles multiple transcodes and has enough cores to do all the other server stuff without an issue. You need Plex Pass to do hardware transcodes if you don’t already have it, or you can look at switching to Jellyfin.

As mentioned elsewhere, using an HBA is great when you start getting to large numbers of drives. I haven’t seen the random drops I’ve occasionally seen on cheap SATA PCI cards. If you get one that’s flashed in “IT mode,” the drives appear normally to your OS, and you can then build software RAID however you want. If you don’t want to flash it yourself, I’ve had good luck with stuff from The Art of Server.
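
For what it’s worth, a quick sanity check that an IT-mode HBA is presenting disks as plain block devices might look like this (controller and device names will vary by card and system):

```shell
# Hypothetical check: the HBA should show up as an ordinary SAS controller...
lspci | grep -i -e lsi -e sas

# ...and the attached drives should list as plain disks, not as a single
# RAID volume hiding the individual devices from the OS.
lsblk -o NAME,MODEL,SIZE,TYPE
```

If the drives appear individually here, mdadm or ZFS can manage them directly, which is the whole point of IT mode.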

I know some people like to use old “real” server hardware for reliability or ECC memory, but I’ve personally had good luck with quality consumer hardware and keeping everything running on a UPS. I’ve learned a lot from serverbuilds.net about how compatibility works between some of the consumer gear, and about making sense of the used enterprise gear that’s useful for this hobby. They also have good info on “budget” build-outs.

Most of the drives in my rack have been running for years and were shucked from external drives to save money. I think the key to success here has been keeping them cool and under consistent UPS power. Some of mine are in a disk shelf, and some are in the Rosewill case with the 12 hot swap bays. Drives are sitting at 24-28 degrees Celsius.

Moving to the rack is a slippery slope… You start with one rack-mounted server, and soon you’re adding a disk shelf and setting up 10-gigabit networking between devices. Give yourself more drive bays than you need now if you can, so you have expansion space and don’t have to completely rearrange the rack three years later.

Also, if your budget can swing it, it’s nice to keep some older hardware around for testing. I leave my “critical” stuff running on one server now so that a reboot while tinkering doesn’t take down everything running the house. That one only gets rebooted or has major changes made when it’s not in use (and my wife isn’t watching Plex). The stuff that doesn’t quite need to be 24/7 gets tested on the other server, which is safe to reboot.

2 points

I see a lot of drives there, all presumably connected via SATA. If you’re looking to add more drives in the future, I recommend a SAS card or two, specifically a Dell PERC H310 flashed in IT mode. I picked one up on eBay for $20 a while back, and it gives me connectivity for 8 drives. Also snag some mini-SAS-to-SATA breakout cables to connect the drives.

I’ve got 44 TB running in my Plex server using it and have had zero issues with the card. I even had a friend 3D-print a fan housing and attached a small Noctua fan to the heatsink for peace of mind, to make sure the card doesn’t overheat during large data transfers.

Edit: Like so

1 point

That’s interesting. I’m running software RAID since I’ve been warned that dying RAID controllers can make your data irretrievable unless you buy an exact replacement. I guess the enterprise folks have that figured out.

Having a little trouble finding details online. Do those two cables going off to the right split off into a bunch of SATA connections?

1 point

I use ZFS for this exact reason. I didn’t want to be stuck using a specific controller or have problems if I needed to migrate my storage to another server. It’s a lot more flexible than hardware RAID too, and has some nice benefits like snapshotting.
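
For reference, a ZFS mirror plus a snapshot is only a few commands. The device names and pool/dataset names here are made up for illustration:

```shell
# Hypothetical example: create a mirrored pool from two whole disks,
# carve out a dataset, and snapshot it before making changes.
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/media
zfs snapshot tank/media@before-upgrade

# List existing snapshots; rolling back is `zfs rollback tank/media@before-upgrade`.
zfs list -t snapshot
```

Because the pool metadata lives on the disks themselves, the same disks can be imported on any machine with ZFS (`zpool import tank`), which is what avoids the dead-controller lock-in mentioned above.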

2 points

My 2 cents, just in case…:

A RAID6 is not a replacement for backup ;-) I use rdiff-backup, which is easy to use: it stores only one full copy, all increments reach into the past, and you can only delete the oldest increments (afaik no “merging”). I’ve never needed anything else. One backup should be off-site and another offline, synced manually once in a while. Make complete dumps (including triggers, etc.) from databases before doing the backup ;-)
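
In case it helps, a minimal rdiff-backup routine looks something like this (the paths are placeholders, and this uses the classic flag-style invocation):

```shell
# Hypothetical example: back up /srv/data into a mirror-plus-increments store.
# The destination always contains a current full copy; older versions are
# kept as reverse increments.
rdiff-backup /srv/data /mnt/backup/data

# Prune increments older than one year (only the oldest can be removed):
rdiff-backup --remove-older-than 1Y /mnt/backup/data

# Show available restore points:
rdiff-backup --list-increments /mnt/backup/data
```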

I like to have a recreatable server setup: set it up manually, then put everything I did into ansible, try to recreate a “spare” server using ansible and the backup, and test everything. Then you can be sure you’ve also “documented” your setup to a good degree.

For hardware I don’t have many assumptions about performance (until it hits me), but an always-running in-house server should save power (I learned this the costly way). It’s possible to turn CPU cores off and run on a single core at reduced frequency in times without performance needs. That can help a bit, and at least it feels good to do, while turning cores back on and raising the frequency again is quick and easily scripted.
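
On Linux, that core-offlining and frequency juggling is just a couple of sysfs writes (core numbers here are examples; this needs root):

```shell
# Hypothetical example: take core 3 offline (core 0 usually can't be offlined):
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online

# Switch the remaining cores to the powersave governor:
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo powersave | sudo tee "$g"
done

# Later, when load is expected: bring core 3 back online.
echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online
```

Both directions are fast, so this is easy to wrap in a cron job or a script triggered by load.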

Hard drives: make sure you buy 24/7-rated drives. They’re usually way more hassle-free than the consumer grades and likely “only” cost double the price. I would always place the system on SSD, but always as RAID1 (not RAID6), where the “other” half could maybe be a magnetic drive set to write-mostly.
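
The write-mostly trick is an mdadm RAID1 feature: reads are served from the fast member while the flagged member only receives writes. A sketch, with invented device names:

```shell
# Hypothetical example: system mirror of an SSD partition and an HDD partition.
# --write-mostly applies to the devices listed after it, so reads come from
# the SSD (/dev/sda2) and the HDD (/dev/sdb2) mostly just absorbs writes.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
     /dev/sda2 --write-mostly /dev/sdb2
```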

As I don’t buy “server” hardware for my home server, I always buy components twice when I change something, so I have spare parts at hand when I need them. Running a server for 5+ years often ends with not being able to buy the same part again, and then you first have to research what you want, order it, test it, and maybe send it back because it doesn’t fit… Unstable memory? Mainboard sending up smoke signals? With spare parts at hand, it’s a matter of minutes! The only thing I’m missing with my consumer-grade home server hardware is ECC RAM :-/

For cooling I like to use a 12 cm fan powered with only 5 V (instead of the 12 V it wants), so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat doesn’t build up in the summer. Don’t forget to clean out the dust once in a while… I’ve never had a 5 V-powered 12 V fan with bearing problems, and I think one of them ran for over a decade. I suspect the 12-volt fans actually last longer on 5 V, but no warranty from me ;-)

Even with a headless box I like to have a quick way to get to a console in case the network isn’t working. I once used a serial cable and my notebook, then a small monitor/keyboard; now I use PiKVM and can look at my servers’ physical console from my mobile phone (it needs an SSL client certificate and TOTP, but yes, this involves the network, I know XD).

You likely want SMART monitoring, and run memtest once in a while.
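
Checking a drive’s SMART health is a single command once smartmontools is installed (the device name is an example):

```shell
# Quick PASSED/FAILED verdict:
sudo smartctl -H /dev/sda

# Full attribute dump (reallocated sectors, pending sectors, power-on hours...):
sudo smartctl -a /dev/sda
```

The smartd daemon from the same package can run these checks continuously and notify you when attributes start degrading.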

For servers I also like to have some monitoring that can push a message to my phone for foreseeable conditions that I’d like to handle manually.

debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

Also, after updating packages, have a look at lsof | egrep “DEL|deleted” to see which programs need a simple restart to actually use the updated libraries. That way you only reboot for newer kernels.

OK, this is more than 2 cents, maybe 5. Never mind.

Hope these ideas help a bit!

1 point

Yeah, I have an offline backup I refresh every year, kept in a fireproof safe in my basement. I might open a safe deposit box at some point, but I feel reasonably safe.

Good call on power efficiency. I’ll have to keep that in mind. I think I’m currently drawing around 100W which is mostly the hard drives (the CPU doesn’t even need a fan). I assume that might go up a bit in a new build, but I think the benefits will be worth it.


Self Hosted - Self-hosting your services.

!selfhost@lemmy.ml
