If there is one thing you shouldn’t cheap out on imo it’s the storage.
There is probably some iron triangle to be found there: cheap, large capacity, reliable; choose two.
You can have all three of those, but you won’t get great performance. The Samsung QVO SATA drives are a great example. I wouldn’t use those for an OS drive but they’re fantastic for NAS or media use.
Personally I have focused on fast SSD storage and utilized the vast, cheap, slow storage available with mechanical drives for backup.
At the end of the day, if an SSD fails, you’re effectively just screwed. If a mechanical drive fails, there is some possibility that the data is recoverable. But moreover, mechanical storage is so cheap by volume that you can just have redundant backup and never worry about it, really.
I thought that SSDs fail “better” than HDDs because SSDs become read-only first.
Only when they get to the end of life of the cells. If there’s another failure before that, it’s likely a full failure.
To my knowledge, that isn’t a consistent pattern (someone please correct if wrong).
So far I’ve been following recommendations from this person: https://old.reddit.com/r/NewMaxx/comments/16xhbi5/ssd_guides_resources_ssd_help_post_your_questions/
They’ve at least created a website that houses the SSD tier lists, buying guide, etc: https://borecraft.com/
The point is to run TLC drives. SLC drives of that capacity are too expensive and are thus not recommended.
- SLC -> Single-Level Cell, i.e. 1 bit per cell
- MLC -> Multi-Level Cell, i.e. 2 bits per cell
- TLC -> Triple-Level Cell, i.e. 3 bits per cell
- QLC -> Quad-Level Cell, i.e. 4 bits per cell
The more bits per cell you store, the denser and therefore cheaper your flash chips can be for a given capacity. The downside is that it is slower and less reliable, since you have to be able to write and read exponentially more voltage states per cell, e.g. 2 states for SLC, 4 states for MLC, 8 states for TLC, etc.
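To make that scaling concrete, here's a quick sketch (the function and names are my own illustration, not from the linked guide): each extra bit per cell doubles the number of voltage states the controller must distinguish, while capacity per cell only grows linearly.

```python
# Illustration of the bits-per-cell tradeoff described above.
# n bits per cell -> 2**n distinguishable voltage states per cell,
# but only n bits of capacity per cell.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def voltage_states(bits_per_cell: int) -> int:
    """Voltage levels the controller must reliably distinguish per cell."""
    return 2 ** bits_per_cell

for name, bits in CELL_TYPES.items():
    print(f"{name}: {bits} bit(s)/cell -> {voltage_states(bits)} voltage states")
```

Going from TLC (8 states) to QLC (16 states) doubles the precision needed for the same voltage window, which is why QLC drives like the QVO trade endurance and sustained write speed for price.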
WD Green /shrug
I’ve been using all Red Pros since I first built my NAS, but it started with a couple of green 2TB drives that were in there for like 7 years before being replaced (they hadn’t died yet).
I had WD Greens in my first NAS (they were HDDs, though). This was ill-advised. Definitely better for power consumption, but they took forever to spin up for access to the point where it seemed like the NAS was always on the fritz.
Now I swear by WD Red. Much, much better (in my use case).
(I’m not sure how things pan out in SSD land though. Right now it’s just too pricey for me to consider.)
I was using HDDs, and I believe it may have been a little less of an issue because I had Unraid configured to keep the drives spun up (I’ve read that spinning up is hard on the drive, not so much the time spent spun up).
But I did occasionally have some IOWait issues. Reds plus an NVMe cache have resolved all those issues.
My concern (back then) with keeping the Greens spun up was that I’d lose their energy-savings potential without getting the benefits of a purpose-built NAS drive.
In my current NAS, I just have a pair of WD Red Plus drives. I don’t have an NVMe cache or anything, but it’s never been an issue given my limited needs.
I am starting to plan out my next NAS though, as the current one (Synology DS716+) has been running for a long time. I figure I can get a couple more years out of it, but I want to have something in the wings planned just in case. (I’m seriously looking at a switch to TrueNAS but grappling with the price of hardware vs. an appliance…) My hope is that SSDs drop in price enough to make the leap when the time comes.