I've been wanting to get proper storage for my lil server running Nextcloud and a couple other things, but NC is the main concern. It's currently running on an old SSD I've had lying around, so I want a more reliable, longer-term solution.
So I'm thinking of a RAID1 (mirror) HDD setup with two 5400rpm 8TB drives, which brings the choices down to the IronWolf or the WD Red Plus; both are in the same price range.
I'm currently biased towards the IronWolfs because they're slightly cheaper and have a cool print on them, but from Reddit threads I've seen that WD drives are generally quieter, which is a concern since the server is in my bedroom.
Does anyone have experience with these two drives and/or know of better solutions?
Oh, and for the OS: since it's a simple Linux server, is it generally fine to have that on a separate drive, an SSD in this case?
Thanks! :3
Backblaze reports HDD reliability data on their blog. Never rely on anecdata!
https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2024/
Kind of frustrating that Seagate is the only company to have multiple drives with ZERO failures, but then that 12TB model with over 12% failure… ouch.
That said, I’ve been on Team Seagate IronWolf for years without issues.
And as Backblaze says themselves every time, this data is not the complete truth either: they have a large sample size, but it's still too small to make strong claims about reliability, especially about brand reliability.
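To put a rough number on that (all figures below are made up, just to illustrate the statistics): even a drive model with zero observed failures can still hide a meaningful AFR. The classic "rule of three" gives an approximate 95% upper bound on the event rate of 3/n when you've seen zero events in n trials:

```python
# Sketch: why "zero failures" doesn't mean "zero AFR".
# With 0 failures over some number of drive-days, an approximate 95%
# upper bound on the daily failure probability is 3 / drive_days
# (rule of three); multiply by 365 to annualize.

def afr_upper_bound(drive_days: float, failures: int = 0) -> float:
    """Approximate 95% upper bound on annualized failure rate, in percent."""
    assert failures == 0, "the rule of three applies to zero observed failures"
    return 3.0 / drive_days * 365 * 100

# Hypothetical fleet: 500 drives observed for one year with no failures.
print(f"{afr_upper_bound(500 * 365):.2f}%")  # an upper bound, not the true AFR
```

So a "zero failure" cohort of a few hundred drives is still compatible with an AFR above half a percent; the smaller the cohort, the wider that band gets.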
Google claims in their data the failure rate is pretty equal across all brands.
These all seem to be 7200rpm drives; would 5400rpm drives make a large difference in longevity relative to that? Also seeing mixed results for Seagate there: first they mention zero failures for a couple of Seagate models, but later, in the chart of annualized failure rates, Seagate overall has the highest failure rate.
I wouldn't think so. 5400rpm drives might last longer if we're specifically thinking about mechanical wear. My main takeaway is that WDC has the best numbers. I would use the largest sample available, which is the final chart you also point out.

One thing others have also noted is that there's no metadata included with these results, for example the location of the different drives, i.e. rack- and server-room-specific data. That would let you control for temperature, vibration, and other potential confounders. It's also likely that as new servers are brought online, different SKUs are bought in batches, e.g. all 10TB Seagate IronWolf. I don't know why they haven't tried linear or simple machine learning models to provide some explanatory information for this data, but nevertheless I am deeply appreciative that this data is available at all.
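For what it's worth, the kind of model I mean could be as simple as ordinary least squares regressing per-cohort failure rates on the confounders. A minimal sketch with `numpy.linalg.lstsq` (every number below is invented; this says nothing about the real Backblaze data):

```python
import numpy as np

# Hypothetical per-cohort data: average temperature (C), a vibration
# index, and the observed annualized failure rate (%). All values invented.
temp = np.array([25.0, 30.0, 35.0, 28.0, 33.0])
vib = np.array([0.1, 0.3, 0.5, 0.2, 0.4])
afr = np.array([0.8, 1.1, 1.6, 0.9, 1.4])

# Design matrix with an intercept column, then temperature and vibration.
X = np.column_stack([np.ones_like(temp), temp, vib])

# Ordinary least squares fit: coefficients for intercept, temp, vibration.
coef, *_ = np.linalg.lstsq(X, afr, rcond=None)
predicted = X @ coef
print(coef, predicted)
```

With the real dataset you'd want drive model as a categorical term too, so the temperature/vibration effects aren't just soaking up model differences.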
I have been burned by WD Red SMR drives, so I will just say Fuck You WD. That is all.
I'll be honest, I haven't used Seagate in 15 years, so maybe they've improved, but the only hard drives I've ever had fail on me have all been Seagate.
Never had any other drive hard fail. The ones that did were always Seagate.
After (ugh) 30 years of having PCs and many, many, drives, Seagate has been the worst.
But I’ve had WD fail too. Just not as much, and I’ve had far more WD drives. I currently have about 20 drives of varying ages, 98% of them are WD, because more of the Seagate drives have failed and been trashed.
My methodology is to look at BackBlaze, throw out any data with less than 100k hours, and pick the drive with the lowest AFR (annualized failure rate).
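Roughly, in code (the rows below are invented, not the actual Backblaze table; the threshold is in drive-hours):

```python
# Sketch of the selection rule: drop any model with under 100k
# drive-hours of data, then pick the lowest annualized failure rate.
drives = [
    {"model": "A", "hours": 250_000, "afr": 1.2},
    {"model": "B", "hours": 80_000, "afr": 0.0},  # too little data, excluded
    {"model": "C", "hours": 500_000, "afr": 0.9},
]

candidates = [d for d in drives if d["hours"] >= 100_000]
best = min(candidates, key=lambda d: d["afr"])
print(best["model"])  # "C": lowest AFR among the well-sampled models
```

Note how model B's 0.0% AFR gets thrown out: with so few hours, that zero means almost nothing.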
It's maybe $50-$100 between the cheapest and the best enterprise drive, and I'm not buying 1,000 drives, so I do not care about price.
How often do you get new drives? I've been through 2 drives that store all my data over the last 15 or so years. I'm wanting to put a NAS together now, though I've no server experience.
WD is a shit company doing shit things, so Seagate. They are less shit, for now.
Western Digital, among other things, has been selling SMR drives under SKUs that used to be CMR. So if you were building something, wanted specific performance, and selected some WD drives based on what the SKU said, you might have ended up with SMR drives, which are not nearly as performant. It was a bait-and-switch tactic, and they never really acknowledged it except by creating the "Red Plus" line of drives, which are CMR. Regular "Red" drives are SMR now.
Ehh, this practice has stopped; they now label their drives properly on their website/tech specs. I was one of the affected users when I moved to RAID1 for my 10TB disk (bought ~6mo apart, second drive affected) and I was fucking pissed, as I'd read that mixing CMR and SMR in RAID is a recipe for disaster. I straight up told the CS rep 'you send me a CMR drive and take the SMR, or I will join the class action lawsuit and never be a WD customer again'. I received a CMR model the next day, and they received their SMR drive back.
They pissed me off, but their response and resolution were correct. I have continued to buy WD since the incident.