• @[email protected]
      link
      fedilink
      English
      2
      4 months ago

      Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will too one day. Such is life.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        4 months ago

        And here I am with HGST drives hitting 50k hours

        Edit: no one ever discusses the Backblaze reliability statistics. It’s interesting to see how they stack up against the anecdotes.

      • @[email protected]
        link
        fedilink
        English
        1
        4 months ago

        Nice data, but I stick with Toshiba, old HGST, and WD. For me they seem to last much longer than Seagate.

    • @[email protected]
      link
      fedilink
      English
      2
      4 months ago

      I have one Seagate drive. It’s a 500 GB that came in my 2006 Dell Dimension E510 running XP Media Center. When that died in 2011, I put it in my custom build. It ran until probably 2014, when suddenly I was having issues booting and I got a fresh WD 1 TB. Put it in a box, and kept it for some reason. Fast forward to 2022, I got another Dell E510 with only an 80 GB. Dusted off the old 500 GB and popped it in. Back with XP Media Center. The cycle is complete. That drive is still noisy as fuck.

    • @[email protected]
      link
      fedilink
      English
      1
      4 months ago

      My personal experience has been hit n miss.

      Was using one 4TB Seagate for 11 years, then bought a newer model to replace it since I thought it was gonna die any day. That new one died within 6 months. The old one still works, although I don’t use it for anything important now.

    • @[email protected]
      link
      fedilink
      English
      1
      4 months ago

      I bought a 16TB one as an urgent replacement for a failing RAID.
      It arrived defective, so I can’t speak to the longevity.

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      4 months ago

      Vastly. I’m running all Seagate IronWolf Pros. Best drives I’ve ever used.

      Used to be WD all the way.

      • @[email protected]
        link
        fedilink
        English
        1
        4 months ago

        I’m going to have to pass, though. They cost too much. I buy refurbs with a 5-year warranty.

  • @[email protected]
    link
    fedilink
    English
    3
    4 months ago

    “The two models, the 30TB … and the 32TB …, each offer a minimum of 3TB per disk”. Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?

    • @[email protected]
      link
      fedilink
      English
      5
      4 months ago

      These drives aren’t for people who care how much they cost, they’re for people who have a server with 16 drive bays and need to double the amount of storage they had in them.

      (Enterprise gear is neat: it doesn’t matter what it costs, someone will pay whatever you ask because someone somewhere desperately needs to replace 16tb drives with 32tb ones.)

      • @[email protected]
        link
        fedilink
        English
        2
        4 months ago

        In addition to needing to fit it into the gear you have on hand, you may also have limitations in rack space (the data center you’re in may literally be full), or your power budget.

  • @[email protected]
    link
    fedilink
    English
    23
    edit-2
    4 months ago

    Seagate. The company that sold me an HDD which broke down two days after the warranty expired.

    No thanks.
    laughing in Western Digital HDD running for about 10 years now

    • @[email protected]
      link
      fedilink
      English
      7
      edit-2
      4 months ago

      Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won’t quit. And my experience with WD drives is the same as your experience with Seagate.

      Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.

      • @[email protected]
        link
        fedilink
        English
        5
        4 months ago

        Survivorship bias. Obviously the ones that survived long enough to be sent to recycling with their systems would last longer than the ones that crap out right away and need to be replaced before the end of the whole system’s life.

        I mean, the whole thing is biased anyway: if objective stats show that neither brand is particularly more prone to failure than the other, then it’s just people who used one brand once and had it fail. Which happens sometimes.

        • @[email protected]
          link
          fedilink
          English
          2
          4 months ago

          Ah I wasn’t thinking about that. I got the scrappy spinny bois.

          I’m fairly sure me and my friends had a bad batch of Western Digitals too.

    • @[email protected]
      link
      fedilink
      English
      15
      4 months ago

      I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.

      • @[email protected]
        link
        fedilink
        English
        3
        4 months ago

        I have 10-year-old WDs and 8-year-old Seagates still kicking. Depends on the year. Some years one brand is better than the other.

    • @[email protected]
      link
      fedilink
      English
      4
      4 months ago

      Had the same experience and opinion for years. They do fine in Backblaze’s drive stats, but I don’t know that I’ll ever fully trust them, just 'cus.

      That said, the current home server has a mix of drives from different manufacturers, including Seagate, to hopefully mitigate the chances that more than one fails at a time.

    • ArxCyberwolf
      link
      fedilink
      English
      2
      4 months ago

      I currently have an 8-year-old Seagate external 4TB drive. Should I be concerned?

      • @[email protected]
        link
        fedilink
        English
        1
        4 months ago

        Any 8-year-old hard drive is a concern. Don’t get sucked into thinking Seagate is a bad brand because of anecdotal evidence. He might’ve bought a Seagate hard drive with a manufacturing defect, but actual data doesn’t really show any particular brand with worse reliability, IIRC. What you should do is research whether the particular model of your drive is known to have reliability problems or not. That’s a better indicator than the brand.

    • @[email protected]
      link
      fedilink
      English
      2
      4 months ago

      Same but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.

  • @[email protected]
    link
    fedilink
    English
    5
    4 months ago

    The two models, […] each offer a minimum of 3TB per disk

    Huh? The hell is this supposed to mean? Are they talking about the internal platters?

  • veee
    link
    fedilink
    English
    13
    4 months ago

    Just one would be a great backup, but I’m not ready to run a server with 30TB drives.

    • mosiacmango
      link
      fedilink
      English
      9
      4 months ago

      I’m here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/raidz2.

      This would net around 180TB in that form factor. That would go a long way for a long while.
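
      Quick sketch of where the 180TB comes from (my own arithmetic, ignoring ZFS metadata overhead and the TB/TiB difference):

      ```python
      # Usable capacity of an 8-bay raidz2 vdev (double parity, like RAID6):
      disks, disk_tb, parity = 8, 30, 2
      print((disks - parity) * disk_tb, "TB usable")  # -> 180 TB usable
      ```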

      • Badabinski
        link
        fedilink
        7
        4 months ago

        I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can’t imagine what it’d be like with 30 TB disks.
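
        To put rough numbers on that (a back-of-the-envelope floor, assuming a hopeful ~250MB/s sustained and that a resilver rewrites the whole replacement disk; real resilvers on fragmented pools run much slower):

        ```python
        # Lower bound on resilver time: the replacement disk must be written
        # end-to-end. Note a raidz2 resilver also reads every surviving disk
        # in the vdev, while a mirror resilver only reads the one partner.
        def resilver_floor_hours(disk_tb: float, mb_per_s: float = 250.0) -> float:
            return disk_tb * 1e12 / (mb_per_s * 1e6) / 3600

        for tb in (24, 30):
            print(f"{tb} TB disk: ~{resilver_floor_hours(tb):.0f} h minimum")
        # 24 TB disk: ~27 h minimum
        # 30 TB disk: ~33 h minimum
        ```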

        • @[email protected]
          link
          fedilink
          English
          2
          4 months ago

          Yeah I agree. I just got 20tb in mine. Decided to just z2, which in my case should be fine. But was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.

        • @[email protected]
          link
          fedilink
          English
          4
          4 months ago

          A few years ago I had a 12-disk RAID6 array, and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; I lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if more than 2 drives fail.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          4 months ago

          Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.

          Edit: missed the z, my bad. I don’t use ZFS and just skipped over it.

          • Badabinski
            link
            fedilink
            2
            4 months ago

            raidz2 is analogous to RAID 6. It’s just the ZFS term for double parity redundancy.

            • @[email protected]
              link
              fedilink
              English
              1
              4 months ago

              Yeah, I noticed the “z” in there shortly after posting. I don’t use ZFS much, so I kinda skimmed over it.

  • @[email protected]
    link
    fedilink
    English
    23
    edit-2
    4 months ago

    This is for cold and archival storage, right?

    I couldn’t imagine seek times on any disk that large. Or rebuild times… yikes.

    • @[email protected]
      link
      fedilink
      English
      21
      edit-2
      4 months ago

      up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋‍♀️. Especially if you get those reps sequentially it’s like hitting the juice 💉 for your transfer speeds.
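
      Joke aside, bigger sequential writes really do amortize per-write overhead. A minimal sketch to measure it yourself (hypothetical bench.tmp path; results depend heavily on hardware and caching):

      ```python
      import os, time

      def write_throughput_mb_s(path: str, block_kb: int, total_mb: int = 256) -> float:
          """Write total_mb of data in block_kb-sized chunks and return MB/s."""
          block = os.urandom(block_kb * 1024)
          writes = total_mb * 1024 // block_kb
          start = time.perf_counter()
          with open(path, "wb") as f:
              for _ in range(writes):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())  # flush to disk so we time the drive, not the page cache
          elapsed = time.perf_counter() - start
          os.remove(path)
          return total_mb / elapsed

      for kb in (4, 64, 128, 1024):
          print(f"{kb:>4} KB blocks: {write_throughput_mb_s('bench.tmp', kb):6.1f} MB/s")
      ```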

    • @[email protected]
      link
      fedilink
      English
      6
      4 months ago

      Random access times are probably similar to smaller drives, but writing the whole drive is going to be slow.

    • @[email protected]
      link
      fedilink
      English
      15
      4 months ago

      Definitely not for either of those. Can get way better density from magnetic tape.

      They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.

      You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.

    • @[email protected]
      link
      fedilink
      English
      8
      4 months ago

      For a full 32TB at the max sustained speed (275MB/s), it’s 32-ish hours to transfer the full amount, 36 if you assume 250MB/s for the whole run. Probably optimistic, and CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks, that is a transfer speed of about 1GB/s even if you assume you don’t get close to the max transfer rate. For a small business or home NAS that would be plenty unless you are running greater-than-10-gigabit Ethernet.
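
      Sanity-checking those numbers (same assumptions as above):

      ```python
      drive = 32e12                   # 32 TB drive, in bytes
      print(drive / 275e6 / 3600)     # ~32.3 h at 275 MB/s sustained
      print(drive / 250e6 / 3600)     # ~35.6 h at 250 MB/s
      print(4 * 250e6 / 1e9, "GB/s")  # 5-disk RAID5 reads 4 data disks in parallel
      ```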

  • @[email protected]
    link
    fedilink
    English
    6
    4 months ago

    These things are unreliable; I had 3 Seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.

    • @[email protected]
      link
      fedilink
      English
      1
      4 months ago

      Well, until you need the capacity, why not use an SSD? It’s basically mandatory for the operating system drive, too.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          4 months ago

          I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that such large SSDs have a shorter lifespan (in terms of how many writes it takes for them to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or fail completely at some point, and in the latter case often even data recovery companies are unable to recover anything, while HDDs will often give signs that good monitoring software can detect weeks or months in advance, so that you know to be more cautious with the drive.
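
          Those warning signs usually show up in SMART data. A minimal sketch of the idea (assumes smartmontools is installed and /dev/sda is your drive; attribute names vary by vendor):

          ```python
          import subprocess

          # SMART attributes whose raw values commonly climb before an HDD dies.
          WARNING_ATTRS = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
                           "Offline_Uncorrectable", "Reported_Uncorrect")

          # Typically needs root to read SMART data.
          out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                               capture_output=True, text=True).stdout

          for line in out.splitlines():
              if any(attr in line for attr in WARNING_ATTRS):
                  print(line)  # a steadily rising raw value here is the early warning
          ```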

            • @[email protected]
              link
              fedilink
              English
              1
              4 months ago

              How is it easier? Do you open your HDDs and take info from there?

              Obviously not. Often they don’t break all at once, but start with corrupting smaller areas of sectors.

    • @[email protected]
      link
      fedilink
      English
      3
      4 months ago

      Seagates in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate of the average failure age of my WD drives because none ever failed before being retired due to obsolescence. That was regularly over a decade, though.

  • @[email protected]
    link
    fedilink
    English
    30
    edit-2
    4 months ago

    Everybody’s talking shit about Seagate here. Meanwhile, I’ve never had a hard drive die on me. Eventually the capacity just became too little to keep around and I got bigger ones.

    Oldest I’m using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don’t actually get hit all that much.

    • @[email protected]
      link
      fedilink
      English
      14
      4 months ago

      Seagate had some bad luck with their 3TB drives about 15 years ago now, if memory serves me correctly.

      Since then, Western Digital (the only other remaining HDD manufacturer) pulled some shenanigans by not correctly labeling the different technologies in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when used in a ZFS pool).

      So basically pick your poison. Hard to predict which of the duopoly will next do something unworthy of trusting your data upon, so uh… check your backups, I guess?

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          4 months ago

          Yeah our file server has 17 Toshiba drives in the 10/14 TiB sizes ranging from 2-4 years of power-on age and zero failures so far (touch wood).

          Of our 6 Seagate drives (10 TiB), 3 of them died in the 2-4 year age range, but one is still alive 6 years later.

          We’re in Japan and Toshiba is by far the cheapest here (and has the best support: they offer advance replacement on regular NAS drives, whereas Seagate replacement takes 2 weeks to ship to and from a support center in China!) so we’ll continue buying them.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          4 months ago

          Ah, I thought I had remembered their hard drive division being acquired, but I was wrong! Per Wikipedia:

          At least 218 companies have manufactured hard disk drives (HDDs) since 1956. Most of that industry has vanished through bankruptcy or mergers and acquisitions. None of the first several entrants (including IBM, who invented the HDD) continue in the industry today. Only three manufacturers have survived—Seagate, Toshiba and Western Digital

    • @[email protected]
      link
      fedilink
      English
      3
      4 months ago

      I had 3 drives from Seagate (including 1 enterprise drive) die or develop file-corruption issues before I gave up and switched to SSDs entirely…

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      4 months ago

      Yeah, same. I switched to Seagate after 3 WD drives failed in less than 3 years. Never had problems since.

    • @[email protected]
      link
      fedilink
      English
      18
      edit-2
      4 months ago

      I’ve had a Samsung SSD die on me, I’ve had many WD drives die on me (the most recent drive to die on me was a WD), and I’ve had many Seagate drives die on me.

      Buy enough drives, have them for a long enough time, and they will die.

  • @[email protected]
    link
    fedilink
    English
    94
    4 months ago

    I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.

    • @[email protected]
      link
      fedilink
      English
      16
      4 months ago

      Exactly. My NAS is currently made up of decommissioned 18TB Exos drives. Great deal, and I can usually still get them RMA’d the handful of times they fail.

        • @[email protected]
          link
          fedilink
          English
          2
          4 months ago

          eBay sellers that have tons of sales and specialize. You can learn to read between the lines and see that decom goods are what they do.

          SaveMyServer is a perfect example. Don’t know if they sell drives though.

        • @[email protected]
          link
          fedilink
          English
          14
          4 months ago

          ServerPartDeals has done me well; drives often come new enough that they still have a decent amount of manufacturer’s warranty remaining (Exos is 5 years), and depending on the drive you buy from them, SPD will RMA a drive for 5 years from purchase (but not always; it depends on the listing, so read the fine print).

          I have gotten 2 bad drives from them out of 18 over 5 years or so. Both bad drives were found almost immediately with basic maintenance steps prior to adding them to the array (zeroing out the drives, badblocks), and both were RMA’d by Seagate within 3-5 days because they were still within the manufacturer warranty.

          If you’re running a gigantic RAID array like me (288TB and counting!), it would be wise to recognize that rotational hard drives are doomed and you need a robust backup solution that can handle gigantic amounts of data long term. I have a tape drive for that because I got it cheap at an electronics recycler, sold as not working (thankfully it was an easy fix), but this is typically a super expensive route. If you only have like 20TB then you can look into stuff like cloud services, Blu-ray, a redundant hard drive, etc., or do like I did in the beginning and just accept that your pirated anime collection might go poof one day lol

          • @[email protected]
            link
            fedilink
            English
            1
            4 months ago

            What kind of tape drive are you using? My array isn’t as large as yours (120TB physical), but it’s big enough that my only real options for backup are tape or a whole secondary array just for backup.

            Based on what I’ve seen, my options are a prohibitively large number of tapes with an older LTO standard or prohibitively expensive tapes with a newer LTO standard.

            My current backup strategy consists of automated backups to Backblaze B2 for the really important stuff like personal documents or projects and hoping my ZFS array doesn’t fail for everything else.

            • @[email protected]
              link
              fedilink
              English
              2
              4 months ago

              I have an IBM Qualstar LTO-8 drive. I got it because I gambled: it was cheap because it was throwing an error (I forget what the number was), but it was one that indicates an issue in the tape path. I was able to get the price down to $150 because I was buying some other stuff, and because ultimately, if the head was toast, it was basically useless. But I got lucky, and cleaning the head and tape path brought it back to life. Dunno how long it will last. I’ll live with it though, because buying one that’s confirmed working can cost thousands.

              You’re right that LTO-8 tapes are pricey, but they’re quite a bit cheaper than building an equivalent backup array, and significantly more reliable long term. A tape is about 12TB and $40-50, although sometimes they pop up cheaper. I generally don’t back up stuff continually with this method; I back up newer files that haven’t been synced to tape once every six weeks or so. It’s also something you can buy a bit at a time to soften the financial blow, of course. Maybe if you get a fancy carousel drive you’d want to fill it up, but frankly that just seems like it would break much more easily.

              More modern tapes support LTFS, so I can basically use one like an external hard drive. It’s pretty much: I pop a tape in, sync new files to it once a week or so, then swap it for a new tape as it gets full. Towards the end I print a directory of what’s on it, because admittedly doing it this way is messy. But my intention with this is to back up my “medium critical” files: stuff that I would be frustrated to lose, but not heartbroken over. Movies and TV shows that I did custom muxes of to have my ideal subtitles, audio tracks, etc., plus all my dockers so stuff like my Jellyfin watch status and komga library stay intact, stuff like that. That takes up the bulk of my NAS, and my primary concerns are either the array fully failing or significant bit rot; if either of those occurs, I would rebuild from scratch and just copy all the tapes back over anyway, so the messy filing isn’t really a huge issue.

              I also sometimes make it a point to copy harder-to-find files onto at least 2 tapes, on the off chance a tape goes bad. It’s unlikely, given I only buy new tapes and store them properly (I even go to the effort of storing them offsite just in case my house burns down), but you never know, I suppose.

              The advertised tape capacities are crap for this use. You’ll see that LTO-8 has a native capacity of 12TB but a compressed capacity of 30TB per tape! And the tapes will frequently just say 30TB on them. That’s nonsense here. Maybe for a more typical server environment where they’re storing databases and text files and shit, but compressed movies and music? Not so much. I get some advantage because I keep most of my stuff in archival quality (remux/FLAC/etc), but even then I still usually don’t get anywhere near 30TB.

              It’s pretty slow. Not the end of the world, but just something to keep in mind. LTO-8 is supposed to be 360MB/s for uncompressed and 750MB/s for compressed data, but I don’t seem to hit those speeds at all. I’m not really in a rush, though, and everything verifies fine and works after copying back over, so I’m not too worried. But it can take like 10-14 hours to fill a tape. If I ever do have to rebuild the array, it will take AGES.
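
              As a sanity check on those fill times, and on what a 120TB array like yours would take (my own rough arithmetic, using native capacity):

              ```python
              import math

              tape_tb = 12                  # LTO-8 native capacity
              fill_h = lambda mb_s: tape_tb * 1e12 / (mb_s * 1e6) / 3600
              print(f"{fill_h(360):.1f} h") # ~9.3 h at the rated 360 MB/s
              print(f"{fill_h(250):.1f} h") # ~13.3 h at a more realistic rate

              tapes = math.ceil(120 / tape_tb)  # backing up a 120 TB array
              print(tapes, "tapes,", round(tapes * fill_h(250)), "hours of writing")
              # -> 10 tapes, ~133 hours of writing
              ```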

              For my “absolutely priceless” data I have other more robust backup solutions that are basically the same as yours (literally down to using backblaze, ha).

              • @[email protected]
                link
                fedilink
                English
                1
                4 months ago

                You got an incredible deal on your tape drive. For LTO-8 drives, I’m seeing “for parts only” drives sold for around $500. I’d be willing to throw away $100 or $200 on the possibility that I could repair a drive; $500 is a bit too much. It looks like LTO-6 is more around what my budget would be; it would require a much larger number of tapes, but not excessively so.

                I remember when BD-R was a reasonable solution for backup. There’s no way that’s true now. It really seems like hard drive capacity has far outpaced removable media. If most people are streaming everything, those of us who actually want to save our data locally are really the minority these days. There’s just not as much of a compelling reason for companies to develop cheap high-capacity removable discs.

                I’m sure I’ll invest in a tape backup solution eventually, but for now, at least I have ZFS with paranoid RAIDZ.

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  4 months ago

                  It was about a year ago and I’ve found general prices have gone up on basically everything, even stuff for parts, in the past few years, but more importantly it was also a local sale in person with a vendor I know. I find that’s the only way to actually get deals anymore. If you buy stuff like this and are stuffing a network rack at home it makes sense to befriend a local electronics recycler or two if you live in an area where that’s a thing.

                  I actually moved about two years ago to a less developed area, but I will still drive to where I used to live (which is like a 90-120 minute drive) 1-2x a year for stuff like this. It’s worth it bc these guys still know me and they’ll cut me deals on stuff that eBay sellers will list for 2-3x as much. But if you watch, 8x out of 10 their auctions never sell at those prices; at best they sometimes sell for an undisclosed “best offer” if they even have that option. It’s crazy how many eBay sellers will let shit sit on the market at inflated prices for weeks, months, or longer rather than drop them, propping up an artificial economy in the hopes that eventually a clueless buyer with fat pockets will come along. These guys get that and don’t want to waste the space storing shit for ages.

                  Full disclosure: when I lived in the area I ran a refurbishing business on the side and would buy tons of stuff from them to fix and resell; that probably helped get me on their good side. From like 2013-2019 I would buy tons of broken phones, consoles, weird industrial shit, etc., fix it, and resell it. They loved it because it was a guaranteed cash sale with no eBay/PayPal fees, no risk of negative feedback for their eBay store, no risk of a buyer doing a chargeback or demanding a return, etc. I wanted their broken shit, and if I couldn’t fix it I accepted the loss, brought it back to them to recycle, and admitted defeat in shame.

        • @[email protected]
          link
          fedilink
          English
          3
          4 months ago

          Way ahead of you… I have a Brocade ICX6650 waiting to be racked up once I’m not limited to just the single 15A circuit my rack runs off of currently 😅

          Hopefully the 40G interconnect between it and the main switch everything is using now will be enough for the storage nodes and the storage network/VLAN.