Hi,

I’m not sure if this is the right community for my question, but as my daily driver is Linux, it feels somewhat relevant.

I have a lot of data on my backup drives, and recently added 50GB to my already 300GB of storage (I can already hear the comments about how low/high/boring that is). It’s mostly family pictures, videos, and documents since 2004, much of which has already been compressed using self-made bash scripts (so it’s Linux-related ^^).

I have a lot of data that I don’t need regular access to and won’t be changing anymore. I’m looking for a way to archive it securely, separate from my backup but still safe.

My initial thought was to burn it onto DVDs, but that’s quite outdated and DVDs don’t hold much data. Blu-ray discs can store more, but I’m unsure about their longevity. Is there a better option? I’m looking for something immutable, safe, easy to use, and that will stand the test of time.

I read about data crystals, but they still seem to be in the research phase and not available to consumers. What about using old hard drives? Don’t they need to be powered on every few months/years to maintain the magnetic charges?

What do you think? How do you archive data that won’t change and doesn’t need to be very accessible?

Cheers

  • DasFaultier · 49 points · 8 months ago

    This is my day job, so I’d like to weigh in.

    First of all, there’s a whole community of GLAM institutions involved in what is called Digital Preservation (try googling that specifically). Here in Germany, a lot of them have founded the Nestor Group (www.langzeitarchivierung.de) to further the cause and share knowledge. Recently, Nestor had a discussion group on Personal Digital Archiving, addressing just your use case. They have set up a website at https://meindigitalesarchiv.de/ with the results. Nestor publishes mostly in German, but online translators are a thing, so I think you will be fine.

    Some things that I want to address from your original post:

    • Keep in mind that file formats, just like hardware and software, become obsolete over time. Think about a migration strategy for your files to a more recent format if your current format falls out of style and isn’t as widely supported anymore. I assume your photos are JPGs, which are generally not considered safe for preservation, as they use lossy compression and degrade with each re-encoding. A suitable replacement might be PNG, though I wouldn’t go ahead and convert my JPGs right away. For born-digital photo material, uncompressed TIFF is the preferred format.
    • Compression in general is considered a risk, because a damaged bit will potentially impact a larger block of compressed data. Saving a few bytes of storage isn’t worth losing your precious memories.
    • Storage media have different retention times. It’s true that magnetic tape storage has the best chances for survival, and it’s what we use for long-term cold storage, but it’s prohibitively expensive for home use. Also, it’s VERY slow on random access, because tape has to be rewound to the specific location of your file before reading. If you insist on using it, format your tapes using LTFS to eliminate the need for a storage management system like IBM Spectrum Protect. The next best choice of storage media are NAS-grade HDDs, which will last you upwards of five years. Using redundancy and a self-correcting file system like ZFS (compression & dedup OFF!) will increase your chances of survival. Keep your hands off optical storage media; they tend to decay after just a year, according to studies on the subject. Flash storage isn’t much better either; avoid thumb drives at all cost. Quality SSD storage might last you a little longer. If you use ZFS or a comparable file system that provides snapshots, you can use that to implement immutability.
    • Kudos for using Linux standard tooling; it will help other people understand your stack if anything happens to you. Digital Preservation is all about removing dependencies on specific formats, technologies and (importantly) people.
    • Backup is not Digital Preservation, though I will admit that these two tend to get mixed into one another in personal contexts. Backups save the state of a system at a specific point in time; DigiPres tries to preserve only data that isn’t specific to a system and tends to change very little. Also, and that is important, DigiPres tries to save context along with the actual payload, so you might want to at least save some metadata along with your photos and store them all in a structure that is made for preservation. I recommend BagIt; there’s a lot of existing tooling for creating it, it’s self-contained, secured by strong checksums and it’s an RFC (see the sketch after this list).
    • Keep complexity as low as possible!
    • Last of all, good on you for doing SOMETHING. You don’t have to be perfect to improve your posture, and you’re on the right track, asking the right questions. Keep on going, you’re doing great.
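
    Since I mentioned BagIt: a bag is just a directory layout plus checksum manifests, so you can build a minimal one by hand. Here is a rough bash sketch (the paths ~/photos-2004 and ~/bags/photos-2004 are placeholders; for real use, the bagit-python tooling does the same thing more robustly):

      #!/usr/bin/env bash
      set -euo pipefail

      SRC=~/photos-2004        # payload to preserve (placeholder path)
      BAG=~/bags/photos-2004   # where the bag will live (placeholder path)

      # Payload goes under data/
      mkdir -p "$BAG/data"
      cp -a "$SRC/." "$BAG/data/"

      # Bag declaration required by the BagIt spec (RFC 8493)
      printf 'BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n' > "$BAG/bagit.txt"

      # Payload manifest: one SHA-256 checksum per file under data/
      ( cd "$BAG" && find data -type f -print0 | xargs -0 sha256sum > manifest-sha256.txt )

      # Later, verify fixity of the whole bag
      ( cd "$BAG" && sha256sum -c manifest-sha256.txt )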

    Come back at me if you have any further questions.

      • DasFaultier · 2 points · 8 months ago

        Good to hear! When you go with the National Archives UK, you can’t fail. They have some very, VERY competent people on staff over there, who are also quite active in the DigiPres community. They are also the inventors of DROID and the maintainers of the widely used PRONOM database of file formats. https://www.nationalarchives.gov.uk/PRONOM/Default.aspx Absolute heroes of Digital Preservation.

    • @[email protected] · 2 points · 8 months ago

      And have multiple copies in at least two locations of anything truly important to guard against disaster (such as a fire or regionally appropriate natural disaster). I got a spare drive, copied all the music that I’ve made onto it, and sent it to my father in a different part of the country. I could lose everything else and be pretty bummed, but not that (not without severe depression, anyway). I also endorse use of a safe deposit box at a bank if you don’t have someone who can hold data in a different city.

      • DasFaultier · 2 points · 8 months ago

        Yeah, you can always go crazy with (off site) copies. There’s a DigiPres software system literally called LOCKSS (Lots Of Copies Keep Stuff Safe).

        The German Federal Office for Information Security recommends a distance of at least 200km between (professional) sites that keep georedundant copies of the same data/service, so depending on your upload capacity and your familiarity with encryption (ALWAYS backup your keys!), some cloud storage provider might even be a viable option to create a second site.

        Spare drives do absolutely work as well, but remember that, depending on the distance, data there will get more or less outdated and you might not remember to refresh the hardware in a timely manner.

        A safe deposit box is something that I hadn’t considered for my personal preservation needs yet, but sounds like a good idea as well.

        Whatever you use, also remember to read back data from all copies regularly and recalculate checksums for fixity checks to make sure your data doesn’t get corrupted over time. Physical objects (like books) decay slowly over time, digital objects break more spontaneously and often catastrophically.
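
        For plain directories of files, a fixity check can be as simple as this bash sketch (the /mnt/archive path and the checksums.sha256 filename are placeholders):

          # once, right after writing the copy: record a checksum for every file
          find /mnt/archive -type f ! -name checksums.sha256 -print0 \
            | xargs -0 sha256sum > /mnt/archive/checksums.sha256

          # on every later inspection: re-read every file and report only mismatches
          sha256sum --check --quiet /mnt/archive/checksums.sha256 && echo "fixity OK"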

  • Max-P · 3 points · 8 months ago

    I would use maybe a Raspberry Pi or old laptop with two drives (preferably different brands/age, HDD or SSD doesn’t really matter) in it using a checksumming filesystem like btrfs or ZFS so that you can do regular scrubs to verify data integrity.

    Then, from that device, pull the data from your main system as needed (that way, the main system has no way of breaking into the backup device so won’t be affected by ransomware), and once it’s done, shut it off or even unplug it completely and store it securely, preferably in a metal box to avoid any magnetic fields from interfering with the drives. Plug it in and boot it up every now and then to perform a scrub to validate that the data is all still intact and repair the data as necessary and resilver a drive if one of them fails.
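
    For anyone new to this, the periodic check is just a scrub; roughly (the pool name "coldstore" is a placeholder):

      zpool scrub coldstore        # re-read every block and repair from the redundant copy
      zpool status -v coldstore    # check progress and any checksum errors found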

    The unfortunate reality is most storage mediums will eventually fade out, so the best way to deal with that is an active system that can check data integrity and correct the files, and rewrite all the data once in a while to make sure the data is fresh and strong.

    If you’re really serious about that data, I would opt for both an HDD and an SSD, and have two of those systems at different locations. That way, if something shakes up the HDD and damages the platter, the SSD is probably fine, and if it’s forgotten for a while maybe the SSD’s memory cells will have faded but not the HDD. The strength is in the diversity of the mediums. Maybe burn a Blu-Ray as well just in case; it’ll fade too but hopefully differently than an SSD or an HDD. The more copies, even partial copies, the more likely you can recover the entirety of the data, and you have the checksums to validate which blocks from which medium are correct. (Fun fact, people have been archiving LaserDiscs and repairing them by ripping the same movie from multiple identical discs, as they’re unlikely to fade at exactly the same spots at the same time, so you can merge them all together and cross-reference them and usually get a near-perfect rip of it).

    • @[email protected] · 2 points · 8 months ago

      with two drives (preferably different brands/age, HDD or SSD doesn’t really matter) in it using a checksumming filesystem like btrfs or ZFS so that you can do regular scrubs to verify data integrity.

      An important detail here is to add the two disks to the filesystem in a way that the second one does not extend the capacity but adds redundancy. On ZFS, this can be done with a mirror vdev (simplest for this case) or a raidz1 vdev.
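
      A minimal sketch of the mirror variant (the pool name and device paths are placeholders; /dev/disk/by-id paths avoid surprises when drive letters shuffle):

        # dedup is off by default; compression disabled per the advice upthread
        zpool create -O compression=off archive mirror \
          /dev/disk/by-id/ata-DISK_ONE /dev/disk/by-id/ata-DISK_TWO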

    • @[email protected] (OP) · 1 point · 8 months ago

      “The strength is in the diversity of the mediums” I like that. Should be part of the book of Zen for Backups. Thank you for your insights.

  • xmanmonk · 6 points · 8 months ago

    Don’t use DVDs. They suffer bitrot, as do “metal” hard drives.

  • @[email protected] · 4 points · edited · 8 months ago

    I am using https://duplicati.com/ and https://www.backblaze.com/ (I use their B2 cloud storage; pricing is variable, around $6 a month for 1TB or less, depending on how much you use) to run a scheduled backup of my photos every night. It’s compressed and encrypted. I save a config file to my Google account, so say my house and server burn down: I just pull my config from Google, redownload Duplicati, and boom, pull my backup down. The whole setup backs up incrementally, so once you do the first backup, only changes are uploaded. I love the whole setup.

    Edit: You can also just pull files you need not the whole backup.

  • @[email protected] · 5 points · 8 months ago

    The local-plus-remote strategy is fine for any real-world scenario. Make sure that at least one of the replicas is a one-way backup (i.e., no possibility of mirroring a deletion). That way you can increment it with zero risk.
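
    With plain rsync, the simplest way to get that property is to never pass --delete, so removals on the source are not mirrored (paths are placeholders):

      # additions and changes propagate; deletions on the source do not
      rsync -av ~/archive/ /mnt/replica/archive/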

    And now for some philosophy. Your files are important, sure, but ask yourself how many times you have actually looked at them in the last year or decade. There’s a good chance it’s zero. Everything in the world will disappear and be forgotten, including your files and indeed you. If the worst happens and you lose it all, you will likely get over it just fine and move on. Personally, this rather obvious realization has helped me to stress less about backup strategy.

    • @[email protected] (OP) · 3 points · 8 months ago

      So you would suggest getting bigger and bigger storage?

      I really like and can embrace the philosophical part. I do delete data rigorously. At the same time, I once had a data loss, because I was young and stupid and tried to install SUSE without a backup. I am still sad not to be able to look at the images of me and my family from that time. I do look at those pictures/videos/recordings from time to time. It gives me a nice feeling of nostalgia. It also grounds me and shows me how much has changed.

      • @[email protected] · 2 points · 8 months ago

        Fair enough!

        So you would suggest getting bigger and bigger storage?

        Personally I would suggest never recording video. We did fine without it for aeons and photos are plenty good enough. If you can stick to this rule you will never have a single problem of bandwidth or storage ever again. Of course I understand that this is an outrageous and unthinkable idea for many people these days, but that is my suggestion.

        • @[email protected] (OP) · 2 points · 8 months ago

          Never recording videos… That is outrageous ;) Interesting train of thought, though. Video is the main data hog on my drives. It’s easy to mess up the compression. At the same time, it combines audio, image and time in one easy-to-consume file. Personally, I would miss it.

      • Nine · 2 points · 8 months ago

        Waaaaay better.

        Restic allows you to make deduplicated snapshots of your data. Everything is there and it’s damn hard to lose anything. I use Backblaze B2 as my long-term endpoint / offsite… some will use AWS Glacier. But you don’t have to use any cloud services. You can just have a restic repository on some external drives. That’s what I use for my second copy of things. I also do an annual backup to a hard disk that I leave with a friend for a second offsite copy.

        I’ve been backing up all of my stuff like this for years now. I used to use BORG which is another great tool. But restic is more flexible with allowing multiple systems to use a single repository and has native support for things like B2 that BORG doesn’t.

        We also use restic to back up control nodes for some of the supercomputing clusters I manage. It’s that rock solid imho.
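
        If anyone wants to try it, a restic-plus-B2 setup boils down to a few commands; a rough sketch (the bucket name, repo path and directories are placeholders):

          export B2_ACCOUNT_ID="..." B2_ACCOUNT_KEY="..."   # Backblaze credentials
          restic -r b2:my-archive-bucket:photos init        # one-time: create the encrypted repo
          restic -r b2:my-archive-bucket:photos backup ~/photos
          restic -r b2:my-archive-bucket:photos snapshots   # list the deduplicated snapshots
          restic -r b2:my-archive-bucket:photos check       # verify repository integrity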

  • @[email protected] · 6 points · 8 months ago

    You might be interested in git-annex (see the Bob use case).

    It has file tracking so you can - for example - “ask” a repository at drive A where some file is, and git-annex can tell you it’s on drives C and D.

    git-annex can also enforce rules like: “always have at least 3 copies of file X, and any drive will do”; “have one copy of every file at the drives in my house, and have another at the drives in my parents’ house”; or “if a file is really big, don’t store it on certain drives”.
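
    A rough sketch of that workflow (paths, file names and the "driveC" remote are placeholders, and driveC is assumed to be an already-configured remote):

      git init ~/annex && cd ~/annex
      git annex init "drive A"
      git annex add family-videos/                      # check the large files into the annex
      git annex numcopies 3                             # require at least 3 copies of everything
      git annex whereis family-videos/2004-beach.mp4    # which drives hold this file?
      git annex copy family-videos/ --to driveC         # push copies to another drive/remote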

  • Dave. · 3 points · 8 months ago

    Blu-Ray USB drive and M-Discs is about the best you can get at present. Keep the drive unplugged when not in use, it’ll probably last 10-20 years in storage.

    Seeing as there hasn’t been much advance past Blu-ray, keep an eye out for something useful to replace it in the future, or at least get another drive when you notice them becoming scarce.

    • astrsk · 6 points · 8 months ago

      According to this, Blu-ray has some of the worst expected shelf life, with the exception of BD-RE.

      • Dave. · 1 point · edited · 8 months ago

        As another poster has mentioned, M-Discs are written using a Blu-ray writer and are good for a few hundred years, in theory.

      • Extras · 5 points · edited · 8 months ago

        Think they meant a Blu-ray drive that can burn to an M-Disc.

  • @[email protected] · 2 points · 8 months ago

    I use external hard drives. Two of them, and they get rsynced every time something changes, so there’s a copy if one drive should fail. Once a month, I encrypt the whole shebang with gpg and send it off into an AWS bucket.
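
    For anyone curious, the monthly step is essentially this (bucket name and paths are placeholders; --symmetric prompts for a passphrase, which you must not lose):

      # encrypt a tarball of the backup drive and push it to the bucket
      tar -cf - /mnt/backup | gpg --symmetric --cipher-algo AES256 \
        -o backup-$(date +%Y-%m).tar.gpg
      aws s3 cp backup-$(date +%Y-%m).tar.gpg s3://my-archive-bucket/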

  • DigitalDilemma · 3 points · 8 months ago

    I used to write to DVDs, but the failure rate was astronomical - like 50% after 5 years, some with physical separation of the silvering. Plus today they’re so relatively small they’re not worth using.

    I’ve gone through many iterations and currently my home setup is this:

    • I have several systems that make daily backups from various computers and save them onto a hard drive inside one of my servers.
    • That server has an external hard drive attached to it, powered through a wifi plug controlled by Home Assistant.
    • Once a month, a scheduled task wakes up that external HDD and copies the contents of the online backup directory onto it (the core of that job is sketched after this list). It then turns it off again and emails me “Oi, minion. Backups complete, swap them out”. That takes five minutes.
    • Then I take the usb disk and put it in my safe, removing the oldest of 3 (the classic, grandfather, father, son rotation) from there and putting that back on the server for next time.
    • Once a year, I turn the oldest HDD into an “Annual backup”, replacing it with a new one. That stops the disks expiring from old age at the same time, and annual backups aren’t usually that valuable.
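
    For anyone wanting to copy the idea, the core of that scheduled task is only a few lines; a rough sketch (device label, mount point, paths and mail address are placeholders, the wifi-plug part is left out, and a working local mail command is assumed):

      #!/usr/bin/env bash
      set -euo pipefail
      mount /dev/disk/by-label/OFFLINE_BACKUP /mnt/offline
      rsync -a --delete /srv/backups/ /mnt/offline/backups/   # mirror the online backup dir
      umount /mnt/offline
      echo "Oi, minion. Backups complete, swap them out." \
        | mail -s "Monthly offline backup done" me@example.com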

    Having the HDDs in the safe means that total failure/ransomware costs me, at most, a month’s worth of data. I can survive that. The safe is also fireproof and in a different building from the server.

    This sort of thing doesn’t need high-capacity HDDs either - USB drives and micro-SD cards are very capable now. If you’re limited on physical space and don’t mind slower write times (which when automating is generally ok), microSDs with clear labelling are just as good. You’re not going to kill them through excessive writes for decades.

    I also have a bunch of other stuff that is not critical - media files, music. None of that is unique and it can all be replaced. All of that is backed up to a secondary “live” directory on the same PC - mostly in case of my incompetence in deleting something I actually wanted. But none of that is essential - I think it’s important to be clear about what you “must save” and what is “nice to save”.

    The clear thing is to sit back and work out a system that is right for you. And it always, ALWAYS should be as automated as you can make it - humans are lazy sods and easily justify not doing stuff. Computers are great at remembering to do repetitive tasks, so use that.

    Include checks to ensure the backed up data is both what you expected it to be, and recoverable - so include a calendar reminder to actually /read/ from a backup drive once or twice a year.

  • @[email protected] · 4 points · 8 months ago

    There isn’t anything that meets your criteria.

    Optical suffers from separation, hard drives break down, SSDs lose their charge, tape is fantastic but has a high cost of entry.

    There’s a lot of replies here, but if I were you I’d get an LTO drive from the last generation or two at some surplus auction and use that.

    People hate being told to use magnetic tape, but it’s very reliable, long lived, pretty cost effective once you have a machine and surprisingly repairable.

    What few replies are talking about is the storage conditions. If your archive can be relatively small and disconnected then you can easily meet some easy requirements for long term storage like temperature and humidity stability with a cardboard box, styrofoam cut to shape and desiccant packs (remember to rotate these!). An antifungal/antimicrobial agent on some level would be good too.

    • @[email protected] · 4 points · edited · 8 months ago

      People hate being told to use magnetic tape

      Because there are still horror stories of them falling apart and not lasting even in proper controlled conditions

          • @[email protected] · 3 points · 8 months ago

            The data is stored in little charge cells (floating-gate / charge-trap transistors, not unlike a CCD). It’s recorded as an analog voltage. There is no difference between analog voltages and digital voltages, I’m just using the word analog to establish that the potential is a domain that can vary continuously.

            When you read the data, the levels of the voltages are checked and translated to the digital information they represent.

            To determine the level of a voltage, a small amount of current is allowed to flow between the two points being measured. It’s a very small amount. Microamps and less.

            When you draw current from a charge-carrying device, the charge decreases, and with it the potential between its negative and positive terminals, i.e. the voltage.

            When the controller in the SSD responsible for reading voltages and assembling them into porno.mov doesn’t get a clear read, it asks again. As the SSD ages, parts of it can be re-queried hundreds of times just to get commonly read information, like system files, into memory.

            So the SSD degrades on read, and the user experiences this as “slowness”.

            Would rewriting the data fix this problem? Yes. Using either badblocks -n, dd or a program called spinrite, rewriting the data fixes that problem.

            Why doesn’t the SSD just do it? Because the SSD only has so many write cycles before it’s toast. Better to rely on the user, or more accurately the host OS, to dictate those writes than to take on that responsibility.
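
            For reference, the non-destructive refresh mentioned above looks roughly like this (the device path is a placeholder; run it on an unmounted drive, and keep a backup anyway, since -n rewrites every block in place):

              badblocks -nsv /dev/sdX   # non-destructive read/rewrite/verify of every block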

  • @[email protected] · 19 points · 8 months ago

    This is actually a real problem… A lot of digital documents from the 90’s and early 2000’s are lost forever. Hard drives die over time, and nobody out there has come up with a good way to permanently archive all that stuff.

    I am a crazy person, so I have RAID, Ceph, and JBOD in various and sundry forms. Still, drives die.

    • @[email protected] · 6 points · 8 months ago

      It’s crazy that there isn’t a company out there making viable cold storage for the average consumer. I feel like we’re getting even further away from viability now that we use QLC by default in SSDs. The rot will be so fast.

    • @[email protected] · 12 points · 8 months ago

      nobody out there has come up with a good way to permanently archive all that stuff

      Personally I can’t wait for these glass hard drives being researched to come to the consumer or even corporate level. Yes, they’re only writable one time and read-only after that, but I absolutely love the concept of being able to write my entire Plex server to a glass hard drive, plug it in and never have to worry about it again.