I just got my home server up and running and was wondering what you guys recommend for backups. I figure it will probably be worth having backups on external cloud servers. Are there any good services y’all use for that?
Backups and archived files go to my home server, which then backs up to Backblaze B2.
My setup exactly, with the addition of using M-Discs to back up my most important stuff.
As dumb/simple/boring as this may be…? An external hard drive.
…
…what? It doesn’t require you to be online 24/7, works at any™ PC, and the speed is really great – even on a potato.
Unless you work at NASA or IBM or somewhere similar – then feel free to call me dumb.
While I agree with you, hard drives do have a shelf life. How many years seems to be up for debate, but it does exist. If you don’t have multiple drives of different ages, you may be in a world of hurt one day.
I have a hot storage NAS that backs up to a warm storage NAS.
I back up every week and scrub every month.
I have 2 x ZFS RAID-Z1 pools that contain 3 x 20 TB disks each.
With ECC RAM, scrubbing, and independent pools, it’ll take a house fire to kill my local storage.
I also have a continuous backup to Backblaze and a yearly encrypted backup that I ship to a friend across the world.
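For a sense of how that weekly-backup / monthly-scrub cadence can be automated, here’s a minimal cron sketch; the pool name `tank` and the backup script path are placeholders, not the poster’s actual setup.

```
# /etc/cron.d/zfs-maintenance -- sketch only; "tank" and the script path are placeholders
# Weekly backup to the warm-storage NAS, Sundays at 02:00
0 2 * * 0  root  /usr/local/bin/backup-to-warm-nas.sh
# Monthly scrub to verify checksums, first day of the month at 03:00
0 3 1 * *  root  /usr/sbin/zpool scrub tank
```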
Why? If you check the drive once a month, and it fails once per 10 years on average, the average time until both the backup drive and the main drive fail simultaneously is 2340 years. Of course they are much more likely to fail if they’re old, but the odds are still very small.
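For anyone who wants to sanity-check that kind of estimate, here’s the back-of-envelope version, assuming independent failures and a monthly check interval; the exact figure depends on the failure rate you assume, but either way a simultaneous failure is very unlikely.

```latex
% Assume each drive independently fails with probability p per check interval.
% Mean time to failure T = 10 years = 120 months, check interval \Delta t = 1 month:
\[
  p = \frac{\Delta t}{T} = \frac{1}{120}, \qquad
  \mathbb{E}[T_\text{both}] \approx \frac{\Delta t}{p^{2}} = 120^{2}\ \text{months} = 1200\ \text{years}.
\]
```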
That is great for hardware failures, but what about disasters? I would hate to lose my house to a fire and, at the same time, all the data on my server (including irreplaceable things like family photos) because my primary and backup were both destroyed.
Eh…you’ve got a point there. Then again, there are always pen drives and other tiny devices you can copy your most important/crucial files onto and carry along with your house/car keys or something like that.
I use OneDrive. Buy the Costco subscription and get like 15 months for around 110 CAD; it gives you 6 TB. I create some fake accounts and link their shares to my main account. I have an encrypted rclone share for some things; for others I GPG-encrypt the tar before sending it up. It’s been working fine for a couple of years and I have multiple TB backed up.
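For context, the tar-then-GPG step can look roughly like this; the `onedrive:` remote, the source path, and the recipient key are placeholders, not the poster’s actual configuration.

```
# Tar a directory, encrypt the stream with GPG, and upload it to OneDrive via rclone.
# "onedrive:" must already exist in your rclone config; the recipient key is a placeholder.
tar -czf - /home/user/documents \
  | gpg --encrypt --recipient backup@example.com --output - \
  | rclone rcat "onedrive:backups/documents-$(date +%Y%m%d).tar.gz.gpg"
```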
I use Restic + Resticprofile to back up everything and store it on my local HDD.
Then, I use Rclone to sync the local repository to Backblaze B2.
Here’s my general setup:
/.config/restic/
├── logs
│   ├── statuses
│   │   ├── restic-status-20230202T020202.json
│   │   └── restic-status-20230101T010101.json
│   ├── restic-check-20230202T020202.log
│   └── restic-backup-20230101T010101.log
├── config
│   ├── profiles.yaml
│   ├── excludes.txt
│   ├── rclone.conf
│   └── password.txt
├── bin
│   ├── restic_0.15.2_linux_arm64
│   ├── rclone_1.63.1_linux_arm64
│   └── resticprofile_0.22.0_linux_arm64
version: "1" # Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events) {{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }} # Daily at 10PM {{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }} # Weekly at 4AM on Saturday {{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }} # Weekly at 11.30PM on Sunday {{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday # Directories {{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }} {{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }} {{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }} {{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }} {{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }} {{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }} {{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }} {{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }} {{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }} {{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }} {{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }} # Configs {{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }} {{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }} {{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/excludes.txt" }} global: default-command: snapshots # Run 'snapshots' when no command is specified initialize: false # Do not initialize a repository if none exists priority: low # Use priority class on Windows and "nice" on Unixes min-memory: 100 # Minimum required RAM for Resticprofile to start restic-lock-retry-after: 5m # Retry failed restic command acquisition every 5 minutes restic-stale-lock-age: 10h # Unlock stale lock if age exceeds 10 hours restic-binary: '{{ $LOCATION_RESTIC_BINARY }}' # Location of the Restic binary default: lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}' # Local lockfile to prevent concurrent profile runs force-inactive-lock: true # Detect and remove stale locks initialize: true # Initialize repository if it doesn't exist repository: '{{ $LOCATION_RESTIC_REPO }}' # Path to Restic repository password-file: '{{ $CONFIG_RESTIC_PASSWORD }}' # File containing repository password status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json' # Output status file compression: 'max' # Maximum compression level run-after-fail: # Block syncing if there was a failure. TODO: Add an email - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." 
> {{ $LOCATION_RESTIC_BLOCKED_FILE }}' backup: run-before: # Bring down Docker before backup - 'systemctl stop docker.socket' - 'systemctl stop docker' run-finally: - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log' # Copy log file, stripping out any unchanced files - 'systemctl start docker' # Bring Docker back online after backup one-file-system: false # Exclude other file systems no-error-on-warning: true # Don't consider warnings as backup failures source: # Directories to back up - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}' exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns exclude-caches: true # Exclude cache files schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}' # Backup schedule schedule-permission: system # Schedule permission schedule-lock-wait: 10m # Wait time for the lock during schedule schedule-log: '{{ tempFile "backup.log" }}' # Log file to /tmp. This contains all information, including unchanged files which we do not care about verbose: 2 # Log details about processed files check: schedule: '{{ $SCHEDULE_RESTIC_CHECK }}' # Verification schedule schedule-permission: system # Schedule permission schedule-lock-wait: 10m # Wait time for the lock during schedule schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log' # Log file read-data: true # Verify data during check prune: dry-run: true # Only prune if safe to do so, change manually repack-uncompressed: true # Repack all uncompressed data forget: dry-run: true # Only forget if safe to do so, change manually rewrite: dry-run: true # Only rewrite if safe to do so, change manually forget: true # Remove original snapshots after creating new ones exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns mount: allow-other: true # Allow other users to access the mount point rebuild-index: read-all-packs: true # Read all pack files to generate new index from scratch # The following shell profiles are simply to run other shell scripts at a scheduled time # We do not actually run the primary Restic commands listed, as we exit the process early shell-postgres: # Profile to run shell scripts only. We exit the current process before Restic can run. 
backup: schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}' # Postgres backup schedule schedule-permission: system # Schedule permission schedule-lock-mode: ignore # Ignore locks, if any schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log' # Log file dry-run: true # Don't write data run-before: # Dump postgres databases - 'chmod 777 /var/run/docker.sock' - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"' - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"' - 'kill $$' shell-sync: backup: schedule: '{{ $SCHEDULE_SYNC_BACKUP }}' # Sync backup schedule schedule-permission: system # Schedule permission schedule-lock-mode: ignore # Ignore locks, if any schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log' # Log file dry-run: true # Don't write data run-before: # Sync the Restic repo, after checking if the repository is in good health - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi' - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete' - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RESTIC_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}' - 'kill $$'
Resticprofile doesn’t let me run other shell commands on a schedule, and because I wanted everything in a single configuration, I just created two new profiles which call the backup command. I then made the shell commands run before Restic, and then finally killed the instance before it got to actually run, which effectively does what I needed.
This is the first time I’ve heard of resticprofile, and it looks nice. So far I’ve been using crestic for configuration files. Do you know how they compare?
It seems like they have the same objective – making Restic easier to configure. I’d never heard of Crestic until now. I’d say stick with what you’re comfortable with.
Veeam Backup & Replication at home and at work. At home, a copy goes to a NAS and another copy currently goes to Backblaze B2.
BorgBase, with Borgmatic (Borg) as the software. As far as I know, the whole BorgBase service is run by a homelab guy (with our needs in mind).
Also 3-2-1 rule!
Also team borgmatic here. ;)
Duplicati, to the home server of a friend who lives in another town.
I hate to ask the scary question, but have you tried to restore your backups before? I used Duplicati and discovered that none of my backups were usable and ended up switching to Duplicacy.
+1 for Duplicacy. It just works, it truly does. Duplicati, on the other hand, seems to work but has a tendency to fail on restore, just as you described.
It works just fine for me, but I’ve heard scary stories, so now I’m using:
- Kopia to Backblaze B2 (all data)
- Kopia to a local disk (all data)
- Duplicati to Google Drive (only one folder)
This is why I switched to restic.
An important question though.
I have, when I first set it up, and again once when I needed to.
How would one realistically go about testing their backup? Do you need a bunch of empty drives?
You don’t need to do full restores; spot-check random files.
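As a concrete example of spot-checking with restic (which comes up a lot in this thread), you can restore a single file into a scratch directory and compare it to the live copy, or have restic read back a random subset of the repository; the repo and file paths below are placeholders.

```
# Restore one file from the latest snapshot into a scratch directory and compare it.
restic -r /srv/restic-repo restore latest \
  --target /tmp/restore-test \
  --include /home/user/photos/2023/img_0001.jpg
diff /tmp/restore-test/home/user/photos/2023/img_0001.jpg \
     /home/user/photos/2023/img_0001.jpg

# Or have restic read back and verify a random 5% of the repository's data.
restic -r /srv/restic-repo check --read-data-subset=5%
```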
I use Syncthing to back up our cell phones to my on-prem server, and then use Backblaze Personal Backup for a cloud copy.
Backblaze B2, borgbase.com. There are also programs like Déjà Dup that will let you back up to popular cloud drives. The alternatives are limitless.
Backblaze.
Regardless of service, if you don’t test your backups, you have none.
Ehhh I would say then you have probabilistic backups. There’s some percent chance they’re okay, and some percent chance they’re useless. (And maybe some percent chance they’re in between those extremes.) With the odds probably not in your favor. 😄
Schrodinger’s backups.
Exactly.
Not so much about testing, but one time when I really needed to get to my backups, I had lost the password to the repository (I’m using restic). Luckily a copy of it was stored in Bitwarden, but the time until I remembered that was one of my worst moments.
Needless to say, please test your backups and store secrets in more than one place.
I use a nightly borg backup to a separate box, and then that box uses rclone to back up the borg repo offsite. Before running the borg backup, I export all databases and Docker volumes so they get picked up.
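A minimal sketch of that kind of nightly job, with the dumps happening before `borg create` and the offsite sync afterwards (combined into one script here for illustration; the repo paths, container name, and `offsite:` rclone remote are placeholders).

```
#!/bin/sh
# Nightly backup sketch: dump databases, create a borg archive, push the repo offsite.
# All paths, the container name, and the "offsite:" rclone remote are placeholders.
set -eu

# 1. Export databases so the dumps are included in the backup
docker exec postgres pg_dumpall -U postgres | gzip > /srv/dumps/postgres-"$(date +%F)".sql.gz

# 2. Create the borg archive in the repository on the separate backup box
borg create --stats --compression zstd \
  /mnt/backup-box/borg-repo::'{hostname}-{now:%Y-%m-%d}' \
  /srv/dumps /srv/docker-volumes /home

# 3. Sync the whole borg repository to the offsite remote
rclone sync /mnt/backup-box/borg-repo offsite:borg-repo
```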
I have been with IDrive since 2009. At the time, they were the only ones that allowed backups of network-attached storage on their cheaper personal plans. Everyone else saw that as an “enterprise” feature that required a business plan, which was bullsh*t, because lots of home NAS devices were being sold.
Anyway, I haven’t done a recent comparison of services, but I remain happy with IDrive.
These days I no longer back up from a computer with a mapped drive, but directly from my NAS, which runs the IDrive software.
I had a catastrophic dual-drive failure a few years ago: one drive failed, and another failed during the RAID rebuild! I was able to restore about 1 TB of data and didn’t lose anything important.
They also offer backup and restore by shipping a drive to you if you want to avoid the huge initial backup or a total restore, but I haven’t used that feature.
They do also have a mobile app, but last time I tried it, it wasn’t great.
- restic > backblaze b2, nightly & automatic
- restic > normally unplugged drive, every couple weeks (manual, recurring reminder)