I’m trying to find a good method of making periodic, incremental backups. I assume the most minimal approach would be to have a cron job run rsync periodically, but I’m curious what other solutions may exist.
I’m interested in both command-line and GUI solutions.
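For reference, the minimal cron+rsync approach I had in mind would be something like this crontab fragment (paths and schedule are just placeholders):

```shell
# m h dom mon dow  command
# Nightly at 02:30: incremental snapshot of /home. --link-dest hardlinks
# files unchanged since the previous run instead of copying them, then the
# "last" symlink is pointed at the new snapshot. (% must be escaped in cron.)
30 2 * * * rsync -aAX --delete --link-dest=/mnt/backup/last /home/ /mnt/backup/\%F/ && ln -sfn /mnt/backup/$(date +\%F) /mnt/backup/last
```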
I used to use Duplicati, but it was buggy and would often need manual intervention to repair corruption. I gave up on it.
Now I use Restic to Backblaze B2, and I’ve been very happy.
I’ve used restic in the past; it’s good but requires a great deal of setup if memory serves me correctly. I’m currently using Duplicati on both Ubuntu and Windows and I’ve never had any issues. Thanks for sharing your experience though; I’ll be vigilant.
Restic to B2 is made of win.
The quick, change-only backups in a single executable intrigued me; the ability to mount snapshots to get at, e.g., a single file hooked me. The wide, effortless support for services like Backblaze made me an advocate.
I back up nightly to a local disk, and twice a week to B2. Everywhere. I have some 6 machines I do this on; one holds the family photos and our music library, and is near a TB by itself. I still pay only a few dollars per month to B2; it’s a great service.
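For anyone curious, the setup is smaller than it sounds. A sketch (bucket name, paths, and password file are placeholders; assumes restic is installed and a B2 bucket exists):

```shell
# Credentials come from the environment so they stay out of shell history.
export B2_ACCOUNT_ID="key-id"            # B2 application key ID (placeholder)
export B2_ACCOUNT_KEY="application-key"  # B2 application key (placeholder)
export RESTIC_REPOSITORY="b2:my-bucket:machine-name"
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"

restic init                # once, to create the encrypted repository
restic backup "$HOME"      # change-only after the first run
restic snapshots           # list what you have
restic mount /mnt/restic   # browse snapshots to pull out a single file
```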
Pika Backup (GUI for borgbackup) is a great app for backups. It has all the features you might expect from backup software and “just works”.
A separate NAS on an Atom CPU with btrfs in RAID 10, exposed over NFS.
I use an rsync+btrfs snapshot solution:
- Use rsync to incrementally collect all data into a btrfs subvolume
- Deduplicate using duperemove
- Create a read-only snapshot of the subvolume
I don’t have a backup server, just an external drive that I only connect during backup.
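The three steps might be sketched like this (all paths, subvolume names, and mount points are assumptions):

```shell
#!/bin/sh
# Sketch of the rsync -> duperemove -> read-only snapshot cycle.
# Assumes the external drive is a btrfs filesystem mounted at /mnt/backup
# with a subvolume named "data".
set -eu

# 1. Incrementally collect everything into the subvolume.
rsync -aAX --delete /home/ /mnt/backup/data/

# 2. Deduplicate identical extents within the subvolume.
duperemove -dr /mnt/backup/data

# 3. Freeze the result as a read-only, dated snapshot.
btrfs subvolume snapshot -r /mnt/backup/data "/mnt/backup/snapshots/$(date +%F)"
```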
Deduplication is mediocre; I am still looking for a snapshot-aware duperemove replacement.
I’m not trying to start a flame war, but I’m genuinely curious: why do people like btrfs over zfs? Btrfs seems very much “not ready for prime time”.
Features necessary for most btrfs use cases are all stable, plus btrfs is readily available in the Linux kernel, whereas for zfs you need an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.
(All the above only applies if your primary OS is Linux, if you use Solaris then zfs might be better.)
btrfs is included in the Linux kernel; zfs is not on most distros.
An external kernel module borking on a kernel upgrade does happen sometimes, and the chance of that is probably scary enough for a lot of people.
Fair enough.
I’ve only ever run ZFS on a Proxmox/server system, but doesn’t it require a not-insignificant amount of resources? BTRFS is not flawless, but it does have a pretty good feature set.
I run ZFS on my servers and then replicate to other ZFS servers with Syncoid.
Just keep in mind that a replica is not a backup.
If you lose or corrupt a file and you don’t find out for a few months, it’s gone on the replicas too.
Correct! I have Sanoid take daily and monthly snapshots on the source server, which replicate to the destination. Every now and then, I run a diff between the last known-good monthly snapshot and the most recent one which has been replicated to the destination. If I am happy with what files have changed, I delete the previous known-good snapshot and the one I diff’d becomes the new known-good. That helps keep me safe from ransomware on the source. The destination pulls from the source to prevent the source from tampering with the backup. Also helps when you’re running low on storage.
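The periodic check above can be sketched with plain zfs commands (pool, dataset, and snapshot names are placeholders; Sanoid manages the real snapshot names):

```shell
# On the destination, compare the last known-good monthly snapshot against
# the newest one that has replicated over, and review what changed.
zfs diff tank/data@monthly-known-good tank/data@monthly-latest

# If the changes look legitimate, retire the old known-good; the snapshot
# just reviewed becomes the new baseline.
zfs destroy tank/data@monthly-known-good
```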
I rotate between a few computers. Everything is synced between them with syncthing and they all have automatic btrfs snapshots. So I have several physical points to roll back from.
For a worst case scenario everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs, encfs and then rsyncs any updates.
I run Openmediavault and I back up using BorgBackup. Super easy to set up, use, and modify.
I use Rclone, which has both a web UI and a CLI.
Get a Mac, use Time Machine. Go all in on the ecosystem: phone, watch, iPad, TV. I resisted for years, but it’s so good, man, and Apple Silicon is just leaps beyond everything else.
Time Machine is not a backup; it is unreliable. I’ve had corrupted Time Machine backups, and its backups are non-portable: you can only read them using an Apple machine. Apple Silicon is also not leaps beyond everything else; a 7000-series AMD chip will trade blows on performance per watt given the same power target. (Source: I measured it. A 60-watt power limit on a 7950X will closely match an M1 Ultra given the same 60 watts of power.)
Sure their laptops are tuned better out of the box and have great battery life, but that’s not because of the Apple Silicon. Apple had good battery life before, even when their laptops had the same Intel chip as any other laptop. Why? Because of software.
Like before, their new M-chips are nothing special. Apple Silicon chips are great, but so are other modern chips. Apple Silicon is not “leaps beyond everything else”.
If you look past their shiny fanboy-bait chips, you realize you pay **huge** markups on RAM and storage. Apple’s RAM and storage aren’t anything special, yet they cost a lot more than any other high-end RAM and storage modules. And it’s not like theirs are better, because, again, an AMD chip can use regular RAM modules and an NVMe SSD and still match the M-chip’s performance at the same power target. Except you can replace the RAM modules and the SSD on the AMD platform for reasonable prices.
In the end, a MacBook is a great product, and there’s no other laptop that really gets close to its performance at its size. But that’s it; that’s where Apple’s advantage ends. Past their ultra-light MacBooks, you get overpriced hardware and crazy expensive upgrades, with an OS that isn’t better, more reliable, or more stable than Windows 11 (source: I use macOS and Windows 11 daily). You can buy a slightly thicker laptop (still thin and light) with replaceable RAM and SSD, and it will easily match the performance of the magic M1 chip with only a slight reduction in potential battery life. But guess what: if you actually USE your laptop for anything, the battery life of any laptop will quickly drop to 2-3 hours at best.
And that’s just laptops. If you want actual work done, you get a desktop, and for the price of any Apple desktop you can easily get a PC that outperforms it. In some cases, you can buy a PC that outperforms the Apple desktop AND a MacBook for on the go, and still have money left over. Except for power consumption, of course, but who cares about power consumption on a work machine? Only Apple fanboys care about that, because it’s the only thing they have going for them. My time is more expensive than my power bill.
Time Machine is such a neglected product. Timeshift is worlds beyond it.
Someone asking for a Linux backup solution may prefer to avoid the Apple ‘ecosystem’.
At the core it has always been rsync and cron. Sure, I add a NAS and things like rclone+Cryptomator to have extra copies of synchronized data (mostly documents and media files) spread around, but it’s always rsync+cron at the core.
Either an external hard drive or a pendrive. Just put one of those on a keychain and voilà, a perfect backup solution that doesn’t need internet access.
…it’s not dumb if it (still) works. :^)
I use Pika Backup, which uses Borg Backup under the hood. It’s pretty good, with amazing documentation. The main issue I have with it is that it’s really finicky and kind of a pain to set up, even if it “just works” after that.
Can you restore from it? That’s the part I’ve always struggled with.
The way Pika Backup handles it, it loads the backup as a folder you can browse. I’ve used it a few times when hopping distros to copy and paste stuff from my home folder. Not very elegant, but it works and is very intuitive, even if I wish I could just hit a button and reset everything to the snapshot.
I like rsnapshot, run from a cron job at various useful intervals. Backups are hardlinked and rotated so that eventually the disk usage reaches a very slowly growing steady state.
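The hardlink trick is easy to see with plain coreutils. A toy demonstration of the rotation rsnapshot does for you (rsnapshot itself drives rsync with --link-dest; `cp -al` here is just the GNU-coreutils way to show the idea):

```shell
# Toy demo of hardlinked, rotated snapshots in a temp dir.
work=$(mktemp -d)
mkdir -p "$work/src"
echo "hello" > "$work/src/file.txt"

# First snapshot: a full copy.
cp -a "$work/src" "$work/daily.0"

# Later: rotate, then snapshot again. cp -al hardlinks unchanged files
# instead of duplicating them, so steady-state disk usage grows slowly.
mv "$work/daily.0" "$work/daily.1"
cp -al "$work/daily.1" "$work/daily.0"

# The unchanged file shares a single inode across both snapshots:
stat -c %i "$work/daily.0/file.txt"
stat -c %i "$work/daily.1/file.txt"
```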
Been using rsnapshot for years, has saved me more than once
I also use it. A big benefit is that you don’t need special software to access your backup.
I do periodic backups of my system from live usb via Borg Backup to a samba share.
Most of my data is backed up to (or just stored on) a VPS in the first instance, and then I backup the VPS to a local NAS daily using rsnapshot (the NAS is just a few old hard drives attached to a Raspberry Pi until I can get something more robust). Very occasionally I’ll back the NAS up to a separate drive. I also occasionally backup my laptop directly to a separate hard drive.
Not a particularly robust solution, but it gives me some peace of mind. I would like to build a better NAS that can support RAID, as I was never able to get it working with the Pi.