cross-posted from: https://programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot, inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user, and I need to verify the integrity of all partitions.

I have been using LUKS+ext4 till now. I am hesitant to switch to ZFS/Btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ trying out Timeshift, which was my fault.

Should I use ZFS/Btrfs for /home/user? As for root, I’m considering LUKS+(ZFS/Btrfs) so it can be restored to a blank state.

  • Possibly linux

    Btrfs is good for small systems with 1-2 disks. ZFS is good for many disks and benefits heavily from RAM. ZFS also supports special devices (dedicated disks for metadata and small blocks).

  • @[email protected]

    My experience with btrfs is “oh shit I forgot to set up subvolumes”. Other than that, it just works. No issues whatsoever.

    • unhingeOP

      oh shit I forgot to set up subvolumes

      lol

      I’m also planning on using its subvolume and snapshot features. Since ZFS also supports native encryption, it’ll be easier to manage subvolumes for backups.
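
      For what it’s worth, the subvolume setup people forget is only a few commands. A minimal sketch, assuming a LUKS mapping named cryptroot and the common @/@home naming convention (both names are illustrative, not from this thread):

```shell
# Create the filesystem on top of the opened LUKS device
mkfs.btrfs /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt

# One subvolume per mount point; @ and @home are just a naming convention
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Mount each subvolume explicitly via the subvol= option
mount -o subvol=@,compress=zstd /dev/mapper/cryptroot /mnt
mkdir -p /mnt/home
mount -o subvol=@home,compress=zstd /dev/mapper/cryptroot /mnt/home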

  • @[email protected]

    Luks+btrfs with Arch as daily driver for 3 years now, mostly coding and browsing. Not a single problem so far :D

    • unhingeOP

      That sounds good.

      Have you used the LUKS integrity feature? It’s marked experimental in the man page, though.
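
      From the man page, enabling it would look something like this (device path is a placeholder; LUKS2 only, and still experimental):

```shell
# Experimental: authenticated encryption via dm-integrity.
# WARNING: luksFormat destroys any existing data on the device.
cryptsetup luksFormat --type luks2 \
    --cipher aes-gcm-random --integrity aead /dev/sdX2

# or HMAC-based integrity alongside a conventional XTS cipher:
cryptsetup luksFormat --type luks2 \
    --cipher aes-xts-plain64 --integrity hmac-sha256 /dev/sdX2
```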

  • @[email protected]

    After 4 years on btrfs I haven’t had a single issue, I never think about it really. Granted, I have a very basic setup. Snapper snapshots have saved me a couple of times, that aspect of it is really useful.

  • @[email protected]

    At some point, long ago, the Ubuntu installer was offering to use ZFS for the boot and root partitions. That sounded like a good idea and worked great for a long time: automatic snapshots, options to restore state at boot, etc.

    Until my generous boot partition started to run out of space with all the snapshots (which were set up automatically, with no obvious way to configure them). OK, no big deal: write a bash script that finds the old snapshots and deletes them manually whenever boot fills up again.

    Then one day recently my laptop wouldn’t boot anymore; GRUB could no longer read the ZFS on boot. Managed to boot with a USB installation image, read the ZFS, and chroot. Tried a lot of things, but in the end I killed ZFS, replaced it with ext4, and made it boot again.

    Apparently I’m not the only one with this issue.
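
    The pruning script was along these lines; the dataset name is a made-up placeholder, and only the line-selection part is plain shell:

```shell
#!/bin/sh
# Keep only the newest $KEEP snapshots of a dataset, destroy the rest.
# "bpool/BOOT/ubuntu" is a hypothetical dataset name for illustration.
KEEP=5

# Given snapshot names sorted oldest-first on stdin, print all but the
# newest $1 of them (these are the deletion candidates).
prune_candidates() {
    head -n "-$1"
}

# Real invocation (requires zfs, so commented out here):
# zfs list -H -t snapshot -o name -s creation -r bpool/BOOT/ubuntu \
#     | prune_candidates "$KEEP" \
#     | xargs -r -n1 zfs destroy
```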

  • The Doctor

    There’s no reason you couldn’t; btrfs is pretty stable.

    Edit: Going on five years of using btrfs on production servers (storing and processing data on a 24x7 basis).

  • @[email protected]

    I haven’t used them professionally but I’ve been using ZFS on my home router (OPNsense) and NAS (TrueNAS with RAID-Z2) for many years without problem. I’ve used Btrfs on laptops and desktops with OpenSUSE Tumbleweed for the past year and a bit, also without problem. Btrfs snapshots have saved me a couple of times when I messed something up. Both seem like solid filesystems for everyday use.

  • @[email protected]

    Been using Btrfs on a couple of NAS servers for 4+ years. Also did RAID 1 Btrfs over two USB hard drives connected to a Pi 4 (yes, this should absolutely be illegal).

    The USB RAID 1 had a couple of checksum errors last year that were easily fixed via scrub, and the other two servers have been running without any issues. I assume it’s been fine because they’re all connected to a UPS and I run weekly scrubs.

    I enjoyed CoW and snapshots so much that I’ve been using it on my main Arch install’s (I use Arch btw) root drive and storage drives (in BTRFS raid1) for the last 4 months without issue.
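
    For anyone curious, the weekly scrub is a one-liner; the mount point here is illustrative:

```shell
# Kick off a scrub on a mounted Btrfs filesystem and check on it
btrfs scrub start /mnt/nas
btrfs scrub status /mnt/nas

# Example weekly cron entry (-B runs in the foreground so errors
# end up in cron mail):
# 0 3 * * 0  /usr/bin/btrfs scrub start -B /mnt/nas
```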

  • @[email protected]

    Been using Btrfs for a year. I once had an issue where my filesystem went read-only; I went to the Btrfs reddit, and after some troubleshooting it turned out my SSD was dying. I couldn’t believe it at first because my SMART report was perfectly clean and the SSD was only 2 years old, but a few hours later SMART began reporting thousands of dead sectors.

    The bloody thing was better than SMART at detecting a dying SSD lol.

  • 0x0

    I did my first Btrfs setup over the weekend. I followed the Arch wiki to set up what I thought was RAID 1, only to find out nearly a TB of copying later that it was splitting the data between the drives, not mirroring it (only the metadata was in RAID 1). One command later, I’d converted the filesystem to true RAID 1. I feel like any other system would require a total redo of the entire FS, but Btrfs did it flawlessly.
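
    That one command was a balance with convert filters (mount point is illustrative); it rewrites the existing chunks online, while the filesystem stays mounted:

```shell
# Rewrite existing data and metadata into RAID 1 profiles, in place
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# Verify the resulting profiles afterwards
btrfs filesystem df /mnt/pool
```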

    I’m still confused, however, as it seems RAID 1 only works with two drives from what I’ve read. Is that true? Why?

    • The Doctor

      That is not the case. In the context of btrfs, RAID-1 means “ensure that two copies of every data block are available in the running volume,” not “ensure that every bit of both of these drives is identical at all times.” For example, I have a btrfs volume in my server with six drives in it (14 TB each) set up as a RAID-1/1 (both data and metadata are mirrored). It doesn’t really matter which two drives of the six have copies of a given data block, only that two copies exist at all.

      Compare it to, say, three RAID-1 metadevices (mdadm) with LVM over top, and ext4 on top of that. When a file is created in the filesystem (ext4), LVM makes it irrelevant which pair of drives it lands on, and mdadm’s RAID-1 layer ensures there are always two identical copies of every block (on two identical copies of a drive).

  • Ramin Honary

    Linux does not support ZFS as well as operating systems like FreeBSD or OpenIndiana do, but I use it on my Ubuntu box for my backup array. It is not the best setup: RAID-Z over USB is not at all guaranteed to keep your data safe, but it was the most economical thing I could build myself, and with regular scrubbing it gets the job done well enough to give me peace of mind about at least having one other reliable copy of my data. I can also write files to it quickly, and take snapshots of the filesystem state if need be.

    I used to use Btrfs on my laptop and it worked just fine, but I did have trouble once when I ran out of disk space. A Btrfs filesystem puts itself into read-only mode when that happens, which makes it tough to delete files to free up space. There is a magic incantation that can restore read-write functionality, but I never learned what it was; I just stopped using it, because Btrfs is pretty clearly not for home PC use. Freezing the filesystem in read-only mode makes sense in a data-center scenario, but not for a home user who might want to erase data and keep using the machine normally. I might consider Btrfs in place of ZFS on a file server, though ZFS does seem to provide more features and to be somewhat better tested and hardened.
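
    For the record, the incantations usually suggested for the full-disk case go something like this (no guarantee they cover every situation; the scratch device is a placeholder):

```shell
# Reclaim allocated-but-empty data chunks; often enough to get unstuck
btrfs balance start -dusage=0 /
btrfs balance start -dusage=10 /

# If balance itself has no room to work, temporarily add a spare device
btrfs device add /dev/sdY /    # /dev/sdY is any scratch device
btrfs balance start -dusage=10 /
btrfs device remove /dev/sdY /
```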

    There is also bcachefs now as an alternative to Btrfs, but it is still fairly new and not widely supported by default installations. I don’t know how stable it is or how it compares to Btrfs, but I thought I would mention it.

  • the magnificent rhys

    @unhinge I run a simple 48TiB zpool, and I found it easier to set up than many suggest and trivial to work with. I don’t do anything funky with it though, outside of some playing with snapshots and send/receive when I first built it.

    I think I recall reading about some nuance around using LUKS vs ZFS’s own encryption back then. Might be worth having a read around comparing them for your use case.

    • unhingeOP

      If you happen to find the comparison, could you link it here?

    • unhingeOP

      AFAIK OpenZFS provides authenticated encryption, while LUKS integrity is marked experimental (as of now, in the man page).

      OpenZFS also doesn’t re-encrypt deduplicated blocks if dedup is enabled (see Tom Caputi’s talk), but dedup can just be disabled.
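
      To illustrate what the native encryption looks like (dataset and host names are made up):

```shell
# OpenZFS native encryption; the default aes-256-gcm is authenticated
zfs create -o encryption=on -o keyformat=passphrase \
    -o keylocation=prompt tank/home

# Raw sends ship snapshots still encrypted, so the backup host
# never needs the key
zfs snapshot tank/home@backup
zfs send --raw tank/home@backup | ssh backuphost zfs receive pool/home
```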

  • @[email protected]

    Can’t vouch for ZFS, but btrfs is great!

    You can mount root, log, and home on different subvolumes; they’d effectively behave like separate partitions while still sharing the same pool of space.

    You can also take system snapshots with one command while the system is running. No need to exclude the home or log directories, nor the pseudo-filesystems (e.g. proc, sys, tmp, dev), since those live outside the snapshotted subvolume.
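
    For example, with / on its own subvolume (the snapshot directory name is illustrative):

```shell
# One read-only snapshot of the running system; other subvolumes
# (home, logs) and pseudo-filesystems are not part of it
btrfs subvolume snapshot -r / "/.snapshots/root-$(date +%F)"
```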