I tend to use btrfs on single disks for the snapshots (that I never use…), subvolumes, CoW, etc.
For my multi-disk needs (and single-disk stuff once it gets mainlined), bcachefs is my filesystem of choice. I’m currently running 90 TB of spinning rust plus 24 TB of metadata and cache on SSDs in my archive/NAS box.
Btrfs, because I’ve had constant issues with ZFS on multiple different occasions.
Everyone praises ZFS, so I’m sure there’s something I’m missing, but I’ve had very little issue with btrfs, other than its incomplete feature set.
What kind of issues?
It’s mostly that, due to licensing issues, it can’t be included in the Linux kernel like btrfs, and thus is prone to breaking at very inopportune times.
IMHO both are fine, but btrfs is more hassle free and good enough ™ for hobby self-hosters.
He’s talking about kernel taint. That is annoying in Linux.
Ext4 (for my laptop and stuff) and ZFS (as mirrors or raidz2 for my proxmox host and data).
Ext4 for my root filesystem, although I’ve been eyeing BTRFS for a while now as a replacement.
BTRFS for all my arrays and auxiliary drives. Aside from one oopsie where some drives had power issues and retained corrupted data (BTRFS managed to recover everything just fine), it’s been a fine experience, I guess.
Judging by the amount of responses mentioning being burnt by data loss, I wouldn’t be surprised if most of these were caused by running “btrfs fsck” 😱😅

I am also pretty interested in btrfs. I recently redid my laptop and used btrfs for everything there. No btrfs on my server yet, though. Ext4 is just really optimal for data recovery. Maybe if I redo my server sometime in the future I’ll start with btrfs.
ZFS. I lost so much data trying to use btrfs. And zvols are neat.
Btrfs, because of compression. And I’ve never had any issues with it.
ZFS for data, VMs, LXC, the file server.
Ext4 basically for root partitions; maybe UFS on the non-jailed FreeBSD box.
Old policy: keep data and OS separate so you can switch quickly. I’ve had it since DOS and never really changed it.
ZFS is perfect but heavy, especially the ARC, so try not to wave it at everything. Ext4 is good enough for anything where I really don’t care about integrity.
Tried btrfs, but ZFS is awesome because when I’m done I can always send a snapshot from my workers to the main fileserver with zfs send and keep it around. zfs send/receive really changes the game, as do ZFS’s trusted RAID and SLOG/L2ARC, which make spinning rust fly.
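The snapshot-shipping workflow described above looks roughly like this (pool, dataset, and host names are hypothetical, not the poster’s actual setup):

```shell
# Take a read-only snapshot of the working dataset on the worker.
zfs snapshot tank/work@2024-06-01

# Ship it to the fileserver over SSH; `zfs receive` recreates it there.
zfs send tank/work@2024-06-01 | ssh fileserver zfs receive backup/work

# Later, send only the delta since the last shipped snapshot (-i = incremental).
zfs snapshot tank/work@2024-06-08
zfs send -i tank/work@2024-06-01 tank/work@2024-06-08 \
  | ssh fileserver zfs receive backup/work
```

Because snapshots are immutable and send streams are exact, the receiving side ends up with a bit-for-bit copy of the dataset at each snapshot point.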
Zfs on freebsd file server, for the error checking, error correction and flexibility.
FFS everywhere else, because I’m an OpenBSD guy. I don’t love FFS, but it works.
ext4 on an mdadm raid. It works well enough, and supports growing your array.
Although if I rebuilt this from scratch, I would skip mdadm and just let MinIO control all the drives. MinIO has an S3-compatible API, which I’d then mount into whatever apps need it.
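Growing an mdadm array, as mentioned above, is roughly a two-step affair. A minimal sketch, assuming a 4-disk RAID5 at /dev/md0 with ext4 on it (device names are hypothetical):

```shell
# Add the new disk as a spare, then reshape the array across it.
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=5

# Watch the reshape progress; this can take many hours on large disks.
cat /proc/mdstat

# Once the reshape finishes, enlarge the ext4 filesystem to fill the array.
resize2fs /dev/md0
```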
Love MinIO, but it’s not a filesystem, and mounting object storage as a filesystem is not a great experience (speaking from commercial experience).
Same experience here. S3 is essentially a key/value store for simply putting and retrieving large values/blobs. Everything resembling filesystem features is just convention over how keys are named. Communication uses HTTP, so there is a lot of overhead when working with it as an FS.
On the web you can use these properties to your advantage: you can talk to S3 with simple HTTP clients, use reverse proxies, or put a CDN in front and have a static file server.
But FS utils are almost always optimized for instant block-based access and fast metadata responses. Something simple like a “find” will fuck you over with S3.
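As a rough illustration of that mismatch (bucket name and mountpoint are hypothetical):

```shell
# Local FS: `find` walks the tree via fast in-kernel metadata lookups.
find /data -name '*.log'

# Over an s3fs/goofys mount, the same walk becomes HTTP round trips:
# LIST requests per "directory" prefix plus HEAD requests per object,
# so thousands of network calls replace in-memory stat() lookups.
find /mnt/s3-bucket -name '*.log'

# The S3-native approach: one paginated, server-side listing.
aws s3api list-objects-v2 --bucket my-bucket \
  --query "Contents[?ends_with(Key, '.log')].Key"
```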
Love mdadm, it’s simple and straightforward.
ZFS raidz1 or raidz2 on NetBSD for mass storage on rotating disks, journaled FFS on RAID1 on SSD for system disks, as NetBSD cannot really boot from zfs (yet).
ZFS because it has superior safeguards against corruption, and flexible partitioning; FFS because it is what works.
@Hopfgeist @sam I prefer RAID10 on rust.
What are the advantages of RAID10 over ZFS raidz2? It requires more raw disk per unit of usable space as soon as you have more than 4 disks, it doesn’t have ZFS’s automatic checksum-based error correction, and it is less resilient, in general, against multiple disk failures. In the worst case, two lost disks can mean the loss of the whole array, whereas raidz2 can tolerate the loss of any 2 disks. Plus, with RAID you still need an additional volume manager and filesystem.
@Hopfgeist Speed on large spinning disks. Faster rebuilds. Less chance of complete failure because of UREs.
Btrfs except for /boot. Boot is ext4, but all other volumes are btrfs. Important stuff like Docker, LXC, or VMs is on subvolumes for quick snapshots, just in case.
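The subvolume-per-service layout above makes snapshots a one-liner each; a sketch with hypothetical paths:

```shell
# Create a dedicated subvolume for each service's state.
btrfs subvolume create /srv/docker

# Before an upgrade: a cheap, instant CoW snapshot (-r = read-only).
btrfs subvolume snapshot -r /srv/docker /srv/.snapshots/docker-pre-upgrade

# If the upgrade goes wrong, the snapshot is just another tree to
# copy files back from; delete it once it's no longer needed.
btrfs subvolume delete /srv/.snapshots/docker-pre-upgrade
```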
ZFS on server. Btrfs on laptop.
Just ext4 pooled together with mergerfs for my media files. Seems to fit my use perfectly.
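For reference, pooling several ext4 disks with mergerfs amounts to a single fstab line along these lines (paths and policy are illustrative, not the poster’s actual config):

```shell
# /etc/fstab: union /mnt/disk1..3 into one pool at /mnt/media.
# "category.create=mfs" places new files on the branch with the most free space.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/media  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0
```

Each underlying disk stays an ordinary ext4 filesystem, so a single disk failure only loses the files on that disk, which suits media libraries well.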
Right now just ext4, because I’m just hosting a Minecraft server and a website that’s not even up right now. I’m thinking about btrfs when I build my next system. Transparent file compression and subvolumes look appetizing.
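Both of those features are a mount option and a command away; a minimal sketch, assuming a fresh btrfs volume on a hypothetical /dev/sdb1 mounted at /mnt:

```shell
# Transparent compression is a mount option; zstd is the usual choice,
# with an optional level (1 = fastest, higher = smaller).
mount -o compress=zstd:3 /dev/sdb1 /mnt

# Or persistently, in /etc/fstab:
# /dev/sdb1  /mnt  btrfs  compress=zstd:3  0  0

# Subvolumes are created in place and can be mounted and
# snapshotted independently of each other.
btrfs subvolume create /mnt/world   # e.g. a subvolume for the Minecraft world
```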