What filesystem is currently best for a single NVMe drive with regard to read/write performance as well as stability/no file loss? ext4 seems very old, btrfs is the default on Fedora and openSUSE, ZFS seems to be quite good... what do people tend to use nowadays? What is an Arch user's go-to filesystem?
Ext4 being "old" shouldn't put you off. It is demonstrably robust, with a clear history of structural integrity, immense popularity, and a good set of recovery tools and documentation. These are exactly what you are looking for in a filesystem.
I'm not saying ext4 is the best for your requirements, just that a filesystem, like fine wine, improves with age.
i would generally recommend XFS over ext4 for anything where a CoW filesystem isn't needed. in my experience, it performs better than ext4 at most workloads, and still supports some nifty features like reflink copies if you want them.
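Reflink copies are worth knowing about: on a reflink-capable filesystem (XFS with reflink enabled, or btrfs) the copy shares extents with the original until one side is modified. A quick sketch, with made-up file names:

```shell
# Instant, near-zero-space copy on a reflink-capable fs.
# --reflink=always errors out if the fs can't reflink;
# --reflink=auto falls back to a normal copy, which is safer in scripts.
cp --reflink=always big-vm-image.qcow2 big-vm-image-clone.qcow2
```

The clone occupies no extra space until blocks actually diverge, which makes it handy for VM images and large datasets.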
btrfs is great for system stability because of snapshots. You can set it up to automatically make snapshots at a timed interval or every time you run pacman.
If something breaks, you can just revert to a previous snapshot. You can even do this from GRUB. It's a bit fiddly to set up, so if you want, you could use an Arch-based distro which sets it up automatically, like Garuda Linux.
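The per-pacman-run snapshot can be wired up with a pacman hook. A minimal sketch, assuming / is itself a btrfs subvolume and /.snapshots exists; the hook filename and snapshot path here are made up, and in practice the snap-pac package does this properly with timestamped pre/post snapshots:

```shell
# Hypothetical pacman hook: take a read-only snapshot of / before every
# install/upgrade/remove. Requires root and a btrfs root subvolume.
mkdir -p /etc/pacman.d/hooks
cat > /etc/pacman.d/hooks/00-btrfs-pre-snapshot.hook <<'EOF'
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Snapshotting / before pacman transaction
When = PreTransaction
Exec = /usr/bin/btrfs subvolume snapshot -r / /.snapshots/pre-pacman
EOF
```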
It has been suggested by some that there is no relationship between Reiser murdering wives and ReiserFS murdering file systems, but most steer clear of both out of an abundance of caution.
Even now? I remember when it was new I tried it, must have been 20 or so years ago. Super fast for the time, but had a knack for randomly corrupting data. After the third reformat, I went back to ext2.
Been using BTRFS for all disks and purposes for a few years, I would recommend it with the requirement that you research it first. There are things you should know, like how/when to disable CoW, how to manage snapshots, how to measure filesystem use, and what the risks/purposes of the various btrfs operations are. If you know enough to avoid doing something bad with it, it's very unlikely to break on you.
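Disabling CoW, for instance, is done per file or (more usefully) per directory with chattr, and only takes effect for files created after the flag is set. Paths here are just examples:

```shell
# New files created under this directory inherit NODATACOW on btrfs.
# Set it while the directory is empty; existing files keep CoW.
mkdir -p /var/lib/mysql
chattr +C /var/lib/mysql
lsattr -d /var/lib/mysql   # the 'C' flag should now be listed
```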
Be conservative and use the simplest thing that supports your needs, and don't be suckered by feature lists. I have never needed more than ext4. It generally has the best all-round performance, and maturity is never a bad thing when it comes to filesystems. It isn't the most suitable for some embedded and enterprise environments, but if you are working with those you generally know the various tradeoffs.
Ext4 is old, but fast and very robust. You won't lose data or corrupt the filesystem if your system loses power. It can even survive partial wipes: if you accidentally overwrite the first few megs of your drive with a messed-up dd, nearly all your data will be recoverable, including filenames and directory structure.
It doesn't have very fancy features, but it is the best tested and most robust option available. (also the fastest due to its simplicity)
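If you ever do clobber the start of an ext4 partition, recovery leans on the backup superblocks. A sketch, where the device name is an example and the -n flag makes mke2fs a dry run that writes nothing:

```shell
# List where the backup superblocks live for this partition's layout
# (-n = dry run, nothing is written to the device):
mke2fs -n /dev/nvme0n1p2
# Then repair using one of the printed backup superblocks, e.g. 32768:
e2fsck -b 32768 /dev/nvme0n1p2
```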
Btrfs has things like copy-on-write snapshots that can protect you from an accidental rm, but this won't save you from drive failures, so you still need backups for important data.
You won't lose data or corrupt the filesystem if your system loses power.
Some secondary storage devices ignore standards and outright lie about sectors being successfully written when they are actually scheduled to be written out of order. This causes obvious problems when power failure prevents the true writes from completing. Nothing can be guaranteed for such drives.
If you are planning to have any kind of database with regular random writes, stay away from btrfs. It's roughly 4-5x slower than zfs and will slowly fragment itself to death.
I'm migrating a server from btrfs to zfs right now for this very reason. I have multiple large MySQL and SQLite tables on it and they have accumulated >100k file fragments each and have become abysmally slow. There are lots of benchmarks out there that show that zfs does not have this issue and even when both filesystems are clean, database performance is significantly higher on zfs.
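You can check whether your own database files are fragmenting with filefrag, and on btrfs defragment them in place. The path is an example; note that defragmenting breaks extent sharing with existing snapshots, so snapshotted data can balloon in size:

```shell
# Count extents; CoW database workloads can drive this into the
# tens or hundreds of thousands.
filefrag /var/lib/mysql/ibdata1
# Rewrite the file into larger extents (here targeting 32M), btrfs only:
btrfs filesystem defragment -t 32M /var/lib/mysql/ibdata1
```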
If you don't want a COW filesystem, then XFS on LVM raid for databases or ext4 on LVM for everything else is probably fine.
Disabling CoW in Btrfs also disables checksums. Btrfs will not be able to detect corrupted nodatacow files. When combined with RAID 1, power outages or other sources of corruption can cause the data to become out of sync.
Ext4 is probably going to be the fastest. When it comes to reliability, old is good. If you don't need any of the features of Btrfs and ZFS, you'll reap higher performance using Ext4. Otherwise, ZFS is more feature-complete than Btrfs; however, it's generally not available as a root fs option in OS installers. Ubuntu used to have it as an experimental option but I think that's gone now. If you know what you're doing you can use it as a root fs. Personally I'm using Ext4 on LVM RAID on a 2-way NVMe mirror. I might switch to ZFS on root when I get to rebuild this machine. All my bulk storage is using ZFS.
We're moving towards more btrfs - or at least LVM+<other FS> where there's no btrfs support - on as much of our server fleet as we can, since the lack of snapshotting on the other filesystems is making consistent backups way too expensive resource- and time-wise.
With LVM it's at least possible to take a block-level snapshot - which is really inefficient compared to a file-level snapshot, but it at least provides a stable point for backups to run from, without having to pause all writes during the backup or risk running out a sliding window if allowing writes to continue.
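Roughly like this, assuming a volume group vg0 holding a logical volume called data, with free space left in the VG for the snapshot's CoW area (all names are examples):

```shell
# Create a 10G CoW snapshot, back it up, then drop it. If the snapshot's
# CoW space fills before lvremove, the snapshot is invalidated, so size
# it for the write volume expected during the backup window.
lvcreate --size 10G --snapshot --name data-snap /dev/vg0/data
mount -o ro /dev/vg0/data-snap /mnt/snap
tar -C /mnt/snap -cf /backup/data.tar .
umount /mnt/snap
lvremove -y /dev/vg0/data-snap
```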
For a home user (especially one who's not as well versed in Linux or doesn't have tinkering time), I'd personally suggest either ext4, since it's been the default for a while and therefore often what's assumed in guides, or btrfs where distros include it as a default option, since they should be configured with snapshots out of the box in that case, which make it much harder for the system to break due to things like unexpected shutdowns while running updates.
I'd personally stay away from ZFS for any important data storage on home computers, since it's not supported by the mainline kernel, and basically guaranteed never to be due to licensing. It can remain a reasonable option if you have the know-how, or at least the tinkering time to gain said know-how, but it's probably going to be suboptimal for desktop usage regardless.
I find btrfs pretty good for desktop use, mostly due to the convenience it offers with managing devices (adding a new device or migrating data is trivial and doesn't need any downtime) and subvolumes (different mount options, or excluding some data from snapshots).
I usually just use ext4, but perhaps you should check out F2FS. It's designed for solid-state storage, whereas most filesystems were created with traditional spinning hard disks in mind.
At the end of the day though after all of our storage tests conducted on Clear Linux, EXT4 came out to being just 2% faster than F2FS for this particular Intel Xeon Gold 5218 server paired with a Micron 9300 4TB NVMe solid-state drive
source
It really depends on your priorities. A single drive is fine for a home system with nothing really important on it. Once you want to keep the data, and recovery from backups would mean too much downtime, you want at least a drive mirror. Nothing wrong with ext4+mdraid for that; you don't get the checksumming that ZFS gives, but it will be pretty fast, and if a drive fails you can run degraded on one drive until you get the new drive in.
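Setting up such a mirror is short. Device names below are examples, and creating the array will of course destroy whatever is on them:

```shell
# Two-way RAID1 mirror with mdadm, ext4 on top. If one drive dies, the
# array keeps running degraded until you add a replacement.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0
mdadm --detail /dev/md0        # check the array state before trusting it
```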
I've been running ZFS for 10 years and not lost a single byte of data, even after doing stupid shit like tripping over the SATA cables and disconnecting half the drives. It's survived multiple drive failures (as long as the failures are on different bits of the disk, it can reconstruct a clean copy onto a third drive, but it's a brown-trousers moment when stuff like that happens).
Downsides: it ain't fast, and it does tend to like lots of memory. You want it on your fileserver, not your gaming system.
IMO there's no point in a single drive zfs.. it'll warn you faster that the drive is f*cked but what do you do then?
I agree. Love ZFS for the NAS, but for a single drive desktop system, it is almost pointless and in my experience slower for desktop usage. ZFS is great for what it was designed for.
I use ext4 at home and in servers that are not SLES HANA DB ones.
On SLES HANA servers I use ext4 for everything but the database partitions, for which SAP and SUSE support and recommend XFS.
On a few occasions people left the non-db partitions as the SUSE install default, btrfs with default settings. That turned out to cause unnecessary disk and processor usage.
I would be ashamed of justifying btrfs on a server for the possibility of undoing "broken things". Maybe on a distro-hopping, system-tinkering, unstable-release home computer, but not on a server. You don't play around on a server to "break things" that often. Linux servers (differently from Windows) don't break themselves at the software level. For hardware breakages, there's RAID, backups, and HA redundant systems, because if it's a hardware issue btrfs isn't going to save you: even if you get back that corrupted file, you won't keep running on that hardware, nor trust that "this" was the only and last file it corrupted.
EDIT: somewhat offtopic: I never use LVM. Call me paranoid and old fashioned, but I really prefer knowing where my data is, whole.
Most comments suggesting btrfs were justifying it for the possibility of rolling back to a previous state of files when something breaks (not a btrfs breakage, but mishaps on the system requiring an "undo").
It depends. For a normal user? Ext4, or maybe btrfs, which in terms of stability is the best (though you lose some functions, like the ability to make a swap file, which today isn't really that useful anyway). Want something really fast for large files? ZFS, but if you experience a power loss it could be really catastrophic.
Ext in general is so good that even to this day Android is still using ext4.
First of all, thanks, this is news for me. But I don't think it's a good idea to use a swap file on btrfs.
It has been supported since kernel 5.0:
There are some limitations of the implementation in btrfs and the Linux swap subsystem:

- filesystem - must be only single device
- filesystem - must have only single data profile
- subvolume - cannot be snapshotted if it contains any active swapfiles
- swapfile - must be preallocated (i.e. no holes)
- swapfile - must be NODATACOW (i.e. also NODATASUM, no compression)

With active swapfiles, the following whole-filesystem operations will skip swapfile extents or may fail:

- balance - block groups with extents of any active swapfiles are skipped and reported, the rest will be processed normally
- resize grow - unaffected
- resize shrink - works as long as the extents of any active swapfiles are outside of the shrunk range
- device add - if the new devices do not interfere with any already active swapfiles this operation will work, though no new swapfile can be activated afterwards
- device delete - if the device has been added as above, it can be also deleted
- device replace - ditto
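Putting those rules together, creating a btrfs swapfile by hand looks roughly like this (size and path are examples; recent btrfs-progs can also do it in one step with `btrfs filesystem mkswapfile`):

```shell
truncate -s 0 /swapfile      # create it empty so the +C flag can still be set
chattr +C /swapfile          # NODATACOW, which also disables checksums
fallocate -l 8G /swapfile    # preallocate: no holes allowed
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```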
Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me. - Linus Torvalds
Realistically you aren't going to notice what filesystem you use, be it XFS, ext4, etc. Aside from fringe cases it really doesn't matter. Don't waste brain cycles on the decision.
For both my home server and desktop I use XFS for root and ZFS (in some variety of raid or mirror) for /home and data storage. Any time I've tried btrfs for root (such as default fedora), inevitably it poops the bed. At this point, I stay far away from btrfs.
My current setup has been Fedora for the last 6 months. I started a live session, installed F2FS support, and then ran the installer with a combination of F2FS + encryption. It runs flawlessly and faster than any setup before.
Oh, I just read up on this last night. Yes, ext4 is old, but it's still used because it still works quite well. Btrfs (did anyone else read that as butt-farts?) can handle much larger volumes: ext4 maxes out at 16 TiB per file, while btrfs can go much larger. ZFS can handle around a billion TB, but it needs a lot more resources just to start.
Hi all. Apologies to hijack this thread. Figured it should be OK since it's also on the topic of file systems.
Long story short, I need to reinstall Nobara OS and I plan to install Nobara on my smaller SSD drive with btrfs and set my /home folder to my larger nvme. I'm thinking of using ext4 for my /home and have snapshots of the main system stored on the nvme. Looking for a sanity check to see if this is OK or if I should be doing things differently. Thanks.
On the contrary, my intention is to make snapshots of the OS (btrfs) and my idea is to store the snapshots on the /home nvme drive (ext4).
I don't know if that's the standard practice or if I'm overcomplicating things. My SSD is only 240 GB (I think) while my NVMe is a 1 TB drive, thus the intention to store snapshots on the NVMe. Maybe 240 GB is sufficient for, say, a month's worth of snapshots plus the OS?
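One wrinkle: btrfs snapshots have to live on the btrfs filesystem itself, so you can't store them directly on an ext4 /home. What you can do is serialize a read-only snapshot with `btrfs send` into an ordinary file on the ext4 drive; paths below are examples:

```shell
# Take a read-only snapshot on the btrfs root, then stream it to a plain
# file on the ext4 drive. Restore later with `btrfs receive`.
btrfs subvolume snapshot -r / /.snapshots/root-backup
btrfs send /.snapshots/root-backup > /home/backups/root-backup.btrfs
```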