About a year ago I switched to ZFS for Proxmox so that I wouldn't be running a technology preview.
Btrfs gave me no issues for years, and I even replaced a dying disk without any problems. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can't downgrade the kernel, plus the performance on my hardware is abysmal: I get only 50-100 MB/s versus the several hundred I would get with btrfs.
Any reason I shouldn't go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.
Anyone else pondering btrfs, or already using it?
As a general rule of thumb, allocate at least 2 GiB base + 1 GiB per TiB of storage. For example, if you have a pool with 8 TiB of available storage space, you should use 10 GiB of memory for the ARC.
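If it helps, this is roughly how that limit gets applied on a stock PVE install (the 10 GiB value is just the example above, converted to bytes):

    # /etc/modprobe.d/zfs.conf
    # 2 GiB base + 1 GiB per TiB: 8 TiB pool -> 10 GiB = 10 * 1024^3 bytes
    options zfs zfs_arc_max=10737418240

    # the same value can also be applied at runtime without a reboot:
    echo 10737418240 > /sys/module/zfs/parameters/zfs_arc_max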
I changed the ARC size on all my machines to 4 GB and it runs a bit better; I am getting much better performance now. I thought I had changed it earlier, but I hadn't regenerated the initramfs, so the setting never applied. I am still having issues with VM transfers locking up the cluster, but that might be fixable by tweaking some settings.
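For anyone else who hits this: the modprobe option only takes effect once the initramfs is rebuilt and the machine rebooted. On a standard Debian/PVE install that looks something like:

    update-initramfs -u -k all   # rebuild so the module option is picked up
    # after a reboot, c_max should match the configured zfs_arc_max (in bytes)
    awk '/^c_max/ {print $3}' /proc/spl/kstat/zfs/arcstats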
16 GB might be overkill or underkill depending on what you are doing.
This is an old PC (Intel i7 3770K) with 2 HDDs (16 TB) attached to the onboard SATA3 controller, 16 GB RAM, and 1 SSD (120 GB). Nothing special. And it's quite busy, since it's my home server running a VM and containers.
I'm seeing very similar speeds on my two-HDD RAID 1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file generated from /dev/urandom (larger than the cache) gives me:
It's possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.
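In case anyone wants to reproduce the comparison, this is a rough sketch of that kind of test with dd (the /tank path is just a placeholder for your pool mountpoint; note that /dev/urandom itself can cap the write figure on older kernels):

    # write test: 50 GiB of random data; fdatasync forces a flush before dd reports a speed
    dd if=/dev/urandom of=/tank/testfile bs=1M count=51200 conv=fdatasync status=progress
    # read test: the file is larger than the cache, so this mostly hits the disks
    dd if=/tank/testfile of=/dev/null bs=1M status=progress
    rm /tank/testfile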