OpenZFS and Ubuntu tweaks


Recently, I embarked on a project to optimize my home server setup by transitioning my disk storage pool from RAID5 using mdadm and XFS to ZFS RAIDZ-1. Honestly, it’s been a huge improvement in both performance and functionality, and I’m pretty excited about the results so far. My home server functions as a lab environment, where I primarily serve videos, experiment with Kubernetes clusters, and run various virtual machines (VMs) — mostly OpenBSD and FreeBSD instances.

Why the move to ZFS RAIDZ?

I’ve always admired ZFS for its robustness and advanced features. By shifting to ZFS, I gained native snapshots, end-to-end checksumming, and scrubs that catch bit rot, all things my previous RAID5 setup lacked. RAIDZ-1 strikes a good balance between redundancy and usable capacity, making it a natural fit for a home lab where data integrity matters but the double parity of RAIDZ-2 would be overkill.

Additionally, ZFS’s built-in compression and deduplication features let me store data more efficiently, especially repetitive content like Docker and VM images.
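
For reference, here is roughly how a three-disk RAIDZ-1 pool with lz4 compression can be created; the device names are placeholders, and mypool is simply the pool name I use throughout this post:

sudo zpool create mypool raidz1 /dev/sda /dev/sdb /dev/sdc   # placeholder devices; one disk can fail without data loss
sudo zfs set compression=lz4 mypool                          # lightweight lz4 compression, inherited by the pool's datasets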

The ZFS caching problem on Ubuntu

However, one issue quickly surfaced. ZFS brings its own cache management system, the ARC (Adaptive Replacement Cache), while the Linux kernel maintains its own page cache for regular filesystems. The two don’t coordinate, which leads to double memory usage and redundant caching: ZFS is already extremely efficient with its ARC, but Linux was layering its own file cache on top of it, inflating memory consumption for data that was effectively cached twice.
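
You can see the two caches side by side with something like the following (the arcstats file assumes the ZFS kernel module is loaded; arc_summary from zfsutils-linux gives a friendlier report if it is installed):

free -h                                     # the buff/cache column is the kernel's own page cache
grep -w size /proc/spl/kstat/zfs/arcstats   # current ARC size in bytes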

To solve this, I needed to tweak a few settings to make memory usage more efficient. I wanted to:

  1. Lower Swappiness to prevent premature swapping.
  2. Make Linux more aggressive in evicting file system cache to prevent memory overload.
  3. Limit the ZFS ARC cache so it wouldn’t gobble up too much RAM.
  4. Add SSD-based L2ARC for faster reads, using an NVMe SSD, to improve overall system responsiveness.

Here’s how I did it:


Step 1: Lower Swappiness

Swappiness determines how aggressively Ubuntu uses swap space. In a home server where I want maximum performance, I don’t want the system to start swapping prematurely, especially since my server has a decent amount of RAM (32GB). I want Linux to prioritize keeping data in RAM.

I set the swappiness value low by adding this line to /etc/sysctl.conf:

vm.swappiness=10

A lower swappiness makes the kernel use RAM to its fullest extent before touching swap. A value of 10 tells the system to swap only under real memory pressure, leaving more RAM for the ZFS ARC and other applications.
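
Settings in /etc/sysctl.conf only apply at boot, so I also like to push the value into the running kernel and double-check it, roughly like this:

sudo sysctl vm.swappiness=10   # apply the new value immediately
cat /proc/sys/vm/swappiness    # confirm it took effect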


Step 2: More Aggressive File Cache Eviction

Next, I wanted the kernel to be more aggressive about reclaiming filesystem caches when memory gets tight. I raised the vfs_cache_pressure value to 200, which tells the kernel to favor reclaiming the VFS metadata caches (cached dentries and inodes) sooner.

To do this, I added this line to /etc/sysctl.conf:

vm.vfs_cache_pressure=200

The default value is 100; bumping it up to 200 biases the kernel toward dropping cached directory and inode entries more readily once they are no longer needed, leaving more room for the ZFS ARC.
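
With both lines in /etc/sysctl.conf, reloading the file applies them in one go and shows the values the kernel is actually using:

sudo sysctl -p                                 # reload /etc/sysctl.conf
sysctl vm.swappiness vm.vfs_cache_pressure     # print the live values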


Step 3: Limiting ZFS ARC Cache Size

One of ZFS’s powerful features is the ARC (Adaptive Replacement Cache), which dynamically adjusts based on memory usage. However, if left unchecked, ZFS ARC will happily eat up a large chunk of RAM, potentially leading to memory pressure for other services like my Kubernetes cluster or VMs.

To limit the ZFS ARC, I edited the zfs.conf file, which is responsible for setting ZFS kernel module parameters:

  1. Open /etc/modprobe.d/zfs.conf and add the following line to limit the ARC to 4GB:

    options zfs zfs_arc_max=4294967296
    

    This configuration caps the ARC size at 4GB, which I found to be a sweet spot in my setup after some experimentation. ZFS still gets enough RAM to cache frequently accessed data, but it doesn’t starve other processes.

  2. After saving the file, I ran the following command to update the initial RAM filesystem (initramfs), which ensures the changes take effect on reboot:

    sudo update-initramfs -u
    
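The same limit is exposed as a runtime-tunable module parameter, so as a rough sketch you can apply and verify the cap without waiting for a reboot (an ARC that has already grown past the new limit shrinks back gradually rather than instantly):

echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max   # apply the 4GB cap to the running module
grep -w c_max /proc/spl/kstat/zfs/arcstats                          # confirm the ARC's maximum target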

Step 4: Adding SSD-based L2ARC Cache

For an extra performance boost, especially for read-heavy tasks like serving media files, I added a second layer of cache using an SSD as ZFS’s L2ARC (Level 2 Adaptive Replacement Cache). This is where ZFS shines: blocks evicted from the RAM-based ARC can spill over onto a fast SSD, so frequently accessed data is served from flash rather than from the spinning disks.

In my case, I used a 500GB NVMe SSD for the L2ARC, and the performance difference was immediately noticeable.

To add the SSD as a cache device to the pool, I used the following command:

zpool add mypool cache /dev/nvme0n1

This added the NVMe SSD as an L2ARC device on my ZFS pool, where it acts as a secondary cache and speeds up repeat reads significantly by serving them from the SSD instead of the slower spinning disks. The combination of ARC and L2ARC creates a hybrid caching system that leverages both RAM and SSD speed.
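
A couple of standard zpool commands confirm that the cache device is attached and show whether it is actually being used:

zpool status mypool        # the NVMe device should appear under a separate "cache" section
zpool iostat -v mypool 5   # per-device I/O every 5 seconds; reads hitting the SSD are L2ARC hits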


Final Thoughts

Moving from mdadm RAID5 with XFS to ZFS RAIDZ-1 has been a game-changer for my home server. Not only am I now benefiting from ZFS’s advanced features like snapshots and data integrity, but with the added SSD-based L2ARC and careful memory management, my system performs much better under load. Whether it’s spinning up Kubernetes pods or serving large video files, everything runs smoother and faster now.

ZFS’s flexibility really shines when you tweak it according to your specific workload, and the combination of ARC, L2ARC, and fine-tuning memory management has turned my home lab into a much more responsive and efficient environment. If you’re looking to level up your storage game at home, I highly recommend making the switch to ZFS.
