Return to RAID: The Ars readers “What If?” edition


I get anxious if I can't watch the blinkenlights in the big Terminal window in the background while the tests run.

Jim Salter



In earlier coverage pitting ZFS against Linux kernel RAID, several readers raised concerns that we had missed some tricks for mdraid tuning. In particular, Louwrentius wanted us to retest mdadm with bitmaps disabled, and targetnovember thought that perhaps XFS might outperform ext4.

Write intent bitmaps are an mdraid feature that allows disks that have dropped off and re-entered the array to resync rather than rebuild from scratch. The "age" of the bitmap on the returning disk is used to determine what data has been written in its absence, which allows the disk to be updated with only the new data rather than being rebuilt entirely.
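
If you'd like to poke at this setting yourself, the knob lives in mdadm's --grow mode. A minimal sketch, assuming a hypothetical array at /dev/md0:

# Check whether an array currently has an internal write-intent bitmap
cat /proc/mdstat
mdadm --detail /dev/md0

# Remove the internal bitmap (the "--bitmap none" configuration we test below)
mdadm --grow /dev/md0 --bitmap=none

# Add it back later
mdadm --grow /dev/md0 --bitmap=internal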

XFS and ext4 are simply two different filesystems. Ext4 is the default root filesystem on most distributions, and XFS is an enterprise heavy-hitter most commonly seen in arrays in the hundreds or even thousands of tebibytes. We tested both this time, with bitmap support disabled.

Running the entire panoply of tests we used in earlier articles isn't trivial—the full suite, which tests a wide range of topologies, blocksizes, process numbers and I/O types, takes around 18 hours to complete. But we found the time to run some tests against the heavyweight topologies—that is to say, the ones with all eight disks active.

A note on today's results

The framework we used for the ZFS testing automatically destroys, builds, formats, and mounts arrays as well as running the actual tests. Our original mdadm tests were run individually and manually. To make sure we had the best apples-to-apples experience, we adapted the framework to function with mdadm.

During this adaptation, we discovered a problem with our 4KiB asynchronous write test. For ZFS, we used --numjobs=8 --iodepth=8 --size=512M. This creates eight separate files of 512MiB apiece for the eight separate fio processes to work with. Unfortunately, this filesize is just small enough for mdraid to decide to commit the entire test in a single sequential batch, rather than actually doing 4GiB worth of random writes.

In order to get mdadm to cooperate, we needed to adjust upwards until we reached --size=2G, at which point mdadm's write throughput plummeted to less than 20 percent of its "burst" throughput when using smaller files. Unfortunately, this also extends the 4KiB asynchronous write test duration enormously, and even fio's --time_based option doesn't help, since in the first few hundred milliseconds, mdraid has already accepted the entire workload into its write buffer.
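
For context, the invocation in question looks roughly like the sketch below. This isn't our exact test command; the job name and target directory are placeholders, and the trailing --end_fsync here is just one way to make sure buffered writes actually hit the disks before fio reports a number. The core parameters match the ones discussed above:

# 4KiB random asynchronous (buffered) write test: eight jobs, eight 2GiB files
fio --name=4k-async-write --directory=/mnt/array \
    --rw=randwrite --bs=4k --ioengine=libaio \
    --numjobs=8 --iodepth=8 --size=2G \
    --end_fsync=1 --group_reporting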


Since our test results would otherwise be from slightly different fio configurations, we ran new tests for both ZFS and mdraid with default bitmaps enabled, in addition to the new --bitmap none and XFS filesystem tests.
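
For anyone replicating the setup, array creation and formatting look roughly like this; the device names are placeholders rather than the exact ones on our testbed:

# Eight-disk RAID6 with no write-intent bitmap (device names are placeholders)
mdadm --create /dev/md0 --level=6 --raid-devices=8 --bitmap=none /dev/sd[b-i]

# Format with ext4 for the ext4 runs (or mkfs.xfs for the XFS runs), then mount
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array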

RAIDz2 vs mdraid6

Although we're only testing eight-disk wide configurations today, we are testing both striped parity and striped mirror configurations. First, we'll compare our parity options—ZFS RAIDz2 and Linux mdraid6.

Blocksize 1MiB

Removing bitmap support sped up mdraid6's asynchronous writes.

Setting bitmap=none didn't help with sync writes.

1MiB reads are unaffected by the bitmap feature.

When we created a new eight-disk mdraid6 array with bitmap support disabled, our asynchronous writes sped up significantly, but the extra 27.9-percent bump still didn't bring mdraid6 anywhere within shouting distance of the ZFS defaults, much less the properly tuned recordsize=1M result.
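
For reference, the ZFS side of that comparison amounts to something like the sketch below; the pool and disk names are hypothetical:

# Eight-disk RAIDz2 pool (pool and disk names are hypothetical)
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi

# The "tuned" result bumps recordsize from the 128KiB default to 1MiB,
# matching the 1MiB blocksize used in these tests
zfs set recordsize=1M tank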

Both reads and synchronous writes were unaffected by bitmap support or lack thereof. RAIDz2 writes are more than double the speed of mdraid6 writes even with the bitmap disabled, while mdraid6 reads are a little less than double the speed of RAIDz2 reads.

Despite only being tested without bitmaps, XFS lagged behind ext4 in every 1MiB test.

Blocksize 4KiB

Removing the bitmaps had no significant effect on 4KiB writes. But for the first time, XFS outperforms ext4.

Removing bitmaps didn't help 1MiB sync writes, and it doesn't help 4KiB sync writes either.

On reads, XFS and ext4 run neck and neck, at a bit more than double ZFS' speed.

Small random operations are any conventional RAID6's nightmare. They're not RAIDz2's ideal scenario either, but RAIDz2's ability to avoid being trapped in a read-modify-write cycle (writes smaller than a full stripe force conventional RAID6 to read existing data and parity before it can write anything) brings it a 6:1 write performance advantage vs. mdraid6. Mdraid6 fares much better on random reads, with a 2:1 read advantage.

In these small block tests, XFS held its own with ext4—and even slightly outperformed it on 4KiB asynchronous writes. None of these changes—filesystem or bitmap support—made much impact on mdraid6's 4KiB performance overall.

ZFS Mirrors vs mdraid10

Administrators who need maximum performance should leave the parity arrays behind and move to mirrors. On the mdraid side, mdraid10 outperforms mdraid6 in every performance metric we test, and a ZFS pool of mirrors similarly outperforms mdraid10 in nearly every metric tested.
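
Roughly speaking, the two topologies compared here are built like this; the disk names are again hypothetical:

# ZFS pool of four two-disk mirrors (disk names are hypothetical)
zpool create tank mirror sdb sdc mirror sdd sde mirror sdf sdg mirror sdh sdi

# Eight-disk mdraid10 equivalent
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]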


Blocksize 1MiB

Disabling bitmaps helps 1MiB writes for mdraid10, too—but only by 5 percent or so.

Disabling bitmaps helps mdraid10 a little on sync writes, too.

Read speed isn't affected by a bitmap (or lack thereof).

Much like the parity arrays, mdraid10 gains a 1MiB write boost—but a much smaller one than mdraid6 got, and that small boost doesn't materially change mdraid10's relationship to the faster ZFS mirrors.

Disabling bitmaps has no impact on read performance at all—and, unlike RAIDz2, ZFS mirrors win on 1MiB read performance as well.

XFS once again trails ext4 on all metrics tested.

Blocksize 4KiB

Bitmaps have no impact on mdraid10's 4KiB write performance—but XFS turns in lower numbers than ext4.

mdraid10 performs the same for 4KiB sync writes whether XFS or ext4, internal bitmaps or no bitmaps.

Bitmaps still don't affect read speed—nor should you expect them to.

At the 4KiB blocksize, RAID10 has one moderate advantage over ZFS mirrors: uncached reads are roughly 35-percent faster. But mdraid10 gives up a 4:1 advantage in writes and a 12:1 advantage in synchronous writes.

The presence or absence of bitmaps makes no visible difference on any 4KiB operation. XFS performance is equal to ext4's on sync writes and reads, but a little slower on asynchronous writes.

Conclusions

While disabling bitmap support does have some impact on mdraid6's and mdraid10's write performance, it's not night and day in our testing and does not materially alter either topology's relationship to its closest ZFS equivalent.

We don't recommend disabling bitmaps whether you care about that performance relationship to ZFS or not. Safety features are important, and mdraid is a little more fragile without bitmaps. There is an option for "external" bitmaps, which can be stored on a fast SSD, but we don't recommend that, either; we've seen quite a few complaints about problems with corrupt external bitmaps.
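
For completeness, an external bitmap is specified as a file path rather than the internal or none keywords. A sketch with a hypothetical SSD mount point; the bitmap file must not live on a filesystem that sits on the array itself:

# Move the write-intent bitmap to a file on a separate fast device (path is hypothetical)
mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap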

If your big criterion is performance, we can't recommend XFS over ext4, either. XFS trailed ext4 in nearly every test, sometimes significantly. Administrators with massive arrays—hundreds of tebibytes or more—may have other, more stability- and testing-related reasons to choose XFS. But hobbyists with a few disks are well served with either and, it seems, can get a little more performance out of ext4.

Listing image by Jim Salter
