How To Enable Storage Pooling And Mirroring Using Btrfs For Linux

Danny Stieben 05-07-2014

If you have multiple hard drives in your Linux system, you don’t have to treat them all as different storage devices. With Btrfs, you can very easily create a storage pool out of those hard drives.


Under certain conditions, you can even enable mirroring so you won’t lose your data due to hard drive failure. With everything set up, you can just throw whatever you want into the pool and make the most use of the storage space you have.

There isn’t a GUI configuration utility that can make all of this easier (yet), but it’s still pretty easy to do with the command line. I’ll walk you through a simple setup for using several hard drives together.

What’s Btrfs?

Btrfs (short for “B-tree filesystem” and pronounced “Butter FS” or “Better FS”) is an upcoming filesystem that incorporates, at the filesystem level, many features normally only available as separate software packages. While Btrfs has many noteworthy features (such as filesystem snapshots), the two we’re going to take a look at in this article are storage pooling and mirroring.

If you’re not sure what a filesystem is, take a look at this explanation of a few filesystems for Windows, or check out this comparison of various filesystems (from FAT to NTFS to ZFS) to get a better idea of the differences between existing filesystems.

Btrfs is still considered “not stable” by many, but most features are already stable enough for personal use — it’s only a few select features where you might encounter some unintended results.


While Btrfs aims to be the default filesystem for Linux at some point in the future, it’s still best to use ext4 for single hard drive setups or for setups that don’t need storage pooling and mirroring.

Pooling Your Drives

For this example, we’re going to use a four hard drive setup: two hard drives (/dev/sdb and /dev/sdc) with 1TB each, and two other hard drives (/dev/sdd and /dev/sde) with 500GB each, for a total of 3TB of raw storage across four drives.

You can also assume that you have another hard drive (/dev/sda) of some arbitrary size which contains your bootloader and operating system. We’re not concerning ourselves about /dev/sda and are solely combining the other four hard drives for extra storage purposes.

Creating A Filesystem

To create a Btrfs filesystem on one of your hard drives, you can use the command:


sudo mkfs.btrfs /dev/sdb

Of course, you can replace /dev/sdb with the actual hard drive you want to use. From here, you can add other hard drives to the Btrfs filesystem to make it a single volume that spans all of the hard drives you add. First, mount the first Btrfs hard drive using the command:

sudo mount /dev/sdb /mnt

Then, run the commands:


sudo mkfs.btrfs /dev/sdc
sudo mkfs.btrfs /dev/sdd
sudo mkfs.btrfs /dev/sde

Now, you can add them to the first hard drive using the commands:

sudo btrfs device add /dev/sdc /mnt
sudo btrfs device add /dev/sdd /mnt
sudo btrfs device add /dev/sde /mnt

If you had some data stored on the first hard drive, you’ll want the filesystem to balance it out among all of the newly added hard drives. You can do this with the command:


sudo btrfs filesystem balance /mnt
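To confirm that all four drives actually ended up in the pool, you can ask Btrfs to list its member devices and show how space is being allocated (device names follow the example setup above):

```shell
# List every device that belongs to the Btrfs filesystem
sudo btrfs filesystem show /mnt

# Show how much space is allocated for data, metadata, and system chunks
sudo btrfs filesystem df /mnt
```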

Alternatively, if you know before you even begin that you want a Btrfs filesystem to span across all hard drives, you can simply run the command:

sudo mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd /dev/sde

Of course, this is much easier, but you’ll need to use the method mentioned above if you don’t add all of the drives in one go.

You’ll notice that I used a flag: “-d single”. This is necessary because I wanted a RAID 0 configuration (where the data is split among all the hard drives but no mirroring occurs), but the “single” profile is needed when the hard drives are different sizes. If all hard drives were the same size, I could instead use the flag “-d raid0”. The “-d” flag, by the way, stands for data and allows you to specify the data configuration you want. There’s also an “-m” flag which does the exact same thing for metadata.

Besides this, you can also enable RAID 1 using “-d raid1”. Note that in Btrfs, RAID 1 means that two copies of every block are kept, each on a different device, no matter how many devices are in the array. With our four example drives (3TB raw), that works out to roughly 1.5TB of usable space, since everything you store exists twice.

Lastly, you can enable RAID 10 using “-d raid10”. This will do a mix of both RAID 0 and RAID 1, so it’ll give you 1.5TB of usable space as the two 1TB hard drives are paired in mirroring and the two 500GB hard drives are paired in mirroring.
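As a single command, creating the RAID 10 layout described above (using the same four example drives) might look like this; the “-f” flag simply overwrites any old filesystem signatures left on the drives:

```shell
# Mirror both data and metadata, striping reads/writes across the mirrored pairs
sudo mkfs.btrfs -f -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```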

Converting A Filesystem

If you have a Btrfs filesystem that you’d like to convert to a different RAID configuration, that’s easily done. First, mount the filesystem (if it isn’t already) using the command:

sudo mount /dev/sdb /mnt

Then, run the command:

sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

This will change the configuration to RAID 1, but you can replace that with whatever configuration you want (so long as it’s actually allowed — for example, you can’t switch to RAID 10 if you don’t have at least four hard drives). Additionally, the -mconvert flag is optional if you’re just concerned about the data but not the metadata.
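A balance that converts a large array can take hours. Assuming the filesystem is mounted at /mnt, you can check on the conversion (or pause it) from another terminal:

```shell
# Check how far along the conversion is
sudo btrfs balance status /mnt

# Pause the balance if it's slowing the system down, then resume it later
sudo btrfs balance pause /mnt
sudo btrfs balance resume /mnt
```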

If Hard Drive Failure Occurs

If a hard drive fails, you’ll need to remove it from the filesystem so the rest of the pooled drives will work properly. Mount the filesystem in degraded mode, pointing at one of the surviving drives:

sudo mount -o degraded /dev/sdb /mnt

Then fix the filesystem with:

sudo btrfs device delete missing /mnt

If you didn’t have RAID 1 or RAID 10 enabled, any data that was on the failed hard drive is now lost.
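If you did have RAID 1 or RAID 10 enabled, you’ll want to restore full redundancy after dropping the dead drive. A rough sketch, assuming /dev/sdf is a hypothetical replacement drive you’ve just installed:

```shell
# Add the replacement drive to the degraded (still mounted) filesystem
sudo btrfs device add /dev/sdf /mnt

# Remove the failed drive's record, then rebuild mirrors onto the new drive
sudo btrfs device delete missing /mnt
sudo btrfs filesystem balance /mnt
```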

Removing A Hard Drive From The Filesystem

Finally, if you want to remove a device from a Btrfs filesystem, and the filesystem is mounted to /mnt, you can do so with the command:

sudo btrfs device delete /dev/sdc /mnt

Of course, replace /dev/sdc with the hard drive you want to remove. This command will take some time because it needs to move all of the data off the hard drive being removed, and will likewise fail if there’s not enough room on the other remaining hard drives.
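If you want to see whether there’s enough free space before kicking off the removal, compare the pool’s usage against its size first:

```shell
# Overall usage of the mounted pool
df -h /mnt

# Btrfs-specific breakdown of data vs. metadata allocation
sudo btrfs filesystem df /mnt
```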

Automatic Mounting

If you want the Btrfs filesystem to be mounted automatically, you can place this into your /etc/fstab file:

/dev/sdb /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde 0 0
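For a multi-device filesystem, the kernel needs to know about all member devices before the mount can succeed; the device= options above take care of that. You can also register the devices manually after boot, which is handy when mounting by hand:

```shell
# Scan all block devices for Btrfs members and register them with the kernel
sudo btrfs device scan
```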

Mount Options

One more bonus tip! You can optimize Btrfs’s performance in your /etc/fstab file under the mount options for the Btrfs filesystem. For large storage arrays, these options are best: compress-force=zlib,autodefrag,nospace_cache. Specifically, compress-force=zlib will compress all the data so that you can make the most use of the storage space you have. For the record, SSD users can use these options: noatime,compress=lzo,ssd,discard,space_cache,autodefrag,inode_cache. These options go right along with the device specifications, so a complete line in /etc/fstab for SSD users would look like:

/dev/sdb /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde,noatime,compress=lzo,ssd,discard,space_cache,autodefrag,inode_cache 0 0

How Big Is Your Storage Pool?

Btrfs is a fantastic option for storage pooling and mirroring that is sure to become more popular once it is deemed completely stable. It also wouldn’t hurt for there to be a GUI to make configuration easier (besides in some distribution installers), but the commands you have to use in the terminal are easy to grasp and apply.

What’s the biggest storage pool you could make? Do you think storage pools are worthwhile? Let us know in the comments!

Image Credit: William Hook




  1. Heath
    March 9, 2015 at 12:41 am

    "Besides this, you can also enable RAID 1 using '-d raid1' which will duplicate data across all devices, so using this flag during the creation of the Btrfs filesystem that spans all hard drives would mean that you only get 500GB of usable space, as the three other hard drives are used for mirroring."

    This is wrong. From the btrfs wiki:

    This [raid1] does not do the 'usual thing' for 3 or more drives. Until "N-Way" (traditional) RAID-1 is implemented: Loss of more than one drive might crash the array. For now, RAID-1 means 'one copy of what's important exists on two of the drives in the array no matter how many drives there may be in it'.

    So no, your array would not be reduced to 500 GB, but instead to approximately the same 1.5 TB limit as in the scenario you described for raid10, just re-arranged logistically from the filesystem perspective.

  2. nvm
    March 2, 2015 at 10:11 pm

    Sudo in fstab? *facepalm

  3. Dan the Man
    July 8, 2014 at 6:43 pm

    I use hardware RAID for pooling disks and LVM to divide the space into resizable file systems - for the time being.

  4. Melvin Cureton
    July 6, 2014 at 7:56 pm

    I currently have you run of the mill software raid setup on my *nix box but I think I have been convinced there might just be a better way. Thanks for the article.

    • Danny S
      July 31, 2014 at 9:20 pm

      I'm glad this was useful! :)

  5. Tetja
    July 6, 2014 at 3:21 pm

    Hi, even for raid1/5/6 the devices can have every size, btrfs will just do the right thing. -d single is not needed. Well you atleast 2 Drives for raid1 and 3/4 for raid5/6.

  6. likefunbutnot
    July 6, 2014 at 1:04 am

    I've had the best luck with large storage pools using ZFS on BSD, but the PITA involved in upgrading a storage pool made me reconsider its use. I've certainly used LVM as well, but I've found that I get consistently better SoftRAID performance out of ReFS on Windows Server systems. Windows Storage Pools unfortunately lack some of the redundancy options found on *nix, so I use them in conjunction with SnapRAID snapshots and, yes, LTO tapes.

    I have a pool of just under 150TB worth of drives in my home, but I don't think it's a good idea to let any single pool of space get larger than 15 or 16TB; I'm concerned about the limitations of CRC checks to maintain the consistency of any single array and when storage pools and the number of files they contain get very large, bit rot becomes an ever-greater concern no matter the underlying filesystem.

    • Tetja
      July 6, 2014 at 3:30 pm

      Bit Rot is not a big Issue with Filesystems like ZFS or BTRFS, even reconstruct time is with BTRFS much less of a hassle, because the Filesystem knows allready which chunks to rewrite, lowering reconstruct time by a real big amount, keeping the chance of another corruption low.

      "RAID" Syntax is getting replaced with a copy + parity Syntax, so you can make 4 copies + 4 parity sets, if you realy want. Also with the current naming scheme people gonna think it is RAID, while in fact it is not.