It's time for another ZFS pool expansion. When I last expanded, I doubled the capacity of my 2nd vdev, going from 16 TB to 32 TB raw capacity. This time I'm doubling again, replacing the last limiting drive and fully utilizing all the 8 TB drives 😃

Just a quick recap on why my 2nd vdev has been a mix of different-sized drives:

My 2nd VDEV is a mix of disks, as I didn't have the 💰 to buy all 8 TB drives when setting it up.

With that out of the way, let's proceed with the actual expansion 😃

I'm keeping track of all drive serial numbers, IDs, and bays in NetBox, so locating the right drive is easy 🙂 Now, let's take it offline:

$ sudo zpool offline tank0 wwn-0x5000xxxxxxxxxxxx
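
To be extra sure the wwn ID matches the serial number recorded in NetBox, smartctl can print it (the ID below is masked, just like the others in this post):

$ sudo smartctl -i /dev/disk/by-id/wwn-0x5000xxxxxxxxxxxx | grep -i serial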

Checking the pool status:

$ zpool status

  pool: tank0
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.

  	NAME                        STATE     READ WRITE CKSUM
	tank0                       DEGRADED     0     0     0
	  raidz2-0                  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx  ONLINE       0     0     0
	  raidz2-1                  DEGRADED     0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx  OFFLINE      0     0     0

Then we shut the server down and physically swap the drives.

$ sudo shutdown now
Replacing drive in bay

After booting back up, we need to identify the new drive we just installed. One way to do that is to list drives by ID and look for one that doesn't have any partitions:

$ ls -l /dev/disk/by-id | grep wwn

lrwxrwxrwx 1 root root  9 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx -> ../../sdp
lrwxrwxrwx 1 root root  9 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx -> ../../sdh
lrwxrwxrwx 1 root root 10 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx-part9 -> ../../sdh9

Here we can see that sdh has part1 and part9, while sdp doesn't. We can double-check that this is indeed the new drive:

$ sudo smartctl -i /dev/sdp
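
If in doubt, lsblk can also show size, serial, and WWN side by side; the drive without any partitions underneath it should stand out right away:

$ lsblk -o NAME,SIZE,SERIAL,WWN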

And then instruct ZFS to replace the offline drive:

$ sudo zpool replace tank0 \
    wwn-0x5000cxxxxxxxxxxx \
    /dev/disk/by-id/wwn-0x50000xxxxxxxxxxx

$ zpool status

  pool: tank0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.

	NAME                          STATE     READ WRITE CKSUM
	tank0                         DEGRADED     0     0     0
	  raidz2-0                    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
	  raidz2-1                    DEGRADED     0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
	    replacing-7               DEGRADED     0     0     0
	      wwn-0x5000cxxxxxxxxxxx  OFFLINE      0     0     0
	      wwn-0x50000xxxxxxxxxxx  ONLINE       0     0     0  (resilvering)

The new drive is now being resilvered…

ZFS pool resilvering
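
Resilvering this much data takes a while. To keep an eye on it from the terminal, something like watch works nicely; the scan line in zpool status shows progress and an estimated time left:

$ watch -n 60 zpool status tank0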

…once that is done, we can see that we have expandable space available:

$ zpool list

NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH
tank0      58T  42.1T  15.9T        -       29T    12%    72%  1.00x    ONLINE
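
The EXPANDSZ column shows raw capacity the pool can grow into but isn't using yet. Adding -v breaks it down per vdev, which confirms the expandable space sits on raidz2-1:

$ zpool list -v tank0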

So we expand; note the -e flag:

$ sudo zpool online -e tank0 wwn-0x50000xxxxxxxxxxx

$ zpool list

NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH
tank0    87.1T  42.1T  45.0T        -         -     7%    48%  1.00x    ONLINE

Nice!
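
As an aside: if the pool's autoexpand property is set to on, ZFS should grow the vdev automatically once the last smaller drive in it has been replaced, so the manual zpool online -e step shouldn't be needed. It's off by default; checking and flipping it looks like this:

$ zpool get autoexpand tank0
$ sudo zpool set autoexpand=on tank0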

My tank0 pool now has 96 TB raw storage, 72 TB usable 👍

  • raidz2-0: 8×4 TB
  • raidz2-1: 8×8 TB
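
The maths checks out, since each raidz2 vdev gives up two drives' worth of space to parity:

  raidz2-0: 8 × 4 TB = 32 TB raw, (8 - 2) × 4 TB = 24 TB usable
  raidz2-1: 8 × 8 TB = 64 TB raw, (8 - 2) × 8 TB = 48 TB usable
  Total:    96 TB raw, 72 TB usable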

The next expansion is going to be costly; hopefully it's a long way away 🙂