It’s time for another ZFS pool expansion. When I last expanded, I doubled the capacity of my 2nd vdev — going from 16 TB to 32 TB raw capacity. This time I’m doubling again — replacing the last limiting drive and fully utilizing all the 8 TB drives 😃
Just a quick recap on why my 2nd vdev has been a mix of different-sized drives:
My 2nd VDEV is a mix of disks, as I didn’t have the 💰 to buy all 8 TB drives when setting it up.
With that out of the way — let’s proceed with the actual expansion 😃
I’m keeping track of all drive serial numbers, IDs, and bays in NetBox — so locating the right drive is easy 🙂 Now, let’s take it offline:
$ sudo zpool offline tank0 wwn-0x5000xxxxxxxxxxxx
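Before anything gets pulled, it’s worth double-checking that the WWN really maps to the drive I think it does. smartctl can read the serial number straight through the by-id symlink (WWN elided here, just like everywhere else in this post), and that serial can then be compared against NetBox:
$ sudo smartctl -i /dev/disk/by-id/wwn-0x5000xxxxxxxxxxxx | grep -i serial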
Checking the pool status:
$ zpool status
pool: tank0
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
NAME                          STATE     READ WRITE CKSUM
tank0                         DEGRADED     0     0     0
  raidz2-0                    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx    ONLINE       0     0     0
  raidz2-1                    DEGRADED     0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx    OFFLINE      0     0     0
Then we shut the server down and physically swap the drives.
$ sudo shutdown now
After booting back up, we need to identify the new drive that we just installed. One way to do that is to list drives by ID and look for one that doesn’t have any partitions:
$ ls -l /dev/disk/by-id | grep wwn
lrwxrwxrwx 1 root root 9 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx -> ../../sdp
lrwxrwxrwx 1 root root 9 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx -> ../../sdh
lrwxrwxrwx 1 root root 10 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Feb 24 23:34 wwn-0x5000xxxxxxxxxxxx-part9 -> ../../sdh9
Here we can see that sdh has part1 and part9, while sdp doesn’t. We can double-check to make sure that this is the new drive:
$ sudo smartctl -i /dev/sdp
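If you’d rather see everything in one place, lsblk (assuming a reasonably recent util-linux) can print size, serial, and WWN side by side, which is handy for confirming that sdp really is the freshly installed 8 TB drive:
$ lsblk -o NAME,SIZE,SERIAL,WWN /dev/sdp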
And then instruct ZFS to replace the offline drive:
$ sudo zpool replace tank0 \
wwn-0x5000cxxxxxxxxxxx \
/dev/disk/by-id/wwn-0x50000xxxxxxxxxxx
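ZFS starts resilvering onto the new drive right away. To follow the progress without re-running the command by hand, wrapping zpool status in watch works fine (the 60-second interval is arbitrary):
$ watch -n 60 zpool status tank0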
$ zpool status
pool: tank0
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
NAME                            STATE     READ WRITE CKSUM
tank0                           DEGRADED     0     0     0
  raidz2-0                      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5001xxxxxxxxxxxx      ONLINE       0     0     0
  raidz2-1                      DEGRADED     0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    wwn-0x5000xxxxxxxxxxxx      ONLINE       0     0     0
    replacing-7                 DEGRADED     0     0     0
      wwn-0x5000cxxxxxxxxxxx    OFFLINE      0     0     0
      wwn-0x50000xxxxxxxxxxx    ONLINE       0     0     0  (resilvering)
The new drive is now being resilvered…
…once that is done — we can see that we have expandable space available:
$ zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
tank0     58T  42.1T  15.9T        -       29T    12%    72%  1.00x  ONLINE
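The EXPANDSZ column is the extra room the pool could grow into. If you want to confirm which vdev it belongs to (it should be raidz2-1 here), zpool list -v breaks the numbers down per vdev and per disk:
$ zpool list -v tank0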
So we expand; notice the -e flag:
$ sudo zpool online -e tank0 wwn-0x50000xxxxxxxxxxx
$ zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
tank0   87.1T  42.1T  45.0T        -         -     7%    48%  1.00x  ONLINE
Nice!
My tank0 pool now has 96 TB raw storage — 72 TB usable 👍
raidz2-0: 8×4 TB
raidz2-1: 8×8 TB
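Side note: the manual zpool online -e step is only needed when the pool’s autoexpand property is left at its default of off, which is presumably the case here. With autoexpand=on, ZFS grows the pool on its own once the last smaller drive in a vdev has been replaced:
$ zpool get autoexpand tank0
$ sudo zpool set autoexpand=on tank0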
The next expansion is going to be costly; hopefully it’s a long way away 🙂