ZFS: Creating Mirrored Storage
Submitted by adchen on Fri, 2009/07/17 - 01:25
Creating ZFS Mirrors
- Simple 2-way mirror
- Detach drive from mirror
- Attach drive to mirror
- Make a 3-way (or n-way) mirror
- Growing usable space in a mirror
Since we're focusing on redundant and protected storage, we're going to jump right in and talk about ZFS mirrors. We'll cover normal ZFS striping and concatenated storage shortly though.
The easy way to practice using ZFS is to use files as our virtual devices (vdevs). Vdevs are normally entire hard drives or large slices of them, but they can be just about anything, including files. For production purposes you'd never use files as vdevs, since they're layered on top of your operating system and subject to some performance overhead. Besides, your vdevs would also be at the mercy of whatever goes on in the underlying file system (e.g. wayward "rm" commands).
But for our examples, we'll make a few 100MB virtual devices with the mkfile command:
# cd /tmp
# mkfile 100m 100meg1
# mkfile 100m 100meg2
# mkfile 100m 100meg3
...
So we can create a bunch of these files to use for our vdevs.
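If you're following along on a system without mkfile (Linux, for instance), you can create equivalent backing files with truncate or dd. A minimal sketch, assuming GNU coreutils:

```shell
# Make three sparse 100 MB files to use as practice vdevs.
# truncate allocates no blocks up front; use dd if=/dev/zero for real zeros.
cd /tmp
for i in 1 2 3; do
    truncate -s 100M "100meg$i"
done
ls -lh /tmp/100meg?
```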
Make a 2-way mirror
So let's start by creating a simple 2-way mirror:
# zpool create mymirror mirror /tmp/100meg1 /tmp/100meg2
A quick check to make sure it looks right:
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 152K 95.4M 0% ONLINE -
zpool list shows we have a ~100MB pool, which is right since we're mirroring two 100MB vdevs.
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
/tmp/100meg2 ONLINE 0 0 0
Let's fill our mirror with some data:
# mkfile 20m /Volumes/mymirror/20megfile
# zpool list mymirror
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 20.2M 75.3M 21% ONLINE -
So under "USED" we see 20MB is used in our mirror.
Detach one of the mirror vdevs
So let's take out one of the mirror vdevs:
# zpool detach mymirror /tmp/100meg2
So we still have 100MB usable space, but it's unmirrored as we're down to a single drive:
# zpool list mymirror
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 20.2M 75.3M 21% ONLINE -
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
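As an aside: if you only want to take a device out of service temporarily (say, for maintenance) rather than remove it from the configuration, zpool offline / zpool online is the tool instead of detach. An offlined device stays in the pool and only picks up the changes it missed when brought back. A dry-run sketch — the commands are echoed rather than executed, so drop the echo to run them for real:

```shell
# Dry-run: take one side of the mirror offline, then bring it back.
# Unlike 'zpool detach', an offlined device remains part of the pool.
POOL=mymirror
DEV=/tmp/100meg2
echo zpool offline "$POOL" "$DEV"    # remove 'echo' to actually run
echo zpool online "$POOL" "$DEV"
```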
Add a Drive to Mirror
So let's add the other mirror vdev back in. zpool attach takes the pool name, a device already in the pool, and the new device to mirror alongside it:
# zpool attach mymirror /tmp/100meg1 /tmp/100meg2
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 20.2M 75.3M 21% ONLINE -
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:32:16 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
/tmp/100meg2 ONLINE 0 0 0
So we're back to a 2-way mirror of 100MB of space.
Upgrade 2-way Mirror to 3-way
If we want to upgrade our existing 2-way mirror to a 3-way, we just need to zpool attach another vdev:
# zpool attach mymirror /tmp/100meg1 /tmp/100meg3
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 20.2M 75.3M 21% ONLINE -
We still have 100 MB, so it looks good so far.
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:34:49 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
/tmp/100meg2 ONLINE 0 0 0
/tmp/100meg3 ONLINE 0 0 0
Since we have very little data in the mirror, the newly attached vdev was populated almost immediately; note the status line that says "resilver completed". Resilvering is the process by which ZFS copies the pool's data onto a newly attached device so that the mirror regains full redundancy.
With full-sized drives and more data, you'd normally see a status line showing how far along the resilver is.
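If you want to wait for a resilver to finish from a script, one simple (if crude) approach is to poll zpool status. A sketch, assuming the Solaris-era wording "resilver in progress", which can vary between ZFS versions:

```shell
# Poll until 'zpool status' stops reporting an in-progress resilver.
resilvering() {
    zpool status "$1" | grep -q 'resilver in progress'
}
while resilvering mymirror; do
    sleep 10
done
echo "resilver done"
```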
So zpool status shows that we now have a 3-way mirror running. Of course, you're not limited to 3-way mirrors: you can make n-way mirrors, although in practice anything beyond a 3-way mirror is probably overkill, and whatever you're trying to do could likely be accomplished more efficiently with another configuration.
# zpool attach mymirror /tmp/100meg1 /tmp/100meg4
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 206K 95.3M 0% ONLINE -
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:37:15 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
/tmp/100meg2 ONLINE 0 0 0
/tmp/100meg3 ONLINE 0 0 0
/tmp/100meg4 ONLINE 0 0 0
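The attach steps above generalize to any width: an n-way mirror is just repeated zpool attach calls against one device already in the mirror. A dry-run sketch (the /tmp/100meg5 and /tmp/100meg6 files are hypothetical, not ones created earlier); the loop echoes each command rather than running it:

```shell
# Dry-run: widen the mirror by attaching more vdevs one at a time.
POOL=mymirror
EXISTING=/tmp/100meg1            # any device already in the mirror
for NEW in /tmp/100meg5 /tmp/100meg6; do
    echo zpool attach "$POOL" "$EXISTING" "$NEW"   # drop 'echo' to run
done
```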
So using zpool mirrors is pretty easy: it's trivial to attach extra mirrors or detach them on the fly. But what if you want to actually grow the amount of usable space, not just add more spindles or redundancy, and do it without ever losing your mirror's protection?
Growing Zpool Mirror Space
So what happens if we start adding mixed-size vdevs to our zpool mirror? Our existing mirror is 100MB. If I attach a 200MB vdev, let's see what happens:
# zpool attach mymirror /tmp/100meg1 /tmp/200meg1
It looks like ZFS keeps the mirror at the original size, which makes sense: the other 100MB vdevs can't magically grow to handle 200MB of space just because the new vdev is that big.
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 95.5M 156K 95.3M 0% ONLINE -
# zpool status
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:49:45 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/100meg1 ONLINE 0 0 0
/tmp/100meg2 ONLINE 0 0 0
/tmp/100meg3 ONLINE 0 0 0
/tmp/200meg1 ONLINE 0 0 0
The unofficial trick here is that ZFS makes a mirror's usable size the size of the SMALLEST vdev in the configuration. So once the last 100MB vdev is removed and only 200MB vdevs remain, the usable space on the mirror is automatically bumped up to the new size.
Fortunately, since removing a drive from a mirrored set is easy, after attaching at least one larger vdev AND letting it resilver (so that it holds a redundant copy of the data), we can start detaching all the smaller vdevs and attaching bigger ones:
# zpool detach mymirror /tmp/100meg1
# zpool detach mymirror /tmp/100meg2
# zpool detach mymirror /tmp/100meg3
So after detaching all the 100MB vdevs, we're left with just the single 200MB vdev in the "mirror" (not redundant anymore for now, remember!):
# zpool status
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:49:45 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
/tmp/200meg1 ONLINE 0 0 0
Now our mirror is 200MB in size:
# zpool list mymirror
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 196M 158K 195M 0% ONLINE -
Now we can make it a proper mirror again: attach a second 200MB vdev and let it resilver:
# zpool attach mymirror /tmp/200meg1 /tmp/200meg2
# zpool list mymirror
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mymirror 196M 212K 195M 0% ONLINE -
# zpool status mymirror
pool: mymirror
state: ONLINE
scrub: resilver completed with 0 errors on Sun Mar 29 22:57:34 2009
config:
NAME STATE READ WRITE CKSUM
mymirror ONLINE 0 0 0
mirror ONLINE 0 0 0
/tmp/200meg1 ONLINE 0 0 0
/tmp/200meg2 ONLINE 0 0 0
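To recap, the whole grow procedure from this section can be sketched as one script. This is a dry-run: the run helper echoes each command instead of executing it (swap echo "$@" for "$@" to run it against a real pool). Note that zpool replace can also swap an old vdev for a new one in a single step, as an alternative to the attach-then-detach dance shown here.

```shell
# Dry-run of the mirror-grow procedure: attach one larger vdev, wait for
# the resilver, detach the small vdevs, then attach a second large vdev
# to restore redundancy.
run() { echo "$@"; }                 # change to run() { "$@"; } to execute
POOL=mymirror
run zpool attach "$POOL" /tmp/100meg1 /tmp/200meg1
# ...wait here until 'zpool status' reports the resilver complete...
for OLD in /tmp/100meg1 /tmp/100meg2 /tmp/100meg3; do
    run zpool detach "$POOL" "$OLD"
done
run zpool attach "$POOL" /tmp/200meg1 /tmp/200meg2
```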