Blogs
Submitted by adchen on Fri, 2009/07/17 - 01:33
I know it's hardly news, but after the announcements at WWDC in June, it's apparent that Apple's upcoming Mac OS X 10.6, aka "Snow Leopard", won't have ZFS like everyone thought it would.
And with MacZFS stuck at build 119, and still carrying some potentially serious problems, I've put my plans for any sort of major Mac ZFS setup on the back burner. Maybe 10.7? Who knows.
I guess the Drobo is looking better and better these days.
Submitted by adchen on Fri, 2009/07/17 - 01:25
Creating ZFS Mirrors
Since we're focusing on redundant and protected storage, we're going to jump right in and talk about ZFS mirrors. We'll cover normal ZFS striping and concatenated storage shortly though.
The easy way to practice using ZFS is to use files as our virtual devices (vdevs). Vdevs are normally entire hard drives or large slices of them, but they can be just about anything, including files. For production purposes you'd never use files as vdevs, since they're layered on top of your operating system and subject to some performance overhead. Besides, your vdevs would also be at the mercy of whatever goes on in your file system (e.g.
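To get a feel for this, here's a minimal sketch of setting up practice file vdevs (the /tmp paths and pool name "tank" are just examples; the actual zpool commands need the ZFS extension installed and root privileges, so they're shown commented):

```shell
# Create two 128 MB files to serve as practice vdevs
mkdir -p /tmp/zfs-lab
dd if=/dev/zero of=/tmp/zfs-lab/disk1.img bs=1048576 count=128 2>/dev/null
dd if=/dev/zero of=/tmp/zfs-lab/disk2.img bs=1048576 count=128 2>/dev/null

# Mirror the pair (run as root, with ZFS installed):
#   zpool create tank mirror /tmp/zfs-lab/disk1.img /tmp/zfs-lab/disk2.img
#   zpool status tank
```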
Submitted by adchen on Fri, 2009/04/10 - 03:15
Data Robotics, maker of the Drobo storage products, has rolled out a nice-looking high-end product that is essentially a double-wide Drobo, taking up to 8 SATA drives:
Images courtesy of Drobo
The Drobo Pro has 8 drive bays and 3 ways of connecting to a host system: iSCSI, FireWire 800, and USB 2.0.
With up to 8 drives, the Drobo now supports double parity, letting it survive the outright failure of 2 drives. With typical ZFS RAIDZ2 you'd need at least 5 drives (3 data + 2 parity) for double parity, so I'm going to assume you'd need at least 5 drives in the Drobo Pro as well. Technically they could default to mirroring if you only had 4 drives, but that would cut usable space in half.
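As a quick sanity check on those numbers, here's a back-of-the-envelope sketch of usable space (assuming 8 equal-size drives; the 1TB figure is just an example):

```shell
drives=8; size=1                    # size in TB per drive (example value)
raidz2=$(( (drives - 2) * size ))   # double parity costs 2 drives' worth
mirror=$(( drives * size / 2 ))     # 2-way mirroring halves raw capacity
echo "RAIDZ2: ${raidz2}TB usable; mirrored: ${mirror}TB usable"
# prints: RAIDZ2: 6TB usable; mirrored: 4TB usable
```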
The Drobo Pro even comes with a rack-mount kit, although it looks a little goofy with the 2 "wings". Seems like they could just fill it out to 12 drives across and make use of the dead space. They also don't list the depth of the unit, but I imagine it's relatively short compared to most rack-mount gear.
Submitted by adchen on Fri, 2009/04/03 - 09:15
Avoid Prompts for the Admin Password when Writing to ZFS volumes
When you create zpools and ZFS volumes you have to do it with root privileges, so naturally any volume you create is going to be owned by root. If you then try to write to these volumes as your normal Mac login, you'll be prompted for the admin password every time.
The easy fix is just to make yourself the owner of that volume:
# chown -R UserName:GroupName /Volumes/myzpool
If you have multiple users, use chmod to grant access to the others.
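For instance, a sketch of opening the volume up to a whole group (using a scratch directory here to stand in for the real mountpoint; on an actual pool you'd run these with sudo):

```shell
VOL=/tmp/myzpool-demo    # stand-in for /Volumes/myzpool
mkdir -p "$VOL"
chmod -R g+rwx "$VOL"    # let the owning group read, write, and traverse
ls -ld "$VOL"
```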
Submitted by adchen on Thu, 2009/04/02 - 15:15
Empty Trash won't work
Badness level, on a scale of 1 (not bad) - 100 (WTF?): 5
If you delete files on a ZFS volume via the Finder, you won't be able to use Empty Trash properly. The files don't show up in the Finder anymore, but they're actually just moved to a hidden directory called ".Trashes". You have to use "rm" in the Terminal to really remove the files (or just "rm -rf .Trashes" from the volume's root).
As far as most things go, this is not a huge deal, but you could easily forget that this issue exists and accidentally let your zpool fill up over time and then waste a bunch of time troubleshooting it. Maybe some sort of Applescript/Automator/cronjob approach would provide a semi-seamless workaround.
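A minimal sketch of the manual cleanup (using a scratch path to stand in for the ZFS volume's mountpoint; double-check the path before pointing rm -rf at a real volume):

```shell
VOL=/tmp/myzpool-demo2          # stand-in for /Volumes/yourpool
mkdir -p "$VOL/.Trashes/501"    # Finder keeps per-user trash dirs named by UID
touch "$VOL/.Trashes/501/deleted-file"

rm -rf "$VOL/.Trashes"          # what Empty Trash should have done
```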
Submitted by adchen on Thu, 2009/04/02 - 02:42
Mac ZFS Kernel Panic Crashing
Badness level, on a scale of 1 (not bad) - 100 (WTF?): 100
First up is the most egregious behavior I've seen so far with Mac ZFS. As I've been experimenting, I've noticed that it fails in the most ungraceful way possible (kernel panic and immediate Grey Screen of Death) when any of these conditions occur:
- You're using an external USB/FireWire drive for ZFS and you unplug the drive before you unmount the ZFS volumes and "zpool export -f poolname" the pool on that external drive.
- A vdev/drive failure puts the zpool into a FAULTED state: on a RAIDZ (single parity), losing 2 drives causes the kernel panic; on a RAIDZ2 (double parity), losing 3 drives does the same; so does losing both halves of a 2-way mirror.
- The drive went to sleep or got spun down (like on a laptop, or if you have spin-down enabled in your Energy Saver settings).
Apparently ZFS panics (literally!) when there are asynchronous writes that have already returned "success" but would leave the on-disk state inconsistent. The supposed reason for defaulting to a panic is "maintaining data integrity", because "ZFS cannot guarantee that the information in the cache, ZIL, and media will be consistent." [original post]
Submitted by adchen on Mon, 2009/03/30 - 02:33
I don't pretend to have the best backup regime yet, but I didn't even regularly do backups until around 2004. My main system at the time was a PowerMac G3 (Blue/white). I was obliviously going day-to-day with all my primary storage on a single 80GB Maxtor drive (back when it would all fit on such a small drive).
Then suddenly one day the drive started to emit a horrible clicking sound (the dreaded Click of Death). After some attempts to scrub the drive with the lowly Disk Utility (Mac OS's fsck), the drive finally stopped mounting at all. I even tried Disk Warrior, but it couldn't do anything because the drive wouldn't even mount. In hindsight it wasn't smart to keep doing this, as there was potentially some debris loose in the case that was ripping up the platters. At the first sign of trouble, I should have stopped forcing it.
Desperation time. Searches on the Internet yielded various suggestions. One method that kept cropping up seemed a little strange, but there were enough anecdotal testimonials about it to make it worth trying. After all, the drive couldn't get much deader.
If this attempt was a bust, the worst case would be either shelling out big bucks to a drive-recovery service or just losing the data. Suddenly the cost of NOT doing regular backups came back to bite me: pay a few hundred up front and be safe, or pay perhaps much more later and not even be assured of getting all your data back.
Anyway, the last-ditch method I read about was the semi well-known "freezer" trick.
Submitted by adchen on Fri, 2009/03/20 - 17:18
Data Overflow
There is never enough storage. I'm sure when computer punch cards were all the rage, you would have to constantly add more storage cabinets to hold all the cards you collected with those way-cool math programs that would print out pi to 500 places. Then it was mag-tape reels, floppies (8-inch, 5.25-inch, 3.5-inch), optical media (CDs/DVDs), and the brief popularity of removable storage formats (remember SyQuest, Zip, and Jaz drives? I still have some, in case I need to, ummmm, boot System 7 on my Mac Quadra).
Your computer, when you bought it, came with a "massive" 80GB hard drive; then you bumped it to 320GB a couple years later, and maybe again to 750GB. Then you started tacking on one or two external drives to handle backups or your iTunes overflow. Now we have hard drives coming out of our ears, with 2TB drives hitting the market, and we run out of drive bays in the computer or end up with a rat's nest of external hard drives and their cables strewn everywhere.
The stacks of older, smaller hard drives start to pile up; they're almost like floppies now. And why not? There's no point trying to keep enough drive enclosures around, even if you get some monster 12-drive chassis.
Submitted by adchen on Tue, 2008/05/13 - 17:53
For a while now, my trusty 20GB iPod Photo has been having uncharacteristically weird problems syncing with iTunes. It would often stop syncing with the error "Disk cannot be read from or written to". Finally, it got so bad that it hardly synced any files at all. Looking in the Mac OS Console utility (/Applications/Utilities/Console.app), I finally noticed that every time this happened, the Mac would report "data underrun" errors.
So at this point, I thought perhaps it was a corrupted iPod hard drive. I tried restoring my iPod a few times, and even completely erased the partition at one point using Disk Utility. I also tried an anecdotal method that has you change your Mac's timezone to PST, restore the iPod, and then revert the timezone. But the iPod still had the same syncing problem.
Then I came across some forum postings mentioning that iTunes will choke on the sync if a file it's trying to copy is not readable by iTunes. I checked all my file permissions, but they were fine (readable/writeable by my user on the Mac), so that was a dead end.
Then I found some sites that mention the iPod's hidden diagnostic menus. Running the iPod diagnostics helped point me in the right direction: all the iPod hard drive tests said it was OK, so I figured the drive was probably still fine. I also tried my own crude write tests to the iPod drive:
$ cd /Volumes/iPod
$ mkfile 5g 5gig.file
"mkfile" is a UNIX level command that let's you create a file of a given size easily. I was able to create pretty much any size file on the iPod without any errors. So at this point, I figure it's maybe an OS or application level issue.
Submitted by adchen on Thu, 2007/08/02 - 22:13
Why Grep When You Can Flog?
As a UNIX sysadmin I find myself spending a significant amount of time sifting through log files with greps, pipes, and more greps. A typical scenario when sifting through log files looks something like this:
% cat syslog | grep "Jul 27"
[...hundreds of lines...]
Jul 27 03:12:19 server2 sendmail[20573]: [ID 801593 mail.info] l6R7BUF0020573: host48-184.pppoe.inetcomm.ru did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
Jul 27 03:12:19 server2 sendmail[20573]: [ID 801593 mail.info] l6R7BUF0020573: host48-184.pppoe.inetcomm.ru did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
Jul 27 03:12:22 server2 sendmail[20574]: [ID 801593 mail.info] l6R7BYWq020574: host48-184.pppoe.inetcomm.ru did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
[...still more lines...]
That's usually followed by progressively tacking on more greps to whittle the output down to what I'm looking for:
% cat syslog | grep "Jul 27" | grep -v 2006 | grep sendmail | grep l6R7BUF0020573
Jul 27 03:12:19 server2 sendmail[20573]: [ID 801593 mail.info] l6R7BUF0020573: host48-184.pppoe.inetcomm.ru did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
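That chain of greps can usually be collapsed into a single pass; here's a sketch with awk, run over a tiny sample file standing in for the real syslog:

```shell
# Build a two-line sample log (stand-in for the real syslog)
cat > /tmp/syslog.sample <<'EOF'
Jul 27 03:12:19 server2 sendmail[20573]: [ID 801593 mail.info] l6R7BUF0020573: host48-184.pppoe.inetcomm.ru did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
Jul 27 2006 03:12:22 server2 sendmail[20574]: unrelated noise
EOF

# One awk pass: keep "Jul 27", drop 2006, keep the queue ID
awk '/Jul 27/ && !/2006/ && /l6R7BUF0020573/' /tmp/syslog.sample
```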