One of the most frequently asked questions on the
PostgreSQL Performance list is "what disk controller works best with PostgreSQL?" It's a tough question; every product has different things it's good and bad at, and it can be hard to sort through all of that to figure out what makes sense for your application. You need to focus equally on performance and reliability, because disk drives are very prone to failure.
I have a mini-server at home I use mainly for performance testing. The main thing that makes it a mini-server rather than an oversized desktop is that I've got an
Areca ARC-1210 as the disk controller, which when I bought it cost more than the whole rest of the computer I installed it in. Areca is a popular recommendation because for a while there, their products were considerably faster per dollar than the competition in the SATA RAID space (mainly 3Ware and LSI Logic), and reports of how the controllers handled disk failure suggested reasonable behavior there.
I knew the performance was good, but with the relatively small number of drives I run at home I hadn't actually run into a SATA disk failure yet, so I couldn't comment on that part of the controller's firmware. Luckily Murphy stalks my life, and during a weekend when I really didn't have time to fool around with hardware, I got my chance. Snipped from /var/log/messages:
sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
sd 0:0:1:0: [sdb] Sense Key : Medium Error [current]
Info fld=0x0
kernel: sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error
end_request: I/O error, dev sdb, sector 33470855
sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
kernel: sd 0:0:1:0: [sdb] Sense Key : Medium Error [current]
kernel: Info fld=0x0
kernel: sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error
end_request: I/O error, dev sdb, sector 33470855
Here the end_request lines show the sector on the drive that went bad--there are two of them here, and there were plenty more. This disk was part of a 3-disk software RAID0 set, which meant the file these sectors were in the middle of was trashed. Obviously I wasn't optimizing for reliability by using RAID0, so the file wasn't important. What was important is that I finally had a chance to see exactly how the Areca controller handled this situation.
Normally what you do when a drive looks like it's going bad is run smartctl. Actually, you should be running smartctl proactively all the time to catch these errors as soon as possible, and getting the results e-mailed to you.
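The cleanest way to do that is with the smartd daemon that ships in the same smartmontools package. A minimal /etc/smartd.conf entry looks something like this--the device name and schedule are just examples, and as I'm about to show, a drive hidden behind this controller may ignore these requests entirely:
# Watch all SMART attributes on /dev/sdb, run a short self-test every day
# at 2am and a long self-test every Saturday at 3am, mail problems to root
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -m root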
Again, this is a development and test system, so I hadn't set any of that up. And the results when I tried to run a scan on this disk were disappointing:
# smartctl --all /dev/sdb
smartctl version 5.36 [x86_64-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: WDC WD1600AAJS-00WAA Version: R001
Serial number: WD-WCAS20994885
Device type: disk
Transport protocol: Fibre channel (FCP-2)
Local Time is: Sun Aug 24 02:23:58 2008 EDT
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK
Current Drive Temperature: 30 C
Drive Trip Temperature: 25 C
Manufactured in week 30 of year 2002
Current start stop count: 256 times
Recommended maximum start stop count: 4278190080 times
Elements in grown defect list: 0
Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm     processed    uncorrected
           fast | delayed   rewrites  corrected  invocations  [10^9 bytes]   errors
read:          0        0         0          0           0         0.000          0
write:         0        0         0          0           0         0.000          0
Non-medium error count: 0
Device does not support Self Test logging
The data returned here is clearly bogus--the transport is reported as Fibre Channel for what is a SATA drive, and every time I looked at it after encountering errors, the counters were still all 0. Generally the best way to figure out what's going on is to run a self-test:
# smartctl -t long /dev/sdb
smartctl version 5.36 [x86_64-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Long (extended) offline self test failed [unsupported scsi opcode]
Oops. Much like the unsupported counters, the Areca controller doesn't pass through SMART self-test requests either. I vaguely recalled that from when I was shopping for this card. Areca supplies a couple of utilities on their
FTP site to handle chores like this. The first one, the HTTP server, lets you access from a web browser all the card configuration you might normally do as part of the BIOS setup. You run that utility and then access it via ports 81 and 82 on localhost. That worked OK, but it didn't tell me too much.
The other thing they provide, the part I was intending to rely on here, is the command line interface program. They provide versions for both 32-bit and 64-bit Linux. The problem with this utility is that, unlike the disk driver, the CLI is not open source.
The Areca driver made its way into the mainline Linux kernel as of 2.6.19, and it has been improved significantly since then by contributors outside of Areca. The version that ships with kernel 2.6.22 is the one I recommend; that rev is solid. The CLI, on the other hand, Areca provides only in binary form, tested against a couple of supported distributions, and that's all you get.
My server system was recently upgraded to run Ubuntu 8.04.1 in addition to the CentOS 5.2 it's had on it for a while now. The CLI tool didn't work at all on Ubuntu; it didn't find the device, and I have no idea why. If you're really committed to having a true open-source stack for your controller card, perhaps because you value being able to understand and possibly resolve issues yourself when they happen, cross Areca off your list. Their essential CLI utility can easily break on you and leave you stuck.
Luckily I still had the CentOS partition set up, and since that's a derivative of the supported RHEL their CLI is aimed at, it worked fine there. Here's a little session I had with it, looking for what went wrong:
Copyright (c) 2004 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.80A, Arclib: 260, Date: Oct 12 2007( Linux )
Controllers List
----------------------------------------
Controller#01(PCI): ARC-1210
Current Controller: Controller#01
----------------------------------------
CMD Description
==========================================================
main Show Command Categories.
set General Settings.
rsf RaidSet Functions.
vsf VolumeSet Functions.
disk Physical Drive Functions.
sys System Functions.
net Ethernet Functions.
event Event Functions.
hw Hardware Monitor Functions.
mail Mail Notification Functions.
snmp SNMP Functions.
exit Exit CLI.
==========================================================
Command Format: cmd [Sub-Command] [Parameters].
Note: Use cmd -h or -help to get details.
CLI> event info
Date-Time Device Event Type
===============================================================================
2008-08-24 06:14:51 H/W MONITOR Raid Powered On
2008-08-24 06:01:35 H/W MONITOR Raid Powered On
2008-08-24 05:48:13 Proxy Or Inband HTTP Log In
2008-08-24 05:44:24 IDE Channel #02 Reading Error
2008-08-24 05:44:16 IDE Channel #02 Reading Error
2008-08-24 05:44:08 IDE Channel #02 Reading Error
2008-08-24 05:43:54 IDE Channel #02 Reading Error
2008-08-24 05:43:46 IDE Channel #02 Reading Error
2008-08-24 05:37:12 Proxy Or Inband HTTP Log In
2008-08-24 05:20:18 H/W MONITOR Raid Powered On
That's pretty clear: adjusting for time zone, the disk attached to channel #02 (which is in fact /dev/sdb) had a read error at the same time the I/O errors showed up in the kernel logs. So the controller did catch that. Clearly any production system with an Areca controller should be monitoring the output from "cli event info" for these messages.
If I were running this in a hardware RAID environment, I suspect the Areca controller would have kicked the disk out after that. Sometimes, though, the SMART errors are more subtle. I like to schedule an extended disk test for any production drive at least once a week, trying to find issues before they actually wipe data out. Here are the options for that, and what the SMART info for a good disk in the system looks like:
CLI> disk -h
smart Parameter:
Fn: Display Disk S.M.A.R.T Data.
Ex: Display Disk#1 S.M.A.R.T Data.
Command: disk smart drv=1 [Enter]
dctest Parameter: .
Fn: Perform A S.M.A.R.T Off-Line Data Collection.
Ex: Perform A Test On Disk#1~4.
Command: disk dctest drv=1~4 [Enter]
sttest Parameter: [mode=].
Fn: Perform A S.M.A.R.T Off-Line Short or Extended Self-Test.
Ex: Perform A Short Self-Test On Disk#1~4.
Command: disk sttest drv=1~4 mode=short [Enter]
Note: Default Mode=Short.
CLI> disk smart drv=1
S.M.A.R.T Information For Drive[#01]
  # Attribute Items                             Flag   Value   Thres   State
===============================================================================
  1 Raw Read Error Rate                         0x0f     200      51    OK
  3 Spin Up Time                                0x03     152      21    OK
  4 Start/Stop Count                            0x32     100       0    OK
  5 Reallocated Sector Count                    0x33     200     140    OK
  7 Seek Error Rate                             0x0e     100      51    OK
  9 Power-on Hours Count                        0x32      99       0    OK
 10 Spin Retry Count                            0x12     100      51    OK
 11 Calibration Retry Count                     0x12     100      51    OK
 12 Device Power Cycle Count                    0x32     100       0    OK
192 Power-off Retract Count                     0x32     200       0    OK
193 Load Cycle Count                            0x32     200       0    OK
194 Temperature                                 0x22     102       0    OK
196 Reallocation Event Count                    0x32     200       0    OK
197 Current Pending Sector Count                0x12     200       0    OK
198 Off-line Scan Uncorrectable Sector Count    0x10     200       0    OK
199 Ultra DMA CRC Error Count                   0x3e     200       0    OK
===============================================================================
Here's what the bad one reported:
CLI> disk smart drv=2
S.M.A.R.T Information For Drive[#02]
  # Attribute Items                             Flag   Value   Thres   State
===============================================================================
  1 Raw Read Error Rate                         0x0f     200      51    OK
  3 Spin Up Time                                0x03     153      21    OK
  4 Start/Stop Count                            0x32     100       0    OK
  5 Reallocated Sector Count                    0x33     200     140    OK
  7 Seek Error Rate                             0x0e     100      51    OK
  9 Power-on Hours Count                        0x32      99       0    OK
 10 Spin Retry Count                            0x12     100      51    OK
 11 Calibration Retry Count                     0x12     100      51    OK
 12 Device Power Cycle Count                    0x32     100       0    OK
192 Power-off Retract Count                     0x32     200       0    OK
193 Load Cycle Count                            0x32     200       0    OK
194 Temperature                                 0x22     104       0    OK
196 Reallocation Event Count                    0x32     200       0    OK
197 Current Pending Sector Count                0x12     193       0    OK
198 Off-line Scan Uncorrectable Sector Count    0x10     193       0    OK
199 Ultra DMA CRC Error Count                   0x3e     200       0    OK
===============================================================================
Ah ha--so the "Current Pending Sector Count" and "Off-line Scan Uncorrectable
Sector Count" values on this drive have dropped from 200 to 193. That's
usually a sign of early failure. But how do I trigger a more comprehensive test?
CLI> disk sttest drv=2 mode=extended
ErrMsg: Drive 2 Is Not Available
CLI> disk dctest drv=2
ErrMsg: Drive 2 Is Not Available
Oops. A bit of research reinforced my fear here: if a device is part of a RAID group or, as in this case, assigned as a JBOD disk, the Areca controller won't let you run a SMART test on it. That's really unreasonable behavior for a disk controller aimed at enterprise use. I only ran into these bad sectors because I happened to write data there and then tried to read it back. An extended test might have caught the read problem before I did that, saving me from the data loss. That's certainly how it's worked for the last two disk failures I ran into at home.
The next question, then, is how to run a full scan over this disk if I can't do it via SMART. First I tried rebuilding the filesystem on the RAID0 /dev/md0 array this disk is part of, using the bad block checker you can invoke as part of mkfs:
mkfs -j -c -c /dev/md0
This returned clean--no I/O errors at the kernel level--though it took many hours to complete. What about the Areca logs?
Date-Time Device Event Type
===============================================================================
2008-08-24 12:26:37 IDE Channel #02 Reading Error
2008-08-24 12:26:21 IDE Channel #02 Reading Error
2008-08-24 12:26:13 IDE Channel #02 Reading Error
2008-08-24 12:26:05 IDE Channel #02 Reading Error
2008-08-24 12:25:57 IDE Channel #02 Reading Error
2008-08-24 12:25:43 IDE Channel #02 Reading Error
2008-08-24 12:25:35 IDE Channel #02 Reading Error
2008-08-24 12:25:27 IDE Channel #02 Reading Error
2008-08-24 12:25:06 IDE Channel #02 Reading Error
2008-08-24 12:24:59 IDE Channel #02 Reading Error
...
That goes on for a while. Seeing a bunch of errors in the controller's log that never showed up in the kernel is kind of disconcerting. I'm left distrusting the way mkfs does this test; perhaps the writes it does to disk are staying cached by the controller, so the subsequent reads are satisfied from that cached data rather than the platters? I'm not sure what's going on here, but I sure don't like it.
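In retrospect, a way to take both the filesystem and the controller's write cache out of the picture would have been a raw, read-only pass over just the suspect drive, forcing every sector through the controller's read path. Either of these would do it (read-only, so safe on a live disk; the device name is specific to my setup):
# badblocks -sv /dev/sdb
# dd if=/dev/sdb of=/dev/null bs=1M
Any bad sectors hit that way show up as the same end_request errors in the kernel log.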
Time to test this out at the filesystem level by forcing a read of everything instead, which is easier to do anyway because it doesn't write:
e2fsck -c /dev/md0
Ah, here come the kernel level errors in /var/log/messages. The end_request lines are the important ones; here are the first of those (so many showed up that they didn't even all make it into the logs):
end_request: I/O error, dev sdb, sector 34426095
end_request: I/O error, dev sdb, sector 34426127
end_request: I/O error, dev sdb, sector 34426151
end_request: I/O error, dev sdb, sector 34428623
end_request: I/O error, dev sdb, sector 34428623
end_request: I/O error, dev sdb, sector 34444831
end_request: I/O error, dev sdb, sector 34453535
end_request: I/O error, dev sdb, sector 34453631
It ends by saying that it made modifications to the filesystem, presumably to mark the blocks that were bad. But how do you find out where those are? You can pull that list back out of the filesystem with dumpe2fs:
# dumpe2fs -f /dev/md0
Bad blocks: 12909760, 12909778, 12909779, 12909785, 12909790, 12909791, 12909796, 12909797, 12909802, 12909803, 12909809, 12909814, 12909815, 12909820, 12909821, 12910712, 12910714, 12916792, 12920048, 12920092
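As a rough sanity check that these numbers line up with the kernel errors: the dumpe2fs list is in filesystem blocks (4 KB each, assuming the default ext3 block size), while the kernel reports 512-byte sectors on the individual drive. Block 12909760 is about 12909760 x 8 = 103 million sectors into /dev/md0; spread across a three-disk RAID0 stripe, that's roughly 34.4 million sectors into each member, which is right in the neighborhood of the sector 34426095 errors reported against /dev/sdb. The exact translation depends on the chunk size and partition offsets, so treat this as ballpark math rather than a precise mapping.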
What have we learned:
- The Areca controller has serious problems reporting SMART data when you're using it in JBOD mode; it seems to be designed only to handle all of that internally.
- The CLI they provide is a closed-source mess that you must test for compatibility with your distribution.
- If you're running their controller in production, you must monitor the Areca event log through their CLI.
- The controller won't run any user-requested SMART self-tests on drives it's managing.
- On a caching controller, the read-only "e2fsck -c" may give more reliable results than the read/write test from "mkfs -c -c".
Since the disk here was obviously trashed, and I couldn't run any of the SMART operations to try to repair it while it was attached to the Areca controller, at this point I moved it to another system and replaced it with the spare I had for this array. I didn't lose any data here, but I did come away a lot less comfortable recommending Areca controllers.
I just pointed out last month that current 3ware controllers have caught up with Areca's performance. As far as I can tell, 3ware's tw_cli utility is just as closed source as Areca's. But since you can run SMART tests directly with the Linux smartctl on 3ware controllers, all you really need the CLI for is looking inside the hardware RAID to see what the individual disk status is.
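For comparison, here's the sort of thing that refused to work for me above, done against a 9000-series 3ware card with nothing but stock smartctl; this usage is straight from the smartmontools documentation (the /dev/twa0 device node and the port number after "3ware," vary by system):
# smartctl -a -d 3ware,0 /dev/twa0
# smartctl -t long -d 3ware,0 /dev/twa0
The first dumps the full SMART data for the drive on port 0 of the controller; the second kicks off an extended self-test on that same drive.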
As for what I went through trying to get the failing drive back to operating order again, that's a whole 'nother story.