A friend offered me the chance to get my hands on a Sunfire x4150 1U rack server for the cost of shipping. I snatched it up and a few days ago the machine arrived on my doorstep.
Specs:
- 2 Xeon x5460 quad-core CPUs at 3.16GHz
- 8GB DDR2 ECC RAM (soon to be upgraded to 20GB)
- 8 146GB 10K RPM SAS drives attached to an Adaptec STK RAID controller card
- 16 40mm dual-rotor cooling fans, because if you’re going to sound like a jet engine, you might as well go all out.
- 2 redundant 650W hot-swappable PSUs
There’s not much to say about the server itself. It’s a rack server. It’s (exceptionally) loud. It draws a lot of power (500W under full load). It’s easy to work on and it’s built for high availability. (For example, 14 of the fans can be swapped out while the server is running, and it seems that the server has enough cooling capacity to keep running even if a few of the fans are offline.)
While I was setting it up, the Adaptec RAID controller whined that its battery was missing or damaged. The RAID controller has a battery-backed cache, which speeds up write operations considerably. When the OS requests a write, the controller immediately returns success, freeing the OS up to do other things, while in fact the controller holds the write request in a DRAM cache until the disks can get around to it. A power outage at an inopportune time can wipe that cache and nuke whatever write operations were pending. Most controllers have a battery or capacitor to keep the cache alive through medium-duration power outages (a day or two).
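To make the failure mode concrete, here's a toy sketch of the idea in Python. It's an illustration only (the class and its behavior are made up for this post, not how the controller firmware actually works):

```python
# Toy model of a write-back cache with an optional battery, illustrating
# why a dead battery turns a power loss into lost writes.

class WriteBackCache:
    def __init__(self, has_battery):
        self.has_battery = has_battery
        self.pending = []   # writes acknowledged to the OS but not yet on disk
        self.disk = {}      # what has actually reached the platters

    def write(self, block, data):
        # The controller acknowledges immediately; the OS moves on.
        self.pending.append((block, data))
        return "OK"

    def flush(self):
        # Later, when the disks catch up, pending writes are committed.
        while self.pending:
            block, data = self.pending.pop(0)
            self.disk[block] = data

    def power_loss(self):
        # With a healthy battery the DRAM cache survives and gets flushed
        # on the next boot; without one, pending writes simply vanish.
        if self.has_battery:
            self.flush()
        else:
            self.pending.clear()

cache = WriteBackCache(has_battery=False)
cache.write(0, "important data")
cache.power_loss()
print(cache.disk)   # {} -- the acknowledged write never made it to disk
```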
With the Adaptec’s battery out of commission, the write cache was disabled. Such slow. Wow performance tanked. Much complaining RAID controller. I popped the top lid off the server and removed the card; the Lithium-Ion battery was bloated. Dead, dead, dead.
Another friend/hardware sugar daddy found an HP P410 SmartArray controller and sent it my way as well. The P410 has an external battery pack; finding a place to tuck it in a 1U rack server that wasn’t designed for it was interesting. Right now it’s sitting on top of the motherboard. Fingers crossed that it won’t break off anything.
There’s one problem with the P410: it refuses to let me into its BIOS. I can’t enter its configuration utility to set up the array. The P410 is supposed to pop up a message prompting me to press F8, but it never does. The card runs its Option ROM, pauses for a while on “Initializing…”, and then jumps straight to the next screen.
Fortunately, there’s a way around this. HP offers an offline Array Configuration Utility, which is a live CD that allows you to configure the controller. I downloaded an old version here (8.75.12.0 [22 Jun 2011], newer versions seem to have dropped support for the P410) and ran it. It seems you need a license key for RAID6/60, but I was happy with RAID10 + 2 spares, so I didn’t need a license.
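For the curious, the back-of-envelope capacity math behind that choice looks like this. It's a rough sketch using nominal drive sizes, and it assumes a RAID6 layout would have spanned all eight drives (which is how I would have set it up had I bothered with the license):

```python
# Rough usable-capacity math for the array layouts I considered.
# Numbers are nominal (146 "GB" drives); real formatted capacity is lower.

drive_gb = 146
total_drives = 8

# RAID10 + 2 hot spares: 2 drives held back as spares,
# the remaining 6 form mirrored pairs, so half their capacity is usable.
raid10_usable = (total_drives - 2) // 2 * drive_gb   # 438 GB

# RAID6 across all 8 drives (the licensed option I skipped):
# two drives' worth of capacity goes to parity.
raid6_usable = (total_drives - 2) * drive_gb          # 876 GB

print(f"RAID10 + 2 spares:   {raid10_usable} GB usable")
print(f"RAID6 (all 8 drives): {raid6_usable} GB usable")
```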
Once I got the array configured, the controller detected the logical drive just fine at boot and soon I had ESXi installed.
My theory is that there’s an incompatibility between the x4150’s BIOS and the P410’s BIOS. The x4150 seems to grab the F8 key for its boot-device selection menu. I don’t know what the equivalent key is on HP servers. It’s possible a firmware update for the card would fix this, but I haven’t tried that yet.
Regardless, if you’re trying to use an HP P410 SmartArray controller in non-HP hardware, try the above configuration utility. There may also be OS-level tools you can use to manage your array after you’ve set it up for the first time.
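For example, something along these lines should dump the controller and logical drive status from a running Linux install. This is a sketch I haven't tested on this particular box: it assumes HP's CLI tool is installed, and the binary name changed over the years (hpacucli on older releases, later hpssacli and then ssacli), so adjust to whatever your version ships:

```python
# Minimal sketch: query a SmartArray controller's status via HP's CLI tool.
# Typically needs root. Swap the tool name for hpssacli/ssacli on newer stacks.

import subprocess

def show_controller_config(tool="hpacucli"):
    # "ctrl all show config" lists controllers, arrays, and logical/physical drives.
    result = subprocess.run(
        [tool, "ctrl", "all", "show", "config"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(show_controller_config())
```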
Does this server run Ubuntu 14.04? I’m trying to get one to test it out myself.
I don’t know, but I don’t see why not! I put ESXi 6.0 on it, though (I’m using it to host VMs).
I came across your article while trying to solve an issue with an HP P410 controller installed in a non-HP machine. Like you, I configured the controller using a copy of ACU (on an old SmartStart CD that I found in our office) and it seems to work OK. I have RAID1 with 2 × 4TB SATA drives. It’s been working fine for a couple of months.
Today I thought I would test to see what happens if a drive fails, so after backing it all up I shut down the machine and disconnected one of the drives, then powered back up again. The machine failed to boot, which sort of defeats the object of having mirrored drives.
I think it’s to do with the absent F8 message at boot – when a drive fails, the card is supposed to pop up a different message, something like “Press F2 to fail the missing drive(s) and continue in recovery mode”, but that doesn’t happen. Apparently all drives remain disabled until F2 is pressed. The ACU (booted from CD) says the entire array has failed, although it shows one drive still attached.
Have you tested fault tolerance with your P410? Right now I seem to have a more flaky system than if I just had a single drive (2 drives = 2 * risk of failure).
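To put that “twice the risk” worry in numbers, here is a rough sketch assuming independent drive failures with a made-up per-drive probability p; the point is that a mirror roughly doubles the chance that *some* drive dies (which, with this F2 problem, stops the machine booting) even though it squares down the chance of actual data loss:

```python
# Back-of-envelope risk comparison: two mirrored drives vs. a single drive,
# assuming independent failures with the same probability p over some period.

p = 0.03  # hypothetical per-drive failure probability (e.g. per year)

single_drive_data_loss = p                  # one drive: its failure loses data
mirror_any_drive_fails = 1 - (1 - p) ** 2   # ~2p: at least one of two drives dies
mirror_data_loss = p ** 2                   # both drives must die to lose data

print(f"single drive data loss:        {single_drive_data_loss:.4f}")
print(f"mirror: at least one failure:  {mirror_any_drive_fails:.4f}")
print(f"mirror: data loss (both fail): {mirror_data_loss:.4f}")
```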