Getting started the hard way with RAID

David Fletcher dave at
Sat Dec 3 13:36:27 UTC 2011

On Sat, 2011-12-03 at 12:39 +0000, Avi Greenbury wrote:
> Kevin O'Gorman wrote:

> Generally, you will need a new, working, raid card of at least the same
> family, sometimes the same model, to get the data off the disks. This
> is the huge disadvantage to hardware (or even fake) raid, and why
> generally I'd just stick with software raid unless I can have another
> raid card on standby. Dmraid's worth a look; I've never had cause to
> try it.
> Well, you only need raid if you need high-availability. In the vast
> majority of cases, the additional disk in a raid pair can be put to
> better use as a backup.
> -- 
> Avi
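For anyone who does go the software route, a mirrored pair with mdadm is only a couple of commands. The device names below are just examples, and --create is destructive, so double-check before running anything like this:

```shell
# Build a two-disk RAID-1 array from two existing partitions
# (/dev/sdb1 and /dev/sdc1 are example names, NOT a recommendation
# for your machine -- this wipes whatever is on them), then watch
# the initial sync and record the layout so it assembles at boot.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

If that array's controller dies, any Linux box can reassemble the disks, which is the portability point Avi is making.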

I was wondering about using RAID myself, when I built my home server a
few years back. I talked about it with the other guys at the LUG. For my
application, the points against it were something like:-

1) Adding extra hard drives means I have more electricity to pay for

2) The more parts you put into a box, the quicker something will fail

3) If it's a hardware RAID board that goes wrong, you're screwed unless
you can get another, compatible one

4) It adds complexity to the system.

So, I've got just a single hard drive, a 1TB Samsung EcoGreen I think
it's called, that's now been running 24/7 for about 18 months, apart
from the couple of power cuts that were long enough to empty the UPS.
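Incidentally, if you have smartmontools installed you can see exactly how long a drive has been spinning (the device name here is just an example, adjust for your system):

```shell
# Dump the drive's SMART attributes; attribute 9, Power_On_Hours,
# is the total running time. /dev/sda is an example device name.
sudo smartctl -A /dev/sda | grep -i power_on
```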

What I believe to be true of hard drives is that they're a little bit
like car engines, in that the worst thing you can do to them is keep
switching them on and off. The fluid dynamic bearings that they now all
use are apparently pretty similar to the big end bearings in an engine,
in that they run on an oil film and, if operating as intended, avoid
all metal-to-metal contact. Not that bearings are the only thing that
can go wrong, of course, with all the electronics included.

So, now that my drive has been running long enough to clearly not be
an infant mortality candidate, and is kept at a pretty comfortable
temperature, and if the long MTBF numbers claimed for hard drives these
days are to be believed, it should last a good few years.
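As a back-of-envelope check on those MTBF claims (1,000,000 hours is an assumed manufacturer figure, not one from this thread), a constant failure rate gives an annualised failure probability of 1 - exp(-hours_per_year / MTBF):

```shell
# Annualised failure probability for a drive running 24/7,
# assuming an exponential (constant-rate) failure model and an
# assumed 1,000,000-hour MTBF. 24*365 = 8760 hours per year.
awk 'BEGIN { afr = 1 - exp(-(24*365) / 1000000);
             printf "Annualised failure rate: %.2f%%\n", afr*100 }'
# prints: Annualised failure rate: 0.87%
```

Under 1% a year sounds comforting, though real-world failure rates tend to be higher than the rated MTBF would suggest.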

IMHO I'd never run a RAID myself, but in a data centre with loads of
identical servers, where any failed hard drive needs to be hot-swappable
and presumably a stock of spare hardware is maintained, I guess it is
justified.


More information about the ubuntu-users mailing list