hardware raid solutions?

Eric S. Johansson esj at harvee.org
Sun Jul 9 22:53:15 UTC 2006


David Abrahams wrote:
> "Eric S. Johansson" <esj at harvee.org> writes:
> Do you really mean "rate" and not "raid?"  I guess rate calculation is
> not something I knew was an issue.

sorry, speech-recognition error.  I talk, it types, sometimes we agree.
> 
>> with software raid, it would be roughly equivalent if you had one
>> system just service the raid array and your original system just
>> does the build/test runs.  obviously there'll be some form of a
>> network involved between two systems unless we had some sort of
>> magic piece of hardware that could make your raid system look like a
>> physical disk to the other system.  Maybe USB 2.0 might work.
> 
> Like http://www.meritline.com/neadspydvraa.html

yes, that's a real good example.  But for that price it had better come 
with the UPS, automatic shutdown, etc.  I was actually thinking of an 
ordinary PC with a FireWire card for the other side of the transaction. 
I'm not sure of the terminology for FireWire; it's probably something 
like target or slave device.  That way the second PC would act as a 
device that the master PC sees over FireWire.  A do-it-yourself 
equivalent of the device you pointed out.

>>
>> In theory you can achieve that with software raid if you have an
>> external UPS but your window of damage is more on the order of two
>> minutes (shut down time) rather than two seconds.  
> 
> Oh, that *does* look significant...

it's only significant in relation to the amount of battery backup and 
notification time.  For example, if a machine takes two minutes to shut 
down, including flushing all of its buffers, and then shuts itself off, 
a hardware raid controller would shut down a couple of seconds later, 
after detecting the host power-down and writing the raid metadata to 
disk.  But if you are using software raid, and you have good detection 
of power failure and a good response, then it's functionally the same 
thing.

but I still prefer hardware raid.  More below.

> 
>> besides, I will tell you from personal experience that is difficult
>> to get a good and inexpensive UPS with any capacity.  Don't make the
>> same mistake I did, stay away from Belkin.  They are only desktop
>> units and not very good ones at that.  I'm not that fond of APC
>> either.  I have a three kVA APC sitting dead in the basement.
> 
> Hmm, compelling, so far...
> 
> ...but if the disk write buffers are full when the machine loses
> power in the middle of an update, it doesn't really matter if the
> disks can park before they lose power, does it?  I mean, you'll still
> get data corruption.

there are two types of corruption you need to deal with on a raid 
system.  The first is filesystem corruption.  We know this form very 
well and have a variety of strategies for dealing with it.  The second 
is corruption of the raid set itself.  Unfortunately, loss of raid 
metadata means, at best, a raid set reconstruction.  So now we need a 
whole additional set of recovery strategies.

But it's important to see which types of failure also introduce 
apparent filesystem corruption.  For example, crashing a mirrored 
(raid 1) system, especially one that does not do synchronized writes, 
can sometimes leave you with two different, apparently correct disks. 
The question is: which one do you treat as right?
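Linux md, for what it's worth, settles this with an event counter kept in each member's superblock: whichever mirror was updated most recently wins, and the stale one gets resynced from it.  A minimal sketch of that tiebreak (the dicts here are made-up stand-ins, not the real md on-disk format):

```python
# Pick the authoritative mirror after an unclean shutdown.
# Each dict is a hypothetical stand-in for the per-member metadata
# a raid implementation keeps (md stores an event count per member).
def pick_primary(members):
    # The member whose metadata was updated most recently wins;
    # the other mirror is resynced from it.
    return max(members, key=lambda m: m["events"])

hda1 = {"name": "hda1", "events": 4182}  # updated just before the crash
hdb1 = {"name": "hdb1", "events": 4180}  # a couple of writes behind

primary = pick_primary([hda1, hdb1])
print(primary["name"])  # hda1 is treated as "right"; hdb1 gets rebuilt
```

Without some counter like this, the two apparently correct disks really are indistinguishable, which is the whole problem.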

this is why things like raid 5 or raid 6 are nice.  In theory, you can 
crash them and determine the "right" data from the parity information. 
But they are also slower on writes.
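The parity trick is just XOR: in a raid 5 stripe, the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the survivors.  A toy illustration, with short bytes objects standing in for disk blocks:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks on three disks, parity on a fourth.
d0, d1, d2 = b"hardware", b"software", b"raid...."
parity = xor_blocks([d0, d1, d2])

# Disk 1 dies: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt)  # b'software'
```

The write penalty falls out of the same math: every small write has to read old data and old parity, XOR, and write both back.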

both are good, but raid 1 mostly protects you against disk failure, 
whereas raid 5 protects you against disk failure and crashes from God 
knows what.

> I'm following this path at the moment, in absence of a hardware
> solution...
> 
> I've got raid1 across hda1-hdb1, hda2-hdb2, etc.
> 
> I suppose I could have tried to go with something like raid5, but it
> looks dicey (and inefficient) to me, with only 2 disks.
> 

looks reasonable.  Although I've often debated the wisdom of putting 
two devices on one parallel ATA cable.  If the master dies, what 
happens to the slave?  Do you lose two disks in the raid set?  I tend 
to use only one disk per cable, which means I need lots of 
controllers.  :-)

> 
> Not with dmraid (FakeRAID), I think.  It starts working in the BIOS
> and at the level where it's configured, it can't see any of my
> physical partitions.

I understand.  You might be better off bypassing fake raid entirely.


> ssh!  Remember where you are!
> 
> (Seriously, I'm still trying to get free of XP myself.)

I would love to, except I'm bound to Windows as long as I need to use 
NaturallySpeaking.  Sucks being handicapped, especially by keyboards. 
It's sort of like the devices I work with had their revenge on me.

>> happening. Unfortunately, I have no time to make this happen.
> 
> Hear, hear!

isn't it amazing how many good ideas there are and how little funding 
there is for them.  I'm still trying to find time to put my anti-spam 
system into a VMware bubble; I need to do that for someone who is 
actually paying me to do some of the work.  Then I have my 
small/medium-scale web framework toolkit (akasha) to publish.  That's 
waiting for me to figure out how to create Python modules you can 
install.  I also have an idea for creating Ajax-like interactions 
without writing any serious JavaScript.  But that's another 
God-knows-how-long time sink.
> 
> And the time I've spent on SW raid is certainly worth more than the
> cost of a simple HW raid array.  However, I don't know how much time
> *that* would take either ;-)

That's quite often true.  Usually, unless you've done something before, 
figure on the cost of setup being about five times the cost of the 
device.  If you are doing this for a customer, this means you will get 
paid a small fraction of what your time is worth.  Typically, setting 
up a raid array with a hardware device is something on the order of two 
or three days to get the basic system working, including developing a 
recovery process, etc.

> I trying to make a stable fileserver + multi-virtual-OS build/test
> machine.

OK.  Then you can use either hardware or software raid, as long as you 
have a good UPS with good notification and hardware control.  From the 
difficulty you are having, I would suggest skipping the fake raid 
entirely.  I assume you're using something like the Belkin fake-raid 
dual ATA controller?  If so, get enough controllers so that there is 
only one disk per channel, and add a spare disk.  I haven't done that 
myself, and it's making me nervous.

> 
>> I also want to write an article called "pimp my raid" and I know
>> exactly what the case is going to look like.  Clear plexiglass, fans
>> with LEDs in them, activity lights in a row across the top of the
>> case, purple fluorescent lights in the bottom.  
> 
> And a flat-panel TV with nintendo inside the case door, for when you
> get bored?

ahem.  That flat panel display is for monitoring "system health"...

I'm also seriously thinking about trying to score a couple of those 
iPod speakers in clear plastic and mounting them on either side of the 
array.  This array will be so tricked out...

--- eric





More information about the ubuntu-users mailing list