hardware raid solutions?

David Abrahams dave at boost-consulting.com
Mon Jul 10 05:35:49 UTC 2006


"Eric S. Johansson" <esj at harvee.org> writes:

> David Abrahams wrote:
>> "Eric S. Johansson" <esj at harvee.org> writes:
>
>> Really?  Remarkably good results.  I used to work for Dragon Systems
>> on NaturallySpeaking and found its (best-of-breed) 97% accuracy to be
>> frustrating at best.
>
> well, I've lived with speech recognition for 10 years (since
> DragonDictate on a 486) and I've learned how to adapt to its failings
> (somewhat).  Checking your consulting site, it looks like if I threw a
> rock for 20 kilometers just north of east from here, I could damage
> the paint on your car.  ;-)

Please don't then ;-)

> but seriously, it beats not being able to communicate at all.  I can
> write Python code assuming I have the appropriate macros without too
> much difficulty.  And if it lets me keep my typing down to a thousand
> keystrokes a day, then I can do other things like driving my car or
> preparing food.

Sounds like a bargain.  

> and most importantly, it lets me write.  I will admit that it's much
> harder to pick out misrecognition errors from fiction, but it can be
> done with the help of some patient friends.

With fast machines these days, maybe it's gotten more accurate?

>> Oh, I had taken you to mean that the HW raid battery backup only
>> needed to last a couple of seconds, but now it sounds like you're
>> saying it needs to keep the raid alive through the full 2:02, which
>> begs the question of where's the advantage?
>
> remember the split perspective on discs.  In order to preserve the
> metadata for the raid, you only need about two seconds after host
> power failure in order to write all the data and shut the drives down
> safely. This can be done with a really big capacitor.

Which data, though?  Are we talking about data on the RAID controller
or data that's still on the host in a buffer somewhere?

If the latter, are you assuming the host will respond to the power
failure by flushing buffers within 2 seconds?  That doesn't seem like
something you can count on if the load is high.

>>> But if you are using software raid, and you have good detection of
>>> power failure and response, then it's functionally the same thing.
>> Right.  Even better; I don't need those extra 2 seconds of uptime
>> ;-)
>
> but if you are trying to preserve your file format data
> (i.e. filesystem changes, file data changes) then you need the two
> minute shutdown capacity.

...Which, ideally, I would like to have.

>>> there are two types of corruption you need to deal with on a raid
>>> system.  the first type is filesystem corruption.  We know this form
>>> very well and have a variety of strategies for dealing with it.  The
>>> second level is of the raid set.  
>> What does that mean?
>> 
>>> Unfortunately, loss of raid data, 
>> What's raid data?
>
> let's see if this analogy works.  A raid set is a virtual disk.  

yes

> You take a series of physical discs (a raid set) and make them look
> like one disk.  

yes

> the data that allows you to do this is raid
> metadata.  

I interpret "data that allows me to do this" to mean data about how
various physical disks map to the virtual disk presented to the host
by the RAID system.  That data should, for all intents and purposes,
be almost unchanging and atomically updated.  I can't imagine that
crashing while updating that data is a concern.  What am I missing?

> That plus any cached data is what you want to preserve in
> that two second window.  

Like in the host's disk buffers, I guess.

> At the very least you'll be able to get back your raid set which
> will give you a fighting chance to get back your file data.
>
>> Yeah, problem.  Does Linux software raid do synchronized writes?
>
> probably not unless the system has two I/O channels that it can use
> for the disk writes.  The best you can hope for is writing out all the
> data within 1-3 disk revolutions.

I think it might.  My disks are on ports 0 and 1 of SATA0, for what
that's worth.  I also have an SATA1, so I suppose I could move one of them.
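For anyone following along, here's roughly what I've been poking at to
check the array and the controller topology.  A sketch only -- /dev/md0
and sda are placeholder names, substitute whatever your system uses:

```shell
# Inspect a Linux software RAID set and where its members attach
# (md0/sda are assumed names):
cat /proc/mdstat                # array status and any resync progress
mdadm --detail /dev/md0         # per-member state of one array
ls -l /sys/block/sda/device     # symlink shows which controller/port sda hangs off
```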

>> Yeah, but have you tried to do this with Dapper?  The partitioner in
> ...
>> course.  So I used Fedora Core 5 to build the filesystem and Fedora is
>> installing just fine now.  If I decide I really want Ubuntu I wonder
>> if there will be an easy path to using the newly-created filesystem?
>
> this was a problem with 5.10 as well.  There seems to be a serious
> blind spot in the installation process on LVM and raid.
>> Unless there's something *really* unusual about my system, the
>> Ubuntu
>> installer people should be really embarrassed about this, IMO.  Do you
>> suppose anyone tested LVM-over-RAID?
>
> somehow I don't think so and it's also enough of a corner case that I
> don't believe it's going to be fixed except by volunteer effort.

Whoops, after a few false starts, I now have a BIOS/dm/FakeRAID Fedora
system.  I tried first with Linux SW raid but, silly me, I put the
/boot partition on a SW raid device and grub failed.  Had I a brain
I'd have just un-raided those two partitions and used one of them for
/boot, but instead I decided to try the builtin BIOS RAID support and
it just worked.
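For the record, the un-raided-/boot workaround I should have tried would
have looked something like this.  Untested by me, and the partition
names are assumptions:

```shell
# Sketch of the "plain /boot" workaround (sda1/sda2/sdb2 are assumed
# partition names; I went with the BIOS FakeRAID instead):
mke2fs -j /dev/sda1                          # plain ext3 /boot that grub can read
mdadm --create /dev/md0 --level=1 \
      --raid-devices=2 /dev/sda2 /dev/sdb2   # mirror the rest of the disk
grub-install /dev/sda                        # grub sees a normal filesystem on sda1
```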

> Amateurs are going to burn all sorts of cycles and then go back to
> just running with a single disk because they can.  

Yep.

> Large-scale server
> farms are more likely than not going to use hardware raid because they
> can afford to.  It's poor sods like us that suffer and will need to
> pay for fixing the problem.  

Or switch distros ;-)

> This sucks, but that's the nature of the economics of open source.

I guess.  I doubt the HW raid solutions are well-supported on Ubuntu
either.  Notice how many people have piped up with helpful info?

Speaking of which, have you seen http://linas.org/linux/raid.html ?
Lots to learn here.  Left me wondering why EVMS hasn't taken over the
world.  Ubuntu starts it up by default but doesn't give you the option
of using it in the installer (!?)

>>> Although, I've often debated the wisdom of using two devices on
>>> parallel ATA cables.  
>> It's SATA, so I suppose you mean parallel serial ATA? ;-)

Smiley aside, that's a real question.  I am using serial ATA.

>>> If a master dies, what happens to the slave?
>> IIRC, each drive has its own cable to the motherboard.
>> 
>>> Do you screw to disks in a raid set?  
>> Sorry, can't parse.
>
> to == 2; sorry about missing that.  If you have two discs (master and
> slave) on a single cable and they are both part of a raid set, what
> happens when the master dies?  Do you also screw up the slave?

Couldn't tell ya.  This may be the typical problem with SW coming back
in a different guise: almost nobody *really* tests their error
handling/recovery code well.  Maybe the same for RAID setups.

>> The downside being no dual-booting to (ahem) Windows on RAID.
>> Of course, Fedora seems to understand the FakeRAID, but I don't want
>> to be tied to that distro.
>
> well, you can always run Windows in a vmware bubble.  As long as
> you're not trying to do speech recognition, it seems to run OK.

That is my intention for this machine anyway (maybe some other
virtualization though -- win4lin?).  I just wanted to save the
dual-boot option "in case."  Now that I'm back on FakeRAID I could get
that option back if I wanted (but I don't).

Still trying to grok this:

>>> I assume you're using something like the Belkin fake raid dual ATA
>>> controller?  If so, get enough controllers so that there is only
>>> one disk per channel.  

Whatever controller I have is built into my motherboard and/or the
BIOS and/or the linux dmraid tool.  I haven't been able to figure out
exactly what the BIOS contributes here.  So I don't know how to follow
that advice.
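The closest I can get is poking at it from Linux.  No promises this
untangles what the BIOS contributes, and the grep pattern is just a
guess at what the chipset might call itself:

```shell
# Sketch: see what controllers are present and what the BIOS fake-raid
# metadata claims (output obviously depends on the hardware):
lspci | grep -i -E 'raid|sata|ide'   # list the (S)ATA/RAID controllers
dmraid -r                            # disks carrying BIOS raid metadata
dmraid -s                            # the raid sets dmraid has assembled
```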

>>> And add a spare disk.  I haven't done that
>>> and it's making me nervous.

Spare as in not-in-use?
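If it means a hot spare in the md sense -- a disk that sits idle until a
member fails and then gets rebuilt onto automatically -- the mdadm
incantation would be roughly this (device names are assumed):

```shell
# Sketch: RAID-1 with one hot spare (sda1/sdb1/sdc1 are assumed names).
mdadm --create /dev/md0 --level=1 \
      --raid-devices=2 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
# or bolt a spare onto an existing array:
mdadm /dev/md0 --add /dev/sdc1
```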

>> One expense at a time... :( But thanks for the advice.
>
> you're welcome.

I guess I'd better start with a UPS.  And I need a couple external
drives for backups.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com





More information about the ubuntu-users mailing list