hardware raid solutions?
David Abrahams
dave at boost-consulting.com
Mon Jul 10 00:09:58 UTC 2006
"Eric S. Johansson" <esj at harvee.org> writes:
> David Abrahams wrote:
>> "Eric S. Johansson" <esj at harvee.org> writes:
>> Do you really mean "rate" and not "raid?" I guess rate calculation is
>> not something I knew was an issue.
>
> sorry, speech-recognition error. I talk, it types, sometimes we agree.
Really? Remarkably good results. I used to work for Dragon Systems
on NaturallySpeaking and found its (best-of-breed) 97% accuracy to be
frustrating at best.
>>> with software raid, it would be roughly equivalent if you had one
>>> system just service the raid array and your original system just
>>> does the build/test runs. obviously there'll be some form of a
>>> network involved between two systems unless we had some sort of
>>> magic piece of hardware that could make your raid system look like a
>>> physical disk to the other system. Maybe USB 2.0 might work.
>> Like http://www.meritline.com/neadspydvraa.html
>
> yes, that's a real good example. But for that price it had better come
> with the UPS, automatic shutdown, etc. I was actually thinking
> of an ordinary PC with a FireWire card for the other side of the
> transaction.
Hah, neat idea...
> I'm not sure of the terminology for FireWire. It's
> probably something like client or slave device. That way the second
> PC would act as a device on the master PC's FireWire bus. A
> do-it-yourself equivalent of the device you pointed out.
...but very expensive (in research time at least) to build it. And
then you have the 2-minute UPS shutdown problem again.
>>> In theory you can achieve that with software raid if you have an
>>> external UPS but your window of damage is more on the order of two
>>> minutes (shut down time) rather than two seconds.
>>
>> Oh, that *does* look significant...
>
> it's only significant in relation to the amount of battery backup
> and notification time. For example, if a machine takes two minutes to
> shut down including flushing all of its buffers, then shuts itself
> off, hardware raid would then shut down a couple of seconds later
> after having detected the host power down and written the raid
> information to disk.
Oh, I had taken you to mean that the HW raid battery backup only
needed to last a couple of seconds, but now it sounds like you're
saying it needs to keep the raid alive through the full 2:02, which
raises the question: where's the advantage?
> But if you are using software raid, and you have good detection of
> power failure and response, then it's functionally the same thing.
Right. Even better; I don't need those extra 2 seconds of uptime ;-)
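For what it's worth, the notification piece looks easy enough to script.
A minimal sketch of what I have in mind, assuming an APC unit driven by
apcupsd over USB (the thresholds are made-up illustrations, not from any
real config of mine):

    # /etc/apcupsd/apcupsd.conf -- relevant lines only
    UPSCABLE usb
    UPSTYPE usb
    DEVICE
    MINUTES 5      # begin shutdown when ~5 minutes of runtime remain
    TIMEOUT 120    # or after 120 seconds on battery, whichever is first

apcupsd then kicks off the normal system shutdown, which is exactly the
two-minute flush-the-buffers window you're describing.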
>>> besides, I will tell you from personal experience that it is difficult
>>> to get a good and inexpensive UPS with any capacity. Don't make the
>>> same mistake I did, stay away from Belkin. They are only desktop
>>> units and not very good ones at that. I'm not that fond of APC
>>> either. I have a three kVA APC sitting dead in the basement.
>>
>> Hmm, compelling, so far...
>> ...but if the disk write buffers are full when the machine loses
>> power in the middle of an update, it doesn't really matter if the
>> disks can park before they lose power, does it? I mean, you'll still
>> get data corruption.
>
> there are two types of corruption you need to deal with on a raid
> system. the first type is filesystem corruption. We know this form
> very well and have a variety of strategies for dealing with it. The
> second level is of the raid set.
What does that mean?
> Unfortunately, loss of raid data,
What's raid data?
> at best, means a raid set reconstruction. So now we need a whole
> additional set of reconstruction strategies.
>
> But it's important to see which type of failure also introduces
> apparent filesystem corruption. For example, crashing a mirror raid
> system, especially one that does not do synchronized writes, can
> sometimes create a condition where you have two different apparently
> correct disks. The question is, which one do you treat as right?
Yeah, problem. Does Linux software raid do synchronized writes?
> this is why things like raid five or six are nice. In theory, you can
> crash them and from the parity information determine your "right"
> data. But, they are also slower on writes.
Yep.
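Out of curiosity I've been poking at what a post-crash consistency check
looks like with the md tools. Roughly this, assuming /dev/md0 is one of
my raid1 arrays (from memory and the docs, so treat it as a sketch):

    cat /proc/mdstat                  # array state and resync progress
    mdadm --detail /dev/md0           # per-disk state and event counts
    # newer kernels let you request a scrub of the whole array:
    echo check > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt   # blocks that didn't match

It doesn't answer the which-copy-is-right question for raid1, of course,
but at least it tells you whether the mirrors have diverged.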
> both systems are good but raid one protects you mostly against disk
> failure, whereas raid five protects you against disk failure and
> crashes from God knows what.
>
>> I'm following this path at the moment, in absence of a hardware
>> solution...
>> I've got raid1 across hda1-hdb1, hda2-hdb2, etc.
>> I suppose I could have tried to go with something like raid5, but it
>> looks dicey (and inefficient) to me, with only 2 disks.
>>
>
> looks reasonable.
Yeah, but have you tried to do this with Dapper? The partitioner in
the installer is so f'd up I can't believe it. Even just creating an
LVM setup is next to impossible (I think I ended up using fdisk and
LVM command-line tools), and apparently software raid plus LVM really
*is* impossible. I get cryptic error messages about all of the raid
volumes being unusable until after reboot... but of course after
reboot the installer doesn't see them anymore. After pressing
"ignore" ten times (what else are you gonna do?) and continuing the
installation process, it eventually fails to write to my LVM, of
course. So I used Fedora Core 5 to build the filesystem and Fedora is
installing just fine now. If I decide I really want Ubuntu I wonder
if there will be an easy path to using the newly-created filesystem?
Unless there's something *really* unusual about my system, the Ubuntu
installer people should be really embarrassed about this, IMO. Do you
suppose anyone tested LVM-over-RAID?
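For the record, the command-line route isn't complicated; a rough sketch
of the kind of thing I ended up running, with the same hda1/hdb1 pairing
as above and made-up volume sizes (a sketch, not a transcript):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2
    cat /proc/mdstat            # watch the initial sync if you're patient

    pvcreate /dev/md1           # LVM physical volume on top of the mirror
    vgcreate vg0 /dev/md1
    lvcreate -L 20G -n root vg0
    mkfs.ext3 /dev/vg0/root

It's getting the *installer* to do the same thing that seems to be
impossible.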
> Although, I've often debated the wisdom of using two devices on
> parallel ATA cables.
It's SATA, so I suppose you mean parallel serial ATA? ;-)
> If a master dies, what happens to the slave?
IIRC, each drive has its own cable to the motherboard.
> Do you screw to disks in a raid set?
Sorry, can't parse.
> I tend to use only one disk per cable which means I need lots of
> controllers. :-)
>
>> Not with dmraid (FakeRAID), I think. It starts working in the BIOS
>> and at the level where it's configured, it can't see any of my
>> physical partitions.
>
> I understand. You might be better off bypassing fake raid entirely.
The downside being no dual-booting to (ahem) Windows on RAID.
Of course, Fedora seems to understand the FakeRAID, but I don't want
to be tied to that distro.
>> ssh! Remember where you are!
>> (Seriously, I'm still trying to get free of XP myself.)
>
> I would love to except I'm bound to Windows as long as I need to use
> NaturallySpeaking. Sucks being handicapped.
Sorry to hear it.
> especially from keyboards.
You mean, they gave you RSI?
> Sort of like the devices I work with had their revenge on
> me.
Somehow guitar playing seems to counteract the effects of keyboarding
for me.
>>> happening. Unfortunately, I have no time to make this happen.
>> Hear, hear!
>
> isn't it amazing how many good ideas there are and how little
> funding there is for them. I'm still trying to find time to put my
> anti-spam system into a VMware bubble. Need to do that for someone
> that's actually paying me to do some of the work. Then I have my
> small/medium scale web framework toolkit (akasha) to publish. That's
> waiting for me to figure out how to create Python modules you can
> install. I also have an idea of how to create Ajax-like interactions
> without doing any serious JavaScript. But that's another god knows
> how long time sink.
Yeah, I'm working on the funding angle myself ;-)
>> And the time I've spent on SW raid is certainly worth more than the
>> cost of a simple HW raid array. However, I don't know how much time
>> *that* would take either ;-)
>
> That's quite often true. Usually, unless you've done something
> before, figure that setup will cost about five times the cost of the
> device. If you are doing this for a customer, this means you will
> get paid a small fraction of what your time is worth. But typically
> setting up a raid array with a hardware device is something on the
> order of two or three days to get the basic system working including
> developing a recovery process etc.
I'm doing this for myself, so there's nobody to pay for my time. It
just soaks money I'd make doing something else.
>> I'm trying to make a stable fileserver + multi-virtual-OS build/test
>> machine.
>
> OK, then you can use either hardware or software raid as long as you
> have a good UPS with good notification and hardware control.
Didn't even consider the UPS thing :(
> From the difficulty
> you are having, I would suggest skipping the fake raid entirely. I
> assume you're using something like the Belkin fake raid dual ATA
> controller?
No, it's onboard NVRaid (from NVidia). I've got a Tyan S2895 Thunder
K8WE
> If so, get enough controllers so that there is only one
> disk per channel. And add a spare disk. I haven't done that and it's
> making me nervous.
One expense at a time... :(
But thanks for the advice.
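Adding the spare does look like a one-liner once the extra disk exists;
assuming it showed up as, say, /dev/hdc1 (hypothetical name):

    mdadm --add /dev/md0 /dev/hdc1   # added to a healthy array == hot spare

at least if I'm reading the mdadm man page right.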
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com