hardware raid solutions?
David Abrahams
dave at boost-consulting.com
Sun Jul 9 20:05:30 UTC 2006
"Eric S. Johansson" <esj at harvee.org> writes:
> David Abrahams wrote:
>>
>> "Eric S. Johansson" <esj at harvee.org> writes:
>>
>>> Software raid also has problems when you're doing anything disk
>>> intensive like backups every hour or two. Yes, hardware has similar
>>> problems, but it's not my CPU that's being eaten alive. :-)
>> I am setting this raid up on a machine that will be used for large
>> build/test runs. That can get disk- and compute-intensive, so I
>> suppose that issue could be relevant to me.
>
> yes. Remember when you are using hardware raid (even though it is
> implemented in firmware) you are letting some other device do your
> rate calculations for you.
Do you really mean "rate" and not "raid?" I guess rate calculation is
not something I knew was an issue.
> With software raid, it would be roughly equivalent to having one
> system just service the raid array while your original system does
> the build/test runs. Obviously there will be some form of network
> between the two systems, unless we had some sort of magic piece of
> hardware that could make your raid system look like a physical disk
> to the other system. Maybe USB 2.0 would work.
Like http://www.meritline.com/neadspydvraa.html
> But like I said, the primary advantage in this context for hardware
> raid is that something else does the computation and raid management.
>
> At the same time, you need to figure out which type of raid will
> help you. Raid one is very fast: you are writing to two disks at the
> same time, and with proper parallel I/O channels and disk index
> synchronization, the delay is only slightly more than with a single
> disk.
>
> Raid five, however, is much more expensive, as you are writing
> different data to N disks. Again, hardware makes it faster from the
> system's perspective because all of the write queuing is handled by
> "somebody else".
>
> In case you haven't noticed, I'm a big fan of the "somebody else"
> school of processing. With queuing, it's a very powerful model for
> organizing system components.
Just to save us both time: I understand the advantages of
parallelism. The question is just whether the cost of not having it
will become significant for me.
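
For concreteness, here's how I'm thinking about the per-write cost
(just a back-of-the-envelope sketch in Python; the read-modify-write
accounting for small raid 5 writes is the textbook version, not a
claim about any particular controller):

    # I/O cost of one small random write, per raid level.
    # raid 1: the block goes to every mirror; no extra reads.
    # raid 5: classic read-modify-write: read old data and old parity,
    #         write new data and new parity (2 reads + 2 writes), and
    #         with software raid the parity XOR lands on the host CPU.

    def raid1_small_write_ios(mirrors=2):
        return {"reads": 0, "writes": mirrors}

    def raid5_small_write_ios():
        return {"reads": 2, "writes": 2}

    print("raid1: %s" % raid1_small_write_ios())
    print("raid5: %s" % raid5_small_write_ios())

So the question for me is whether those extra I/Os and the parity
math actually show up in build/test wall-clock time.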
>>> So, convince me that I can use a software raid solution for my system
>>> for all partitions and still be able to boot if the raid array has
>>> failed (total failure, one disk, or two disks).
>> Sorry, but I must be missing something important. How can you
>> expect
>> to boot if your disk storage has *total* failure?
>
> Sorry, I was exaggerating a bit. There are multiple modes of
> failure. We have the obvious ones: disk failure, controller failure,
> power supply failure. But when you have software raid, you also have
> problems caused by device driver failures, or kernel buffer
> corruption from physical RAM problems or errant code. For example,
> what if the system crashes in the middle of a raid update? Yes, you
> can reconstruct when you power up, but what if something important
> is corrupted?
>
> In contrast, with hardware raid, the right hardware will let you
> apply battery backup not only to the raid controller but to the disk
> drives, so that all of the raid information can be written out
> safely in a few seconds and the raid controller can shut down
> cleanly. Your files may not be intact, but the raid set will be.
>
> In theory you can achieve that with software raid if you have an
> external UPS, but your window of damage is more on the order of two
> minutes (shutdown time) rather than two seconds.
Oh, that *does* look significant...
> Besides, I will tell you from personal experience that it is
> difficult to get a good, inexpensive UPS with any capacity. Don't
> make the same mistake I did: stay away from Belkin. They are only
> desktop units, and not very good ones at that. I'm not that fond of
> APC either; I have a three kVA APC sitting dead in the basement.
Hmm, compelling, so far...
...but if the disk write buffers are full when the machine loses
power in the middle of an update, it doesn't really matter if the
disks can park before they lose power, does it? I mean, you'll still
get data corruption.
>>> If I were to use software raid, I would break up the large disks
>>> (200 GB) into 50 GB physical partitions, raid physical partitions
>>> (from different disks, duh),
>> Does "raid physical partitions" mean you'd set up raid 1 mirroring
>> across pairs of real physical partitions?
>
> I would set up a Raid array between hda1, hdb1, hdc1,.... and then
I'm following this path at the moment, in the absence of a hardware
solution...
I've got raid1 across hda1-hdb1, hda2-hdb2, etc.
I suppose I could have tried to go with something like raid5, but it
looks dicey (and inefficient) to me, with only 2 disks.
> build my logical volume set (bigraid) out of md0, md1, md2, ...
>
> at which point I would create working partitions out of the
> logical volume set, something like
> /dev/bigraid/root
> /dev/bigraid/home
>
> and so on
OK, for now, that's what I'll try.
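
Concretely, something like the following, I think. This is just a
sketch (Python, printing the commands rather than running them by
default); the device names follow your example, but the partition
count and LV sizes are placeholders, and I'd double-check every flag
against the man pages before pointing it at real disks:

    # Sketch of the layout described above: one raid1 mirror per
    # partition pair (hdaN + hdbN -> mdN), then one LVM volume group
    # ("bigraid") on top of the mirrors, then logical volumes in it.
    import subprocess

    PAIRS = [("/dev/hda%d" % n, "/dev/hdb%d" % n) for n in range(1, 5)]
    DRY_RUN = True   # flip to False only after reviewing every command

    def run(cmd):
        print(" ".join(cmd))
        if not DRY_RUN:
            subprocess.check_call(cmd)

    # 1. one raid1 mirror per partition pair
    md_devices = []
    for i, (a, b) in enumerate(PAIRS):
        md = "/dev/md%d" % i
        run(["mdadm", "--create", md, "--level=1",
             "--raid-devices=2", a, b])
        md_devices.append(md)

    # 2. LVM physical volumes and one volume group across the mirrors
    for md in md_devices:
        run(["pvcreate", md])
    run(["vgcreate", "bigraid"] + md_devices)

    # 3. carve working logical volumes out of the group
    run(["lvcreate", "-L", "10G", "-n", "root", "bigraid"])
    run(["lvcreate", "-L", "50G", "-n", "home", "bigraid"])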
>>> LVM the raid physical partitions together
>> Meaning you'd create an LVM volume group containing all the
>> "physical
>> partitions" presented by software raid?
>>
>>> and then create logical volumes for all of my system partitions.
>>>
>>> the reason for the strategy is that if raid needs to reconstruct, it
>>> will take less time to rebuild an individual partition than the
>>> entire disk.
>> Ah, I think that's not a strategy I could expect to use with dmraid,
>> because IIUC it will only allow me to create raid 1 mirroring across
>> whole disks.
>
> You should be able to create physical partitions first and then raid
> up the physical partitions on multiple disks.
Not with dmraid (FakeRAID), I think. It starts working in the BIOS
and at the level where it's configured, it can't see any of my
physical partitions.
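
The rebuild-time argument for the smaller partitions does make sense
to me, though, and it's easy to put rough numbers on. A quick sketch
(the ~50 MB/s sustained resync rate is just an assumption; real rates
depend on the disks and on md's resync speed limits):

    # Rough resync-time estimate: time ~= size / sustained resync rate.
    RESYNC_MB_PER_SEC = 50.0   # assumed; measure on the real hardware

    def resync_minutes(size_gb):
        return size_gb * 1024 / RESYNC_MB_PER_SEC / 60

    print(" 50 GB partition: ~%.0f min" % resync_minutes(50))   # ~17
    print("200 GB disk:      ~%.0f min" % resync_minutes(200))  # ~68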
>>> If all need to be rebuilt, well it's part of a very painful
>>> experience. ;-)
>>>
>>> also convince me that there's a good notification process that
>>> something has failed.
>> That's an issue with hardware, too, innit? You probably need some
>> kind of software monitor to go along with it.
>
> Yes indeed. I must admit my fantasy for system monitoring would be a
> little windows
ssh! Remember where you are!
(Seriously, I'm still trying to get free of XP myself.)
> systray icon that changes between red, yellow, green
> based on the status of the systems I'm monitoring. Then when
> something goes wrong, it lets me drill down to figure out what's
> happening. Unfortunately, I have no time to make this happen.
Hear, hear!
And the time I've spent on SW raid is certainly worth more than the
cost of a simple HW raid array. However, I don't know how much time
*that* would take either ;-)
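
In the meantime, the closest I'm likely to get to that red/yellow/green
icon is something that just polls /proc/mdstat. A minimal sketch,
assuming Linux software raid (in practice, mdadm --monitor with a mail
address in mdadm.conf probably covers the notification part already):

    # Minimal md health check: look for status strings like [UU] in
    # /proc/mdstat and complain about any array with a failed member
    # (shown as "_").
    import re, sys

    def degraded_arrays(path="/proc/mdstat"):
        text = open(path).read()
        found = re.findall(r"^(md\d+).*?\[([U_]+)\]", text,
                           re.MULTILINE | re.DOTALL)
        return [md for md, status in found if "_" in status]

    bad = degraded_arrays()
    if bad:
        print("DEGRADED: %s" % ", ".join(bad))
        sys.exit(1)
    print("all md arrays look healthy")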
>>> I'm willing to be convinced. Make your best argument for the software
>>> raid. And failing that, hardware cards suggestions would also be
>>> welcome. :-)
>> Me too. I'm still trying to gather as much information as possible
>> before I waste another week on dmraid. Thanks in advance for your
>> answers.
>
> No problem. We should probably stay in touch, as I think we're
> heading down the same path, and if we can reduce our mutual pain,
> that's a good thing.
I'd be very glad to, thanks.
> I'm doing this because I have a system supporting a few
> organizations and their mailing lists and I want to make it as
> stable as possible.
I'm trying to make a stable fileserver + multi-virtual-OS build/test
machine.
> I also want to write an article called "pimp my raid" and I know
> exactly what the case is going to look like. Clear plexiglass, fans
> with LEDs in them, activity lights in a row across the top of the
> case, purple fluorescent lights in the bottom.
And a flat-panel TV with Nintendo inside the case door, for when you
get bored?
> I'm telling you man,
> it will be amazing. I might even go for little electric actuators to
> make the case bounce up and down. ;-)
>
> lo-raid-ers, here I come...
Mmmm you are indeed a pun-gent!
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com