RAID Cards performance issue
Christopher Chan
christopher.chan at bradbury.edu.hk
Tue Dec 9 01:40:44 GMT 2008
Scott Balneaves wrote:
> On Mon, Dec 08, 2008 at 08:21:38AM +0800, Christopher Chan wrote:
>
>> Software raid5 get performance penalties but a hardware raid card with
>> sufficient cache memory (must be battery backed if you want to minimize
>> data loss) and processing power can do raid5 and perform as well as or
>> even better than raid10 depending on the number of drives involved.
>
> No, RAID5's a compromise. If it's a compromise in software on the server,
> it'll be just as much of a compromise on a dedicated controller, where
> it'll be implemented in software running on the card's controller.
When I said that software raid5 gets performance penalties, I meant
relative to the performance you can get from hardware raid cards today.
A decade ago, popular hardware raid cards built around the puny and
useless Intel i960 processor were absolutely creamed by software raid5
in performance.
Even if you do have the processing power on the hardware raid card, you
still need sufficient buffering for that processor, as the tests done by
the Gelb research group in the link below show.
http://www.chemistry.wustl.edu/~gelb/castle_raid.html
The 3ware 850x series has no cache memory, so those cards perform very
poorly in raid5 mode. The tests bear this out: hardware raid5
performance on the 3ware 8506 is really poor. However, when they
switched the card into JBOD mode and used Linux software raid to
implement raid5, the performance they got was comparable to the 3ware
hardware raid10 figures. If that is not an argument for raid5
performance, I don't know what is.
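To see why the missing cache hurts so much, here is a rough
back-of-the-envelope sketch in Python (the numbers and function names
are purely illustrative, not measurements from any 3ware board):
without a write-back cache, every small random write on raid5 becomes a
read-modify-write cycle of roughly four disk operations, while a mirror
only needs two.

# Back-of-the-envelope cost of small random writes (illustrative only;
# a write-back cache lets the controller coalesce these into full-stripe
# writes, which is exactly what a cacheless card cannot do).

def raid5_small_write_ios(writes):
    # read old data + read old parity + write new data + write new parity
    return writes * 4

def raid10_small_write_ios(writes):
    # just write both halves of the mirror
    return writes * 2

n = 10000  # hypothetical number of small random writes
print("raid5, no cache:", raid5_small_write_ios(n), "disk operations")
print("raid10         :", raid10_small_write_ios(n), "disk operations")

With a battery-backed cache the controller can gather whole stripes
before touching the disks, which is largely how the newer boards make
that penalty disappear.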
Today, it is bus traffic on the mainboard that penalizes software raid5
compared to a hardware raid card doing raid5, since on the card the
disks are connected directly to the raid board/processor. But the raid
board needs sufficient processing power and onboard cache to be able to
pull off that performance.
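If you want to put rough numbers on the bus traffic argument, here is a
tiny Python sketch (purely illustrative: it only counts full-stripe
writes and ignores everything else): with host-based raid5 the parity
also has to cross the host bus, whereas a hardware card generates parity
on the far side of the bus.

# Rough bus traffic for writing a given amount of data in full stripes
# (illustrative arithmetic, not a benchmark).

def software_raid5_bus_mb(data_mb, disks):
    # host computes parity, so data AND parity cross the host bus
    return data_mb * disks / (disks - 1)

def hardware_raid5_bus_mb(data_mb, disks):
    # card computes parity on-board; only user data crosses the bus
    return data_mb

disks = 6
data_mb = 1000  # hypothetical amount of user data written, in MB
print("software raid5: %.0f MB over the host bus"
      % software_raid5_bus_mb(data_mb, disks))
print("hardware raid5: %.0f MB over the host bus"
      % hardware_raid5_bus_mb(data_mb, disks))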
> I've run tests myself on 3Ware controllers, and they are MUCH slower in
> a RAID5 config than either a RAID1 or RAID10 config. This isn't a smack
> at 3ware controllers: I use them myself, and they're great. Solid, dependable
> raid controllers. It's just a limitation of RAID5. RAID5 tries to be
> everything to everybody ("More Space!!" "Fault Tolerant!!" "Less Filling!!")
> and in the end, doesn't really satisfy anyone.
Your problem most probably comes from testing with the 3ware 750x or
850x series. I know from personal experience that those boards suck at
raid5. I also know that newer 3ware boards with cache memory have closed
the raid5 performance gap. I had a ten-disk raid5 array on a loaned
3ware 9550, used it as a mail queue, and it rocked. I was the MTA guy at
Outblaze Ltd. at the time.
Here is a newer test that also uses six disks, like the Gelb research
group did, but with a different controller that has both the processing
power and sufficient cache memory.
http://www.linux.com/feature/140734
Not surprisingly, raid5 beat the pants off raid10. Why? With six disks
on raid5, all six spindles take part in input and output (the parity
rotates across all of them). On raid10, you have to make three mirror
pairs, so you are effectively reduced to three disks of stripe width.
Given sufficient resources on the raid board, raid5 across six spindles
will beat what amounts to raid0 across three spindles. Raid5 is
effectively raid0 plus uber processing for the parity. So if the 'uber
processing' keeps up, you are really staging an unfair knockout: raid0
with six logical disks versus raid0 with three.
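Here is the same spindle arithmetic as a little Python sketch (just
counting, under the assumption of streaming I/O and a controller fast
enough that parity never becomes the bottleneck):

# Effective spindle counts for a 6-disk array (counting only; no I/O).

disks = 6

# raid10: three mirror pairs, so the stripe is only three columns wide
# (some raid10 implementations can balance reads across both halves of
# each mirror, but the stripe width for writes stays at three)
raid10_stripe_width = disks // 2            # 3

# raid5: parity rotates across all members, so reads touch all six
# spindles and a full-stripe write streams to five data members
raid5_read_spindles  = disks                # 6
raid5_write_spindles = disks - 1            # 5

print("raid10 stripe width  :", raid10_stripe_width)
print("raid5 read spindles  :", raid5_read_spindles)
print("raid5 write spindles :", raid5_write_spindles)

Either way raid5 is striping across five or six spindles against three,
which is the unfair knockout above.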
The article also includes software raid numbers for comparison, and they
bear out what I said about software raid suffering performance
penalties, with one exception that I found most interesting: software
raid5 got the best read performance, and that simply blows my mind.
Enjoy.