Getting pretty off-topic: database systems
adavid at adavid.com.au
Sat May 21 00:05:28 UTC 2005
On 21/05/2005, at 2:22 AM, James Wilkinson wrote:
> Vincent Trouilliez wrote:
>> The only ones I ever saw were last summer, when I worked for a month at
>> Bull, assembling and configuring their machines (well, learning with a
>> technician teaching me really), basically IBM H/W with AIX. I was
>> learning on their smallest offering, a 42U (IIRC, about a 2-metre)
>> cabinet: a couple of 8-CPU nodes (PL820R anyone ?), each with 6 drives
>> IIRC, and of course 2 or 3 disk arrays/DAS (can't remember... 14 disks
>> each ?), and a few more similarly sized cabinets with no nodes, just
>> fully loaded with disks.
> Interesting. I know that IBM offers smaller p-series servers: I'm
> working on them. (There's supposed to be little difference between
> Bull's AIX servers and the IBM ones).
>> Can't remember the size, I think it was 73GB
>> 15K rpm drives. So roughly 100 disks or so per cabinet, 7.5 TB just for
>> one little cabinet ! :-O We used RAID 5, so I don't think we lost much
>> capacity, compared to RAID 1 anyway...
> Actually, RAID 5 *isn't* recommended for most databases. These days,
> most databases have more of a requirement for speed than disk capacity:
> you can get disks big enough, but you can't get disks that spin fast
> enough. In that case, you measure disk performance by access time and
> the number of "spindles". So RAID 1's halving of capacity isn't an
> issue. It only takes two writes (on different disks) to update it.
> But with RAID 5, when you're doing lots of updating of small 8K blocks
> (that's a lot less than the stripe size), you need to read enough data to
> be able to work out what the parity data should be. And that usually means
> at least two reads followed by two writes one disk revolution (~0.006
> seconds) later. Much slower.
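The two-reads-plus-two-writes cost is the classic RAID "write penalty". A rough back-of-the-envelope comparison (the per-disk IOPS figure is an assumption for a 15K rpm drive of that era, not a measurement):

```python
# Sketch: effective random-write IOPS under the classic write-penalty model.
# RAID 1: each logical write costs 2 physical writes (one per mirror).
# RAID 5 small writes: 2 reads + 2 writes = 4 physical I/Os per logical write.

def effective_write_iops(disks, iops_per_disk, write_penalty):
    """Total random-write IOPS the array can sustain, in this simple model."""
    return disks * iops_per_disk / write_penalty

# Assumed: ~170 random IOPS per 15K rpm drive, 14 disks in the shelf.
disks = 14
per_disk = 170

raid1 = effective_write_iops(disks, per_disk, write_penalty=2)
raid5 = effective_write_iops(disks, per_disk, write_penalty=4)

print(f"RAID 1: {raid1:.0f} IOPS, RAID 5: {raid5:.0f} IOPS")
```

Same spindles, half the small-write throughput for RAID 5 -- which is the point being made above.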
That is why all decent RAID controllers have an NVRAM cache: the controller
sends the "write complete" status over SCSI/IDE as soon as the block is
cached, then writes the data block and the parity at the disks' own pace.
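A toy model of that behaviour (a sketch of the idea, not of any real controller firmware):

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy model of a controller's NVRAM write-back cache: acknowledge the
    host as soon as the block is cached, destage to disk later."""

    def __init__(self):
        self.dirty = OrderedDict()   # LBA -> data, in arrival order
        self.disk = {}               # stand-in for the physical disks

    def write(self, lba, data):
        self.dirty[lba] = data       # cache the block in NVRAM...
        return "write complete"      # ...and ack the host immediately

    def destage(self):
        # Later, at the disks' own pace, flush cached blocks (and, on a
        # real RAID 5 controller, the recomputed parity) out to disk.
        while self.dirty:
            lba, data = self.dirty.popitem(last=False)
            self.disk[lba] = data
```

The NVRAM part matters: because the cache survives a power cut, the controller can safely lie to the host about the write being on disk.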
> But I suspect your customers would then re-configure the disks the way
> they wanted, anyway.
>> Nice machines (if just a tad noisy...), too bad I stayed only 4 weeks
>> there, hardly enough time to learn anything properly really :o(
>> Would you believe it, instead of AIX, they also offered Mandrake on
>> these monsters, upon explicit customer request of course.
>> Would be fun to see Ubuntu run on these things, but does Ubuntu run
>> on IBM servers ?
> It's not beyond the bounds of possibility: they aren't too dissimilar
> to Apple Macs.
>> But about fragmentation on these, no idea... can't even remember the
>> file system used. I just remember that we tested the disks of the nodes
>> with 'dd', then created a few RAID 5 devices to check that the DAS was
>> working, and that's about it... ready for 24 hours of stress-testing.
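For what it's worth, a crude dd-style sequential-write test is easy to script. This is only a sketch of the idea: it goes through the page cache, so a serious test should bypass it (e.g. dd with oflag=direct):

```python
import os
import time

def sequential_write_mb_s(path, total_mb=64, block_kb=1024):
    """Crude dd-style test: write total_mb of zeros in block_kb chunks,
    fsync, and report MB/s. A sketch, not a benchmark -- the page cache
    will flatter the numbers for small sizes."""
    block = b"\0" * (block_kb * 1024)
    count = total_mb * 1024 // block_kb
    start = time.monotonic()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(count):
            os.write(fd, block)
        os.fsync(fd)          # make sure the data actually hit the device
    finally:
        os.close(fd)
    return total_mb / (time.monotonic() - start)
```

Random-access performance is a different test entirely, and for a database that's the one that matters, per the RAID discussion above.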
> If it's AIX, it would be JFS or JFS2 (what the Linux world knows as
> JFS). And no, it's not particularly prone to fragmentation.
>> I wonder what the performance of the DAS was... if only I knew how to
>> test it back then ! Too late now :o(
> Transactions per minute. Either some standardised version (TPM) or
> (better) the number of transactions while doing typical work for the
> E-mail address: james | Let He who Taketh the Plunge
> @westexe.demon.co.uk | Remember to Return it by Tuesday.
> ubuntu-users mailing list
> ubuntu-users at lists.ubuntu.com