[ubuntu-uk] Software versus Hardware RAID (was RE: Anyone ever tried kolab on feisty)
Daniel Lamb
daniel.lamb at dlcomputing.co.uk
Thu Sep 27 15:58:30 BST 2007
Personally I feel RAID's only real use is data protection, as hard drives are
sometimes very unreliable anyway. Who is really worried about speed? This is
in a server, which means it is not reading or writing at a very fast speed
anyway.
Regards,
Daniel
-----Original Message-----
From: ubuntu-uk-bounces at lists.ubuntu.com
[mailto:ubuntu-uk-bounces at lists.ubuntu.com] On Behalf Of Matthew Larsen
Sent: 27 September 2007 15:26
To: British Ubuntu Talk
Subject: Re: [ubuntu-uk] Software versus Hardware RAID (was RE: Anyone
ever tried kolab on feisty)
I remember a test some magazine did with a RAID 0 array of two 10k SATA II
Raptors: they only got something like a 2% performance increase. I
would advise not using RAID for anything apart from redundancy.
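For what it's worth, a rough way to repeat that kind of test yourself is a
sequential-read benchmark (a sketch only; /dev/sda and /dev/md0 are
placeholder device names, and hdparm needs root):

```shell
# Compare raw sequential read speed of one drive against the array.
hdparm -t /dev/sda   # a single member disk
hdparm -t /dev/md0   # the RAID device
```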
Regards,
On 27/09/2007, Daniel Lamb <daniel.lamb at dlcomputing.co.uk> wrote:
> No problem; I meant that the driver for the hardware RAID card does not
> work: it shows the drives up as three devices
> rather than the one device, which then goes on to cause problems with GRUB
> loading. This was a known problem
> on the Dells, which is why we went for Feisty.
> Regards,
> Daniel
>
> -----Original Message-----
> From: ubuntu-uk-bounces at lists.ubuntu.com
> [mailto:ubuntu-uk-bounces at lists.ubuntu.com] On Behalf Of Alan Pope
> Sent: 27 September 2007 13:49
> To: British Ubuntu Talk
> Subject: Re: [ubuntu-uk] Software versus Hardware RAID (was RE: Anyone ever
> tried kolab on feisty)
>
> Hi Daniel,
>
> On Thu, 2007-09-27 at 12:45 +0100, Daniel Lamb wrote:
> > Good arguments for it. I knew it was a very good system but never
> > looked at it in too great detail, as we have always used cards: we run
> > Linux and Windows servers and like to keep the hardware quite alike.
> >
>
> It's worth a play even if you don't actually use it in anger. Nice to
> know that the feature is there. Maybe one day, if you get time, you could
> compare the two and make an informed decision about which is appropriate
> for your use. (Hmm, that sounds more condescending than I intended; I
> just mean 'have a play' :) )
>
> > I like the PERC cards, and prefer to set it all up before installs as I
> > do not want to accidentally lose data.
> > Or does the Linux software RAID protect against you selecting the wrong
> > drivers and overwriting them?
> >
>
> When you say wrong drivers, I am not sure what you mean.
>
> When you set up Linux software RAID devices you don't need additional
> drivers to make that work. So long as Linux can see the disks hanging
> off the controller, there's nothing else to do other than configure each
> disk for RAID and create the RAID multi-disk device. You'll then see
> (for example) a new device called /dev/md0, which might be an array of
> multiple disks.
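>
> For example, creating a two-disk mirror looks roughly like this (a sketch
> with illustrative device names; mdadm needs root, and --create will destroy
> any existing data on those partitions):
>
> ```shell
> # Build a RAID 1 mirror from two partitions (placeholder names).
> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
>
> # Put a filesystem on the new array and check both members are active.
> mkfs.ext3 /dev/md0
> mdadm --detail /dev/md0
> ```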
>
> I use Linux software RAID on my main desktop PC:-
>
> Filesystem    Size  Used Avail Use% Mounted on
> /dev/md1       14G  2.4G   11G  18% /
> /dev/md3      215G  200G  4.0G  99% /home
> /dev/hda      513M  513M     0 100% /media/cdrom0
>
> So you can see here that I have a 14G partition for / and a 215G
> partition for /home. md1 is a RAID 1 mirror of partitions
> on /dev/sda and /dev/sdb, and so is md3. (md2 is my swap partition.)
>
> $ cat /proc/mdstat
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
> 14843968 blocks [2/2] [UU]
>
> md2 : active raid1 sda1[0] sdb1[1]
> 1606400 blocks [2/2] [UU]
>
> md3 : active raid1 sda3[0] sdb3[1]
> 228661056 blocks [2/2] [UU]
>
> The [UU] means both disks in the array are available. The neat thing
> here is that I actually setup RAID 1 with only one disk, one missing.
> Then added the second disk later. So initially it showed as [U_] where
> the underscore indicates a missing disk.
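>
> Creating the array in that degraded state uses the literal word "missing"
> in place of the second device (a sketch; device names are illustrative and
> mdadm needs root):
>
> ```shell
> # Build a RAID 1 with one real member; "missing" reserves the second
> # slot, so the array comes up degraded and shows [U_] in /proc/mdstat.
> mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 missing
>
> # Later, add the second partition and the kernel resyncs onto it.
> mdadm --manage /dev/md3 --add /dev/sdb3
> ```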
>
> You can also do funky things like fail a disk out of the array:-
>
> # mdadm --manage /dev/md3 -f /dev/sdb3
> mdadm: set /dev/sdb3 faulty in /dev/md3
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
> 14843968 blocks [2/2] [UU]
>
> md2 : active raid1 sda1[0] sdb1[1]
> 1606400 blocks [2/2] [UU]
>
> md3 : active raid1 sda3[0] sdb3[2](F)
> 228661056 blocks [2/1] [U_]
>
> Note md3 now has one failed disk.
>
> Now I can remove it:-
>
> # mdadm --manage /dev/md3 -r /dev/sdb3
> mdadm: hot removed /dev/sdb3
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
> 14843968 blocks [2/2] [UU]
>
> md2 : active raid1 sda1[0] sdb1[1]
> 1606400 blocks [2/2] [UU]
>
> md3 : active raid1 sda3[0]
> 228661056 blocks [2/1] [U_]
>
> Neat!
>
> Let's add it back in again:-
>
> # mdadm --manage /dev/md3 --re-add /dev/sdb3
> mdadm: re-added /dev/sdb3
>
> And check the status of the mirror:-
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
> 14843968 blocks [2/2] [UU]
>
> md2 : active raid1 sda1[0] sdb1[1]
> 1606400 blocks [2/2] [UU]
>
> md3 : active raid1 sdb3[2] sda3[0]
> 228661056 blocks [2/1] [U_]
> [>....................] recovery = 0.1% (241600/228661056)
> finish=47.2min speed=80533K/sec
>
> Groovy. It's now recovering by resyncing sdb3 from sda3.
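>
> If you want to keep an eye on the resync, something like this works (watch
> comes with most distributions; mdadm --detail needs root):
>
> ```shell
> # Redraw the array status every two seconds until recovery finishes.
> watch -n 2 cat /proc/mdstat
>
> # Or ask mdadm for a one-off detailed report on the array.
> mdadm --detail /dev/md3
> ```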
>
> Can you tell I like software RAID? :)
>
> Cheers,
> Al.
>
>
> --
> ubuntu-uk at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-uk
> https://wiki.kubuntu.org/UKTeam/
>
--
Matthew G Larsen
mat.larsen at gmail.com
matthew.larsen at logicacmg.com