Software Raid Question
Tom H
tomh0665 at gmail.com
Wed Jun 9 11:19:42 UTC 2010
On Tue, Jun 8, 2010 at 6:11 PM, James Bensley <jwbensley at gmail.com> wrote:
> I have an Ubuntu 9.10 box with 7 SATA II drives attached. Each one has
> a single partition that is the size of the entire drive, and they are
> in a software RAID 6.
>
> (all drives are the same size, make and model etc).
>
> I believe one of my drives is dying: when I boot up, the drive is only
> correctly picked up by the BIOS about 1 in every 10 boots. I was not
> bothered by this at first, because I have a RAID 6, so a single flaky
> drive shouldn't be a problem. However, when I boot the machine with
> the problem drive not attached, the RAID won't mount. With it
> attached, as I said, the drive is only detected about 1 in 10 boots,
> so I have to reboot over and over until the drive is picked up before
> I can use the RAID.
>
> Surely this is not normal behaviour for a RAID? Also, I just booted
> the box up only to find that the other six drives are now marked as
> spares (I believe that is what the '(S)' suffix means, yes?)
>
> bensley at ubuntu:~$ cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : inactive sdf1[0](S) sdg1[5](S) sde1[1](S) sdc1[6](S) sdb1[4](S) sdd1[3](S)
> 5860559616 blocks
>
> unused devices: <none>
>
> Anyone got any idea why the RAID doesn't mount and why all disks are
> now marked as spares?
I cannot explain the "(S)"s, but I am curious whether you can assemble
the array with "--run", or reconnect the failed drive and remove it
from the array with

mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1

and then assemble it, with "--run" if necessary.
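
For what it's worth, a possible assemble sequence, assuming the array
members are sdb1 through sdg1 as shown in your /proc/mdstat output, and
assuming the currently inactive md0 has to be stopped before it can be
reassembled, would be something like:

# stop the inactive array first
mdadm --stop /dev/md0
# reassemble from the six remaining members; --run starts it
# even though it is degraded (one member missing)
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

Device names can shift between boots with a flaky drive attached, so
check /proc/mdstat or "mdadm --examine" output before running this.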