mdadm RAID problem -- won't boot

Rick Bragg rbragg at
Mon Aug 20 19:39:18 UTC 2012

> On Mon, Aug 20, 2012 at 1:36 PM, Rick Bragg <rbragg at> wrote:
>> I am having a problem booting my system.  My boot disk is not a raid array,
>> however, I do have 4 other disks making a raid 10 array that I mount at /mnt/md0.
>> My problem is that when I boot my system, I get to a point where it says it
>> can't start the degraded array, and asks me if I want to start the degraded
>> array.  If I say yes or no, it always drops me to a shell.  At the shell, I do a
>> "cat /proc/mdstat" and I can see 2 arrays!  One is /dev/md0 started, degraded with
>> only 3 of my disks (sda1, sdc1, sdd1.)  The other array is /dev/md127 with the
>> other disk all by itself (sdb1) and not started.  Again, I am booting from a
>> different disk entirely (sde1.)  I tried to remove the md127 array altogether
>> and re-add sdb1 into the md0 array, and it syncs up fine.  After syncing and
>> seeing that the md0 array is fine, I reboot.  After rebooting, I get the same
>> problem over and over again.
>> My question is:
>> How can I fix this so that I only have one array at /dev/md0 with all 4 disks
>> synced?  Also, how can I bypass this and boot my system without any RAID at all
>> so I can fix that later?  I am using Ubuntu Server 10.04 LTS.
> The use of md127 usually means that the array's recognized as a
> foreign array. Does "mdadm --examine" on sda1 and sdb1 return the same
> "local to host" value on the "Name" line?
> Did you zero the superblock before re-adding sdb1?

I didn't zero the superblock on anything.  Should I zero it on sdb1?

For --examine, I'm seeing:
RaidDevice       State
0                active sync      /dev/sda1
1                faulty removed
2                active sync      /dev/sdc1
3                active sync      /dev/sdd1
4                spare            /dev/sdb1

Not sure why sdb is a spare, and dev 1 is faulty removed...
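For completeness, this is roughly how the two-array split from the original
message shows up in /proc/mdstat (the contents below are reconstructed for
illustration -- block counts and layout are placeholders, not a real capture):

```shell
# Reconstructed /proc/mdstat for the split described above -- the
# numbers and layout are placeholders, not a real capture.
cat > /tmp/mdstat.example <<'EOF'
Personalities : [raid10]
md127 : inactive sdb1[4](S)
      976630488 blocks super 1.2
md0 : active raid10 sda1[0] sdc1[2] sdd1[3]
      1465148928 blocks super 1.2 512K chunks 2 near-copies [4/3] [U_UU]
EOF
# Two md devices for what should be one array is the telltale sign:
grep -c '^md' /tmp/mdstat.example    # prints 2
```

The (S) after sdb1 marks it as a spare in its own one-member array, which
matches the "spare" state in the --examine output above.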

Also, for mdadm --detail /dev/md0 I see:

RaidDevice       State
0                active sync        /dev/sda1
1                spare rebuilding   /dev/sdb1
2                active sync        /dev/sdc1
3                active sync        /dev/sdd1

After the spare rebuilds and shows active sync like the others, I reboot and I
get the same problem all over again from the start.  Do I need to somehow
re-create the entire array?  Or zero a superblock somewhere after this syncs?
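In case it helps: from what I've read so far, the fix for making this stick
might look something like the sketch below.  The device names are the ones
from this thread, and the --detail --scan plus update-initramfs steps are my
understanding of how Ubuntu's initramfs learns about the array -- I'd
double-check everything against /proc/mdstat before running it as root:

```shell
# Possible repair sequence -- assumes /dev/md0 is the good array,
# /dev/md127 the stray one, and /dev/sdb1 the stale member, as above.
# Guarded so it is a no-op unless the tools and devices are present.
if command -v mdadm >/dev/null 2>&1 && [ -b /dev/md127 ] && [ -b /dev/sdb1 ]; then
    mdadm --stop /dev/md127                  # tear down the foreign array
    mdadm --zero-superblock /dev/sdb1        # wipe the stale metadata
    mdadm --manage /dev/md0 --add /dev/sdb1  # re-add and let it resync
    # record the array so the initramfs can assemble it at boot:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
else
    echo "mdadm/md127/sdb1 not present on this machine; adjust device names"
fi
```

To get the system up without the array in the meantime, booting with the
bootdegraded=true kernel parameter (edit the kernel line at the GRUB menu)
tells the initramfs to start degraded arrays without prompting, and since
md0 is only mounted at /mnt/md0, commenting out its /etc/fstab entry should
let the machine boot cleanly while this gets sorted out.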

Thanks again!

More information about the ubuntu-users mailing list