RAID5 w/ mdadm request for help.
chris
lostpkts at gmail.com
Fri Sep 5 15:06:56 UTC 2008
/* cross-posted to forums as well */
I have an issue that I'm hitting a brick wall on and hope that someone
here can help me out.
I have 2 servers, Hoth v2 and Hoth v3, v3 being the new, better,
faster server. :)
Hoth v2 = Gutsy
Hoth v3 = Hardy
Brought up v3 and added in the new hard drives for the array: 8x 500
with 1 more for a spare (9 total).
rsynced the data from v2 to v3.
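(For what it's worth, the copy itself was just plain rsync; roughly
the sketch below, with the paths and hostname being placeholders
rather than my exact command:)

# -a preserves permissions/ownership/timestamps, -H keeps hard links
# /data/ and hothv3 are placeholders, not my real path/hostname
rsync -aH --progress /data/ hothv3:/data/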
After 4 days of rsync... I did the following.
o removed the partitions on the drives in v2.
o shut the box down and gutted it.
o put the 8x 400 w/ 1 spare (9 total) from v2 into v3.
Turned on v3. v3 FREAKED: it wouldn't bring up /dev/md0, which is the
original array of 500s it was built with. Figured out that it was
because the drive assignments changed. Fixed that.
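(My understanding is that the clean way to make assembly immune to
device reordering is to key the arrays by UUID in mdadm.conf;
something like the following, as a sketch of the general approach
rather than exactly what I ran:)

# record ARRAY lines keyed by UUID so assembly no longer depends on /dev/sdX ordering
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so early-boot assembly picks up the new config (Ubuntu)
update-initramfs -u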
v3 sees the following for the 400s:
root@hoth:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdi1[0](S) sdp1[8](S) sdo1[6](S) sdn1[5](S) sdm1[4](S) sdl1[3](S) sdk1[2](S) sdj1[1](S)
      3125669888 blocks
They're all spares.
So I did a
mdadm --examine --scan /dev/sd[ijklmnop]1
then did a
mdadm --assemble --uuid=<string> /dev/md1
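(In case I got the syntax wrong: my understanding is that without a
device list, --assemble falls back to the config file, so the more
explicit form would be something like this, matching on the UUID
reported by --examine:)

# hand mdadm the member partitions explicitly and match on the array UUID
mdadm --assemble /dev/md1 --uuid=<string> /dev/sd[ijklmnop]1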
It said not enough devices, so I tried this:
root@hoth:~# mdadm --assemble --run /dev/md1
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
mdadm: Not enough devices to start the array.
dmesg is showing the following now:
[ 412.949638] md: md1 stopped.
[ 412.949649] md: unbind<sdi1>
[ 412.949653] md: export_rdev(sdi1)
[ 412.949693] md: unbind<sdp1>
[ 412.949696] md: export_rdev(sdp1)
[ 412.949713] md: unbind<sdo1>
[ 412.949715] md: export_rdev(sdo1)
[ 412.949743] md: unbind<sdn1>
[ 412.949746] md: export_rdev(sdn1)
[ 412.949762] md: unbind<sdm1>
[ 412.949764] md: export_rdev(sdm1)
[ 412.949780] md: unbind<sdl1>
[ 412.949782] md: export_rdev(sdl1)
[ 412.949798] md: unbind<sdk1>
[ 412.949800] md: export_rdev(sdk1)
[ 412.949815] md: unbind<sdj1>
[ 412.949818] md: export_rdev(sdj1)
[ 413.053861] md: bind<sdl1>
[ 413.053978] md: bind<sdn1>
[ 413.054097] md: bind<sdo1>
[ 413.054245] md: bind<sds1>
[ 413.054393] md: bind<sdj1>
[ 413.054417] md: kicking non-fresh sds1 from array!
[ 413.054422] md: unbind<sds1>
[ 413.054425] md: export_rdev(sds1)
[ 413.054428] md: kicking non-fresh sdo1 from array!
[ 413.054431] md: unbind<sdo1>
[ 413.054434] md: export_rdev(sdo1)
[ 413.054436] md: kicking non-fresh sdn1 from array!
[ 413.054439] md: unbind<sdn1>
[ 413.054442] md: export_rdev(sdn1)
[ 413.054444] md: kicking non-fresh sdl1 from array!
[ 413.054447] md: unbind<sdl1>
[ 413.054449] md: export_rdev(sdl1)
[ 413.079463] raid5: device sdj1 operational as raid disk 1
[ 413.079467] raid5: not enough operational devices for md1 (7/8 failed)
[ 413.079514] RAID5 conf printout:
[ 413.079516] --- rd:8 wd:1
[ 413.079517] disk 1, o:1, dev:sdj1
[ 413.079518] raid5: failed to run raid set md1
[ 413.079554] md: pers->run() failed ...
So now:
root@hoth:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdj1[1]
      390708736 blocks
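(For completeness: I know the usual suggestion when members get
kicked as "non-fresh" is to stop the array and force-assemble it,
roughly as below, though I'm not sure that's the right move here:)

# stop the half-assembled array, then force-assemble from all member partitions,
# accepting members whose event counts are out of date
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sd[ijklmnop]1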
What I want to do is wipe md1 off the face of the earth and rebuild
it from scratch with /dev/sd[ijklmnop]1.
I tried mdadm --zero-superblock --force /dev/md1, but that didn't work either:
root@hoth:~# mdadm --zero-superblock --force /dev/md1
mdadm: Unrecognised md component device - /dev/md1
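From re-reading the man page, I gather --zero-superblock is meant to
be pointed at the member devices rather than at /dev/md1 itself, so
I'm guessing the sequence I actually want is something like this
(please correct me if not):

# stop the partially assembled array first
mdadm --stop /dev/md1
# wipe the md superblock on each member partition (not on the md device)
mdadm --zero-superblock /dev/sd[ijklmnop]1
# then recreate a fresh 8-disk RAID5; the spare can be added afterwards with --add
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[ijklmnop]1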
Can someone please help?
Thanks