[Bug 1077650] Re: booting from raid in degraded mode ends in endless loop
Bernd Schubert
1077650 at bugs.launchpad.net
Tue Oct 7 20:59:38 UTC 2014
Today a disk did not come up any more in the morning, and I noticed that this bug is still not fixed in mdadm-3.2.5-5ubuntu4.1.
In fact, it got worse: "bootdegraded=yes" is now the default (Ubuntu 14.04) and cannot be disabled any more, so the system stays in an endless loop of "mdadm: CREATE group disk not found" messages. The only way to rescue the system was to boot a rescue system. As I didn't have a spare disk, I had to get the system to boot in degraded mode.
Below are some diagnostics. Please note that I'm not at all familiar
with how the Ubuntu initramfs scripts are assembled from their pieces.
Diagnostic 1) In /usr/share/initramfs-tools/scripts/mdadm-functions
I disabled (commented out) the incremental if-branch
("if mdadm --incremental --run --scan; then"), so that only the assemble mdadm command runs. After re-creating the initramfs and rebooting, the "mdadm: CREATE group disk not found" message was shown only *once*; it then complained that it couldn't find the root partition and dropped to the busybox shell. MUCH BETTER!
Investigating in the shell, I noticed that the md devices had been assembled in degraded mode. Running "mdadm --assemble --scan --run" again brought up the same disk group message, so it seems to be a bug in mdadm that it prints this message and returns an error code.
After running "vgchange -ay" I could leave the shell and the boot continued.
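As a sketch, the Diagnostic 1 change amounts to something like the following; the exact surrounding code in mdadm-functions may differ between mdadm versions, so this only illustrates which branch gets commented out:

```shell
# Sketch of the Diagnostic 1 edit in
# /usr/share/initramfs-tools/scripts/mdadm-functions
# (illustrative only; the real function differs between versions).
mountroot_fail()
{
	# Incremental branch disabled -- it looped forever on
	# "mdadm: CREATE group disk not found":
	#if mdadm --incremental --run --scan; then
	#	message "Incrementally started RAID arrays."
	#	return 0
	#fi
	if mdadm --assemble --scan --run; then
		message "Assembled and started RAID arrays."
		return 0
	fi
	message "Could not start RAID arrays in degraded mode."
	return 1
}
```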
Diagnostic 2) I now changed several things as we needed this system to
boot up automatically
2.1) I made mountroot_fail *always* execute 'vgchange -ay':
mountroot_fail()
{
	mount_root_res=1
	message "Incrementally starting RAID arrays..."
	if mdadm --incremental --run --scan; then
		message "Incrementally started RAID arrays."
		mount_root_res=0
	else
		if mdadm --assemble --scan --run; then
			message "Assembled and started RAID arrays."
			mount_root_res=0
		else
			message "Could not start RAID arrays in degraded mode."
		fi
	fi
	# note: if someone copies this, she probably should change it to "vgchange -ay || true"
	vgchange -ay
	return $mount_root_res
}
2.2) In /usr/share/initramfs-tools/scripts/init-premount/mdadm, the
mountfail case now exits with 0 instead of the exit code of
mountroot_fail:
case $1 in
	# get pre-requisites
	prereqs)
		prereqs
		exit 0
		;;
	mountfail)
		mountroot_fail
		exit 0
		;;
esac
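For edits like the two above to take effect, the initramfs has to be re-created. On Ubuntu with the standard initramfs-tools setup this is typically done with (run as root):

```shell
# Rebuild the initramfs for the currently running kernel so the
# edited mdadm scripts are picked up.
update-initramfs -u

# Or rebuild for all installed kernels:
#update-initramfs -u -k all
```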
. /scripts/functions
I think that is all I changed, and the system now boots up in degraded mode like a charm.
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/1077650
Title:
booting from raid in degraded mode ends in endless loop
Status in “mdadm” package in Ubuntu:
Confirmed
Bug description:
It's basically the same as reported here:
http://efreedom.com/Question/6-103895/Can-Boot-Degraded-Mdadm-Array
So I just installed a new system, which is supposed to get an
additional disk later on. For now I created md raid1 devices with one
disk missing. To get Ubuntu to boot at all without complaining about a
missing disk, I already added "bootdegraded=yes" to the kernel command
line. And now it ends in an endless loop of
unused devices: <none>
Attempting to start the RAID in degraded mode...
mdadm: CREATE group disk not found
Started the RAID in degraded mode.
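For reference, a RAID1 array with one member intentionally absent can be created as follows; the device names here are only examples, not the reporter's actual layout:

```shell
# Create a two-device RAID1 with only one member present; the
# literal word "missing" reserves the slot for a disk added later
# (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# Once the second disk arrives, it can be added with:
#mdadm --add /dev/md0 /dev/sdb1
```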
ProblemType: Bug
DistroRelease: Ubuntu 12.10
Package: initramfs-tools 0.103ubuntu0.2
ProcVersionSignature: Ubuntu 3.5.0-17.28-generic 3.5.5
Uname: Linux 3.5.0-17-generic x86_64
ApportVersion: 2.6.1-0ubuntu3
Architecture: amd64
Date: Sun Nov 11 16:26:55 2012
PackageArchitecture: all
ProcEnviron:
LANGUAGE=en
TERM=xterm
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: initramfs-tools
UpgradeStatus: Upgraded to quantal on 2012-01-08 (308 days ago)
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1077650/+subscriptions