kees at ubuntu.com
Fri Sep 7 00:47:24 BST 2007
On Thu, Sep 06, 2007 at 05:16:54PM -0500, Jerome Haltom wrote:
> Currently there are udev rules which look like they SHOULD work. They
> don't. Regardless of whether they do or don't, they're not exactly
> adequate, in my opinion. They simply attempt to activate all
> non-degraded arrays.
In Gutsy, the udev rules work for systems that have been set up
correctly using UUIDs in their /etc/fstab. If there are
(non-degraded) situations where LVM-on-MD does _not_ boot, it should be
considered a bug, and a new report should be opened.
(NB my desktop system is LVM-on-MD with some amazingly
slow-to-initialize SATA drives, so I've seen a lot of odd problems in
the past that are all fixed currently in Gutsy, AFAICT.)
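For reference, the shape of those rules is roughly as follows (a sketch
only, not the rule file Gutsy actually ships): udev fires mdadm in
incremental mode for each RAID member device as it appears, and
incremental mode only starts an array once all of its members are
present, i.e. non-degraded.

```
# Sketch of an MD auto-assembly udev rule (not the exact shipped file).
# mdadm --incremental adds one member at a time and starts the array
# only when it is complete.
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```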
> What SHOULD happen is they should attempt to activate only non-degraded
> volumes until some timeout is reached, at which time they should
> activate degraded volumes too.
> I think udev should only automatically activate non-degraded volumes.
Right. This is the current configuration. The current udev rules will
not bring up newly degraded volumes, and the consensus seems to be that
this is "correct": there should be some kind of manual
intervention/acknowledgement that you're trying to boot without all the
drives present.
> Second in that is the best way to deal with the degraded timeout.
> Currently the initramfs spins for 5 minutes or something before dropping
> to a console. I think I'd like to alter this so it only spins for 1
> minute or so, after which it attempts to execute scripts
> in /scripts/local-timeout. Basically this would introduce local-timeout
> along with local-top, local-bottom, and folks. mdadm will then install a
> file into here which force activates all arrays whether degraded or not.
> After this script the local script will attempt to find ROOT again, if
> not, it drops to console as usual, if so it can proceed.
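The hook proposed above might be sketched roughly as follows (the script
path, hook name, and mdstat parsing are my assumptions based on the
proposal, not an existing interface):

```shell
#!/bin/sh
# Hypothetical /scripts/local-timeout/mdadm hook: after the root-device
# timeout expires, force-start any arrays that are still assembled in
# degraded mode so the boot can continue.

# Print the md device names from /proc/mdstat-format text on stdin,
# e.g. "md1 : active raid1 sda2" -> "md1".
list_md_devices() {
    awk '/^md[0-9]+ :/ { print $1 }'
}

if [ -r /proc/mdstat ] && command -v mdadm >/dev/null 2>&1; then
    for md in $(list_md_devices < /proc/mdstat); do
        # "mdadm -R" (--run) starts the array even when degraded
        mdadm -R "/dev/$md" || true
    done
fi
```

After this hook runs, the local script would retry its search for ROOT
as described above.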
Right, the initramfs already drops to a shell after the 3 min timeout.
(Unless you have a buggy BIOS that causes your ACPI timer not to tick --
have I mentioned all the crazy stuff I've banged my head against on this
desktop?) I'd agree, 3 minutes seems too long. I think it would be safe
to lower this to 1 minute (or even 30 seconds) since it should only be a
timeout for the root device.
However, I'd be curious to see what happens with large chains of SCSI
drives where each one spins up sequentially. I once played with an
E10000 with 6 cabinets of disks -- it took 45 minutes just to spin up
all the drives. :P However, for special cases, it seems adding the
"rootdelay=[many seconds here]" boot option would be good enough.
Are there any people that currently wait >30 seconds for drives to spin
up?
I like the idea of local-timeout, though I'm still generally against
automatically bringing up the degraded arrays -- however, a detailed report
that tries to figure out what's wrong and spews help to the console would
be nice, something like:
Your root device (UUID=deadbeef-1234-1234-1234-feedfacebeef) was not found.
  If your drives take a very long time to initialize, please specify how
  many seconds to wait for them on the kernel command line, with:

      rootdelay=[seconds]
  Also, there appear to be degraded RAID devices, and your root device may
  depend on the RAID devices being online. The following RAID devices are
  degraded:
      md1 : active raid1 sda2
            3903680 blocks [1/2] [U_]
      md0 : active raid1 sda1
            96256 blocks [1/2] [U_]
If you want to attempt to boot with the RAID in degraded mode, type:
mdadm -R /dev/md1
mdadm -R /dev/md0
This should be relatively straightforward to write, do you want to take
a stab at it? I'd be happy to test. Heck, I was involuntarily testing
this a few weeks ago when one of my SATA controllers failed. ;)
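The report could come from a small helper in the initramfs, e.g. (a
sketch only; the function name, arguments, and exact wording are mine,
not an existing mdadm or initramfs interface):

```shell
#!/bin/sh
# Hypothetical helper: given the missing root device name and the
# contents of /proc/mdstat, print the diagnostic report suggested above.
print_degraded_report() {
    root="$1"     # e.g. "UUID=deadbeef-1234-..."
    mdstat="$2"   # contents of /proc/mdstat

    echo "Your root device ($root) was not found."
    echo "If your drives take a very long time to initialize, please"
    echo "specify how many seconds to wait for them with the boot"
    echo "option rootdelay=[seconds]."

    # A "_" in the [UU]-style status field means a member is missing.
    if echo "$mdstat" | grep -Eq '\[[U_]*_[U_]*\]'; then
        echo "The following RAID devices are degraded:"
        echo "$mdstat"
        echo "To attempt to boot with the RAID in degraded mode, type:"
        echo "$mdstat" | awk '/^md[0-9]+ :/ { print "  mdadm -R /dev/" $1 }'
    fi
}
```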