[Bug 1351528] Re: LVM based boot fails on degraded raid due to missing device tables at premount

dblade listmail at triad.rr.com
Mon Aug 4 22:39:14 UTC 2014


** Summary changed:

- boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
+ LVM based boot fails on degraded raid due to missing device tables at premount

** Summary changed:

- LVM based boot fails on degraded raid due to missing device tables at premount
+ boot fails for LVM on degraded raid due to missing device tables at premount

** Description changed:

- Trusty installation is combined root + /boot within LVM on top of mdraid (type 1.x) RAID1 with one missing disk (degraded).
- [method:  basically create the setup in shell first then point install at the lvm.  at end of install create a chroot and add mdadm pkg]
+ This is a Trusty installation with combined root + /boot within LVM on top of mdraid (type 1.x) RAID1.  The RAID1 was built with one disk missing (degraded).
+ [method:  basically create the raid/VG/LV setup in a shell first, then point the installer at the LVM.  At the end of the install, create a chroot, add the mdadm pkg, and run update-initramfs before rebooting.]
  
- boot fails with the following messages:
-  Incrementally starting RAID arrays...
-  mdadm: CREATE user root not found
-  mdadm: CREATE group disk not found
-  Incrementally starting RAID arrays...
+ The boot process fails with the following messages:
+  Incrementally starting RAID arrays...
+  mdadm: CREATE user root not found
+  mdadm: CREATE group disk not found
+  Incrementally starting RAID arrays...
  and the above messages slowly repeat from this point.
  
  workaround:
  - add break=premount to the grub kernel command line
- - for continue visibility of boot output also remove quiet,  splash and possibly set gxmode 640x480
+ - for continued visibility of the text boot output, also remove quiet and splash, and possibly set gfxmode 640x480
  
  now at the initramfs prompt:
  mdadm --detail /dev/md0 should indicate a state of "clean, degraded" with the array started, so this part is OK
  
  lvm lvs output attributes are as follows:
  -wi-d----  (instead of the expected -wi-a----)
  per the lvs manpage this means the device is present without tables (device mapper?)
  
  FIX: simply run lvm vgchange -ay and exit the initramfs.  This leads
  to a booting system.

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1351528

Title:
  boot fails for LVM on degraded raid due to missing device tables at
  premount

Status in “lvm2” package in Ubuntu:
  New

Bug description:
  This is a Trusty installation with combined root + /boot within LVM on top of mdraid (type 1.x) RAID1.  The RAID1 was built with one disk missing (degraded).
  [method:  basically create the raid/VG/LV setup in a shell first, then point the installer at the LVM.  At the end of the install, create a chroot, add the mdadm pkg, and run update-initramfs before rebooting.]
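
  For reference, a minimal sketch of that setup (the device /dev/sda1, array /dev/md0, and VG/LV names vg0/root are hypothetical placeholders here):

   # hypothetical device/VG names; build a RAID1 degraded from the start (second member "missing")
   mdadm --create /dev/md0 --level=1 --metadata=1.2 --raid-devices=2 /dev/sda1 missing
   # put LVM on top and carve out the combined / + /boot volume
   pvcreate /dev/md0
   vgcreate vg0 /dev/md0
   lvcreate -L 20G -n root vg0
   # ...install onto /dev/vg0/root, then from a chroot into the new system:
   mount /dev/vg0/root /mnt
   for d in dev proc sys; do mount --bind /$d /mnt/$d; done
   chroot /mnt apt-get install mdadm
   chroot /mnt update-initramfs -u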

  The boot process fails with the following messages:
   Incrementally starting RAID arrays...
   mdadm: CREATE user root not found
   mdadm: CREATE group disk not found
   Incrementally starting RAID arrays...
  and the above messages slowly repeat from this point.

  workaround:
  - add break=premount to the grub kernel command line
  - for continued visibility of the text boot output, also remove quiet and splash, and possibly set gfxmode 640x480 (see the example below)
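
  For example, press 'e' on the GRUB menu entry and edit it roughly as follows (the kernel path and root device here are placeholders):

   set gfxmode=640x480
   linux /vmlinuz-3.13.0-generic root=/dev/mapper/vg0-root ro break=premount
   # i.e. append break=premount and drop the quiet/splash options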

  now at the initramfs prompt:
  mdadm --detail /dev/md0 should indicate a state of "clean, degraded" with the array started, so this part is OK
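
  For example (output abridged):

   (initramfs) mdadm --detail /dev/md0
         State : clean, degraded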

  lvm lvs output attributes are as follows:
  -wi-d----  (instead of the expected -wi-a----)
  per the lvs manpage this means the device is present without tables (device mapper?)
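
  A quick way to confirm this at the prompt (LV name hypothetical; the fifth attr character is the state, (a)ctive vs (d)evice present without tables):

   (initramfs) lvm lvs -o lv_name,lv_attr
     LV   Attr
     root -wi-d----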

  FIX: simply run lvm vgchange -ay and exit the initramfs.  This leads
  to a booting system.
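
  Spelled out, at the (initramfs) prompt:

   lvm vgchange -ay   # activate all volume groups, creating the missing device tables
   exit               # leave the initramfs shell; boot then continues normally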

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+subscriptions


