[Bug 1196693] Re: Cannot boot degraded RAID1 array with LUKS partition

xor 1196693 at bugs.launchpad.net
Sat Nov 3 22:44:03 UTC 2018


Ubuntu 18.04 is still affected; I reproduced this on a fresh test
installation. Please update the bug tracker entry to reflect this.

I would be really happy if this could be fixed; it's been 5 years, and
this breaks using RAID together with dm-crypt :(

Steps to reproduce:

- Install via the network installer and manually create the following partition layout (a rough hand-built equivalent is sketched after these steps):
{sda1, sdb1} -> md RAID1 -> btrfs -> /boot
{sda2, sdb2} -> md RAID1 -> dm-crypt -> btrfs -> /

- After the system is installed and confirmed as working, shut down and
remove sdb

- Boot will now hang at "Begin: Waiting for encrypted source device
...". That will eventually time out and drop to an initramfs shell,
complaining that the disk doesn't exist.
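
For reference, the same layout can be built by hand roughly like this
(a sketch only; device names are illustrative, and the installer's
partitioner does the equivalent):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.btrfs /dev/md0                        # becomes /boot
cryptsetup luksFormat /dev/md1             # root container
cryptsetup open /dev/md1 cryptroot         # "luksOpen" on older cryptsetup
mkfs.btrfs /dev/mapper/cryptroot           # becomes /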

** Changed in: initramfs-tools (Ubuntu)
       Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to initramfs-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1196693

Title:
  Cannot boot degraded RAID1 array with LUKS partition

Status in initramfs-tools package in Ubuntu:
  Confirmed
Status in initramfs-tools-ubuntu-core package in Ubuntu:
  Invalid

Bug description:
  When I pull a disk out of my 12.04.2 RAID1 setup, which contains a
  LUKS container inside an md device, my system won't boot. Plugging the
  second disk back in worked, but I wanted to replace my disks, and if a
  disk is actually broken you don't have that option...

  Debugging the initramfs boot sequence seems to indicate that the
  crypto handling is done before degraded array handling, rendering the
  BOOT_DEGRADED flag ineffective.

  I've looked at other bugs (#1077650 #1003309 #728435 #106215) but I
  think it's a different problem.

  
  Situation

  I've got an LVM-in-LUKS-in-RAID1 setup, with a separate RAID'ed
  boot partition.

  # cat /proc/mdstat 
  Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
  md126 : active raid1 sda2[0] sdb2[2]
        523968 blocks super 1.2 [2/2] [UU]
        
  md127 : active raid1 sda1[0] sdb1[2]
        976106048 blocks super 1.2 [2/2] [UU]
        
  unused devices: <none>

  md127 contains a LUKS container, called ugh2_lvm.
  ugh2_lvm contains an LVM setup with a volume group called ugh2_vg.
  ugh2_vg contains LVs called "root" (the root filesystem) and "swap".

  # mount | grep /dev/m
  /dev/mapper/ugh2_vg-root on / type ext4 (rw,relatime)
  /dev/md126 on /boot type ext4 (rw)

  # cat crypttab 
  ugh2_lvm UUID=69ade3d3-817d-42ee-991b-ebf86e9fe685 none luks

  # grep 'DEGRADED=' /etc/initramfs-tools/conf.d/mdadm 
  BOOT_DEGRADED=true
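
  For what it's worth, a quick way to double-check that this setting is
  actually copied into the generated initramfs (a hedged sketch; the
  exact paths inside the image may differ between releases):

  # lsinitramfs /boot/initrd.img-$(uname -r) | grep -e conf.d/mdadm -e cryptroot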

  
  Symptoms

  Booting seems to hang with the message "evms_activate is not available".
  I'm not using EVMS, so the message is not really indicative of the
  problem. You may eventually get dropped to a shell (after 3 minutes? I
  saw a 180-second timeout in the scripts somewhere), but that took too
  long for me.

  
  Diagnosis

  Interrupting the boot process with break=premount let me take a look
  at the situation. It turns out the degraded arrays were assembled, but
  inactive; the BOOT_DEGRADED handling activates the degraded arrays
  (scripts/local-premount/mdadm). However, it does not get the chance to
  do so before the scripts try to open the LUKS device with the
  configured UUID, since that is done by scripts/local-top/cryptroot,
  and "*-top" scripts run before "*-premount" scripts.

  
  Workaround / solution

  I made it work again by linking /usr/share/initramfs-tools/scripts/local-premount/mdadm
  -> /etc/initramfs-tools/scripts/local-top/mdadm, then rebuilding my
  initramfs (update-initramfs -u).

  It seems to work well. I'm not sure whether it's the best or even a
  clean approach.
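
  Concretely, this boils down to something like the following (a sketch
  of my workaround; the link direction is how I set it up, i.e. the
  premount mdadm script also gets run at the local-top stage):

  ln -s /usr/share/initramfs-tools/scripts/local-premount/mdadm \
        /etc/initramfs-tools/scripts/local-top/mdadm
  update-initramfs -u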

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1196693/+subscriptions


