[Bug 1386637] Re: multipath shows non-mpath disks as being multipath

Mathieu Trudel-Lapierre mathieu.tl at gmail.com
Tue Aug 18 08:40:26 UTC 2015


** Description changed:

  [Impact]
  Users of multipath-tools on systems that have both multipathed and non-multipathed drives, or certain USB devices (some USB devices may still be picked up by multipath-tools if they are multipath-capable).
  
  [Test case]
  - Boot on a system with multipathed disks with the previous version of multipath-tools
  - Upgrade to the new version; verify that /etc/multipath/wwids is created and contains all the right device WWIDs, and that 'sudo multipath -ll' has the same devices.
  - Reboot and verify that the system still boots correctly.
+ - Using Fibre-Channel hardware: verify that multipath still behaves correctly for path degradation scenarios (see below).
  
  After rebooting, you should verify that /proc/mounts correctly lists the
  multipath device (/dev/mapper/*), and that /proc/swaps also lists any
  swap partitions that should be on a multipathed disk as a /dev/dm-*
  device.
+ 
+ PATH DEGRADATION:
+ -- This test requires fibre-channel hardware or another way to disconnect paths to a drive.
+ 1) Verify that on disconnecting a path, it is properly detected as degraded by multipath-tools, and shows as such in 'multipath -ll'.
+ 2) Verify that on reconnection, the path shows up again as ready or OK in 'multipath -ll'.
+ 
  
  [Regression potential]
  Systems in complex setups mixing both multipathed and non-multipathed disks, or with particularly slow disk controllers, may fail to boot or may show delays in booting. Systems requiring a particular set of partitions to be available in early boot (beyond the typical install on /, some may require /usr to be available for some daemons to load at boot) could fail to properly start all their software on boot; these should be considered regressions only if devices do not properly show up in 'sudo multipath -ll' output.
  
  Users seeing issues should preferably include the output of 'sudo
  multipath -v4' in bug reports to help developers with debugging.
  
  ---
- 
  
  Problem Description
  ======================
  Non-multipath disks are classified as multipath after the multipath-tools package is installed.
  
  ---uname output---
  Linux uu04g1 3.16.0-21-generic #28-Ubuntu SMP Mon Oct 6 15:57:32 UTC 2014 ppc64le ppc64le ppc64le GNU/Linux
  
  Machine Type = Tuleta pKVM guest
  
  Steps to Reproduce
  =========================
  1) Create a guest with an unused, non-mpath disk (not the install disk; a qemu img file is fine).
  2) apt-get install multipath-tools.
  3) multipath -ll.
  4) Note that your file-backed disks show up as mpath disks.
  
  Userspace tool common name: multipath
  The userspace tool has the following bit modes: 64
  
  System Dump Info:
    The system is not configured to capture a system dump.
  
  Userspace rpm: multipath-tools
  Userspace tool obtained from project website:  multipath-tools v0.4.9 (05/33, 2016)
  
  == Comment: #1 - Edward R. Cheslek <echeslak at us.ibm.com> - ==
  root at uu04g1:~# multipath -ll
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:1:0 sdb 8:16 active ready running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:2:0 sdc 8:32 active ready running
  root at uu04g1:~# lsblk
  NAME                                              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
  sda                                                 8:0    0   20G  0 disk
  ├─sda1                                              8:1    0    7M  0 part
  ├─sda2                                              8:2    0 19.1G  0 part  /
  └─sda3                                              8:3    0  896M  0 part  [SWAP]
  sdb                                                 8:16   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-1-0
                                                    252:0    0   20G  0 mpath
  sdc                                                 8:32   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-2-0
                                                    252:1    0   20G  0 mpath
  root at uu04g1:~#
  root at uu04g1:~# lsscsi
  [0:0:0:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sda
  [0:0:1:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdb
  [0:0:2:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdc
  
  == Comment: #2 - Edward R. Cheslek <echeslak at us.ibm.com> - 2014-10-08 22:27:09 ==
  This is the xml file of the affected guest.  Please note that all disks are file backed.
  
  root at uu04g1:~# multipath -l
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:1:0 sdb 8:16 active undef running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:2:0 sdc 8:32 active undef running
  
  root at uu04g1:~# multipath -v2
  root at uu04g1:~#

** Description changed:

  [Impact]
  Users of multipath-tools on systems that have both multipathed and non-multipathed drives, or certain USB devices (some USB devices may still be picked up by multipath-tools if they are multipath-capable).
  
  [Test case]
  - Boot on a system with multipathed disks with the previous version of multipath-tools
  - Upgrade to the new version; verify that /etc/multipath/wwids is created and contains all the right device WWIDs, and that 'sudo multipath -ll' has the same devices.
  - Reboot and verify that the system still boots correctly.
- - Using Fibre-Channel hardware: verify that multipath still behaves correctly for path degradation scenarios (see below).
+ - [udev] Using Fibre-Channel hardware: verify that multipath still behaves correctly for path degradation scenarios (see below).
  
  After rebooting, you should verify that /proc/mounts correctly lists the
  multipath device (/dev/mapper/*), and that /proc/swaps also lists any
  swap partitions that should be on a multipathed disk as a /dev/dm-*
  device.
  
  PATH DEGRADATION:
  -- This test requires fibre-channel hardware or another way to disconnect paths to a drive.
  1) Verify that on disconnecting a path, it is properly detected as degraded by multipath-tools, and shows as such in 'multipath -ll'.
  2) Verify that on reconnection, the path shows up again as ready or OK in 'multipath -ll'.
- 
  
  [Regression potential]
  Systems in complex setups mixing both multipathed and non-multipathed disks, or with particularly slow disk controllers, may fail to boot or may show delays in booting. Systems requiring a particular set of partitions to be available in early boot (beyond the typical install on /, some may require /usr to be available for some daemons to load at boot) could fail to properly start all their software on boot; these should be considered regressions only if devices do not properly show up in 'sudo multipath -ll' output.
  
  Users seeing issues should preferably include the output of 'sudo
  multipath -v4' in bug reports to help developers with debugging.
  
  ---
  
  Problem Description
  ======================
  Non-multipath disks are classified as multipath after the multipath-tools package is installed.
  
  ---uname output---
  Linux uu04g1 3.16.0-21-generic #28-Ubuntu SMP Mon Oct 6 15:57:32 UTC 2014 ppc64le ppc64le ppc64le GNU/Linux
  
  Machine Type = Tuleta pKVM guest
  
  Steps to Reproduce
  =========================
  1) Create a guest with an unused, non-mpath disk (not the install disk; a qemu img file is fine).
  2) apt-get install multipath-tools.
  3) multipath -ll.
  4) Note that your file-backed disks show up as mpath disks.
  
  Userspace tool common name: multipath
  The userspace tool has the following bit modes: 64
  
  System Dump Info:
    The system is not configured to capture a system dump.
  
  Userspace rpm: multipath-tools
  Userspace tool obtained from project website:  multipath-tools v0.4.9 (05/33, 2016)
  
  == Comment: #1 - Edward R. Cheslek <echeslak at us.ibm.com> - ==
  root at uu04g1:~# multipath -ll
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:1:0 sdb 8:16 active ready running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:2:0 sdc 8:32 active ready running
  root at uu04g1:~# lsblk
  NAME                                              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
  sda                                                 8:0    0   20G  0 disk
  ├─sda1                                              8:1    0    7M  0 part
  ├─sda2                                              8:2    0 19.1G  0 part  /
  └─sda3                                              8:3    0  896M  0 part  [SWAP]
  sdb                                                 8:16   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-1-0
                                                    252:0    0   20G  0 mpath
  sdc                                                 8:32   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-2-0
                                                    252:1    0   20G  0 mpath
  root at uu04g1:~#
  root at uu04g1:~# lsscsi
  [0:0:0:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sda
  [0:0:1:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdb
  [0:0:2:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdc
  
  == Comment: #2 - Edward R. Cheslek <echeslak at us.ibm.com> - 2014-10-08 22:27:09 ==
  This is the xml file of the affected guest.  Please note that all disks are file backed.
  
  root at uu04g1:~# multipath -l
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:1:0 sdb 8:16 active undef running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:2:0 sdc 8:32 active undef running
  
  root at uu04g1:~# multipath -v2
  root at uu04g1:~#

** Description changed:

  [Impact]
  Users of multipath-tools on systems that have both multipathed and non-multipathed drives, or certain USB devices (some USB devices may still be picked up by multipath-tools if they are multipath-capable).
  
  [Test case]
  - Boot on a system with multipathed disks with the previous version of multipath-tools
  - Upgrade to the new version; verify that /etc/multipath/wwids is created and contains all the right device WWIDs, and that 'sudo multipath -ll' has the same devices.
  - Reboot and verify that the system still boots correctly.
  - [udev] Using Fibre-Channel hardware: verify that multipath still behaves correctly for path degradation scenarios (see below).
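The wwids comparison in the steps above can be sketched as a small script. This is only a sketch: it assumes the stock /etc/multipath/wwids layout (one WWID per line, wrapped in slashes, with '#' comment lines), and the 'multipath -ll' parsing assumes map names without embedded spaces (true for typical FC WWIDs, not for the QEMU example below).

```shell
#!/bin/sh
# Sketch: extract bare WWIDs from /etc/multipath/wwids, which stores them
# one per line wrapped in slashes, e.g. /36005076801810523c80000000089/,
# alongside '#' comment lines.
extract_wwids() {
    grep -v '^#' "$1" | sed 's|^/||; s|/$||'
}

# Usage on a real system (as root); empty diff output means the recorded
# WWIDs and the active maps agree:
#   extract_wwids /etc/multipath/wwids | sort > /tmp/recorded
#   multipath -ll | awk '/dm-[0-9]/ {print $1}' | sort > /tmp/active
#   diff /tmp/recorded /tmp/active
```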
  
  After rebooting, you should verify that /proc/mounts correctly lists the
  multipath device (/dev/mapper/*), and that /proc/swaps also lists any
  swap partitions that should be on a multipathed disk as a /dev/dm-*
  device.
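The /proc checks above can also be scripted; the helper below only classifies a device path, and the usage lines show where to pull the root and swap entries from.

```shell
#!/bin/sh
# Sketch: classify whether a mount/swap source is device-mapper backed.
# A multipathed root should appear in /proc/mounts as /dev/mapper/* and
# swap in /proc/swaps as /dev/dm-*.
is_dm_backed() {
    case "$1" in
        /dev/mapper/*|/dev/dm-*) return 0 ;;
        *)                       return 1 ;;
    esac
}

# Usage on a live system:
#   awk '$2 == "/" {print $1}' /proc/mounts   # root device; should pass
#   awk 'NR > 1 {print $1}' /proc/swaps       # likewise for each swap
```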
  
  PATH DEGRADATION:
  -- This test requires fibre-channel hardware or another way to disconnect paths to a drive.
  1) Verify that on disconnecting a path, it is properly detected as degraded by multipath-tools, and shows as such in 'multipath -ll'.
  2) Verify that on reconnection, the path shows up again as ready or OK in 'multipath -ll'.
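Without fibre-channel hardware, a rough stand-in for pulling a cable is the SCSI sysfs state file. This is a hedged sketch: 'sdb' is a hypothetical path member, and the SYSFS variable exists only so the helpers can be exercised outside a real system.

```shell
#!/bin/sh
# Sketch: take a single SCSI path offline and bring it back via sysfs.
# Pick a path member shown by 'multipath -ll', never the only path to a
# mounted filesystem.  SYSFS defaults to the real /sys.
SYSFS=${SYSFS:-/sys}

fail_path()    { echo offline > "$SYSFS/block/$1/device/state"; }
restore_path() { echo running > "$SYSFS/block/$1/device/state"; }

# After 'fail_path sdb', 'multipath -ll' should flag that path as failed
# or faulty; after 'restore_path sdb' it should return to active/ready.
```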
  
  [Regression potential]
  Systems in complex setups mixing both multipathed and non-multipathed disks, or with particularly slow disk controllers, may fail to boot or may show delays in booting. Systems requiring a particular set of partitions to be available in early boot (beyond the typical install on /, some may require /usr to be available for some daemons to load at boot) could fail to properly start all their software on boot; these should be considered regressions only if devices do not properly show up in 'sudo multipath -ll' output.
  
  Users seeing issues should preferably include the output of 'sudo
  multipath -v4' in bug reports to help developers with debugging.
+ 
+ Udev rules have been changed in this case, because they interfere with
+ the proper behavior of multipath 0.4.9 with the patches backported from
+ 0.5.0; this has the potential to strongly impact the detection of change
+ events on multipath devices. If path degradation or new device detection
+ fails, this should be considered an important regression of
+ multipath-tools.
  
  ---
  
  Problem Description
  ======================
  Non-multipath disks are classified as multipath after the multipath-tools package is installed.
  
  ---uname output---
  Linux uu04g1 3.16.0-21-generic #28-Ubuntu SMP Mon Oct 6 15:57:32 UTC 2014 ppc64le ppc64le ppc64le GNU/Linux
  
  Machine Type = Tuleta pKVM guest
  
  Steps to Reproduce
  =========================
  1) Create a guest with an unused, non-mpath disk (not the install disk; a qemu img file is fine).
  2) apt-get install multipath-tools.
  3) multipath -ll.
  4) Note that your file-backed disks show up as mpath disks.
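For completeness: the usual mitigation for this symptom, independent of the packaged fix, is to blacklist the single-path devices in /etc/multipath.conf. A sketch using the vendor/product strings from the lsscsi output above (syntax per multipath.conf(5)):

```
blacklist {
    device {
        vendor  "QEMU"
        product "QEMU HARDDISK"
    }
}
```

After editing the file, flushing existing maps with 'multipath -F' and restarting multipath-tools would make the blacklist take effect.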
  
  Userspace tool common name: multipath
  The userspace tool has the following bit modes: 64
  
  System Dump Info:
    The system is not configured to capture a system dump.
  
  Userspace rpm: multipath-tools
  Userspace tool obtained from project website:  multipath-tools v0.4.9 (05/33, 2016)
  
  == Comment: #1 - Edward R. Cheslek <echeslak at us.ibm.com> - ==
  root at uu04g1:~# multipath -ll
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:1:0 sdb 8:16 active ready running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 0:0:2:0 sdc 8:32 active ready running
  root at uu04g1:~# lsblk
  NAME                                              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
  sda                                                 8:0    0   20G  0 disk
  ├─sda1                                              8:1    0    7M  0 part
  ├─sda2                                              8:2    0 19.1G  0 part  /
  └─sda3                                              8:3    0  896M  0 part  [SWAP]
  sdb                                                 8:16   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-1-0
                                                    252:0    0   20G  0 mpath
  sdc                                                 8:32   0   20G  0 disk
  └─0QEMU\x20\x20\x20\x20QEMU\x20HARDDISK\x20\x20\x20drive-scsi0-0-2-0
                                                    252:1    0   20G  0 mpath
  root at uu04g1:~#
  root at uu04g1:~# lsscsi
  [0:0:0:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sda
  [0:0:1:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdb
  [0:0:2:0]    disk    QEMU     QEMU HARDDISK    2.0.  /dev/sdc
  
  == Comment: #2 - Edward R. Cheslek <echeslak at us.ibm.com> - 2014-10-08 22:27:09 ==
  This is the xml file of the affected guest.  Please note that all disks are file backed.
  
  root at uu04g1:~# multipath -l
  0QEMU    QEMU HARDDISK   drive-scsi0-0-1-0 dm-0 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:1:0 sdb 8:16 active undef running
  0QEMU    QEMU HARDDISK   drive-scsi0-0-2-0 dm-1 QEMU,QEMU HARDDISK
  size=20G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    `- 0:0:2:0 sdc 8:32 active undef running
  
  root at uu04g1:~# multipath -v2
  root at uu04g1:~#

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to multipath-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1386637

Title:
  multipath shows non-mpath disks as being multipath

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1386637/+subscriptions


