[Bug 1623700] Re: [SRU] multipath iscsi does not logout of sessions on xenial

Hua Zhang joshua.zhang at canonical.com
Wed Mar 22 12:10:58 UTC 2017


@Gustavo,

I can't reproduce your problem; this patch works well for me. I ran two
experiments.

The first experiment was WITHOUT the multipath-tools patch [1] + WITHOUT
this os-brick patch; the test result is at link [2]. We can see that the
multipath device can be deleted by 'multipath -r'.

Mar 22 10:33:52 juju-zhhuabj-machine-9 nova-compute[22305]: 2017-03-22 10:33:52.233 22305 WARNING os_brick.initiator.linuxscsi [req-0536068d-110c-43a4-82e4-941cdb715042 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Couldn't find multipath device /dev/mapper/360000000000000000e00000000010001
 
The second experiment was WITHOUT the multipath-tools patch [1] + WITH this os-brick patch; the test result is at link [3]. We can see that the multipath device does not get deleted, because this os-brick patch has removed 'multipath -r'.

Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.520 17329 DEBUG oslo_concurrency.lockutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Lock "connect_volume" acquired by "os_brick.initiator.connector.disconnect_volume" :: waited 0.001s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.523 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -ll /dev/sda execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.705 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -ll /dev/sda" returned: 0 in 0.183s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.707 17329 DEBUG os_brick.initiator.connector [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] multipath ['-ll', u'/dev/sda']: stdout=360000000000000000e00000000010001 dm-0 IET,VIRTUAL-DISK
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: size=1.0G features='0' hwhandler='0' wp=rw
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: |-+- policy='round-robin 0' prio=1 status=active
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: | `- 4:0:0:1 sda 8:0  active ready running
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: `-+- policy='round-robin 0' prio=1 status=enabled
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]:   `- 5:0:0:1 sdb 8:16 active ready running
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]:  stderr= _run_multipath /usr/lib/python2.7/dist-packages/os_brick/initiator/connector.py:1286
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.709 17329 DEBUG os_brick.initiator.linuxscsi [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] remove multipath device /dev/sda remove_multipath_device /usr/lib/python2.7/dist-packages/os_brick/initiator/linuxscsi.py:123
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.710 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l /dev/sda execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.826 17329 DEBUG oslo_concurrency.processutils [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l /dev/sda" returned: 0 in 0.116s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.829 17329 DEBUG os_brick.initiator.linuxscsi [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] Found multipath device = /dev/mapper/360000000000000000e00000000010001 find_multipath_device /usr/lib/python2.7/dist-packages/os_brick/initiator/linuxscsi.py:301
Mar 22 11:06:20 juju-zhhuabj-machine-9 nova-compute[17329]: 2017-03-22 11:06:20.829 17329 DEBUG os_brick.initiator.linuxscsi [req-bbcf6687-4e5d-4748-9887-726ff8489008 3f685b7c349c4734afc1a9a87968aca5 52bc463e2bbd46c792ca0d17effd2a86 - - -] multipath LUNs to remove [{'device': '/dev/sda', 'host': '4', 'id': '0', 'channel': '0', 'lun': '1'}, {'device': '/dev/sdb', 'host': '5', 'id': '0', 'channel': '0', 'lun': '1'}] remove_multipath_device /usr/lib/python2.7/dist-packages/os_brick/initiator/linuxscsi.py:127
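The os-brick patch in question essentially drops the forced reload from _rescan_multipath. A rough Python sketch of the behaviour change follows; this is NOT the actual os-brick diff, and the class and runner are stand-ins — only the _run_multipath/_rescan_multipath names come from the log paths above:

```python
# Rough sketch of the behaviour change -- NOT the actual os-brick diff.
# _run_multipath is the helper seen in the log paths above
# (os_brick/initiator/connector.py); the class here is a stand-in.

class ISCSIConnectorSketch(object):
    def __init__(self, runner):
        # runner: callable executing a multipath command line, e.g. a
        # wrapper around processutils.execute through nova-rootwrap.
        self._runner = runner

    def _run_multipath(self, args):
        return self._runner(args)

    def _rescan_multipath(self):
        # Before the patch: force multipathd to reload all maps, which
        # can delete and re-create /dev/mapper/<wwid>:
        #     self._run_multipath(['-r'])
        # After the patch: do nothing here; multipathd discovers new
        # paths on its own (via udev events), so no reload is needed.
        pass
```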

[1] https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1621340
[2] http://paste.ubuntu.com/24227639/
[3] http://paste.ubuntu.com/24227824/

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to python-os-brick in Ubuntu.
https://bugs.launchpad.net/bugs/1623700

Title:
  [SRU] multipath iscsi does not logout of sessions on xenial

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in os-brick:
  Fix Released
Status in python-os-brick package in Ubuntu:
  Fix Released
Status in python-os-brick source package in Xenial:
  In Progress
Status in python-os-brick source package in Yakkety:
  Triaged

Bug description:
  [Impact]

   * The reload (multipath -r) in _rescan_multipath can cause
  /dev/mapper/<wwid> to be deleted and re-created (bug #1621340 tracks
  this problem), which causes many more downstream OpenStack issues.
  For example, an os.stat(mdev) issued by _discover_mpath_device()
  right in between the delete and the re-create will fail to find the
  file. Likewise, when detaching a volume the iSCSI sessions are not
  logged out, which leaves behind a mpath device and the iSCSI
  /dev/disk/by-path devices as broken LUNs. So we should stop calling
  multipath -r when attaching/detaching iSCSI volumes; multipath will
  load devices on its own.
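
  The failure window described above can be illustrated with a minimal
  sketch (hypothetical helper; mdev stands for the /dev/mapper/<wwid>
  path):

```python
import errno
import os

def mapper_device_exists(mdev):
    """Return True if the multipath device node currently exists.

    Hypothetical helper illustrating the window hit by
    _discover_mpath_device(): if 'multipath -r' deletes and re-creates
    /dev/mapper/<wwid>, an os.stat() issued in between raises ENOENT.
    """
    try:
        os.stat(mdev)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return False  # node vanished mid-reload
        raise
```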

  [Test Case]

   * Enable the iSCSI driver and cinder/nova multipath
   * Detach an iSCSI volume
   * Check that the devices/symlinks mentioned below do not get messed up
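
  As a sketch of the last check, the 'iscsiadm -m session' lines shown
  below can be parsed to confirm which sessions remain after the detach
  (hypothetical helper; the line format is taken from the output in this
  report):

```python
import re

# Matches lines like:
#   tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:... (non-flash)
_SESSION_RE = re.compile(
    r'^(?P<transport>\w+): \[(?P<sid>\d+)\] '
    r'(?P<portal>\S+?),\S+ (?P<target>\S+)')

def parse_iscsi_sessions(output):
    """Parse 'iscsiadm -m session' output into (sid, portal, target)."""
    sessions = []
    for line in output.splitlines():
        m = _SESSION_RE.match(line.strip())
        if m:
            sessions.append(
                (m.group('sid'), m.group('portal'), m.group('target')))
    return sessions
```

  After a successful detach with the fix applied, the parsed list for the
  detached volume's portals should be empty.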

  [Regression Potential]

   * None

  
  stack at xenial-devstack-master-master-20160914-092014:~$ nova volume-attach 6e1017a7-6dea-418f-ad9b-879da085bd13 d1d68e04-a217-44ea-bb74-65e0de73e5f8
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | d1d68e04-a217-44ea-bb74-65e0de73e5f8 |
  | serverId | 6e1017a7-6dea-418f-ad9b-879da085bd13 |
  | volumeId | d1d68e04-a217-44ea-bb74-65e0de73e5f8 |
  +----------+--------------------------------------+

  stack at xenial-devstack-master-master-20160914-092014:~$ cinder list
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | d1d68e04-a217-44ea-bb74-65e0de73e5f8 | in-use | -    | 1    | pure-iscsi  | false    | 6e1017a7-6dea-418f-ad9b-879da085bd13 |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

  stack at xenial-devstack-master-master-20160914-092014:~$ nova list
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                        |
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+
  | 6e1017a7-6dea-418f-ad9b-879da085bd13 | test | ACTIVE | -          | Running     | public=172.24.4.12, 2001:db8::b |
  +--------------------------------------+------+--------+------------+-------------+---------------------------------+

  stack at xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m session
  tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  stack at xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m node
  10.0.1.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.5.11:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.5.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873
  10.0.1.10:3260,-1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873

  stack at xenial-devstack-master-master-20160914-092014:~$ sudo tail -f /var/log/syslog
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get udev uid: Invalid argument
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sysfs uid: Invalid argument
  Sep 14 22:33:14 xenial-qemu-tester multipath: dm-0: failed to get sgio uid: No such file or directory
  Sep 14 22:33:14 xenial-qemu-tester systemd[1347]: dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
  Sep 14 22:33:14 xenial-qemu-tester systemd[1347]: dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
  Sep 14 22:33:14 xenial-qemu-tester systemd[1]: dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-scsi\x2d3624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
  Sep 14 22:33:14 xenial-qemu-tester systemd[1]: dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device: Dev dev-disk-by\x2did-wwn\x2d0x624a93709a738ed78583fd12003fb774.device appeared twice with different sysfs paths /sys/devices/platform/host6/session5/target6:0:0/6:0:0:1/block/sda and /sys/devices/virtual/block/dm-0
  Sep 14 22:33:14 xenial-qemu-tester kernel: [22362.163521] audit: type=1400 audit(1473892394.556:21): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e1017a7-6dea-418f-ad9b-879da085bd13" pid=32665 comm="apparmor_parser"
  Sep 14 22:33:14 xenial-qemu-tester kernel: [22362.173614] audit: type=1400 audit(1473892394.568:22): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e1017a7-6dea-418f-ad9b-879da085bd13//qemu_bridge_helper" pid=32665 comm="apparmor_parser"
  Sep 14 22:33:14 xenial-qemu-tester iscsid: Connection8:0 to [target: iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873, portal: 10.0.5.11,3260] through [iface: default] is operational now

  stack at xenial-devstack-master-master-20160914-092014:~$ nova volume-detach 6e1017a7-6dea-418f-ad9b-879da085bd13 d1d68e04-a217-44ea-bb74-65e0de73e5f8
  stack at xenial-devstack-master-master-20160914-092014:~$ sudo iscsiadm -m session
  tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)

  stack at xenial-devstack-master-master-20160914-092014:~$ cinder list
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+
  | ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+
  | d1d68e04-a217-44ea-bb74-65e0de73e5f8 | available | -    | 1    | pure-iscsi  | false    |             |
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+

  stack at xenial-devstack-master-master-20160914-092014:~$ iscsiadm -m session
  tcp: [5] 10.0.1.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [6] 10.0.5.10:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [7] 10.0.1.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)
  tcp: [8] 10.0.5.11:3260,1 iqn.2010-06.com.purestorage:flasharray.3adbe40b49bac873 (non-flash)

  stack at xenial-devstack-master-master-20160914-092014:~$ sudo tail -f /var/log/syslog
  Sep 14 22:48:10 xenial-qemu-tester kernel: [23257.736455]  connection6:0: detected conn error (1020)
  Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742036]  connection5:0: detected conn error (1020)
  Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742066]  connection7:0: detected conn error (1020)
  Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742139]  connection8:0: detected conn error (1020)
  Sep 14 22:48:13 xenial-qemu-tester kernel: [23260.742156]  connection6:0: detected conn error (1020)
  Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747638]  connection5:0: detected conn error (1020)
  Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747666]  connection7:0: detected conn error (1020)
  Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747710]  connection8:0: detected conn error (1020)
  Sep 14 22:48:16 xenial-qemu-tester kernel: [23263.747737]  connection6:0: detected conn error (1020)
  Sep 14 22:48:16 xenial-qemu-tester iscsid: message repeated 67 times: [ conn 0 login rejected: initiator failed authorization with target]
  Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.753999]  connection6:0: detected conn error (1020)
  Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754019]  connection8:0: detected conn error (1020)
  Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754105]  connection5:0: detected conn error (1020)
  Sep 14 22:48:19 xenial-qemu-tester kernel: [23266.754146]  connection7:0: detected conn error (1020)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1623700/+subscriptions


