APPLIED: [PATCH][SRU][Artful][Xenial][Trusty] scsi: libiscsi: Allow sd_shutdown on bad transport

Khaled Elmously khalid.elmously at canonical.com
Sat Feb 3 02:00:08 UTC 2018


Applied to Trusty, Xenial and Artful


On 2018-01-23 19:18:01, Rafael David Tinoco wrote:
> BugLink: https://bugs.launchpad.net/bugs/1569925
> 
> [Impact]
> 
>  * open-iscsi users might face hangs during OS shutdown.
>  * hangs can be caused by manual iscsi configuration/setup.
>  * hangs can also be caused by bad systemd unit ordering.
>  * if the transport layer interface vanishes before the lun is
>    disconnected, the shutdown will hang.
>  * check comment #89 for the fix decision.
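>  * to illustrate the shape of the fix (a paraphrase of the upstream
>    change, NOT the verbatim patch -- the helper name below is made up
>    for illustration; the symbols are the standard kernel/libiscsi
>    ones): when the session is already broken AND the kernel has left
>    SYSTEM_RUNNING, the command submission path completes the command
>    with DID_NO_CONNECT instead of asking the SCSI midlayer to retry
>    it, so sd_shutdown() can finish and the reboot proceeds:
> 
>    #include <linux/kernel.h>    /* system_state, SYSTEM_RUNNING */
>    #include <scsi/scsi.h>       /* DID_NO_CONNECT */
>    #include <scsi/scsi_cmnd.h>  /* struct scsi_cmnd */
>    #include <scsi/libiscsi.h>   /* struct iscsi_session, ISCSI_STATE_FAILED */
> 
>    static bool iscsi_fail_cmd_on_shutdown(struct iscsi_session *session,
>                                           struct scsi_cmnd *sc)
>    {
>            /* only act when the transport is already down AND the box is
>             * shutting down; during normal operation the command keeps
>             * being retried so session recovery can pick it up */
>            if (session->state == ISCSI_STATE_FAILED &&
>                system_state != SYSTEM_RUNNING) {
>                    sc->result = DID_NO_CONNECT << 16;  /* complete, don't retry */
>                    return true;
>            }
>            return false;
>    }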
> 
> [Test Case]
> 
>  * a simple way of reproducing the kernel hang is to disable
>    the open-iscsi logouts. this simulates a situation in which
>    a service has shut down the network interface used by the
>    transport layer before a proper iscsi logout was done.
> 
>    $ log into all iscsi luns (e.g. iscsiadm -m node -L all)
> 
>    $ systemctl edit --full open-iscsi.service
>    ...
>    #ExecStop=/lib/open-iscsi/logout-all.sh
>    ...
> 
>    $ sudo reboot # this will make the server hang forever
>                  # on shutdown
> 
> [Regression Potential]
> 
>  * the risk of regression is low because the change acts on the
>    iscsi transport layer code ONLY when the server is in the
>    shutdown state.
> 
>  * any error in logic would only appear during shutdown and
>    would not cause any harm to data.
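> 
>  * as a short illustration of that scope (sketch only, not the
>    verbatim patch; the helper name is made up): the new behaviour is
>    guarded by the kernel's global system_state, which only leaves
>    SYSTEM_RUNNING once halt/power-off/reboot has begun, so running
>    systems keep the existing code path:
> 
>    #include <linux/kernel.h>   /* system_state, SYSTEM_RUNNING */
> 
>    static bool iscsi_system_is_shutting_down(void)
>    {
>            /* true only for e.g. SYSTEM_HALT, SYSTEM_POWER_OFF,
>             * SYSTEM_RESTART -- i.e. once shutdown has started */
>            return system_state != SYSTEM_RUNNING;
>    }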
> 
> [Other Info]
> 
>  * ORIGINAL BUG DESCRIPTION
> 
> I have 4 servers running the latest 16.04 updates from the development branch (as of right now).
> 
> Each server is connected to NetApp storage using the iscsi software initiator.  There are a total of 56 volumes spread across two NetApp arrays.  Each volume has 4 paths available to it, which are being managed by device mapper.
> 
> While logged into the iscsi sessions, all I have to do is reboot the server and I get a hang.
> 
> I see a message that says:
> 
>   "Reached target Shutdown"
> 
> followed by
> 
>   "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"
> 
> and then I see 8 lines that say:
> 
>   "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311815***, last ping 43118164**, now 4311817***"
>   NOTE: the actual values of the *'s differ for each line above.
> 
> This seems like a bug somewhere but I am unaware of any additional logging that I could turn on to pinpoint the problem.
> 
> Note I also have similar setups that are not doing iscsi and they don't have this problem.
> 
> Here is a screenshot of what I see on the shell when I try to reboot:
> 
> (https://launchpadlibrarian.net/291303059/Screenshot.jpg)
> 
> This is being tracked in NetApp bug tracker CQ number 860251.
> 
> If I log out of all iscsi sessions before rebooting then I do not experience the hang:
> 
> iscsiadm -m node -U all
> 
> We are wondering if this could be some kind of shutdown ordering problem: the network devices have already disappeared, and then iscsi tries to perform some operation (hence the ping timeouts).
> 
> -- 
> kernel-team mailing list
> kernel-team at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/kernel-team



