[Bug 1856871] Re: i/o error if next unused loop device is queried
Eric Desrochers
eric.desrochers at canonical.com
Wed Jan 15 02:31:51 UTC 2020
I reproduced the behaviour using the 5.5 upstream kernel by:
1) Mounting a loop device
2) Setting up ftrace on all loop functions for capture purposes
3) Then unmounting the loop device
trace_pipe reveals the following:
"umount-1850 [000] .... 471.727511: loop_release_xfer <-__loop_clr_fd"
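Step 2 above can be sketched as follows. This is a dry-run that only prints the ftrace commands rather than applying them (paths assume tracefs is mounted at /sys/kernel/tracing; run the printed lines as root to actually enable tracing):

```shell
# Dry-run sketch of the ftrace setup from step 2 above.
# Assumes tracefs is mounted at /sys/kernel/tracing; run the printed
# lines as root to trace every loop_* kernel function.
ftrace_setup="
echo 'loop_*' > /sys/kernel/tracing/set_ftrace_filter
echo function > /sys/kernel/tracing/current_tracer
echo 1 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace_pipe
"
printf '%s\n' "$ftrace_setup"
```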
As cascardo mentioned earlier, it might be in the way loop devices are
detached. Now that I know which function to look at, I'll investigate
further.
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to parted in Ubuntu.
https://bugs.launchpad.net/bugs/1856871
Title:
i/o error if next unused loop device is queried
Status in linux package in Ubuntu:
Incomplete
Status in parted package in Ubuntu:
New
Status in snapd package in Ubuntu:
Invalid
Status in systemd package in Ubuntu:
New
Status in udev package in Ubuntu:
New
Bug description:
This is reproducible in Bionic and later.
Here's an example running 'focal':
$ lsb_release -cs
focal
$ uname -r
5.3.0-24-generic
The error is:
blk_update_request: I/O error, dev loop2, sector 0
and on more recent kernels:
kernel: [18135.185709] blk_update_request: I/O error, dev loop18,
sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
How to trigger it:
$ sosreport -o block
or, more precisely, the command causing the situation inside the block plugin:
$ parted -s $(losetup -f) unit s print
https://github.com/sosreport/sos/blob/master/sos/plugins/block.py#L52
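For context, `losetup -f` reports the first loop device that has no backing file. A rough shell equivalent is sketched below (a hypothetical helper, not losetup's actual implementation; it relies on the `loop/backing_file` sysfs attribute existing only for bound devices, iterates in lexicographic rather than numeric order, and takes the sysfs root as a parameter so it can be exercised against a fake tree):

```shell
# Sketch of how `losetup -f` picks the next free device: the first
# /dev/loopN whose sysfs node has no loop/backing_file attribute.
# $1 lets us point at a fake /sys/block tree for testing (default: real one).
next_free_loop() {
  sys="${1:-/sys/block}"
  for d in "$sys"/loop*; do
    [ -e "$d" ] || continue              # glob did not match anything
    if [ ! -e "$d/loop/backing_file" ]; then
      echo "/dev/$(basename "$d")"       # first device with no backing file
      return 0
    fi
  done
  return 1
}
```

On the system described here, this would print /dev/loop2 before the snap install and /dev/loop3 after.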
but if I run it on the unused loop device after that, in this case
/dev/loop3 (which is also unused), there are no errors.
While I agree that sosreport shouldn't query unused loop devices,
there is definitely something going on with the next unused loop
device.
What differentiates loop2 from loop3 and the other unused ones?
Three things I have noticed so far:
* loop2 is the next unused loop device (losetup -f)
* A reboot is needed (if some loop modification (snap install, mount loop, ...) has been made at runtime)
* loop2 (or whatever the next unused one is) has non-zero stats, as opposed to the other unused loop devices. The stats already exist for the next unused loop device right after the system boots.
/sys/block/loop2/stat
::::::::::::::
2 0 10 0 1 0 0 0 0 0 0
2 = number of read I/Os processed
10 = number of sectors read
1 = number of write I/Os processed
Explanation of each column:
https://www.kernel.org/doc/html/latest/block/stat.html
while /dev/loop3 doesn't
/sys/block/loop3/stat
::::::::::::::
0 0 0 0 0 0 0 0 0 0 0
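The stat layout can be unpacked with a short snippet; this uses loop2's sample line from above, with field names taken from the kernel's block/stat documentation (on a live system you would read from /sys/block/loop2/stat instead):

```shell
# Parse a /sys/block/<dev>/stat line; field order per the kernel's
# block/stat documentation. The sample line is loop2's from this report.
stat_line="2 0 10 0 1 0 0 0 0 0 0"
read -r read_ios read_merges read_sectors read_ticks \
        write_ios write_merges write_sectors write_ticks \
        in_flight io_ticks time_in_queue <<EOF
$stat_line
EOF
echo "read I/Os: $read_ios, sectors read: $read_sectors, write I/Os: $write_ios"
```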
This tells me that something during the boot process most likely
acquired the next unused loop device (on purpose or not) and possibly
didn't release it cleanly.
If loop2 is generating errors, and I install a snap, the snap squashfs
will take loop2, making loop3 the next unused loop device.
If I query loop3 with 'parted' right after, no errors.
If I reboot and query loop3 again, then I'll have an error.
To trigger the error, it needs to happen after a reboot, and it only
impacts the first unused loop device available (losetup -f).
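One quick way to check this theory across all devices is to list which loop devices already have non-zero stat counters right after boot. A hypothetical helper along these lines (the sysfs root is a parameter so it can be tried against a fake tree):

```shell
# Print the stat files of loop devices whose counters are not all zero,
# i.e. devices something has already touched. Per this bug, on a freshly
# booted system only the next unused device (losetup -f) should show up.
touched_loops() {
  sys="${1:-/sys/block}"
  for f in "$sys"/loop*/stat; do
    [ -e "$f" ] || continue        # glob did not match anything
    # any non-zero digit in the line means the device saw I/O
    if grep -q '[1-9]' "$f"; then
      echo "$f"
    fi
  done
}
```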
This was tested with focal/systemd, which is very close to the latest
upstream code.
It has also been tested with the latest v5.5 mainline kernel.
For now, I don't think it's a kernel problem, I'm more thinking of a
userspace misbehaviour dealing with loop device (or block device) at
boot.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1856871/+subscriptions