[3.13.y.z extended stable] Patch "dm: allocate a special workqueue for deferred device removal" has been added to staging queue

Kamal Mostafa kamal at canonical.com
Wed Aug 6 20:54:30 UTC 2014


This is a note to let you know that I have just added a patch titled

    dm: allocate a special workqueue for deferred device removal

to the linux-3.13.y-queue branch of the 3.13.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.13.y-queue

This patch is scheduled to be released in version 3.13.11.6.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.13.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From 0bae1540a417c41ab2f0e6f89a74d4eea4d649f2 Mon Sep 17 00:00:00 2001
From: Mikulas Patocka <mpatocka at redhat.com>
Date: Sat, 14 Jun 2014 13:44:31 -0400
Subject: dm: allocate a special workqueue for deferred device removal

commit acfe0ad74d2e1bfc81d1d7bf5e15b043985d3650 upstream.

Commit 2c140a246dc ("dm: allow remove to be deferred") introduced a
deferred removal feature for the device mapper.  When this feature is
used (by passing the DM_DEFERRED_REMOVE flag to the DM_DEV_REMOVE_CMD
ioctl) and the user tries to remove a device that is currently in use,
the device is removed automatically once the last user closes it.
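
[Editor's illustration, not part of the patch: a minimal userspace sketch of
requesting deferred removal through the dm control device.  The device name
"example-dev" is hypothetical, and error handling is kept to a minimum; in
practice one would normally use dmsetup or libdevmapper instead of a raw
ioctl.]

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dm-ioctl.h>

int main(void)
{
	struct dm_ioctl io;
	int fd;

	fd = open("/dev/mapper/control", O_RDWR);
	if (fd < 0) {
		perror("open /dev/mapper/control");
		return 1;
	}

	memset(&io, 0, sizeof(io));
	io.version[0] = DM_VERSION_MAJOR;
	io.version[1] = DM_VERSION_MINOR;
	io.version[2] = DM_VERSION_PATCHLEVEL;
	io.data_size  = sizeof(io);
	io.flags      = DM_DEFERRED_REMOVE;	/* defer if the device is still open */
	strncpy(io.name, "example-dev", sizeof(io.name) - 1);

	if (ioctl(fd, DM_DEV_REMOVE, &io) < 0)
		perror("DM_DEV_REMOVE");

	close(fd);
	return 0;
}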

Device mapper used the system workqueue to perform deferred removals.
However, some targets (dm-raid1, dm-mpath, dm-stripe) flush work items
scheduled on the system workqueue from their destructors.  If a
destructor is itself called from the system workqueue during deferred
removal, a deadlock is possible: the workqueue tries to flush itself.
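
[Editor's illustration only; this code is not in the patch or in any dm
target.  It shows the deadlock pattern in its simplest form: a work item
that runs on the system workqueue and then flushes that same workqueue,
thereby waiting on itself.]

#include <linux/workqueue.h>

/* Hypothetical work item used only to demonstrate the hazard. */
static void self_flush_fn(struct work_struct *w)
{
	/*
	 * This runs on the system workqueue (queued via schedule_work).
	 * Flushing that same workqueue from here means waiting for every
	 * queued item, including this one, to finish: the flush can
	 * never complete.
	 */
	flush_scheduled_work();
}

static DECLARE_WORK(self_flush_work, self_flush_fn);

/* schedule_work(&self_flush_work) puts the item on the system workqueue;
 * once it runs, it deadlocks on its own flush. */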

Fix this possible deadlock by introducing a new workqueue for deferred
removals.  We allocate just one workqueue for all dm targets.  The
ability of dm targets to process I/O isn't dependent on deferred removal
of unused targets, so a deadlock due to a shared workqueue isn't
possible.

Also, clean up local_init() to eliminate the potential for returning
success on failure.

Signed-off-by: Mikulas Patocka <mpatocka at redhat.com>
Signed-off-by: Dan Carpenter <dan.carpenter at oracle.com>
Signed-off-by: Mike Snitzer <snitzer at redhat.com>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 drivers/md/dm.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b49c762..72859fa 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -54,6 +54,8 @@ static void do_deferred_remove(struct work_struct *w);

 static DECLARE_WORK(deferred_remove_work, do_deferred_remove);

+static struct workqueue_struct *deferred_remove_workqueue;
+
 /*
  * For bio-based dm.
  * One of these is allocated per bio.
@@ -283,16 +285,24 @@ static int __init local_init(void)
 	if (r)
 		goto out_free_rq_tio_cache;

+	deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1);
+	if (!deferred_remove_workqueue) {
+		r = -ENOMEM;
+		goto out_uevent_exit;
+	}
+
 	_major = major;
 	r = register_blkdev(_major, _name);
 	if (r < 0)
-		goto out_uevent_exit;
+		goto out_free_workqueue;

 	if (!_major)
 		_major = r;

 	return 0;

+out_free_workqueue:
+	destroy_workqueue(deferred_remove_workqueue);
 out_uevent_exit:
 	dm_uevent_exit();
 out_free_rq_tio_cache:
@@ -306,6 +316,7 @@ out_free_io_cache:
 static void local_exit(void)
 {
 	flush_scheduled_work();
+	destroy_workqueue(deferred_remove_workqueue);

 	kmem_cache_destroy(_rq_tio_cache);
 	kmem_cache_destroy(_io_cache);
@@ -414,7 +425,7 @@ static void dm_blk_close(struct gendisk *disk, fmode_t mode)

 	if (atomic_dec_and_test(&md->open_count) &&
 	    (test_bit(DMF_DEFERRED_REMOVE, &md->flags)))
-		schedule_work(&deferred_remove_work);
+		queue_work(deferred_remove_workqueue, &deferred_remove_work);

 	dm_put(md);

--
1.9.1