[3.16.y-ckt stable] Patch "md/raid1: fix test for 'was read error from last working device'." has been added to staging queue
Luis Henriques
luis.henriques at canonical.com
Tue Aug 11 12:56:38 UTC 2015
This is a note to let you know that I have just added a patch titled
md/raid1: fix test for 'was read error from last working device'.
to the linux-3.16.y-queue branch of the 3.16.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.16.y-queue
This patch is scheduled to be released in version 3.16.7-ckt16.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.16.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Luis
------
From 8277dff69cdcb6b05b1c42dadd2f72b51ef6cde9 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb at suse.com>
Date: Fri, 24 Jul 2015 09:22:16 +1000
Subject: md/raid1: fix test for 'was read error from last working device'.
commit 34cab6f42003cb06f48f86a86652984dec338ae9 upstream.
When we get a read error from the last working device, we don't
try to repair it, and don't fail the device. We simply report a
read error to the caller.
However the current test for 'is this the last working device' is
wrong.
When there is only one fully working device, it assumes that a
non-faulty device is that device. However a spare which is rebuilding
would be non-faulty but still not the only working device.
So change the test from "!Faulty" to "In_sync". If ->degraded says
there is only one fully working device and this device is in_sync,
this must be the one.
This bug has existed since we allowed read_balance to read from
a recovering spare in v3.0.
Reported-and-tested-by: Alexander Lyakas <alex.bolshoy at gmail.com>
Fixes: 76073054c95b ("md/raid1: clean up read_balance.")
Signed-off-by: NeilBrown <neilb at suse.com>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
drivers/md/raid1.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index b96ee9d78aa3..9be97e0bd149 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -336,7 +336,7 @@ static void raid1_end_read_request(struct bio *bio, int error)
spin_lock_irqsave(&conf->device_lock, flags);
if (r1_bio->mddev->degraded == conf->raid_disks ||
(r1_bio->mddev->degraded == conf->raid_disks-1 &&
- !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
+ test_bit(In_sync, &conf->mirrors[mirror].rdev->flags)))
uptodate = 1;
spin_unlock_irqrestore(&conf->device_lock, flags);
}
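------
For anyone reading the change out of context, here is a minimal standalone sketch of the condition being fixed. This is not kernel code: the flag bits, the rdev struct and the helper names below are made up for illustration only. It models a degraded two-disk array (degraded == raid_disks - 1) and shows that the old "!Faulty" test also matches a rebuilding spare (non-faulty but not in_sync), so a read error on the spare was wrongly treated as "nowhere left to retry", while the new "In_sync" test only matches the genuinely last working device.

/* Standalone illustration, not kernel code.  FAULTY/IN_SYNC, struct rdev
 * and the helpers below are stand-ins for the md rdev flags and test_bit(). */
#include <stdio.h>
#include <stdbool.h>

enum { FAULTY = 1 << 0, IN_SYNC = 1 << 1 };     /* illustrative flag bits */

struct rdev { const char *name; unsigned flags; };

/* Old check: error came from the "last working device" if it is merely !Faulty. */
static bool last_working_old(int degraded, int raid_disks, const struct rdev *r)
{
	return degraded == raid_disks ||
	       (degraded == raid_disks - 1 && !(r->flags & FAULTY));
}

/* New check: require In_sync, so a rebuilding spare no longer qualifies. */
static bool last_working_new(int degraded, int raid_disks, const struct rdev *r)
{
	return degraded == raid_disks ||
	       (degraded == raid_disks - 1 && (r->flags & IN_SYNC));
}

int main(void)
{
	int raid_disks = 2, degraded = 1;           /* one fully working device */
	const struct rdev good  = { "in_sync disk",     IN_SYNC };
	const struct rdev spare = { "rebuilding spare", 0 };  /* !Faulty, !In_sync */
	const struct rdev *devs[] = { &good, &spare };

	for (int i = 0; i < 2; i++)
		printf("read error on %-17s old test: %d  new test: %d\n",
		       devs[i]->name,
		       last_working_old(degraded, raid_disks, devs[i]),
		       last_working_new(degraded, raid_disks, devs[i]));
	return 0;
}

With the old test both devices report 1 (don't retry, pass the error up); with the new test only the in_sync disk does, so a read error on the rebuilding spare is retried from the remaining good device instead of being returned to the caller.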