[3.16.y-ckt stable] Patch "workqueue: handle NUMA_NO_NODE for unbound pool_workqueue lookup" has been added to the 3.16.y-ckt tree
Luis Henriques
luis.henriques at canonical.com
Thu Feb 25 18:34:36 UTC 2016
This is a note to let you know that I have just added a patch titled
workqueue: handle NUMA_NO_NODE for unbound pool_workqueue lookup
to the linux-3.16.y-queue branch of the 3.16.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-3.16.y-queue
This patch is scheduled to be released in version 3.16.7-ckt25.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.16.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Luis
---8<------------------------------------------------------------
From 6bfeca86dab7770b926bb3d2a86fc0c15ab2499b Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj at kernel.org>
Date: Wed, 3 Feb 2016 13:54:25 -0500
Subject: workqueue: handle NUMA_NO_NODE for unbound pool_workqueue lookup
commit d6e022f1d207a161cd88e08ef0371554680ffc46 upstream.
When looking up the pool_workqueue to use for an unbound workqueue,
workqueue assumes that the target CPU is always bound to a valid NUMA
node. However, currently, when a CPU goes offline, the mapping is
destroyed and cpu_to_node() returns NUMA_NO_NODE.
This has always been broken, but it didn't trigger often enough
before 874bbfe600a6 ("workqueue: make sure delayed work run in local
cpu").  After that commit, workqueue forcefully assigns the local CPU
to delayed work items that have no explicit target CPU, to fix a
different issue.  This widens the window in which a CPU can go
offline while a delayed work item is pending, so delayed work items
can be dispatched with their target CPU set to an already-offlined
CPU.  The resulting NUMA_NO_NODE mapping makes workqueue try to queue
the work item on a NULL pool_workqueue and thus crash.
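To make the failure path concrete, here is an illustrative sketch
(not part of the patch; it condenses the 3.16 code flow, using the
real helper and field names from kernel/workqueue.c, with @wq being
the target unbound workqueue):

	/* A delayed work item without an explicit target CPU gets
	 * the local CPU assigned when it is queued. */
	int cpu = raw_smp_processor_id();

	/* ... the CPU goes offline before the timer fires ... */

	int node = cpu_to_node(cpu);	/* now returns NUMA_NO_NODE (-1) */

	/* unbound_pwq_by_node() then indexes numa_pwq_tbl[] with -1;
	 * the resulting pool_workqueue is NULL and queueing on it
	 * crashes. */
	struct pool_workqueue *pwq =
		rcu_dereference_raw(wq->numa_pwq_tbl[node]);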
While 874bbfe600a6 has since been reverted for a different reason,
making the bug less visible again, it can still happen.  Fix it by
mapping NUMA_NO_NODE to the default pool_workqueue from
unbound_pwq_by_node().
This is a temporary workaround.  The long-term solution is keeping
the CPU -> NODE mapping stable across CPU off/online cycles, which is
being worked on.
Signed-off-by: Tejun Heo <tj at kernel.org>
Reported-by: Mike Galbraith <umgwanakikbuti at gmail.com>
Cc: Tang Chen <tangchen at cn.fujitsu.com>
Cc: Rafael J. Wysocki <rafael at kernel.org>
Cc: Len Brown <len.brown at intel.com>
Link: http://lkml.kernel.org/g/1454424264.11183.46.camel@gmail.com
Link: http://lkml.kernel.org/g/1453702100-2597-1-git-send-email-tangchen@cn.fujitsu.com
[ luis: backported to 3.16: adjusted context ]
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
kernel/workqueue.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cb7db323d1fb..6ab1f683ac49 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -553,6 +553,16 @@ static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
int node)
{
assert_rcu_or_wq_mutex(wq);
+
+ /*
+ * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
+ * delayed item is pending. The plan is to keep CPU -> NODE
+ * mapping valid and stable across CPU on/offlines. Once that
+ * happens, this workaround can be removed.
+ */
+ if (unlikely(node == NUMA_NO_NODE))
+ return wq->dfl_pwq;
+
return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}