[3.16.y-ckt stable] Patch "sched: Use dl_bw_of() under RCU read lock" has been added to staging queue
Luis Henriques
luis.henriques at canonical.com
Mon Nov 24 15:02:05 UTC 2014
This is a note to let you know that I have just added a patch titled
sched: Use dl_bw_of() under RCU read lock
to the linux-3.16.y-queue branch of the 3.16.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.16.y-queue
This patch is scheduled to be released in version 3.16.7-ckt2.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.16.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Luis
------
From 49bef5ac78f9fcebbe87942e867cef864c1147cd Mon Sep 17 00:00:00 2001
From: Kirill Tkhai <ktkhai at parallels.com>
Date: Mon, 22 Sep 2014 22:36:24 +0400
Subject: sched: Use dl_bw_of() under RCU read lock
commit 66339c31bc3978d5fff9c4b4cb590a861def4db2 upstream.
dl_bw_of() dereferences rq->rd, which has to be read under the RCU read lock:
the probability of a use-after-free here is not zero.
Also add a lockdep assert to dl_bw_cpus().
Signed-off-by: Kirill Tkhai <ktkhai at parallels.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Cc: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Link: http://lkml.kernel.org/r/20140922183624.11015.71558.stgit@localhost
Signed-off-by: Ingo Molnar <mingo at kernel.org>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
kernel/sched/core.c | 10 ++++++++++
1 file changed, 10 insertions(+)
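
As a quick illustration of the locking rule this patch enforces, here is a
minimal caller-side sketch (assuming kernel context; dl_bw_snapshot() is a
hypothetical helper, not part of the patch). It mirrors what
sched_dl_global_constraints() does after the change: hold the RCU read lock
across every dereference of rq->rd through dl_bw_of().

static u64 dl_bw_snapshot(int cpu)
{
	struct dl_bw *dl_b;
	unsigned long flags;
	u64 bw;

	/*
	 * The RCU read lock keeps rq->rd (and the dl_bw embedded in it)
	 * from being freed by a concurrent root_domain rebuild while we
	 * use it; this is the pattern the hunks below add to the callers.
	 */
	rcu_read_lock();
	dl_b = dl_bw_of(cpu);

	/* dl_b->lock serialises against writers such as sched_dl_do_global(). */
	raw_spin_lock_irqsave(&dl_b->lock, flags);
	bw = dl_b->bw;
	raw_spin_unlock_irqrestore(&dl_b->lock, flags);

	rcu_read_unlock();
	return bw;
}
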
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0acf96b790c5..169720f46c30 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1969,6 +1969,8 @@ unsigned long to_ratio(u64 period, u64 runtime)
 #ifdef CONFIG_SMP
 inline struct dl_bw *dl_bw_of(int i)
 {
+	rcu_lockdep_assert(rcu_read_lock_sched_held(),
+			   "sched RCU must be held");
 	return &cpu_rq(i)->rd->dl_bw;
 }
 
@@ -1977,6 +1979,8 @@ static inline int dl_bw_cpus(int i)
 	struct root_domain *rd = cpu_rq(i)->rd;
 	int cpus = 0;
 
+	rcu_lockdep_assert(rcu_read_lock_sched_held(),
+			   "sched RCU must be held");
 	for_each_cpu_and(i, rd->span, cpu_active_mask)
 		cpus++;
 
@@ -7541,6 +7545,8 @@ static int sched_dl_global_constraints(void)
 	int cpu, ret = 0;
 	unsigned long flags;
 
+	rcu_read_lock();
+
 	/*
 	 * Here we want to check the bandwidth not being set to some
 	 * value smaller than the currently allocated bandwidth in
@@ -7562,6 +7568,8 @@ static int sched_dl_global_constraints(void)
 			break;
 	}
 
+	rcu_read_unlock();
+
 	return ret;
 }
 
@@ -7577,6 +7585,7 @@ static void sched_dl_do_global(void)
 	if (global_rt_runtime() != RUNTIME_INF)
 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
 
+	rcu_read_lock();
 	/*
 	 * FIXME: As above...
 	 */
@@ -7587,6 +7596,7 @@ static void sched_dl_do_global(void)
 		dl_b->bw = new_bw;
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 	}
+	rcu_read_unlock();
 }
 
 static int sched_rt_global_validate(void)
--
2.1.0