[PATCH 1/1] perf: Fix race in swevent hash

Brad Figg brad.figg at canonical.com
Fri Jun 9 11:27:37 UTC 2017


From: Peter Zijlstra <peterz at infradead.org>

CVE-2015-8963

There's a race on CPU unplug where we free the swevent hash array
while it can still have events on it. This will result in a
use-after-free, which is BAD.

Simply do not free the hash array on unplug. This leaves the thing
around and no use-after-free takes place.
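
For illustration only (not part of the upstream commit message): a minimal
userspace model of the lifetime bug, with invented names add_event() and
unplug_cpu() standing in for perf_swevent_add() and the hotplug exit path.
In the buggy scheme the unplug path frees the table while add_event() can
still dereference it; the fix is simply to skip that free.

/* Illustrative userspace model only -- not kernel code. */
#include <stdio.h>
#include <stdlib.h>

static int *hash;               /* stands in for the per-CPU swevent hash array */

static void add_event(int id)   /* models perf_swevent_add() using the hash */
{
	if (hash)
		hash[id % 16]++;
}

static void unplug_cpu(void)    /* models the CPU hotplug exit path */
{
	/*
	 * Buggy scheme: free(hash) here, while add_event() may still run
	 * against it -> use-after-free.
	 * Fixed scheme: do nothing; the table is freed later, once the
	 * last software event is gone (see the second sketch below).
	 */
}

int main(void)
{
	hash = calloc(16, sizeof(*hash));
	add_event(3);
	unplug_cpu();               /* no longer frees the table */
	add_event(3);               /* was a use-after-free before the fix */
	printf("bucket 3 = %d\n", hash[3]);
	free(hash);
	return 0;
}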

When the last swevent dies, we do a for_each_possible_cpu() iteration
anyway to clean these up, at which time we'll free it, so no leakage
will occur.
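
Again only a rough sketch of that deferred cleanup: a refcount whose final
put walks every possible CPU and frees that CPU's table, including CPUs
that were unplugged earlier. NCPUS and the helper names below are invented
for this model; the kernel's swevent_hlist_get()/put() path keeps per-CPU
refcounts and mutexes rather than the single global counter used here.

/* Userspace sketch of the deferred cleanup; NCPUS and names are invented. */
#include <stdio.h>
#include <stdlib.h>

#define NCPUS 4                 /* stands in for for_each_possible_cpu() */

static int *cpu_hash[NCPUS];    /* per-CPU hash arrays */
static int refcount;            /* users of the swevent hash machinery */

static void swevent_get(void)   /* first software event allocates */
{
	if (refcount++)
		return;
	for (int cpu = 0; cpu < NCPUS; cpu++)
		cpu_hash[cpu] = calloc(16, sizeof(int));
}

static void swevent_put(void)   /* last software event frees everywhere */
{
	if (--refcount)
		return;
	for (int cpu = 0; cpu < NCPUS; cpu++) {
		free(cpu_hash[cpu]);    /* covers CPUs unplugged earlier too */
		cpu_hash[cpu] = NULL;
	}
}

int main(void)
{
	swevent_get();
	swevent_put();
	printf("refcount = %d, nothing leaked\n", refcount);
	return 0;
}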

Reported-by: Sasha Levin <sasha.levin at oracle.com>
Tested-by: Sasha Levin <sasha.levin at oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Cc: Arnaldo Carvalho de Melo <acme at redhat.com>
Cc: Frederic Weisbecker <fweisbec at gmail.com>
Cc: Jiri Olsa <jolsa at redhat.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Stephane Eranian <eranian at google.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Vince Weaver <vincent.weaver at maine.edu>
Signed-off-by: Ingo Molnar <mingo at kernel.org>
Signed-off-by: Brad Figg <brad.figg at canonical.com>
---
 kernel/events/core.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index e4a1467..93e1b45 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5419,9 +5419,6 @@ struct swevent_htable {
 
 	/* Recursion avoidance in each contexts */
 	int				recursion[PERF_NR_CONTEXTS];
-
-	/* Keeps track of cpu being initialized/exited */
-	bool				online;
 };
 
 static DEFINE_PER_CPU(struct swevent_htable, swevent_htable);
@@ -5668,12 +5665,7 @@ static int perf_swevent_add(struct perf_event *event, int flags)
 	hwc->state = !(flags & PERF_EF_START);
 
 	head = find_swevent_head(swhash, event);
-	if (!head) {
-		/*
-		 * We can race with cpu hotplug code. Do not
-		 * WARN if the cpu just got unplugged.
-		 */
-		WARN_ON_ONCE(swhash->online);
+	if (WARN_ON_ONCE(!head)) {
 		return -EINVAL;
 	}
 
@@ -5742,7 +5734,6 @@ static int swevent_hlist_get_cpu(struct perf_event *event, int cpu)
 	int err = 0;
 
 	mutex_lock(&swhash->hlist_mutex);
-
 	if (!swevent_hlist_deref(swhash) && cpu_online(cpu)) {
 		struct swevent_hlist *hlist;
 
@@ -7866,7 +7857,6 @@ static void perf_event_init_cpu(int cpu)
 	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
 
 	mutex_lock(&swhash->hlist_mutex);
-	swhash->online = true;
 	if (swhash->hlist_refcount > 0) {
 		struct swevent_hlist *hlist;
 
@@ -7919,14 +7909,7 @@ static void perf_event_exit_cpu_context(int cpu)
 
 static void perf_event_exit_cpu(int cpu)
 {
-	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
-
 	perf_event_exit_cpu_context(cpu);
-
-	mutex_lock(&swhash->hlist_mutex);
-	swhash->online = false;
-	swevent_hlist_release(swhash);
-	mutex_unlock(&swhash->hlist_mutex);
 }
 #else
 static inline void perf_event_exit_cpu(int cpu) { }
-- 
2.7.4