[SRU][CVE-2020-14351][X/B/F][PATCH 0/2] perf/core: Fix race in the perf_mmap_close() function

William Breathitt Gray william.gray at canonical.com
Thu Nov 5 17:41:30 UTC 2020


SRU Justification
=================

[Impact]

There is a possible race in perf_mmap_close() when checking the ring
buffer's mmap_count refcount value. The problem is that the check is not
atomic: atomic_dec() and atomic_read() are called separately.

  perf_mmap_close:
  ...
   atomic_dec(&rb->mmap_count);
   ...
   if (atomic_read(&rb->mmap_count))
      goto out_put;

   <ring buffer detach>
   free_uid

  out_put:
   ring_buffer_put(rb); /* could be last */

The race can happen when two (or more) events share the same ring
buffer: both pass through atomic_dec(), and both then see 0 as the
refcount value in the later atomic_read(). Both go on to execute code
that is meant to run only once.

The code that detaches the ring buffer is probably fine to execute more
than once, but the problem is the call to free_uid(), which later
manifests in related crashes and refcount warnings, such as:

  refcount_t: addition on 0; use-after-free.
  ...
  RIP: 0010:refcount_warn_saturate+0x6d/0xf
  ...
  Call Trace:
  prepare_creds+0x190/0x1e0
  copy_creds+0x35/0x172
  copy_process+0x471/0x1a80
  _do_fork+0x83/0x3a0
  __do_sys_wait4+0x83/0x90
  __do_sys_clone+0x85/0xa0
  do_syscall_64+0x5b/0x1e0
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

[Regression Potential]

Regression potential is very low. The changes only affect the
perf_mmap_close() function, and the only change in logic is to perform
the decrement and the zero check together atomically rather than as two
separate operations.

[Miscellaneous]

Fix is already available in Groovy, but needed for Focal, Bionic, and
Xenial.

Jiri Olsa (1):
  perf/core: Fix race in the perf_mmap_close() function

 kernel/events/core.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

-- 
2.25.1
