APPLIED: [SRU][CVE-2020-14351][X][PATCH 2/2] perf/core: Fix race in the perf_mmap_close() function

Kleber Souza kleber.souza at canonical.com
Fri Nov 6 13:41:00 UTC 2020


On 05.11.20 18:41, William Breathitt Gray wrote:
> From: Jiri Olsa <jolsa at redhat.com>
> 
> There's a possible race in perf_mmap_close() when checking ring buffer's
> mmap_count refcount value. The problem is that the mmap_count check is
> not atomic because we call atomic_dec() and atomic_read() separately.
> 
>    perf_mmap_close:
>    ...
>     atomic_dec(&rb->mmap_count);
>     ...
>     if (atomic_read(&rb->mmap_count))
>        goto out_put;
> 
>     <ring buffer detach>
>     free_uid
> 
> out_put:
>    ring_buffer_put(rb); /* could be last */
> 
> The race can happen when two (or more) events share the same ring
> buffer: both go through atomic_dec() and both then see 0 as the refcount
> value in the later atomic_read(). Both will then go on and execute code
> that is meant to be run just once.
> 
> The code that detaches the ring buffer is probably fine to be executed
> more than once, but the problem is in calling free_uid(), which will
> later manifest in related crashes and refcount warnings, like:
> 
>    refcount_t: addition on 0; use-after-free.
>    ...
>    RIP: 0010:refcount_warn_saturate+0x6d/0xf
>    ...
>    Call Trace:
>    prepare_creds+0x190/0x1e0
>    copy_creds+0x35/0x172
>    copy_process+0x471/0x1a80
>    _do_fork+0x83/0x3a0
>    __do_sys_wait4+0x83/0x90
>    __do_sys_clone+0x85/0xa0
>    do_syscall_64+0x5b/0x1e0
>    entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> Use an atomic decrement-and-test instead of the separate calls.
> 
> Tested-by: Michael Petlan <mpetlan at redhat.com>
> Signed-off-by: Jiri Olsa <jolsa at kernel.org>
> Signed-off-by: Ingo Molnar <mingo at kernel.org>
> Acked-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
> Acked-by: Namhyung Kim <namhyung at kernel.org>
> Acked-by: Wade Mealing <wmealing at redhat.com>
> Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole")
> Link: https://lore.kernel.org/r/20200916115311.GE2301783@krava
> 
> CVE-2020-14351
> 
> (backported from commit f91072ed1b7283b13ca57fcfbece5a3b92726143)
> [ vilhelmgray: context adjustment ]
> Signed-off-by: William Breathitt Gray <william.gray at canonical.com>

Applied to xenial/linux.

Thanks,
Kleber

> ---
>   kernel/events/core.c | 7 ++++---
>   1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 388ce03155b4..d399748ea86b 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -4664,11 +4664,11 @@ static void perf_mmap_open(struct vm_area_struct *vma)
>   static void perf_mmap_close(struct vm_area_struct *vma)
>   {
>   	struct perf_event *event = vma->vm_file->private_data;
> -
>   	struct ring_buffer *rb = ring_buffer_get(event);
>   	struct user_struct *mmap_user = rb->mmap_user;
>   	int mmap_locked = rb->mmap_locked;
>   	unsigned long size = perf_data_size(rb);
> +	bool detach_rest = false;
>   
>   	if (event->pmu->event_unmapped)
>   		event->pmu->event_unmapped(event);
> @@ -4687,7 +4687,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
>   		mutex_unlock(&event->mmap_mutex);
>   	}
>   
> -	atomic_dec(&rb->mmap_count);
> +	if (atomic_dec_and_test(&rb->mmap_count))
> +		detach_rest = true;
>   
>   	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
>   		goto out_put;
> @@ -4696,7 +4697,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
>   	mutex_unlock(&event->mmap_mutex);
>   
>   	/* If there's still other mmap()s of this buffer, we're done. */
> -	if (atomic_read(&rb->mmap_count))
> +	if (!detach_rest)
>   		goto out_put;
>   
>   	/*
> 



