ACK/cmnt: [PATCH] init: fix false positives in W+X checking

Joseph Salisbury joseph.salisbury at canonical.com
Wed May 9 20:28:14 UTC 2018


On 05/08/2018 12:24 PM, Manoj Iyer wrote:
> From: Jeffrey Hugo <jhugo at codeaurora.org>
>
> load_module() creates W+X mappings via __vmalloc_node_range() (from
> layout_and_allocate()->move_module()->module_alloc()) by using
> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>
> This is a problem because call_rcu_sched() queues work, which can be run
> after debug_checkwx() is run, resulting in a race condition.  If hit, the
> race results in a nasty splat about insecure W+X mappings, which results
> in a poor user experience as these are not the mappings that
> debug_checkwx() is intended to catch.
>
> This issue is observed on multiple arm64 platforms, and has been
> artificially triggered on an x86 platform.
>
> Address the race by flushing the queued work before running the
> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>
> BugLink: https://launchpad.net/bugs/1769696
>
> Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@codeaurora.org
> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
> Signed-off-by: Jeffrey Hugo <jhugo at codeaurora.org>
> Reported-by: Timur Tabi <timur at codeaurora.org>
> Reported-by: Jan Glauber <jan.glauber at caviumnetworks.com>
> Acked-by: Kees Cook <keescook at chromium.org>
> Acked-by: Ingo Molnar <mingo at kernel.org>
> Acked-by: Will Deacon <will.deacon at arm.com>
> Acked-by: Laura Abbott <labbott at redhat.com>
> Cc: Mark Rutland <mark.rutland at arm.com>
> Cc: Ard Biesheuvel <ard.biesheuvel at linaro.org>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Stephen Smalley <sds at tycho.nsa.gov>
> Cc: Thomas Gleixner <tglx at linutronix.de>
> Cc: Peter Zijlstra <peterz at infradead.org>
> Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
> Signed-off-by: Stephen Rothwell <sfr at canb.auug.org.au>
> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
> linux-next)
> Signed-off-by: Manoj Iyer <manoj.iyer at canonical.com>
> ---
>  init/main.c     | 7 +++++++
>  kernel/module.c | 5 +++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/init/main.c b/init/main.c
> index b8b121c17ff1..44f88af9b191 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>  static void mark_readonly(void)
>  {
>  	if (rodata_enabled) {
> +		/*
> +		 * load_module() results in W+X mappings, which are cleaned up
> +		 * with call_rcu_sched().  Let's make sure that queued work is
> +		 * flushed so that we don't hit false positives looking for
> +		 * insecure pages which are W+X.
> +		 */
> +		rcu_barrier_sched();
>  		mark_rodata_ro();
>  		rodata_test();
>  	} else
> diff --git a/kernel/module.c b/kernel/module.c
> index 2612f760df84..0da7f3468350 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
>  	 * walking this with preempt disabled.  In all the failure paths, we
>  	 * call synchronize_sched(), but we don't want to slow down the success
>  	 * path, so use actual RCU here.
> +	 * Note that module_alloc() on most architectures creates W+X page
> +	 * mappings which won't be cleaned up until do_free_init() runs.  Any
> +	 * code such as mark_rodata_ro() which depends on those mappings to
> +	 * be cleaned up needs to sync with the queued work - ie
> +	 * rcu_barrier_sched()
>  	 */
>  	call_rcu_sched(&freeinit->rcu, do_free_init);
>  	mutex_unlock(&module_mutex);
Hi Manoj,

The patch says that it fixes e1a58320a38d.  This commit is in mainline
as of v4.4-rc1.  However, this SRU request is only for Artful and
Bionic.  You may also want to investigate to see if it's needed in
Xenial.  If it is, the patch you submitted does not apply to Xenial and
you would need to submit a separate patch/SRU request that is specific
to Xenial.

For Artful and Bionic, this patch applies and builds cleanly.  It fixes
a specific bug, so:

Acked-by: Joseph Salisbury <joseph.salisbury at canonical.com>
