[SRU][Disco][PATCH 1/1] x86/kprobes: Set instruction page as executable
Kleber Sacilotto de Souza
kleber.souza at canonical.com
Tue Aug 20 14:42:27 UTC 2019
From: Nadav Amit <namit at vmware.com>
BugLink: https://bugs.launchpad.net/bugs/1840750
Set the instruction page as executable after allocation. This is a
preparatory patch for a following change that makes module-allocated
pages non-executable.

While at it, do a small cleanup and drop the PAGE_MASK masking, which
is unnecessary: module_alloc() returns page-aligned memory, so the
masking was a no-op.
Signed-off-by: Nadav Amit <namit at vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe at intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Cc: <akpm at linux-foundation.org>
Cc: <ard.biesheuvel at linaro.org>
Cc: <deneen.t.dock at intel.com>
Cc: <kernel-hardening at lists.openwall.com>
Cc: <kristen at linux.intel.com>
Cc: <linux_dti at icloud.com>
Cc: <will.deacon at arm.com>
Cc: Andy Lutomirski <luto at kernel.org>
Cc: Borislav Petkov <bp at alien8.de>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: H. Peter Anvin <hpa at zytor.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
Cc: Rik van Riel <riel at surriel.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Link: https://lkml.kernel.org/r/20190426001143.4983-11-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo at kernel.org>
(cherry picked from commit 7298e24f904224fa79eb8fd7e0fbd78950ccf2db)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza at canonical.com>
---
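[Context note, not part of the upstream commit: alloc_insn_page()
supplies the pages into which the kprobes core copies a probed
instruction for out-of-line single-stepping, so the permissions set
there are what that copied instruction runs under. A minimal sketch of
a probe module that exercises this path follows; the module shape and
the probed symbol name are illustrative assumptions, not part of this
patch.]

/*
 * Minimal sketch: registering a kprobe makes the kprobes core copy the
 * probed instruction into a slot obtained via alloc_insn_page().
 * "_do_fork" is an assumed example symbol for a Disco-era (5.0) kernel.
 */
#include <linux/kprobes.h>
#include <linux/module.h>

static struct kprobe kp = {
	.symbol_name	= "_do_fork",	/* assumed example symbol */
};

static int __init probe_init(void)
{
	/* Triggers allocation of an instruction slot for the copy. */
	return register_kprobe(&kp);
}

static void __exit probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");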
arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index f4b954ff5b89..3bc4cc70f1e5 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
void *page;
page = module_alloc(PAGE_SIZE);
- if (page)
- set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+ if (!page)
+ return NULL;
+
+ /*
+ * First make the page read-only, and only then make it executable to
+ * prevent it from being W+X in between.
+ */
+ set_memory_ro((unsigned long)page, 1);
+
+ /*
+ * TODO: Once additional kernel code protection mechanisms are set, ensure
+ * that the page was not maliciously altered and it is still zeroed.
+ */
+ set_memory_x((unsigned long)page, 1);
return page;
}
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
/* Recover page to RW mode before releasing it */
void free_insn_page(void *page)
{
- set_memory_nx((unsigned long)page & PAGE_MASK, 1);
- set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+ /*
+ * First make the page non-executable, and only then make it writable to
+ * prevent it from being W+X in between.
+ */
+ set_memory_nx((unsigned long)page, 1);
+ set_memory_rw((unsigned long)page, 1);
module_memfree(page);
}
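
For reference, with both hunks applied the two helpers read as follows
(reconstructed directly from the diff above):

void *alloc_insn_page(void)
{
	void *page;

	page = module_alloc(PAGE_SIZE);
	if (!page)
		return NULL;

	/*
	 * First make the page read-only, and only then make it executable to
	 * prevent it from being W+X in between.
	 */
	set_memory_ro((unsigned long)page, 1);

	/*
	 * TODO: Once additional kernel code protection mechanisms are set,
	 * ensure that the page was not maliciously altered and it is still
	 * zeroed.
	 */
	set_memory_x((unsigned long)page, 1);

	return page;
}

/* Recover page to RW mode before releasing it */
void free_insn_page(void *page)
{
	/*
	 * First make the page non-executable, and only then make it writable
	 * to prevent it from being W+X in between.
	 */
	set_memory_nx((unsigned long)page, 1);
	set_memory_rw((unsigned long)page, 1);
	module_memfree(page);
}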
--
2.17.1