[SRU][Trusty][PATCH 2/7] mm: Move change_prot_numa outside CONFIG_ARCH_USES_NUMA_PROT_NONE

Juerg Haefliger juerg.haefliger at canonical.com
Wed Aug 22 06:40:16 UTC 2018


From: "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>

change_prot_numa should work even if _PAGE_NUMA != _PAGE_PROTNONE.
On architectures like ppc64 that don't use _PAGE_PROTNONE and also
maintain a separate hardware page table outside the Linux page table,
we just need to make sure that when calling change_prot_numa we flush
the hardware page table entry so that the next page access results in
a NUMA fault.
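
(Illustrative sketch only, not part of this patch: the helper name
below is made up and the real ppc64 primitives differ. The point is
that marking the Linux PTE for a hinting fault has to be paired with
invalidating the stale hardware translation, otherwise the access
keeps hitting the old entry and no fault is ever taken.)

  static inline void example_set_pte_numa(struct vm_area_struct *vma,
                                          unsigned long addr, pte_t *ptep)
  {
          pte_t ptent = pte_mknuma(*ptep); /* set _PAGE_NUMA, clear present */

          set_pte_at(vma->vm_mm, addr, ptep, ptent);
          /* Drop the old translation so the next access actually faults. */
          flush_tlb_page(vma, addr);
  }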

We still need to make sure we use the NUMA faulting logic only when
CONFIG_NUMA_BALANCING is set. This implies that migrate-on-fault
(lazy migration) via mbind will only work if CONFIG_NUMA_BALANCING
is set.
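
(Again only a sketch, paraphrased from memory of the Trusty-era
mm/mempolicy.c rather than quoted from it: the mbind MPOL_MF_LAZY path
is what ends up calling change_prot_numa, and with
CONFIG_NUMA_BALANCING disabled it now reaches the static stub that
returns 0, so nothing gets marked and no migrate-on-fault happens.)

  /* Roughly how queue_pages_range() drives lazy migration: */
  if (flags & MPOL_MF_LAZY) {
          /* No-op stub when CONFIG_NUMA_BALANCING is not set. */
          change_prot_numa(vma, start, endvma);
          goto next;
  }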

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel at redhat.com>
Acked-by: Mel Gorman <mgorman at suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh at kernel.crashing.org>

CVE-2018-3620
CVE-2018-3646

(cherry picked from commit 5877231f646bbd6d1d545e7af83aaa6e6b746013)
Signed-off-by: Juerg Haefliger <juergh at canonical.com>
---
 include/linux/mm.h | 2 +-
 mm/mempolicy.c     | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c954fcac4c44..08c4eb046642 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1962,7 +1962,7 @@ static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
 }
 #endif
 
-#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
+#ifdef CONFIG_NUMA_BALANCING
 unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end);
 #endif
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f8e170ec6086..a629171a93fb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -617,7 +617,7 @@ static inline int queue_pages_pgd_range(struct vm_area_struct *vma,
 	return 0;
 }
 
-#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
+#ifdef CONFIG_NUMA_BALANCING
 /*
  * This is used to mark a range of virtual addresses to be inaccessible.
  * These are later cleared by a NUMA hinting fault. Depending on these
@@ -631,7 +631,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long addr, unsigned long end)
 {
 	int nr_updated;
-	BUILD_BUG_ON(_PAGE_NUMA != _PAGE_PROTNONE);
 
 	nr_updated = change_protection(vma, addr, end, vma->vm_page_prot, 0, 1);
 	if (nr_updated)
@@ -645,7 +644,7 @@ static unsigned long change_prot_numa(struct vm_area_struct *vma,
 {
 	return 0;
 }
-#endif /* CONFIG_ARCH_USES_NUMA_PROT_NONE */
+#endif /* CONFIG_NUMA_BALANCING */
 
 /*
  * Walk through page tables and collect pages to be migrated.
-- 
2.17.1