[PATCH 3.8 086/166] mm: numa: do not clear PMD during PTE update scan

Kamal Mostafa <kamal@canonical.com>
Wed Jan 15 21:51:40 UTC 2014


3.8.13.16 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mel Gorman <mgorman@suse.de>

commit 5a6dac3ec5f583cc8ee7bc53b5500a207c4ca433 upstream.

If the PMD is flushed, then a parallel fault in handle_mm_fault() will
enter the pmd_none() and do_huge_pmd_anonymous_page() path, where it
will attempt to insert a huge zero page.  This is wasteful, so the
patch avoids clearing the PMD when setting pmd_numa.
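
For context, the racing path mentioned above looks roughly like the
following in the 3.8-era handle_mm_fault(); this is an abbreviated,
illustrative excerpt and not part of this patch:

    /*
     * Abbreviated sketch of the 3.8-era handle_mm_fault() path.  If
     * change_huge_pmd() transiently clears the PMD with
     * pmdp_get_and_clear(), a racing fault observes pmd_none() here
     * and needlessly enters do_huge_pmd_anonymous_page(), which may
     * install a huge zero page.
     */
    pmd = pmd_alloc(mm, pud, address);
    if (!pmd)
            return VM_FAULT_OOM;
    if (pmd_none(*pmd) && transparent_hugepage_enabled(vma)) {
            if (!vma->vm_ops)
                    return do_huge_pmd_anonymous_page(mm, vma, address,
                                                      pmd, flags);
    }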

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Alex Thorlton <athorlton@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a72d5ca..5d7dc7c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1506,7 +1506,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			/* only check non-shared pages */
 			if (page_mapcount(page) == 1 &&
 			    !pmd_numa(*pmd)) {
-				entry = pmdp_get_and_clear(mm, addr, pmd);
+				entry = *pmd;
 				entry = pmd_mknuma(entry);
 				ret = HPAGE_PMD_NR;
 			}
-- 
1.8.3.2
