[Cosmic][PATCH 4/4] mm, hugetlbfs: pass fault address to cow handler

Joseph Salisbury joseph.salisbury at canonical.com
Mon Sep 10 17:19:09 UTC 2018


From: Huang Ying <ying.huang at intel.com>

BugLink: https://bugs.launchpad.net/bugs/1730836

This is to take better advantage of the general huge page copying
optimization, where the target subpage is copied last so that its
cache lines are not evicted while the other subpages are being
copied.  This works better if the address of the target subpage is
known when copying the huge page, so the hugetlbfs page fault
handlers are changed to pass that address down to hugetlb_cow().
This benefits workloads that do not access the beginning of the
hugetlbfs huge page after the page fault, under heavy cache
contention.
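
For illustration only (not part of this patch), below is a simplified
sketch of the copy-last idea that the fault address enables: copy
every subpage except the one containing the faulting address first,
then copy that target subpage last so its cache lines are still hot
when the faulting thread resumes.  The kernel's actual
copy_user_huge_page() copies outward from the target in a more
elaborate order; the helper name, the flat loop, and the assumptions
(power-of-two huge page size, contiguous struct pages for the
subpages) are illustrative only.

#include <linux/mm.h>
#include <linux/highmem.h>

/* Illustrative sketch, not the in-tree implementation. */
static void copy_huge_page_target_last(struct page *dst, struct page *src,
				       unsigned long addr_hint,
				       unsigned long pages_per_huge_page)
{
	/* Assumes the huge page size is a power of two. */
	unsigned long haddr = addr_hint &
		~((pages_per_huge_page << PAGE_SHIFT) - 1);
	unsigned long target_idx = (addr_hint - haddr) >> PAGE_SHIFT;
	unsigned long i;

	/* Assumes dst[i]/src[i] struct pages are contiguous. */
	for (i = 0; i < pages_per_huge_page; i++) {
		if (i == target_idx)
			continue;	/* defer the faulting subpage */
		copy_highpage(dst + i, src + i);
	}
	/* Copy the faulting subpage last so its cache lines stay hot. */
	copy_highpage(dst + target_idx, src + target_idx);
}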

Link: http://lkml.kernel.org/r/20180524005851.4079-5-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang at intel.com>
Reviewed-by: Mike Kravetz <mike.kravetz at oracle.com>
Cc: Michal Hocko <mhocko at suse.com>
Cc: David Rientjes <rientjes at google.com>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov at linux.intel.com>
Cc: Andi Kleen <andi.kleen at intel.com>
Cc: Jan Kara <jack at suse.cz>
Cc: Matthew Wilcox <willy at infradead.org>
Cc: Hugh Dickins <hughd at google.com>
Cc: Minchan Kim <minchan at kernel.org>
Cc: Shaohua Li <shli at fb.com>
Cc: Christopher Lameter <cl at linux.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>
Cc: Punit Agrawal <punit.agrawal at arm.com>
Cc: Anshuman Khandual <khandual at linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
(cherry picked from commit 974e6d66b6b5c6e2d6a3ccc18b2f9a0b472be5b4)
Signed-off-by: Joseph Salisbury <joseph.salisbury at canonical.com>
---
 mm/hugetlb.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7affa9d..231f541 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3508,7 +3508,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
  * Keep the pte_same checks anyway to make transition from the mutex easier.
  */
 static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
-		       unsigned long haddr, pte_t *ptep,
+		       unsigned long address, pte_t *ptep,
 		       struct page *pagecache_page, spinlock_t *ptl)
 {
 	pte_t pte;
@@ -3517,6 +3517,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	int ret = 0, outside_reserve = 0;
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
+	unsigned long haddr = address & huge_page_mask(h);
 
 	pte = huge_ptep_get(ptep);
 	old_page = pte_page(pte);
@@ -3591,7 +3592,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	copy_user_huge_page(new_page, old_page, haddr, vma,
+	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
 	set_page_huge_active(new_page);
-- 
2.7.4
