[3.13.y.z extended stable] Patch "mm: free compound page with correct order" has been added to staging queue
Kamal Mostafa
kamal at canonical.com
Thu Nov 6 01:28:58 UTC 2014
This is a note to let you know that I have just added a patch titled
mm: free compound page with correct order
to the linux-3.13.y-queue branch of the 3.13.y.z extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.13.y-queue
This patch is scheduled to be released in version 3.13.11.11.
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.13.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
------
From dd2605d8e90f177af928e2a59f6ab7986aac3a34 Mon Sep 17 00:00:00 2001
From: Yu Zhao <yuzhao at google.com>
Date: Wed, 29 Oct 2014 14:50:26 -0700
Subject: mm: free compound page with correct order
commit 5ddacbe92b806cd5b4f8f154e8e46ac267fff55c upstream.
A compound page should be freed by put_page() or free_pages() with the
correct order. Not doing so will leak the tail pages.
The compound order can be obtained from compound_order(), or
HPAGE_PMD_ORDER can be used in our case. Some people would argue the
latter is faster, but I prefer the former, which is more general.
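[ For illustration only, not part of the upstream patch: a minimal sketch of
  the wrong and the right way to free such a compound page, using the standard
  page allocator API.  The helper name compound_free_example() is made up for
  this note. ]

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/huge_mm.h>

/* Hypothetical helper, for illustration only: allocate a THP-sized
 * compound page and free it again the way the patch does. */
static void compound_free_example(void)
{
	struct page *page;

	/* __GFP_COMP makes this a compound page of order HPAGE_PMD_ORDER. */
	page = alloc_pages(GFP_KERNEL | __GFP_COMP | __GFP_ZERO,
			   HPAGE_PMD_ORDER);
	if (!page)
		return;

	/*
	 * Wrong: __free_page(page) frees with order 0, so only the head
	 * page goes back to the buddy allocator and the tail pages leak.
	 *
	 * Right: pass the compound order so the whole block is freed.
	 */
	__free_pages(page, compound_order(page));
}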
This bug was observed not just on our servers (the worst case we saw is
11G leaked on a 48G machine) but also on our workstations running an
Ubuntu-based distro.
$ cat /proc/vmstat | grep thp_zero_page_alloc
thp_zero_page_alloc 55
thp_zero_page_alloc_failed 0
This means (thp_zero_page_alloc - 1) * (2M - 4K) of memory has been leaked.
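[ For the counters above that works out to (55 - 1) * (2M - 4K), roughly
  108M of leaked memory, assuming 2M huge pages and 4K base pages. ]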
Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page")
Signed-off-by: Yu Zhao <yuzhao at google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov at linux.intel.com>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Cc: Mel Gorman <mel at csn.ul.ie>
Cc: David Rientjes <rientjes at google.com>
Cc: Bob Liu <lliubbo at gmail.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 64a7f9c..310f27e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -193,7 +193,7 @@ retry:
 	preempt_disable();
 	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
 		preempt_enable();
-		__free_page(zero_page);
+		__free_pages(zero_page, compound_order(zero_page));
 		goto retry;
 	}

@@ -225,7 +225,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
 		struct page *zero_page = xchg(&huge_zero_page, NULL);
 		BUG_ON(zero_page == NULL);
-		__free_page(zero_page);
+		__free_pages(zero_page, compound_order(zero_page));
 		return HPAGE_PMD_NR;
 	}
--
1.9.1